public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-03-18 23:27 Mike Pagano
From: Mike Pagano @ 2015-03-18 23:27 UTC
  To: gentoo-commits

commit:     aca5f6281d96053a892f47fb707516f7df7d56a9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 18 23:16:43 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 18 23:16:43 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=aca5f628

Patch to enable link security restrictions by default. Patch to disable Windows 8 compatibility for some Lenovo ThinkPads. Patch to ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs. Patch to not lock when UMH is waiting on the current thread spawned by linuxrc (bug #481344). fbcondecor bootsplash patch. Add Gentoo Linux support config settings and defaults. Kernel patch that enables gcc < v4.9 optimizations for additional CPUs. Kernel patch that enables gcc >= v4.9 optimizations for additional CPUs.

 0000_README                                        |   28 +
 ...ble-link-security-restrictions-by-default.patch |   22 +
 2700_ThinkPad-30-brightness-control-fix.patch      |   67 +
 2900_dev-root-proc-mount-fix.patch                 |   30 +
 2905_s2disk-resume-image-fix.patch                 |   24 +
 4200_fbcondecor-3.19.patch                         | 2119 ++++++++++++++++++++
 ...able-additional-cpu-optimizations-for-gcc.patch |  327 +++
 ...-additional-cpu-optimizations-for-gcc-4.9.patch |  387 ++++
 8 files changed, 3004 insertions(+)

diff --git a/0000_README b/0000_README
index 36c2b96..ca06e06 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,34 @@ Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
 
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default
+
+Patch:  2700_ThinkPad-30-brightness-control-fix.patch
+From:   Seth Forshee <seth.forshee@canonical.com>
+Desc:   ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads.
+
+Patch:  2900_dev-root-proc-mount-fix.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch:  2905_s2disk-resume-image-fix.patch
+From:   Al Viro <viro <at> ZenIV.linux.org.uk>
+Desc:   Do not lock when UMH is waiting on current thread spawned by linuxrc. (bug #481344)
+
+Patch:  4200_fbcondecor-3.19.patch
+From:   http://www.mepiscommunity.org/fbcondecor
+Desc:   Bootsplash ported by Marco. (Bug #539616)
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
+
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ 	path_put(link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ 
+ /**
+  * may_follow_link - Check symlink following for unsafe situations

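For context, the restriction this patch turns back on is enforced in fs/namei.c's
may_follow_link(). A simplified sketch of that check — paraphrased, not copied
from the 3.x source — looks roughly like this (assumes <linux/fs.h> and
<linux/uidgid.h> for the types used):

/* Sketch of the symlink policy gated by sysctl_protected_symlinks:
 * following a symlink in a sticky, world-writable directory (e.g. /tmp)
 * is refused unless the follower or the directory owner owns the link. */
static int may_follow_link_sketch(const struct inode *dir,
				  const struct inode *link,
				  kuid_t fsuid)
{
	if (!sysctl_protected_symlinks)
		return 0;		/* policy disabled */

	if (uid_eq(fsuid, link->i_uid))
		return 0;		/* follower owns the link */

	if ((dir->i_mode & (S_ISVTX | S_IWOTH)) != (S_ISVTX | S_IWOTH))
		return 0;		/* dir is not sticky + world-writable */

	if (uid_eq(dir->i_uid, link->i_uid))
		return 0;		/* directory owner owns the link */

	return -EACCES;
}

Setting both sysctls to 1 therefore closes the classic /tmp symlink-race
attacks by default while leaving ordinary symlink use unaffected.
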
diff --git a/2700_ThinkPad-30-brightness-control-fix.patch b/2700_ThinkPad-30-brightness-control-fix.patch
new file mode 100644
index 0000000..b548c6d
--- /dev/null
+++ b/2700_ThinkPad-30-brightness-control-fix.patch
@@ -0,0 +1,67 @@
+diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
+index cb96296..6c242ed 100644
+--- a/drivers/acpi/blacklist.c
++++ b/drivers/acpi/blacklist.c
+@@ -269,6 +276,61 @@  static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ 	},
+ 
+ 	/*
++	 * The following Lenovo models have a broken workaround in the
++	 * acpi_video backlight implementation to meet the Windows 8
++	 * requirement of 101 backlight levels. Reverting to pre-Win8
++	 * behavior fixes the problem.
++	 */
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad L430",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L430"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad T430s",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430s"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad T530",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T530"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad W530",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad X1 Carbon",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad X230",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
++		},
++	},
++
++	/*
+ 	 * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
+ 	 * Linux ignores it, except for the machines enumerated below.
+ 	 */
+

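The dmi_disable_osi_win8() callback named by these entries is defined earlier
in blacklist.c and is not part of this hunk; its effect is to withdraw the
Windows 8 _OSI answer for the matched machine. A sketch of what such a
callback amounts to, assuming the 3.x-era acpi_osi_setup() interface:

/* Sketch: a DMI quirk callback that makes _OSI("Windows 2012") report
 * false, so firmware falls back to its pre-Windows-8 backlight paths. */
static int __init dmi_disable_osi_win8(const struct dmi_system_id *d)
{
	pr_notice("DMI detected: %s\n", d->ident);
	acpi_osi_setup("!Windows 2012");
	return 0;
}

The table itself is walked once at boot via dmi_check_system(), which invokes
the callback of every entry whose DMI_MATCH fields all match the machine's
DMI data.
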
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..6ea86e2
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,30 @@
+--- a/init/do_mounts.c	2014-08-26 08:03:30.000013100 -0400
++++ b/init/do_mounts.c	2014-08-26 08:11:19.720014712 -0400
+@@ -484,7 +484,10 @@ void __init change_floppy(char *fmt, ...
+ 	va_start(args, fmt);
+ 	vsprintf(buf, fmt, args);
+ 	va_end(args);
+-	fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++	if (saved_root_name[0])
++		fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++	else
++		fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
+ 	if (fd >= 0) {
+ 		sys_ioctl(fd, FDEJECT, 0);
+ 		sys_close(fd);
+@@ -527,8 +530,13 @@ void __init mount_root(void)
+ 	}
+ #endif
+ #ifdef CONFIG_BLOCK
+-	create_dev("/dev/root", ROOT_DEV);
+-	mount_block_root("/dev/root", root_mountflags);
++	if (saved_root_name[0]) {
++		create_dev(saved_root_name, ROOT_DEV);
++		mount_block_root(saved_root_name, root_mountflags);
++	} else {
++		create_dev("/dev/root", ROOT_DEV);
++		mount_block_root("/dev/root", root_mountflags);
++	}
+ #endif
+ }
+ 

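The saved_root_name buffer consulted above is populated from the root= kernel
command-line parameter elsewhere in init/do_mounts.c; a sketch of that setup
hook (from memory, not part of this patch):

/* Sketch: where saved_root_name comes from (init/do_mounts.c). */
static char saved_root_name[64] __initdata;

static int __init root_dev_setup(char *line)
{
	strlcpy(saved_root_name, line, sizeof(saved_root_name));
	return 1;
}
__setup("root=", root_dev_setup);

So with this patch, a kernel booted with e.g. root=/dev/sda2 and no initramfs
mounts and reports the real device name instead of the /dev/root placeholder.
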
diff --git a/2905_s2disk-resume-image-fix.patch b/2905_s2disk-resume-image-fix.patch
new file mode 100644
index 0000000..7e95d29
--- /dev/null
+++ b/2905_s2disk-resume-image-fix.patch
@@ -0,0 +1,24 @@
+diff --git a/kernel/kmod.c b/kernel/kmod.c
+index fb32636..d968882 100644
+--- a/kernel/kmod.c
++++ b/kernel/kmod.c
+@@ -575,7 +575,8 @@
+ 		call_usermodehelper_freeinfo(sub_info);
+ 		return -EINVAL;
+ 	}
+-	helper_lock();
++	if (!(current->flags & PF_FREEZER_SKIP))
++		helper_lock();
+ 	if (!khelper_wq || usermodehelper_disabled) {
+ 		retval = -EBUSY;
+ 		goto out;
+@@ -611,7 +612,8 @@ wait_done:
+ out:
+ 	call_usermodehelper_freeinfo(sub_info);
+ unlock:
+-	helper_unlock();
++	if (!(current->flags & PF_FREEZER_SKIP))
++		helper_unlock();
+ 	return retval;
+ }
+ EXPORT_SYMBOL(call_usermodehelper_exec);

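For reference, PF_FREEZER_SKIP — the flag tested above before touching
helper_lock()/helper_unlock() — marks a task that has told the freezer not to
count it while it waits on another task. The relevant helpers live in
include/linux/freezer.h and look roughly like this (sketch):

/* Sketch of the freezer opt-out helpers (include/linux/freezer.h). */
static inline void freezer_do_not_count(void)
{
	current->flags |= PF_FREEZER_SKIP;
}

static inline void freezer_count(void)
{
	current->flags &= ~PF_FREEZER_SKIP;
	try_to_freeze();
}

Skipping the UMH lock for such tasks avoids the s2disk deadlock described in
bug #481344, where the thread spawned by linuxrc blocks the resume path.
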
diff --git a/4200_fbcondecor-3.19.patch b/4200_fbcondecor-3.19.patch
new file mode 100644
index 0000000..29c379f
--- /dev/null
+++ b/4200_fbcondecor-3.19.patch
@@ -0,0 +1,2119 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c..2230930 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ 	- info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ 	- intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++	- info on the Framebuffer Console Decoration
+ framebuffer.txt
+ 	- introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 0000000..3388c61
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature that allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++    http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++   standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem 
++   is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both directions:
++kernel->userspace and userspace->kernel.
++  
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the 
++ userspace helper to find a background image appropriate for the specified 
++ theme and the current resolution. The userspace helper should respond by 
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in 
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes: 
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in a 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When a FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc: 
++Virtual console number.
++
++origin: 
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data: 
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++  Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++  Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++  Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++  Michal Januszewski <michalj+fbcondecor@gmail.com>
++
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 7183b6a..d576148 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -17,6 +17,10 @@ obj-y				+= pwm/
+ obj-$(CONFIG_PCI)		+= pci/
+ obj-$(CONFIG_PARISC)		+= parisc/
+ obj-$(CONFIG_RAPIDIO)		+= rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y				+= tty/
++obj-y				+= char/
+ obj-y				+= video/
+ obj-y				+= idle/
+ 
+@@ -42,11 +46,6 @@ obj-$(CONFIG_REGULATOR)		+= regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER)	+= reset/
+ 
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y				+= tty/
+-obj-y				+= char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index fe1cd01..6d2e87a 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -126,6 +126,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+          such that other users of the framebuffer will remain normally
+          oriented.
+ 
++config FB_CON_DECOR
++	bool "Support for the Framebuffer Console Decorations"
++	depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++	default n
++	---help---
++	  This option enables support for framebuffer console decorations which
++	  makes it possible to display images in the background of the system
++	  consoles.  Note that userspace utilities are necessary in order to take 
++	  advantage of these features. Refer to Documentation/fb/fbcondecor.txt 
++	  for more information.
++
++	  If unsure, say N.
++
+ config STI_CONSOLE
+         bool "STI text console"
+         depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index 43bfa48..cc104b6f 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -16,4 +16,5 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE)     += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
+                                          fbcon_ccw.o
+ endif
+ 
++obj-$(CONFIG_FB_CON_DECOR)     	  += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI)              += sticore.o
+diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
+index 61b182b..984384b 100644
+--- a/drivers/video/console/bitblit.c
++++ b/drivers/video/console/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "fbcondecor.h"
+ 
+ /*
+  * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ 	area.height = height * vc->vc_font.height;
+ 	area.width = width * vc->vc_font.width;
+ 
++	if (fbcon_decor_active(info, vc)) {
++ 		area.sx += vc->vc_decor.tx;
++ 		area.sy += vc->vc_decor.ty;
++ 		area.dx += vc->vc_decor.tx;
++ 		area.dy += vc->vc_decor.ty;
++ 	}
++
+ 	info->fbops->fb_copyarea(info, &area);
+ }
+ 
+@@ -380,11 +388,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	cursor.image.depth = 1;
+ 	cursor.rop = ROP_XOR;
+ 
+-	if (info->fbops->fb_cursor)
+-		err = info->fbops->fb_cursor(info, &cursor);
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_cursor(info, &cursor);
++	} else {
++		if (info->fbops->fb_cursor)
++			err = info->fbops->fb_cursor(info, &cursor);
+ 
+-	if (err)
+-		soft_cursor(info, &cursor);
++		if (err)
++			soft_cursor(info, &cursor);
++	}
+ 
+ 	ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 0000000..a2b4497
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,471 @@
++/*
++ *  linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootdecor" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift,bpp,type)						\
++	do {									\
++		if (d & (0x80 >> (shift)))					\
++			dd2[(shift)] = fgx;					\
++		else								\
++			dd2[(shift)] = transparent ? *(type *)decor_src : bgx;	\
++		decor_src += (bpp);						\
++	} while (0)								\
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++		     u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++	int i, j, k;
++	int minlen = min(min(info->var.red.length, info->var.green.length),
++			     info->var.blue.length);
++	u32 col;
++
++	for (j = i = 0; i < 16; i++) {
++		k = color_table[i];
++
++		col = ((vc->vc_palette[j++]  >> (8-minlen))
++			<< info->var.red.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.green.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.blue.offset);
++			((u32 *)info->pseudo_palette)[k] = col;
++	}
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++		      int width, u8* src, u32 fgx, u32 bgx, u8 transparent)
++{
++	unsigned int x, y;
++	u32 dd;
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++	unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++	u16 dd2[4];
++
++	u8* decor_src = (u8 *)(info->bgdecor.data + ds);
++	u8* dst = (u8 *)(info->screen_base + d);
++
++	if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++		return;
++
++	for (y = 0; y < height; y++) {
++		switch (info->var.bits_per_pixel) {
++
++		case 32:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     *(u32 *)decor_src : bgx;
++
++				d <<= 1;
++				decor_src += 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++		case 24:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     (*(u32 *)decor_src & 0xffffff) : bgx;
++
++				d <<= 1;
++				decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++				fb_writew(dd & 0xffff, dst);
++				dst += 2;
++				fb_writeb((dd >> 16), dst);
++#else
++				fb_writew(dd >> 8, dst);
++				dst += 2;
++				fb_writeb(dd & 0xff, dst);
++#endif
++				dst++;
++			}
++			break;
++		case 16:
++			for (x = 0; x < width; x += 2) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 2, u16);
++				parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 16);
++#else
++				dd = dd2[1] | (dd2[0] << 16);
++#endif
++				d <<= 2;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++
++		case 8:
++			for (x = 0; x < width; x += 4) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 1, u8);
++				parse_pixel(1, 1, u8);
++				parse_pixel(2, 1, u8);
++				parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++				dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++				d <<= 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++		}
++
++		dst += info->fix.line_length - width * bytespp;
++		decor_src += (info->var.xres - width) * bytespp;
++	}
++}
++
++#define cc2cx(a) 						\
++	((info->fix.visual == FB_VISUAL_TRUECOLOR || 		\
++	  info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? 		\
++	 ((u32*)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++		   const unsigned short *s, int count, int yy, int xx)
++{
++	unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++	struct fbcon_ops *ops = info->fbcon_par;
++	int fg_color, bg_color, transparent;
++	u8 *src;
++	u32 bgx, fgx;
++	u16 c = scr_readw(s);
++
++	fg_color = get_color(vc, info, c, 1);
++        bg_color = get_color(vc, info, c, 0);
++
++	/* Don't paint the background image if console is blanked */
++	transparent = ops->blank_state ? 0 :
++		(vc->vc_decor.bg_color == bg_color);
++
++	xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++	yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++	fgx = cc2cx(fg_color);
++	bgx = cc2cx(bg_color);
++
++	while (count--) {
++		c = scr_readw(s++);
++		src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++		      ((vc->vc_font.width + 7) >> 3);
++
++		fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++			       vc->vc_font.width, src, fgx, bgx, transparent);
++		xx += vc->vc_font.width;
++	}
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++	int i;
++	unsigned int dsize, s_pitch;
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct vc_data* vc;
++	u8 *src;
++
++	/* we really don't need any cursors while the console is blanked */
++	if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++		return;
++
++	vc = vc_cons[ops->currcon].d;
++
++	src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++	if (!src)
++		return;
++
++	s_pitch = (cursor->image.width + 7) >> 3;
++	dsize = s_pitch * cursor->image.height;
++	if (cursor->enable) {
++		switch (cursor->rop) {
++		case ROP_XOR:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] ^ cursor->mask[i];
++                        break;
++		case ROP_COPY:
++		default:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] & cursor->mask[i];
++			break;
++		}
++	} else
++		memcpy(src, cursor->image.data, dsize);
++
++	fbcon_decor_renderc(info,
++			cursor->image.dy + vc->vc_decor.ty,
++			cursor->image.dx + vc->vc_decor.tx,
++			cursor->image.height,
++			cursor->image.width,
++			(u8*)src,
++			cc2cx(cursor->image.fg_color),
++			cc2cx(cursor->image.bg_color),
++			cursor->image.bg_color == vc->vc_decor.bg_color);
++
++	kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++		        u32 bgx, int bpp)
++{
++	int i;
++
++	if (bpp == 8)
++		bgx |= bgx << 8;
++	if (bpp == 16 || bpp == 8)
++		bgx |= bgx << 16;
++
++	while (height-- > 0) {
++		u8 *p = dst;
++
++		switch (bpp) {
++
++		case 32:
++			for (i=0; i < width; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++			break;
++		case 24:
++			for (i=0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++				fb_writew((bgx & 0xffff),(u16*)p); p += 2;
++				fb_writeb((bgx >> 16),p++);
++#else
++				fb_writew((bgx >> 8),(u16*)p); p += 2;
++				fb_writeb((bgx & 0xff),p++);
++#endif
++			}
++			break;
++		case 16:
++			for (i=0; i < width/4; i++) {
++				fb_writel(bgx,p); p += 4;
++				fb_writel(bgx,p); p += 4;
++			}
++			if (width & 2) {
++				fb_writel(bgx,p); p += 4;
++			}
++			if (width & 1)
++				fb_writew(bgx,(u16*)p);
++			break;
++		case 8:
++			for (i=0; i < width/4; i++) {
++				fb_writel(bgx,p); p += 4;
++			}
++
++			if (width & 2) {
++				fb_writew(bgx,p); p += 2;
++			}
++			if (width & 1)
++				fb_writeb(bgx,(u8*)p);
++			break;
++
++		}
++		dst += dstbytes;
++	}
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++		   int srclinebytes, int bpp)
++{
++	int i;
++
++	while (height-- > 0) {
++		u32 *p = (u32 *)dst;
++		u32 *q = (u32 *)src;
++
++		switch (bpp) {
++
++		case 32:
++			for (i=0; i < width; i++)
++				fb_writel(*q++, p++);
++			break;
++		case 24:
++			for (i=0; i < (width*3/4); i++)
++				fb_writel(*q++, p++);
++			if ((width*3) % 4) {
++				if (width & 2) {
++					fb_writeb(*(u8*)q, (u8*)p);
++				} else if (width & 1) {
++					fb_writew(*(u16*)q, (u16*)p);
++					fb_writeb(*(u8*)((u16*)q+1),(u8*)((u16*)p+2));
++				}
++			}
++			break;
++		case 16:
++			for (i=0; i < width/4; i++) {
++				fb_writel(*q++, p++);
++				fb_writel(*q++, p++);
++			}
++			if (width & 2)
++				fb_writel(*q++, p++);
++			if (width & 1)
++				fb_writew(*(u16*)q, (u16*)p);
++			break;
++		case 8:
++			for (i=0; i < width/4; i++)
++				fb_writel(*q++, p++);
++
++			if (width & 2) {
++				fb_writew(*(u16*)q, (u16*)p);
++				q = (u32*) ((u16*)q + 1);
++				p = (u32*) ((u16*)p + 1);
++			}
++			if (width & 1)
++				fb_writeb(*(u8*)q, (u8*)p);
++			break;
++		}
++
++		dst += linebytes;
++		src += srclinebytes;
++	}
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++		       int width)
++{
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	int d  = sy * info->fix.line_length + sx * bytespp;
++	int ds = (sy * info->var.xres + sx) * bytespp;
++
++	fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++		    height, width, info->fix.line_length, info->var.xres * bytespp,
++		    info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++		    int height, int width)
++{
++	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++	struct fbcon_ops *ops = info->fbcon_par;
++	u8 *dst;
++	int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++	transparent = (vc->vc_decor.bg_color == bg_color);
++	sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++	sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++	height *= vc->vc_font.height;
++	width *= vc->vc_font.width;
++
++	/* Don't paint the background image if console is blanked */
++	if (transparent && !ops->blank_state) {
++		decorfill(info, sy, sx, height, width);
++	} else {
++		dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++			     sx * ((info->var.bits_per_pixel + 7) >> 3));
++		decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++			  info->var.bits_per_pixel);
++	}
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++			    int bottom_only)
++{
++	unsigned int tw = vc->vc_cols*vc->vc_font.width;
++	unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++	if (!bottom_only) {
++		/* top margin */
++		decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++		/* left margin */
++		decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++		/* right margin */
++		decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th, 
++			   info->var.xres - vc->vc_decor.tx - tw);
++	}
++	decorfill(info, vc->vc_decor.ty + th, 0, 
++		   info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, 
++			   int sx, int dx, int width)
++{
++	u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++	u16 *s = d + (dx - sx);
++	u16 *start = d;
++	u16 *ls = d;
++	u16 *le = d + width;
++	u16 c;
++	int x = dx;
++	u16 attr = 1;
++
++	do {
++		c = scr_readw(d);
++		if (attr != (c & 0xff00)) {
++			attr = c & 0xff00;
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start;
++				start = d;
++			}
++		}
++		if (s >= ls && s < le && c == scr_readw(s)) {
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start + 1;
++				start = d + 1;
++			} else {
++				x++;
++				start++;
++			}
++		}
++		s++;
++		d++;
++	} while (d < le);
++	if (d > start)
++		fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++	if (blank) {
++		decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++			  info->fix.line_length, 0, info->var.bits_per_pixel);
++	} else {
++		update_screen(vc);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++}
++
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index f447734..da50d61 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -79,6 +79,7 @@
+ #include <asm/irq.h>
+ 
+ #include "fbcon.h"
++#include "../console/fbcondecor.h"
+ 
+ #ifdef FBCONDEBUG
+ #  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -94,7 +95,7 @@ enum {
+ 
+ static struct display fb_display[MAX_NR_CONSOLES];
+ 
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+ 
+ static int logo_lines;
+@@ -286,7 +287,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ 		!vt_force_oops_output(vc);
+ }
+ 
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ 	      u16 c, int is_fg)
+ {
+ 	int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ 		info_idx = -1;
+ 	} else {
+ 		fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++		fbcon_decor_init();
++#endif
+ 	}
+ 
+ 	return err;
+@@ -1007,6 +1011,12 @@ static const char *fbcon_startup(void)
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
++
++	if (fbcon_decor_active(info, vc)) {
++		cols = vc->vc_decor.twidth / vc->vc_font.width;
++		rows = vc->vc_decor.theight / vc->vc_font.height;
++	}
++
+ 	vc_resize(vc, cols, rows);
+ 
+ 	DPRINTK("mode:   %s\n", info->fix.id);
+@@ -1036,7 +1046,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	cap = info->flags;
+ 
+ 	if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+-	    (info->fix.type == FB_TYPE_TEXT))
++	    (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ 		logo = 0;
+ 
+ 	if (var_to_display(p, &info->var, info))
+@@ -1260,6 +1270,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ 		fbcon_clear_margins(vc, 0);
+ 	}
+ 
++ 	if (fbcon_decor_active(info, vc)) {
++ 		fbcon_decor_clear(vc, info, sy, sx, height, width);
++ 		return;
++ 	}
++
+ 	/* Split blits that cross physical y_wrap boundary */
+ 
+ 	y_break = p->vrows - p->yscroll;
+@@ -1279,10 +1294,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ 	struct display *p = &fb_display[vc->vc_num];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+-			   get_color(vc, info, scr_readw(s), 1),
+-			   get_color(vc, info, scr_readw(s), 0));
++	if (!fbcon_is_inactive(vc, info)) {
++
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++		else
++			ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++				   get_color(vc, info, scr_readw(s), 1),
++				   get_color(vc, info, scr_readw(s), 0));
++	}
+ }
+ 
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1298,8 +1318,13 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ 	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->clear_margins(vc, info, bottom_only);
++	if (!fbcon_is_inactive(vc, info)) {
++	 	if (fbcon_decor_active(info, vc)) {
++	 		fbcon_decor_clear_margins(vc, info, bottom_only);
++ 		} else {
++			ops->clear_margins(vc, info, bottom_only);
++		}
++	}
+ }
+ 
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1819,7 +1844,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (softback_top)
+ 			fbcon_softback_note(vc, t, count);
+-		if (logo_shown >= 0)
++		if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ 			goto redraw_up;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+@@ -1912,6 +1937,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_down;
++		if (fbcon_decor_active(info, vc))
++			goto redraw_down;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2060,6 +2087,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ 		}
+ 		return;
+ 	}
++
++	if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ 		/* must use slower redraw bmove to keep background pic intact */
++ 		fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ 		return;
++ 	}
++
+ 	ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ 		   height, width);
+ }
+@@ -2130,8 +2164,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ 	var.yres = virt_h * virt_fh;
+ 	x_diff = info->var.xres - var.xres;
+ 	y_diff = info->var.yres - var.yres;
+-	if (x_diff < 0 || x_diff > virt_fw ||
+-	    y_diff < 0 || y_diff > virt_fh) {
++	if ((x_diff < 0 || x_diff > virt_fw ||
++		y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ 		const struct fb_videomode *mode;
+ 
+ 		DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2167,6 +2201,21 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+ 	ops = info->fbcon_par;
++	prev_console = ops->currcon;
++	if (prev_console != -1)
++		old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++	if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++		if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++			/* Clear the screen to avoid displaying funky colors during
++			 * palette updates. */
++			memset((u8*)info->screen_base + info->fix.line_length * info->var.yoffset,
++			       0, info->var.yres * info->fix.line_length);
++		}
++	}
++#endif
+ 
+ 	if (softback_top) {
+ 		if (softback_lines)
+@@ -2185,9 +2234,6 @@ static int fbcon_switch(struct vc_data *vc)
+ 		logo_shown = FBCON_LOGO_CANSHOW;
+ 	}
+ 
+-	prev_console = ops->currcon;
+-	if (prev_console != -1)
+-		old_info = registered_fb[con2fb_map[prev_console]];
+ 	/*
+ 	 * FIXME: If we have multiple fbdev's loaded, we need to
+ 	 * update all info->currcon.  Perhaps, we can place this
+@@ -2231,6 +2277,18 @@ static int fbcon_switch(struct vc_data *vc)
+ 			fbcon_del_cursor_timer(old_info);
+ 	}
+ 
++	if (fbcon_decor_active_vc(vc)) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (!vc_curr->vc_decor.theme ||
++			strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++			(fbcon_decor_active_nores(info, vc_curr) &&
++			 !fbcon_decor_active(info, vc_curr))) {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++	}
++
+ 	if (fbcon_is_inactive(vc, info) ||
+ 	    ops->blank_state != FB_BLANK_UNBLANK)
+ 		fbcon_del_cursor_timer(info);
+@@ -2339,15 +2397,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ 		}
+ 	}
+ 
+- 	if (!fbcon_is_inactive(vc, info)) {
++	if (!fbcon_is_inactive(vc, info)) {
+ 		if (ops->blank_state != blank) {
+ 			ops->blank_state = blank;
+ 			fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ 			ops->cursor_flash = (!blank);
+ 
+-			if (!(info->flags & FBINFO_MISC_USEREVENT))
+-				if (fb_blank(info, blank))
+-					fbcon_generic_blank(vc, info, blank);
++			if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++				if (fb_blank(info, blank)) {
++					if (fbcon_decor_active(info, vc))
++						fbcon_decor_blank(vc, info, blank);
++					else
++						fbcon_generic_blank(vc, info, blank);
++				}
++			}
+ 		}
+ 
+ 		if (!blank)
+@@ -2522,13 +2585,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ 	}
+ 
+ 	if (resize) {
++		/* reset wrap/pan */
+ 		int cols, rows;
+ 
+ 		cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++		if (fbcon_decor_active(info, vc)) {
++			info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++			cols = vc->vc_decor.twidth;
++			rows = vc->vc_decor.theight;
++		}
+ 		cols /= w;
+ 		rows /= h;
++
+ 		vc_resize(vc, cols, rows);
++
+ 		if (CON_IS_VISIBLE(vc) && softback_buf)
+ 			fbcon_update_softback(vc);
+ 	} else if (CON_IS_VISIBLE(vc)
+@@ -2657,7 +2729,11 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ 	int i, j, k, depth;
+ 	u8 val;
+ 
+-	if (fbcon_is_inactive(vc, info))
++	if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++			|| vc->vc_num != fg_console
++#endif
++		)
+ 		return -EINVAL;
+ 
+ 	if (!CON_IS_VISIBLE(vc))
+@@ -2683,14 +2759,56 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ 	} else
+ 		fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+ 
+-	return fb_set_cmap(&palette_cmap, info);
++	if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++	    info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++		u16 *red, *green, *blue;
++		int minlen = min(min(info->var.red.length, info->var.green.length),
++				     info->var.blue.length);
++		int h;
++
++		struct fb_cmap cmap = {
++			.start = 0,
++			.len = (1 << minlen),
++			.red = NULL,
++			.green = NULL,
++			.blue = NULL,
++			.transp = NULL
++		};
++
++		red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++		if (!red)
++			goto out;
++
++		green = red + 256;
++		blue = green + 256;
++		cmap.red = red;
++		cmap.green = green;
++		cmap.blue = blue;
++
++		for (i = 0; i < cmap.len; i++) {
++			red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++		}
++
++		h = fb_set_cmap(&cmap, info);
++		fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++		kfree(red);
++
++		return h;
++
++	} else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		   info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++		fb_set_cmap(&info->bgdecor.cmap, info);
++
++out:	return fb_set_cmap(&palette_cmap, info);
+ }
+ 
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+ {
+ 	unsigned long p;
+ 	int line;
+-	
++
+ 	if (vc->vc_num != fg_console || !softback_lines)
+ 		return (u16 *) (vc->vc_origin + offset);
+ 	line = offset / vc->vc_size_row;
+@@ -2909,7 +3027,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		} else {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++
+ 		updatescrollmode(p, info, vc);
+ 		scrollback_max = 0;
+ 		scrollback_current = 0;
+@@ -2954,7 +3079,9 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		}
+ 	}
+ 
+ 	if (fg != -1)
+@@ -3596,6 +3723,7 @@ static void fbcon_exit(void)
+ 		}
+ 	}
+ 
++	fbcon_decor_exit();
+ 	fbcon_has_exited = 1;
+ }
+ 
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 0000000..babc8c5
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,555 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ *  Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootsplash" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++
++#include <asm/uaccess.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++static int initialized = 0;
++
++int fbcon_decor_call_helper(char* cmd, unsigned short vc)
++{
++	char *envp[] = {
++		"HOME=/",
++		"PATH=/sbin:/bin",
++		NULL
++	};
++
++	char tfb[5];
++	char tcons[5];
++	unsigned char fb = (int) con2fb_map[vc];
++
++	char *argv[] = {
++		fbcon_decor_path,
++		"2",
++		cmd,
++		tcons,
++		tfb,
++		vc_cons[vc].d->vc_decor.theme,
++		NULL
++	};
++
++	snprintf(tfb,5,"%d",fb);
++	snprintf(tcons,5,"%d",vc);
++
++	return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++	struct fb_info* info;
++
++	if (!vc->vc_decor.state)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	vc->vc_decor.state = 0;
++	vc_resize(vc, info->var.xres / vc->vc_font.width,
++		  info->var.yres / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num && redraw) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++	struct fb_info* info;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++	    info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++	    vc->vc_num == fg_console))
++		return -EINVAL;
++
++	vc->vc_decor.state = 1;
++	vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++		  vc->vc_decor.theight / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++	int ret;
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_lock();
++	if (!state)
++		ret = fbcon_decor_disable(vc, 1);
++	else
++		ret = fbcon_decor_enable(vc);
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++	*state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	char *tmp;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL || !cfg->twidth || !cfg->theight ||
++	    cfg->tx + cfg->twidth  > info->var.xres ||
++	    cfg->ty + cfg->theight > info->var.yres)
++		return -EINVAL;
++
++	len = strlen_user(cfg->theme);
++	if (!len || len > FBCON_DECOR_THEME_LEN)
++		return -EINVAL;
++	tmp = kmalloc(len, GFP_KERNEL);
++	if (!tmp)
++		return -ENOMEM;
++	if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++		return -EFAULT;
++	cfg->theme = tmp;
++	cfg->state = 0;
++
++	/* If this ioctl is a response to a request from kernel, the console sem
++	 * is already held; we also don't need to disable decor because either the
++	 * new config and background picture will be successfully loaded, and the
++	 * decor will stay on, or in case of a failure it'll be turned off in fbcon. */
++//	if (origin == FBCON_DECOR_IO_ORIG_USER) {
++		console_lock();
++		if (vc->vc_decor.state)
++			fbcon_decor_disable(vc, 1);
++//	}
++
++	if (vc->vc_decor.theme)
++		kfree(vc->vc_decor.theme);
++
++	vc->vc_decor = *cfg;
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++			 vc->vc_num, vc->vc_decor.theme);
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc, struct vc_decor *decor)
++{
++	char __user *tmp;
++
++	tmp = decor->theme;
++	*decor = vc->vc_decor;
++	decor->theme = tmp;
++
++	if (vc->vc_decor.theme) {
++		if (copy_to_user(tmp, vc->vc_decor.theme, strlen(vc->vc_decor.theme) + 1))
++			return -EFAULT;
++	} else
++		if (put_user(0, tmp))
++			return -EFAULT;
++
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	u8 *tmp;
++
++	if (vc->vc_num != fg_console)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	if (img->width != info->var.xres || img->height != info->var.yres) {
++		printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++		printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height, info->var.xres, info->var.yres);
++		return -EINVAL;
++	}
++
++	if (img->depth != info->var.bits_per_pixel) {
++		printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++		return -EINVAL;
++	}
++
++	if (img->depth == 8) {
++		if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++		    !img->cmap.blue)
++			return -EINVAL;
++
++		tmp = vmalloc(img->cmap.len * 3 * 2);
++		if (!tmp)
++			return -ENOMEM;
++
++		if (copy_from_user(tmp,
++			    	   (void __user*)img->cmap.red, (img->cmap.len << 1)) ||
++		    copy_from_user(tmp + (img->cmap.len << 1),
++			    	   (void __user*)img->cmap.green, (img->cmap.len << 1)) ||
++		    copy_from_user(tmp + (img->cmap.len << 2),
++			    	   (void __user*)img->cmap.blue, (img->cmap.len << 1))) {
++			vfree(tmp);
++			return -EFAULT;
++		}
++
++		img->cmap.transp = NULL;
++		img->cmap.red = (u16*)tmp;
++		img->cmap.green = img->cmap.red + img->cmap.len;
++		img->cmap.blue = img->cmap.green + img->cmap.len;
++	} else {
++		img->cmap.red = NULL;
++	}
++
++	len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++	/*
++	 * Allocate an additional byte so that we never go outside of the
++	 * buffer boundaries in the rendering functions in a 24 bpp mode.
++	 */
++	tmp = vmalloc(len + 1);
++
++	if (!tmp)
++		goto out;
++
++	if (copy_from_user(tmp, (void __user*)img->data, len))
++		goto out;
++
++	img->data = tmp;
++
++	/* If this ioctl is a response to a request from kernel, the console sem
++	 * is already held. */
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_lock();
++
++	if (info->bgdecor.data)
++		vfree((u8*)info->bgdecor.data);
++	if (info->bgdecor.cmap.red)
++		vfree(info->bgdecor.cmap.red);
++
++	info->bgdecor = *img;
++
++	if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	return 0;
++
++out:	if (img->cmap.red)
++		vfree(img->cmap.red);
++
++	if (tmp)
++		vfree(tmp);
++	return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++	struct fbcon_decor_iowrapper __user *wrapper = (void __user*) arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data, &wrapper->data);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC:
++	{
++		struct fb_image img;
++		if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++	case FBIOCONDECOR_SETCFG:
++	{
++		struct vc_decor cfg;
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++	case FBIOCONDECOR_GETCFG:
++	{
++		int rval;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++			return -EFAULT;
++		return rval;
++	}
++	case FBIOCONDECOR_SETSTATE:
++	{
++		unsigned int state = 0;
++		if (get_user(state, (unsigned int __user *)data))
++			return -EFAULT;
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++	case FBIOCONDECOR_GETSTATE:
++	{
++		unsigned int state = 0;
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		return put_user(state, (unsigned int __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
++
++	struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	compat_uptr_t data_compat = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++                       sizeof(struct fbcon_decor_iowrapper32)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data_compat, &wrapper->data);
++	data = compat_ptr(data_compat);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC32:
++	{
++		struct fb_image32 img_compat;
++		struct fb_image img;
++
++		if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++			return -EFAULT;
++
++		fb_image_from_compat(img, img_compat);
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++
++	case FBIOCONDECOR_SETCFG32:
++	{
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++
++		vc_decor_from_compat(cfg, cfg_compat);
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++
++	case FBIOCONDECOR_GETCFG32:
++	{
++		int rval;
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		cfg.theme = compat_ptr(cfg_compat.theme);
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		vc_decor_to_compat(cfg_compat, cfg);
++
++		if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		return rval;
++	}
++
++	case FBIOCONDECOR_SETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		if (get_user(state_compat, (compat_uint_t __user *)data))
++			return -EFAULT;
++
++		state = (unsigned int)state_compat;
++
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++
++	case FBIOCONDECOR_GETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		state_compat = (compat_uint_t)state;
++
++		return put_user(state_compat, (compat_uint_t __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++#else
++  #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = fbcon_decor_ioctl,
++	.compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++	.minor = MISC_DYNAMIC_MINOR,
++	.name = "fbcondecor",
++	.fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++	int i;
++
++	for (i = 0; i < num_registered_fb; i++) {
++		registered_fb[i]->bgdecor.data = NULL;
++		registered_fb[i]->bgdecor.cmap.red = NULL;
++	}
++
++	for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++		vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++						vc_cons[i].d->vc_decor.theight = 0;
++		vc_cons[i].d->vc_decor.theme = NULL;
++	}
++
++	return;
++}
++
++int fbcon_decor_init(void)
++{
++	int i;
++
++	fbcon_decor_reset();
++
++	if (initialized)
++		return 0;
++
++	i = misc_register(&fbcon_decor_dev);
++	if (i) {
++		printk(KERN_ERR "fbcondecor: failed to register device\n");
++		return i;
++	}
++
++	fbcon_decor_call_helper("init", 0);
++	initialized = 1;
++	return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++	fbcon_decor_reset();
++	return 0;
++}
++
++EXPORT_SYMBOL(fbcon_decor_path);
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 0000000..3b3724b
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,78 @@
++/* 
++ *  linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char* cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme) 
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x,y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x,y) (fbcon_decor_active_nores(x,y) &&		\
++			      x->bgdecor.width == x->var.xres && 	\
++			      x->bgdecor.height == x->var.yres &&	\
++			      x->bgdecor.depth == x->var.bits_per_pixel)
++
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char* cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x,y) (0)
++#define fbcon_decor_active(x,y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index e1f4727..2952e33 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1204,7 +1204,6 @@ config FB_MATROX
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+-	select FB_TILEBLITTING
+ 	select FB_MACMODES if PPC_PMAC
+ 	---help---
+ 	  Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index f89245b..05e036c 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+     0x0000, 0xaaaa
+ };
+@@ -249,14 +251,17 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ 			if (transp)
+ 				htransp = *transp++;
+ 			if (info->fbops->fb_setcolreg(start++,
+-						      hred, hgreen, hblue,
++						      hred, hgreen, hblue, 
+ 						      htransp, info))
+ 				break;
+ 		}
+ 	}
+-	if (rc == 0)
++	if (rc == 0) {
+ 		fb_copy_cmap(cmap, &info->cmap);
+-
++		if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		    info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++			fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++	}
+ 	return rc;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index b6d5008..d6703f2 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1250,15 +1250,6 @@ struct fb_fix_screeninfo32 {
+ 	u16			reserved[3];
+ };
+ 
+-struct fb_cmap32 {
+-	u32			start;
+-	u32			len;
+-	compat_caddr_t	red;
+-	compat_caddr_t	green;
+-	compat_caddr_t	blue;
+-	compat_caddr_t	transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ 			  unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 0000000..04b8d80
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	char* theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 7f0c329..98f5d60 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -19,6 +19,7 @@
+ struct vt_struct;
+ 
+ #define NPAR 16
++#include <linux/console_decor.h>
+ 
+ struct vc_data {
+ 	struct tty_port port;			/* Upper level data */
+@@ -107,6 +108,8 @@ struct vc_data {
+ 	unsigned long	vc_uni_pagedir;
+ 	unsigned long	*vc_uni_pagedir_loc;  /* [!] Location of uni_pagedir variable for this console */
+ 	bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++	struct vc_decor vc_decor;
+ 	/* additional information is in vt_kern.h */
+ };
+ 
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index fe6ac95..1e36b03 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -219,6 +219,34 @@ struct fb_deferred_io {
+ };
+ #endif
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++	__u32 dx;			/* Where to place image */
++	__u32 dy;
++	__u32 width;			/* Size of image */
++	__u32 height;
++	__u32 fg_color;			/* Only used when a mono bitmap */
++	__u32 bg_color;
++	__u8  depth;			/* Depth of the image */
++	const compat_uptr_t data;	/* Pointer to image data */
++	struct fb_cmap32 cmap;		/* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++	(to).dx       = (from).dx; \
++	(to).dy       = (from).dy; \
++	(to).width    = (from).width; \
++	(to).height   = (from).height; \
++	(to).fg_color = (from).fg_color; \
++	(to).bg_color = (from).bg_color; \
++	(to).depth    = (from).depth; \
++	(to).data     = compat_ptr((from).data); \
++	fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+  * Frame buffer operations
+  *
+@@ -489,6 +517,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED	1
+ 	u32 state;			/* Hardware state i.e suspend */
+ 	void *fbcon_par;                /* fbcon use-only private area */
++
++	struct fb_image bgdecor;
++
+ 	/* From here on everything is device dependent */
+ 	void *par;
+ 	/* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index fb795c3..dc77a03 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -8,6 +8,25 @@
+ 
+ #define FB_MAX			32	/* sufficient for now */
+ 
++struct fbcon_decor_iowrapper
++{
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32
++{
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+    0x46 is 'F'								*/
+ #define FBIOGET_VSCREENINFO	0x4600
+@@ -35,6 +54,25 @@
+ #define FBIOGET_DISPINFO        0x4618
+ #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
+ 
++#define FBIOCONDECOR_SETCFG	_IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG	_IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE	_IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC 	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32	_IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32	_IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32	_IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN		128	/* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL	0	/* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER	1	/* User ioctl origin */
++ 
+ #define FB_TYPE_PACKED_PIXELS		0	/* Packed Pixels	*/
+ #define FB_TYPE_PLANES			1	/* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES	2	/* Interleaved planes	*/
+@@ -277,6 +315,29 @@ struct fb_var_screeninfo {
+ 	__u32 reserved[4];		/* Reserved for future compatibility */
+ };
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++	__u32 start;
++	__u32 len;			/* Number of entries */
++	compat_uptr_t red;		/* Red values	*/
++	compat_uptr_t green;
++	compat_uptr_t blue;
++	compat_uptr_t transp;		/* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++	(to).start  = (from).start; \
++	(to).len    = (from).len; \
++	(to).red    = compat_ptr((from).red); \
++	(to).green  = compat_ptr((from).green); \
++	(to).blue   = compat_ptr((from).blue); \
++	(to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ 	__u32 start;			/* First entry	*/
+ 	__u32 len;			/* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 74f5b58..6386ab0 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -146,6 +146,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+ 
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -255,6 +259,15 @@ static struct ctl_table sysctl_base_table[] = {
+ 		.mode		= 0555,
+ 		.child		= dev_table,
+ 	},
++#ifdef CONFIG_FB_CON_DECOR
++	{
++		.procname	= "fbcondecor",
++		.data		= &fbcon_decor_path,
++		.maxlen		= KMOD_PATH_LEN,
++		.mode		= 0644,
++		.proc_handler	= &proc_dostring,
++	},
++#endif
+ 	{ }
+ };
+ 
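
For reference, a minimal userspace sketch of driving the ioctls above. This is
not part of the patch; it assumes udev creates /dev/fbcondecor for the
"fbcondecor" miscdevice registered by fbcon_decor_init(), and that the patched
include/uapi/linux/fb.h (FBIOCONDECOR_* plus struct fbcon_decor_iowrapper) is
the one installed in the build environment:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>	/* patched header with FBIOCONDECOR_* */

int main(void)
{
	unsigned int state = 1;		/* 1 = decorations on, 0 = off */
	struct fbcon_decor_iowrapper wrap = {
		.vc     = 0,		/* virtual console number (tty1) */
		.origin = FBCON_DECOR_IO_ORIG_USER,
		.data   = &state,
	};
	int fd = open("/dev/fbcondecor", O_RDWR);	/* assumed node name */

	if (fd < 0) {
		perror("open /dev/fbcondecor");
		return 1;
	}
	if (ioctl(fd, FBIOCONDECOR_SETSTATE, &wrap) < 0)
		perror("FBIOCONDECOR_SETSTATE");
	if (ioctl(fd, FBIOCONDECOR_GETSTATE, &wrap) == 0)
		printf("decor state on vc%u: %u\n", (unsigned)wrap.vc, state);
	close(fd);
	return 0;
}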

diff --git a/5000_enable-additional-cpu-optimizations-for-gcc.patch b/5000_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..f7ab6f0
--- /dev/null
+++ b/5000_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,327 @@
+This patch has been tested on and known to work with kernel versions from 3.2
+up to the latest git version (pulled on 12/14/2013).
+
+This patch will expand the number of microarchitectures to include new
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 2nd Gen Core
+i3/i5/i7 (Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), and Intel 4th
+Gen Core i3/i5/i7 (Haswell). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version <4.9
+
+---
+diff -uprN a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
+--- a/arch/x86/include/asm/module.h	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/include/asm/module.h	2013-12-15 06:21:24.351122516 -0500
+@@ -15,6 +15,16 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MCOREI7
++#define MODULE_PROC_FAMILY "COREI7 "
++#elif defined CONFIG_MCOREI7AVX
++#define MODULE_PROC_FAMILY "COREI7AVX "
++#elif defined CONFIG_MCOREAVXI
++#define MODULE_PROC_FAMILY "COREAVXI "
++#elif defined CONFIG_MCOREAVX2
++#define MODULE_PROC_FAMILY "COREAVX2 "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +43,18 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+diff -uprN a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+--- a/arch/x86/Kconfig.cpu	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Kconfig.cpu	2013-12-15 06:21:24.351122516 -0500
+@@ -139,7 +139,7 @@ config MPENTIUM4
+ 
+ 
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -147,7 +147,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -155,12 +155,55 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Barcelona and newer processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Bobcat processors.
++
++	  Enables -march=btver1
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Jaguar processors.
++
++	  Enables -march=btver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -251,8 +294,17 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +312,40 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MCOREI7
++	bool "Intel Core i7"
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for the Intel Nehalem platform. Intel Nehalem processors
++	  include Core i3, i5, i7, Xeon: 34xx, 35xx, 55xx, 56xx, 75xx processors.
++
++	  Enables -march=corei7
++
++config MCOREI7AVX
++	bool "Intel Core 2nd Gen AVX"
++	---help---
++
++	  Select this for 2nd Gen Core processors including Sandy Bridge.
++
++	  Enables -march=corei7-avx
++
++config MCOREAVXI
++	bool "Intel Core 3rd Gen AVX"
++	---help---
++
++	  Select this for 3rd Gen Core processors including Ivy Bridge.
++
++	  Enables -march=core-avx-i
++
++config MCOREAVX2
++	bool "Intel Core AVX2"
++	---help---
++
++	  Select this for AVX2 enabled processors including Haswell.
++
++	  Enables -march=core-avx2
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -276,6 +354,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU (e.g. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -300,7 +391,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MVIAC7 || X86_GENERIC || MNATIVE || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -331,11 +422,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || MNATIVE || X86_GENERIC || MK8 || MK7 || MK10 || MBARCELONA || MEFFICEON || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -363,17 +454,17 @@ config X86_P6_NOP
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MATOM) || X86_64 || MNATIVE
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++	depends on X86_PAE || X86_64 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+diff -uprN a/arch/x86/Makefile b/arch/x86/Makefile
+--- a/arch/x86/Makefile	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile	2013-12-15 06:21:24.354455723 -0500
+@@ -61,11 +61,26 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MCOREI7) += \
++                $(call cc-option,-march=corei7,$(call cc-option,-mtune=corei7))
++        cflags-$(CONFIG_MCOREI7AVX) += \
++                $(call cc-option,-march=corei7-avx,$(call cc-option,-mtune=corei7-avx))
++        cflags-$(CONFIG_MCOREAVXI) += \
++                $(call cc-option,-march=core-avx-i,$(call cc-option,-mtune=core-avx-i))
++        cflags-$(CONFIG_MCOREAVX2) += \
++                $(call cc-option,-march=core-avx2,$(call cc-option,-mtune=core-avx2))
+ 	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+ 		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+diff -uprN a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu
+--- a/arch/x86/Makefile_32.cpu	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu	2013-12-15 06:21:24.354455723 -0500
+@@ -23,7 +23,14 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,6 +39,10 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
++cflags-$(CONFIG_MCOREI7)	+= -march=i686 $(call tune,corei7)
++cflags-$(CONFIG_MCOREI7AVX)	+= -march=i686 $(call tune,corei7-avx)
++cflags-$(CONFIG_MCOREAVXI)	+= -march=i686 $(call tune,core-avx-i)
++cflags-$(CONFIG_MCOREAVX2)	+= -march=i686 $(call tune,core-avx2)
+ cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+ 	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
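
As an aside on what -march=native autodetects: gcc's x86 CPU probing is also
exposed to programs through builtins (available since gcc 4.8, to my
knowledge). A hypothetical helper, unrelated to the patch itself, that can
help cross-check which CONFIG_M* option fits a given machine:

#include <stdio.h>

int main(void)
{
	__builtin_cpu_init();	/* initialize the CPU model/feature data */
	printf("sse4.2: %d\n", __builtin_cpu_supports("sse4.2"));
	printf("avx:    %d\n", __builtin_cpu_supports("avx"));
	printf("avx2:   %d\n", __builtin_cpu_supports("avx2"));	/* Haswell+ */
	printf("corei7: %d\n", __builtin_cpu_is("corei7"));	/* Nehalem class */
	return 0;
}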

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
new file mode 100644
index 0000000..f931f75
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -0,0 +1,387 @@
+WARNING - this version of the patch works with version 4.9+ of gcc and with
+kernel version 3.15.x+ and should NOT be applied when compiling on older
+versions due to name changes of the flags with the 4.9 release of gcc.
+Use the older version of this patch hosted on the same github for older
+versions of gcc. For example:
+
+corei7 --> nehalem
+corei7-avx --> sandybridge
+core-avx-i --> ivybridge
+core-avx2 --> haswell
+
+For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
+
+It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
+Note that upstream is using the deprecated 'march=atom' flag when I believe it
+should use the newer 'march=bonnell' flag for atom processors.
+
+I have made that change to this patch set as well.  See the following kernel
+bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+
+This patch will expand the number of microarchitectures to include new
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 1.5 Gen Core
+i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7 (Sandybridge), Intel 3rd Gen
+Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core i3/i5/i7 (Haswell), and Intel 5th
+Gen Core i3/i5/i7 (Broadwell). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+--- a/arch/x86/include/asm/module.h	2014-08-03 18:25:02.000000000 -0400
++++ b/arch/x86/include/asm/module.h	2014-09-13 09:37:16.721385247 -0400
+@@ -15,6 +15,20 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +47,20 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2014-08-03 18:25:02.000000000 -0400
++++ b/arch/x86/Kconfig.cpu	2014-09-13 09:37:16.721385247 -0400
+@@ -137,9 +137,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -147,7 +146,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -155,12 +154,62 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Barcelona and newer processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Bobcat processors.
++
++	  Enables -march=btver1
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Jaguar processors.
++
++	  Enables -march=btver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -251,8 +300,17 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +318,55 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	---help---
++
++	  Select this for the Intel Westmere (formerly Nehalem-C) family.
++
++	  Enables -march=westmere
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -276,6 +375,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native 
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -300,7 +412,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || BROADWELL || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -331,11 +443,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -359,17 +471,17 @@ config X86_P6_NOP
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++	depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2014-08-03 18:25:02.000000000 -0400
++++ b/arch/x86/Makefile	2014-09-13 09:37:16.721385247 -0400
+@@ -92,13 +92,33 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2014-08-03 18:25:02.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu	2014-09-13 09:37:16.721385247 -0400
+@@ -23,7 +23,15 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +40,14 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
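
Illustrative only, not part of either patch: the gcc version gate that
separates the two files can be probed at compile time with gcc's predefined
version macros, e.g. from a build script:

#include <stdio.h>

int main(void)
{
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 9)
	/* 4.9+ renamed the flags: corei7 -> nehalem, atom -> bonnell, ... */
	puts("gcc >= 4.9: apply the 5010 patch (-march=nehalem etc.)");
#else
	puts("gcc < 4.9: apply the 5000 patch (-march=corei7 etc.)");
#endif
	return 0;
}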


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
  2015-06-23 12:48 [gentoo-commits] proj/linux-patches:master " Mike Pagano
@ 2015-03-20  0:23 ` Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-03-20  0:23 UTC (permalink / raw
  To: gentoo-commits

commit:     7940d2a9fd1c415d391b9878ef3e6e18294243c8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 20 00:23:37 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 20 00:23:37 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7940d2a9

Update the distro kernel patch to add an option to the Gentoo menu that enables CGROUPS for cgroup, IPC_NS for ipc-sandbox, and NET_NS for network-sandbox.

 4567_distro-Gentoo-Kconfig.patch | 39 +++++++++++++++++++++++++++++++--------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 652e2a7..c7af596 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,5 +1,5 @@
---- a/Kconfig	2014-04-02 09:45:05.389224541 -0400
-+++ b/Kconfig	2014-04-02 09:45:39.269224273 -0400
+--- a/Kconfig
++++ b/Kconfig
 @@ -8,4 +8,6 @@ config SRCARCH
  	string
  	option env="SRCARCH"
@@ -7,9 +7,9 @@
 +source "distro/Kconfig"
 +
  source "arch/$SRCARCH/Kconfig"
---- 	1969-12-31 19:00:00.000000000 -0500
-+++ b/distro/Kconfig	2014-04-02 09:57:03.539218861 -0400
-@@ -0,0 +1,108 @@
+--- /dev/null
++++ b/distro/Kconfig
+@@ -0,0 +1,131 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -30,7 +30,7 @@
 +
 +	depends on GENTOO_LINUX
 +	default y if GENTOO_LINUX
-+	
++
 +	select DEVTMPFS
 +	select TMPFS
 +
@@ -51,7 +51,29 @@
 +		boot process; if not available, it causes sysfs and udev to malfunction.
 +
 +		To ensure Gentoo Linux boots, it is best to leave this setting enabled;
-+		if you run a custom setup, you could consider whether to disable this. 
++		if you run a custom setup, you could consider whether to disable this.
++
++config GENTOO_LINUX_PORTAGE
++	bool "Select options required by Portage features"
++
++	depends on GENTOO_LINUX
++	default y if GENTOO_LINUX
++
++	select CGROUPS
++	select NAMESPACES
++	select IPC_NS
++	select NET_NS
++
++	help
++		This enables options required by various Portage FEATURES.
++		Currently this selects:
++
++		CGROUPS     (required for FEATURES=cgroup)
++		IPC_NS      (required for FEATURES=ipc-sandbox)
++		NET_NS      (required for FEATURES=network-sandbox)
++
++		It is highly recommended that you leave this enabled as these FEATURES
++		are, or will soon be, enabled by default.
 +
 +menu "Support for init systems, system and service managers"
 +	visible if GENTOO_LINUX
@@ -87,12 +109,13 @@
 +	select AUTOFS4_FS
 +	select BLK_DEV_BSG
 +	select CGROUPS
++	select DEVPTS_MULTIPLE_INSTANCES
 +	select EPOLL
 +	select FANOTIFY
 +	select FHANDLE
 +	select INOTIFY_USER
 +	select NET
-+	select NET_NS 
++	select NET_NS
 +	select PROC_FS
 +	select SIGNALFD
 +	select SYSFS
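
For context, a hypothetical sketch (not from the patch) of what the sandbox
FEATURES rely on: ipc-sandbox and network-sandbox detach the build into
private namespaces, which is why IPC_NS and NET_NS are selected above.
Without those config options the unshare() calls fail:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)	/* needs CAP_SYS_ADMIN, i.e. run as root */
{
	if (unshare(CLONE_NEWIPC))	/* fails without CONFIG_IPC_NS */
		perror("unshare(CLONE_NEWIPC)");
	if (unshare(CLONE_NEWNET))	/* fails without CONFIG_NET_NS */
		perror("unshare(CLONE_NEWNET)");
	return 0;
}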


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-03-21 20:00 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-03-21 20:00 UTC (permalink / raw
  To: gentoo-commits

commit:     18f6a4706fd8339bf905e5a36d5fcff525915340
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 21 20:00:01 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 21 20:00:01 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=18f6a470

Update gcc >= 4.9 optimization patch. See bug #544028.

 ...-additional-cpu-optimizations-for-gcc-4.9.patch | 67 +++++++++++++---------
 1 file changed, 41 insertions(+), 26 deletions(-)
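
As a side note: the same gcc 4.9 release taught __builtin_cpu_is() the
"silvermont" model name, so a hypothetical probe (again not part of the
patch) for the low-power Atom successor this update adds support for
(MSILVERMONT, see the diff below) could look like:

#include <stdio.h>

int main(void)
{
	__builtin_cpu_init();
	if (__builtin_cpu_is("silvermont"))	/* gcc >= 4.9 only */
		puts("Silvermont: CONFIG_MSILVERMONT / -march=silvermont fits");
	else
		puts("not a Silvermont CPU");
	return 0;
}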

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
index f931f75..c4efd06 100644
--- a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -18,13 +18,14 @@ should use the newer 'march=bonnell' flag for atom processors.
 I have made that change to this patch set as well.  See the following kernel
 bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
 
-This patch will expand the number of microarchitectures to include new
+This patch will expand the number of microarchitectures to include newer
 processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
 14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
 Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 1.5 Gen Core
 i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7 (Sandybridge), Intel 3rd Gen
-Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core i3/i5/i7 (Haswell), and Intel 5th
-Gen Core i3/i5/i7 (Broadwell). It also offers the compiler the 'native' flag.
+Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core i3/i5/i7 (Haswell), Intel 5th
+Gen Core i3/i5/i7 (Broadwell), and the low power Silvermont series of Atom
+processors (Silvermont). It also offers the compiler the 'native' flag.
 
 Small but real speed increases are measurable using a make endpoint comparing
 a generic kernel to one built with one of the respective microarchs.
@@ -36,9 +37,9 @@ REQUIREMENTS
 linux version >=3.15
 gcc version >=4.9
 
---- a/arch/x86/include/asm/module.h	2014-08-03 18:25:02.000000000 -0400
-+++ b/arch/x86/include/asm/module.h	2014-09-13 09:37:16.721385247 -0400
-@@ -15,6 +15,20 @@
+--- a/arch/x86/include/asm/module.h	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/include/asm/module.h	2015-03-07 03:27:32.556672424 -0500
+@@ -15,6 +15,22 @@
  #define MODULE_PROC_FAMILY "586MMX "
  #elif defined CONFIG_MCORE2
  #define MODULE_PROC_FAMILY "CORE2 "
@@ -48,6 +49,8 @@ gcc version >=4.9
 +#define MODULE_PROC_FAMILY "NEHALEM "
 +#elif defined CONFIG_MWESTMERE
 +#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
 +#elif defined CONFIG_MSANDYBRIDGE
 +#define MODULE_PROC_FAMILY "SANDYBRIDGE "
 +#elif defined CONFIG_MIVYBRIDGE
@@ -59,7 +62,7 @@ gcc version >=4.9
  #elif defined CONFIG_MATOM
  #define MODULE_PROC_FAMILY "ATOM "
  #elif defined CONFIG_M686
-@@ -33,6 +47,20 @@
+@@ -33,6 +49,20 @@
  #define MODULE_PROC_FAMILY "K7 "
  #elif defined CONFIG_MK8
  #define MODULE_PROC_FAMILY "K8 "
@@ -80,8 +83,8 @@ gcc version >=4.9
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu	2014-08-03 18:25:02.000000000 -0400
-+++ b/arch/x86/Kconfig.cpu	2014-09-13 09:37:16.721385247 -0400
+--- a/arch/x86/Kconfig.cpu	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Kconfig.cpu	2015-03-07 03:32:14.337713226 -0500
 @@ -137,9 +137,8 @@ config MPENTIUM4
  		-Paxville
  		-Dempsey
@@ -185,7 +188,7 @@ gcc version >=4.9
  	---help---
  
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -260,14 +318,55 @@ config MCORE2
+@@ -260,14 +318,63 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
  
@@ -213,6 +216,14 @@ gcc version >=4.9
 +
 +	  Enables -march=westmere
 +
++config MSILVERMONT
++	bool "Intel Silvermont"
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
 +config MSANDYBRIDGE
 +	bool "Intel Sandy Bridge"
 +	---help---
@@ -247,7 +258,7 @@ gcc version >=4.9
  
  config GENERIC_CPU
  	bool "Generic-x86-64"
-@@ -276,6 +375,19 @@ config GENERIC_CPU
+@@ -276,6 +383,19 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
  
@@ -267,53 +278,53 @@ gcc version >=4.9
  endchoice
  
  config X86_GENERIC
-@@ -300,7 +412,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -300,7 +420,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
  	int
  	default "7" if MPENTIUM4 || MPSC
 -	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || BROADWELL || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || BROADWELL || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
  	default "4" if MELAN || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
  
-@@ -331,11 +443,11 @@ config X86_ALIGNMENT_16
+@@ -331,11 +451,11 @@ config X86_ALIGNMENT_16
  
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE
  
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MATOM || MNATIVE
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MATOM || MNATIVE
  
  config X86_USE_3DNOW
  	def_bool y
-@@ -359,17 +471,17 @@ config X86_P6_NOP
+@@ -359,17 +479,17 @@ config X86_P6_NOP
  
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM) || X86_64
  
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
-+	depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
++	depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
  
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
  
  config X86_MINIMUM_CPU_FAMILY
  	int
---- a/arch/x86/Makefile	2014-08-03 18:25:02.000000000 -0400
-+++ b/arch/x86/Makefile	2014-09-13 09:37:16.721385247 -0400
-@@ -92,13 +92,33 @@ else
+--- a/arch/x86/Makefile	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile	2015-03-07 03:33:27.650843211 -0500
+@@ -92,13 +92,35 @@ else
  	KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
  
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
@@ -337,6 +348,8 @@ gcc version >=4.9
 +                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
 +        cflags-$(CONFIG_MWESTMERE) += \
 +                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
 +        cflags-$(CONFIG_MSANDYBRIDGE) += \
 +                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
 +        cflags-$(CONFIG_MIVYBRIDGE) += \
@@ -350,8 +363,8 @@ gcc version >=4.9
          cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
  
---- a/arch/x86/Makefile_32.cpu	2014-08-03 18:25:02.000000000 -0400
-+++ b/arch/x86/Makefile_32.cpu	2014-09-13 09:37:16.721385247 -0400
+--- a/arch/x86/Makefile_32.cpu	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu	2015-03-07 03:34:15.203586024 -0500
 @@ -23,7 +23,15 @@ cflags-$(CONFIG_MK6)		+= -march=k6
  # Please note, that patches that add -march=athlon-xp and friends are pointless.
  # They make zero difference whatsosever to performance at this time.
@@ -368,7 +381,7 @@ gcc version >=4.9
  cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
  cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
  cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -32,8 +40,14 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+@@ -32,8 +40,15 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
  cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
  cflags-$(CONFIG_MVIAC7)		+= -march=i686
  cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
@@ -376,6 +389,7 @@ gcc version >=4.9
 -	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
 +cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
 +cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
 +cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
 +cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
 +cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
@@ -385,3 +399,4 @@ gcc version >=4.9
  
  # AMD Elan support
  cflags-$(CONFIG_MELAN)		+= -march=i486
+
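
The MODULE_PROC_FAMILY string selected by the hunks above is pasted into each module's vermagic, which is how modprobe keeps a module built for one microarchitecture off a kernel built for another. A minimal userspace sketch of that macro plumbing, with the CONFIG_MSILVERMONT define and the release string assumed for illustration (the real definitions live in arch/x86/include/asm/module.h and include/linux/vermagic.h):

/* sketch only: pretend the kernel was configured for Silvermont */
#include <stdio.h>

#define CONFIG_MSILVERMONT

#if defined CONFIG_MSILVERMONT
#define MODULE_PROC_FAMILY "SILVERMONT "
#elif defined CONFIG_MCORE2
#define MODULE_PROC_FAMILY "CORE2 "
#else
#define MODULE_PROC_FAMILY "GENERIC "
#endif

/* on x86 the family string becomes part of the arch vermagic */
#define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY

int main(void)
{
	/* a module with a different family string fails the load-time check */
	printf("vermagic: 4.0.0 SMP mod_unload " MODULE_ARCH_VERMAGIC "\n");
	return 0;
}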



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-04-27 18:08 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-04-27 18:08 UTC (permalink / raw)
  To: gentoo-commits

commit:     f2dffc7244ec86ad41fde2ee164a4082c974ade5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 27 17:56:11 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Apr 27 17:56:11 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f2dffc72

Patch to select REGMAP_IRQ for rt5033 mfd driver. See bug #546938.

 0000_README                             |  6 +++++-
 2600_select-REGMAP_IRQ-for-rt5033.patch | 30 ++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index ca06e06..0cdee6d 100644
--- a/0000_README
+++ b/0000_README
@@ -49,7 +49,11 @@ Desc:   Support for namespace user.pax.* on tmpfs.
 
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
-Desc:   Enable link security restrictions by default
+Desc:   Enable link security restrictions by default.
+
+Patch:  2600_select-REGMAP_IRQ-for-rt5033.patch
+From:   http://git.kernel.org/
+Desc:   mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ. See bug #546938.
 
 Patch:  2700_ThinkPad-30-brightness-control-fix.patch
 From:   Seth Forshee <seth.forshee@canonical.com>

diff --git a/2600_select-REGMAP_IRQ-for-rt5033.patch b/2600_select-REGMAP_IRQ-for-rt5033.patch
new file mode 100644
index 0000000..92fb2e0
--- /dev/null
+++ b/2600_select-REGMAP_IRQ-for-rt5033.patch
@@ -0,0 +1,30 @@
+From 23a2a22a3f3f17de094f386a893f7047c10e44a0 Mon Sep 17 00:00:00 2001
+From: Artem Savkov <asavkov@redhat.com>
+Date: Thu, 5 Mar 2015 12:42:27 +0100
+Subject: mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ
+
+Since commit 0b2712585(linux-next.git) this driver uses regmap_irq and so needs
+to select REGMAP_IRQ.
+
+This fixes the following compilation errors:
+ERROR: "regmap_irq_get_domain" [drivers/mfd/rt5033.ko] undefined!
+ERROR: "regmap_add_irq_chip" [drivers/mfd/rt5033.ko] undefined!
+
+Signed-off-by: Artem Savkov <asavkov@redhat.com>
+Signed-off-by: Lee Jones <lee.jones@linaro.org>
+
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index f8ef77d9a..f49f404 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -680,6 +680,7 @@ config MFD_RT5033
+ 	depends on I2C=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
++	select REGMAP_IRQ
+ 	help
+ 	  This driver provides for the Richtek RT5033 Power Management IC,
+ 	  which includes the I2C driver and the Core APIs. This driver provides
+-- 
+cgit v0.10.2
+
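
The two link errors quoted in the changelog above name regmap_add_irq_chip and regmap_irq_get_domain, which are compiled only when REGMAP_IRQ is set; a select (rather than a depends) is the right fix because the driver calls them unconditionally. A hedged kernel-side sketch of how a client driver typically uses the pair follows; the register offsets, chip name and hwirq number are invented for illustration, not taken from rt5033 (the real API is in drivers/base/regmap/regmap-irq.c):

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/regmap.h>

static const struct regmap_irq demo_irqs[] = {
	{ .reg_offset = 0, .mask = BIT(0) },	/* one source, bit 0 */
};

static const struct regmap_irq_chip demo_irq_chip = {
	.name		= "demo",
	.status_base	= 0x10,			/* invented layout */
	.mask_base	= 0x11,
	.num_regs	= 1,
	.irqs		= demo_irqs,
	.num_irqs	= ARRAY_SIZE(demo_irqs),
};

static int demo_add_irqs(struct regmap *map, int irq,
			 struct regmap_irq_chip_data **data)
{
	int ret;

	/* undefined at link time unless CONFIG_REGMAP_IRQ is enabled */
	ret = regmap_add_irq_chip(map, irq, IRQF_ONESHOT, 0,
				  &demo_irq_chip, data);
	if (ret)
		return ret;

	/* children hook their handlers up through the chip's irq_domain */
	return irq_create_mapping(regmap_irq_get_domain(*data), 0) ? 0 : -ENXIO;
}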



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-04-29 13:35 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-04-29 13:35 UTC (permalink / raw)
  To: gentoo-commits

commit:     b5c2b5b2947190cece9bf6218aa9dca795670288
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 29 13:35:22 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 29 13:35:22 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b5c2b5b2

Linux patch 4.0.1

 0000_README            |   4 +
 1000_linux-4.0.1.patch | 479 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 483 insertions(+)

diff --git a/0000_README b/0000_README
index 0cdee6d..483ca42 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.0.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
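
Of the fixes rolled up below, the fs/exec.c change is the security-relevant one: the set[ug]id decision moves into a helper that re-reads i_mode, i_uid and i_gid under i_mutex, so a concurrent chmod or chown cannot be observed half-applied between the check and the credential assignment. A userspace sketch of that recheck-under-lock pattern, with a pthread mutex standing in for i_mutex and deliberately simplified fields (none of this is the kernel's actual code):

#include <pthread.h>
#include <stdio.h>

#define SKETCH_ISUID 04000

struct sketch_inode {
	pthread_mutex_t i_mutex;
	unsigned int i_mode;
	unsigned int i_uid;
};

static void fill_uid(struct sketch_inode *inode, unsigned int *euid)
{
	unsigned int mode = inode->i_mode;	/* unlocked read: a hint only */
	unsigned int uid;

	if (!(mode & SKETCH_ISUID))
		return;			/* fast path, nothing to do */

	pthread_mutex_lock(&inode->i_mutex);
	mode = inode->i_mode;		/* reload atomically under the lock */
	uid = inode->i_uid;
	pthread_mutex_unlock(&inode->i_mutex);

	if (mode & SKETCH_ISUID)	/* decide on the locked snapshot only */
		*euid = uid;
}

int main(void)
{
	struct sketch_inode node = { PTHREAD_MUTEX_INITIALIZER, 04755, 0 };
	unsigned int euid = 1000;

	fill_uid(&node, &euid);
	printf("euid after exec: %u\n", euid);	/* 0: suid root honoured */
	return 0;
}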

diff --git a/1000_linux-4.0.1.patch b/1000_linux-4.0.1.patch
new file mode 100644
index 0000000..ac58552
--- /dev/null
+++ b/1000_linux-4.0.1.patch
@@ -0,0 +1,479 @@
+diff --git a/Makefile b/Makefile
+index fbd43bfe4445..f499cd2f5738 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index 4085c4b31047..355d5fea5be9 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -531,20 +531,8 @@ struct bnx2x_fastpath {
+ 	struct napi_struct	napi;
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-	unsigned int state;
+-#define BNX2X_FP_STATE_IDLE		      0
+-#define BNX2X_FP_STATE_NAPI		(1 << 0)    /* NAPI owns this FP */
+-#define BNX2X_FP_STATE_POLL		(1 << 1)    /* poll owns this FP */
+-#define BNX2X_FP_STATE_DISABLED		(1 << 2)
+-#define BNX2X_FP_STATE_NAPI_YIELD	(1 << 3)    /* NAPI yielded this FP */
+-#define BNX2X_FP_STATE_POLL_YIELD	(1 << 4)    /* poll yielded this FP */
+-#define BNX2X_FP_OWNED	(BNX2X_FP_STATE_NAPI | BNX2X_FP_STATE_POLL)
+-#define BNX2X_FP_YIELD	(BNX2X_FP_STATE_NAPI_YIELD | BNX2X_FP_STATE_POLL_YIELD)
+-#define BNX2X_FP_LOCKED	(BNX2X_FP_OWNED | BNX2X_FP_STATE_DISABLED)
+-#define BNX2X_FP_USER_PEND (BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_POLL_YIELD)
+-	/* protect state */
+-	spinlock_t lock;
+-#endif /* CONFIG_NET_RX_BUSY_POLL */
++	unsigned long		busy_poll_state;
++#endif
+ 
+ 	union host_hc_status_block	status_blk;
+ 	/* chip independent shortcuts into sb structure */
+@@ -619,104 +607,83 @@ struct bnx2x_fastpath {
+ #define bnx2x_fp_qstats(bp, fp)	(&((bp)->fp_stats[(fp)->index].eth_q_stats))
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp)
++
++enum bnx2x_fp_state {
++	BNX2X_STATE_FP_NAPI	= BIT(0), /* NAPI handler owns the queue */
++
++	BNX2X_STATE_FP_NAPI_REQ_BIT = 1, /* NAPI would like to own the queue */
++	BNX2X_STATE_FP_NAPI_REQ = BIT(1),
++
++	BNX2X_STATE_FP_POLL_BIT = 2,
++	BNX2X_STATE_FP_POLL     = BIT(2), /* busy_poll owns the queue */
++
++	BNX2X_STATE_FP_DISABLE_BIT = 3, /* queue is dismantled */
++};
++
++static inline void bnx2x_fp_busy_poll_init(struct bnx2x_fastpath *fp)
+ {
+-	spin_lock_init(&fp->lock);
+-	fp->state = BNX2X_FP_STATE_IDLE;
++	WRITE_ONCE(fp->busy_poll_state, 0);
+ }
+ 
+ /* called from the device poll routine to get ownership of a FP */
+ static inline bool bnx2x_fp_lock_napi(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if (fp->state & BNX2X_FP_LOCKED) {
+-		WARN_ON(fp->state & BNX2X_FP_STATE_NAPI);
+-		fp->state |= BNX2X_FP_STATE_NAPI_YIELD;
+-		rc = false;
+-	} else {
+-		/* we don't care if someone yielded */
+-		fp->state = BNX2X_FP_STATE_NAPI;
++	unsigned long prev, old = READ_ONCE(fp->busy_poll_state);
++
++	while (1) {
++		switch (old) {
++		case BNX2X_STATE_FP_POLL:
++			/* make sure bnx2x_fp_lock_poll() wont starve us */
++			set_bit(BNX2X_STATE_FP_NAPI_REQ_BIT,
++				&fp->busy_poll_state);
++			/* fallthrough */
++		case BNX2X_STATE_FP_POLL | BNX2X_STATE_FP_NAPI_REQ:
++			return false;
++		default:
++			break;
++		}
++		prev = cmpxchg(&fp->busy_poll_state, old, BNX2X_STATE_FP_NAPI);
++		if (unlikely(prev != old)) {
++			old = prev;
++			continue;
++		}
++		return true;
+ 	}
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
+ }
+ 
+-/* returns true is someone tried to get the FP while napi had it */
+-static inline bool bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = false;
+-
+-	spin_lock_bh(&fp->lock);
+-	WARN_ON(fp->state &
+-		(BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_NAPI_YIELD));
+-
+-	if (fp->state & BNX2X_FP_STATE_POLL_YIELD)
+-		rc = true;
+-
+-	/* state ==> idle, unless currently disabled */
+-	fp->state &= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	smp_wmb();
++	fp->busy_poll_state = 0;
+ }
+ 
+ /* called from bnx2x_low_latency_poll() */
+ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if ((fp->state & BNX2X_FP_LOCKED)) {
+-		fp->state |= BNX2X_FP_STATE_POLL_YIELD;
+-		rc = false;
+-	} else {
+-		/* preserve yield marks */
+-		fp->state |= BNX2X_FP_STATE_POLL;
+-	}
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	return cmpxchg(&fp->busy_poll_state, 0, BNX2X_STATE_FP_POLL) == 0;
+ }
+ 
+-/* returns true if someone tried to get the FP while it was locked */
+-static inline bool bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = false;
+-
+-	spin_lock_bh(&fp->lock);
+-	WARN_ON(fp->state & BNX2X_FP_STATE_NAPI);
+-
+-	if (fp->state & BNX2X_FP_STATE_POLL_YIELD)
+-		rc = true;
+-
+-	/* state ==> idle, unless currently disabled */
+-	fp->state &= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	smp_mb__before_atomic();
++	clear_bit(BNX2X_STATE_FP_POLL_BIT, &fp->busy_poll_state);
+ }
+ 
+-/* true if a socket is polling, even if it did not get the lock */
++/* true if a socket is polling */
+ static inline bool bnx2x_fp_ll_polling(struct bnx2x_fastpath *fp)
+ {
+-	WARN_ON(!(fp->state & BNX2X_FP_OWNED));
+-	return fp->state & BNX2X_FP_USER_PEND;
++	return READ_ONCE(fp->busy_poll_state) & BNX2X_STATE_FP_POLL;
+ }
+ 
+ /* false if fp is currently owned */
+ static inline bool bnx2x_fp_ll_disable(struct bnx2x_fastpath *fp)
+ {
+-	int rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if (fp->state & BNX2X_FP_OWNED)
+-		rc = false;
+-	fp->state |= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
++	set_bit(BNX2X_STATE_FP_DISABLE_BIT, &fp->busy_poll_state);
++	return !bnx2x_fp_ll_polling(fp);
+ 
+-	return rc;
+ }
+ #else
+-static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_busy_poll_init(struct bnx2x_fastpath *fp)
+ {
+ }
+ 
+@@ -725,9 +692,8 @@ static inline bool bnx2x_fp_lock_napi(struct bnx2x_fastpath *fp)
+ 	return true;
+ }
+ 
+-static inline bool bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
+ {
+-	return false;
+ }
+ 
+ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+@@ -735,9 +701,8 @@ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+ 	return false;
+ }
+ 
+-static inline bool bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
+ {
+-	return false;
+ }
+ 
+ static inline bool bnx2x_fp_ll_polling(struct bnx2x_fastpath *fp)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 0a9faa134a9a..2f63467bce46 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -1849,7 +1849,7 @@ static void bnx2x_napi_enable_cnic(struct bnx2x *bp)
+ 	int i;
+ 
+ 	for_each_rx_queue_cnic(bp, i) {
+-		bnx2x_fp_init_lock(&bp->fp[i]);
++		bnx2x_fp_busy_poll_init(&bp->fp[i]);
+ 		napi_enable(&bnx2x_fp(bp, i, napi));
+ 	}
+ }
+@@ -1859,7 +1859,7 @@ static void bnx2x_napi_enable(struct bnx2x *bp)
+ 	int i;
+ 
+ 	for_each_eth_queue(bp, i) {
+-		bnx2x_fp_init_lock(&bp->fp[i]);
++		bnx2x_fp_busy_poll_init(&bp->fp[i]);
+ 		napi_enable(&bnx2x_fp(bp, i, napi));
+ 	}
+ }
+@@ -3191,9 +3191,10 @@ static int bnx2x_poll(struct napi_struct *napi, int budget)
+ 			}
+ 		}
+ 
++		bnx2x_fp_unlock_napi(fp);
++
+ 		/* Fall out from the NAPI loop if needed */
+-		if (!bnx2x_fp_unlock_napi(fp) &&
+-		    !(bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
++		if (!(bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
+ 
+ 			/* No need to update SB for FCoE L2 ring as long as
+ 			 * it's connected to the default SB and the SB
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index f8528a4cf54f..fceb637efd6b 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1713,12 +1713,6 @@ static int vxlan6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	skb = iptunnel_handle_offloads(skb, udp_sum, type);
+-	if (IS_ERR(skb)) {
+-		err = -EINVAL;
+-		goto err;
+-	}
+-
+ 	skb_scrub_packet(skb, xnet);
+ 
+ 	min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+@@ -1738,6 +1732,12 @@ static int vxlan6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ 		goto err;
+ 	}
+ 
++	skb = iptunnel_handle_offloads(skb, udp_sum, type);
++	if (IS_ERR(skb)) {
++		err = -EINVAL;
++		goto err;
++	}
++
+ 	vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
+ 	vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ 	vxh->vx_vni = md->vni;
+@@ -1798,10 +1798,6 @@ int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	skb = iptunnel_handle_offloads(skb, udp_sum, type);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+ 	min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ 			+ VXLAN_HLEN + sizeof(struct iphdr)
+ 			+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
+@@ -1817,6 +1813,10 @@ int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ 	if (WARN_ON(!skb))
+ 		return -ENOMEM;
+ 
++	skb = iptunnel_handle_offloads(skb, udp_sum, type);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
+ 	vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
+ 	vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ 	vxh->vx_vni = md->vni;
+diff --git a/fs/exec.c b/fs/exec.c
+index c7f9b733406d..00400cf522dc 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1265,6 +1265,53 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ 	spin_unlock(&p->fs->lock);
+ }
+ 
++static void bprm_fill_uid(struct linux_binprm *bprm)
++{
++	struct inode *inode;
++	unsigned int mode;
++	kuid_t uid;
++	kgid_t gid;
++
++	/* clear any previous set[ug]id data from a previous binary */
++	bprm->cred->euid = current_euid();
++	bprm->cred->egid = current_egid();
++
++	if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)
++		return;
++
++	if (task_no_new_privs(current))
++		return;
++
++	inode = file_inode(bprm->file);
++	mode = READ_ONCE(inode->i_mode);
++	if (!(mode & (S_ISUID|S_ISGID)))
++		return;
++
++	/* Be careful if suid/sgid is set */
++	mutex_lock(&inode->i_mutex);
++
++	/* reload atomically mode/uid/gid now that lock held */
++	mode = inode->i_mode;
++	uid = inode->i_uid;
++	gid = inode->i_gid;
++	mutex_unlock(&inode->i_mutex);
++
++	/* We ignore suid/sgid if there are no mappings for them in the ns */
++	if (!kuid_has_mapping(bprm->cred->user_ns, uid) ||
++		 !kgid_has_mapping(bprm->cred->user_ns, gid))
++		return;
++
++	if (mode & S_ISUID) {
++		bprm->per_clear |= PER_CLEAR_ON_SETID;
++		bprm->cred->euid = uid;
++	}
++
++	if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
++		bprm->per_clear |= PER_CLEAR_ON_SETID;
++		bprm->cred->egid = gid;
++	}
++}
++
+ /*
+  * Fill the binprm structure from the inode.
+  * Check permissions, then read the first 128 (BINPRM_BUF_SIZE) bytes
+@@ -1273,36 +1320,9 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+  */
+ int prepare_binprm(struct linux_binprm *bprm)
+ {
+-	struct inode *inode = file_inode(bprm->file);
+-	umode_t mode = inode->i_mode;
+ 	int retval;
+ 
+-
+-	/* clear any previous set[ug]id data from a previous binary */
+-	bprm->cred->euid = current_euid();
+-	bprm->cred->egid = current_egid();
+-
+-	if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) &&
+-	    !task_no_new_privs(current) &&
+-	    kuid_has_mapping(bprm->cred->user_ns, inode->i_uid) &&
+-	    kgid_has_mapping(bprm->cred->user_ns, inode->i_gid)) {
+-		/* Set-uid? */
+-		if (mode & S_ISUID) {
+-			bprm->per_clear |= PER_CLEAR_ON_SETID;
+-			bprm->cred->euid = inode->i_uid;
+-		}
+-
+-		/* Set-gid? */
+-		/*
+-		 * If setgid is set but no group execute bit then this
+-		 * is a candidate for mandatory locking, not a setgid
+-		 * executable.
+-		 */
+-		if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+-			bprm->per_clear |= PER_CLEAR_ON_SETID;
+-			bprm->cred->egid = inode->i_gid;
+-		}
+-	}
++	bprm_fill_uid(bprm);
+ 
+ 	/* fill in binprm security blob */
+ 	retval = security_bprm_set_creds(bprm);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a28e09c7825d..36508e69e92a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1380,7 +1380,8 @@ peek_stack:
+ 			/* tell verifier to check for equivalent states
+ 			 * after every call and jump
+ 			 */
+-			env->explored_states[t + 1] = STATE_LIST_MARK;
++			if (t + 1 < insn_cnt)
++				env->explored_states[t + 1] = STATE_LIST_MARK;
+ 		} else {
+ 			/* conditional jump with two edges */
+ 			ret = push_insn(t, t + 1, FALLTHROUGH, env);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 8e4ac97c8477..98d45fe72f51 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4169,19 +4169,21 @@ EXPORT_SYMBOL(skb_try_coalesce);
+  */
+ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
+ {
+-	if (xnet)
+-		skb_orphan(skb);
+ 	skb->tstamp.tv64 = 0;
+ 	skb->pkt_type = PACKET_HOST;
+ 	skb->skb_iif = 0;
+ 	skb->ignore_df = 0;
+ 	skb_dst_drop(skb);
+-	skb->mark = 0;
+ 	skb_sender_cpu_clear(skb);
+-	skb_init_secmark(skb);
+ 	secpath_reset(skb);
+ 	nf_reset(skb);
+ 	nf_reset_trace(skb);
++
++	if (!xnet)
++		return;
++
++	skb_orphan(skb);
++	skb->mark = 0;
+ }
+ EXPORT_SYMBOL_GPL(skb_scrub_packet);
+ 
+diff --git a/net/ipv4/geneve.c b/net/ipv4/geneve.c
+index 5a4828ba05ad..a566a2e4715b 100644
+--- a/net/ipv4/geneve.c
++++ b/net/ipv4/geneve.c
+@@ -113,10 +113,6 @@ int geneve_xmit_skb(struct geneve_sock *gs, struct rtable *rt,
+ 	int min_headroom;
+ 	int err;
+ 
+-	skb = udp_tunnel_handle_offloads(skb, csum);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+ 	min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ 			+ GENEVE_BASE_HLEN + opt_len + sizeof(struct iphdr)
+ 			+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
+@@ -131,6 +127,10 @@ int geneve_xmit_skb(struct geneve_sock *gs, struct rtable *rt,
+ 	if (unlikely(!skb))
+ 		return -ENOMEM;
+ 
++	skb = udp_tunnel_handle_offloads(skb, csum);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
+ 	gnvh = (struct genevehdr *)__skb_push(skb, sizeof(*gnvh) + opt_len);
+ 	geneve_build_header(gnvh, tun_flags, vni, opt_len, opt);
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 1db253e36045..d520492ba698 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2929,6 +2929,8 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst,
+ 	}
+ #endif
+ 
++	/* Do not fool tcpdump (if any), clean our debris */
++	skb->tstamp.tv64 = 0;
+ 	return skb;
+ }
+ EXPORT_SYMBOL(tcp_make_synack);
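
The largest change above, in bnx2x, drops the per-queue spinlock in favour of a single busy_poll_state word driven by cmpxchg: NAPI claims a queue with a compare-and-swap loop, and a socket poller that already owns it is merely flagged, through a request bit, to hand the queue over. A standalone C11 sketch of that claim loop, with userspace atomics standing in for the kernel's cmpxchg and set_bit (the constants are illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum { FP_NAPI = 1 << 0, FP_NAPI_REQ = 1 << 1, FP_POLL = 1 << 2 };

static _Atomic unsigned long busy_poll_state;

static bool fp_lock_napi(void)
{
	unsigned long old = atomic_load(&busy_poll_state);

	for (;;) {
		if (old & FP_POLL) {
			/* the poller owns the queue: leave a request bit so
			 * it hands the queue over, then back off */
			atomic_fetch_or(&busy_poll_state, FP_NAPI_REQ);
			return false;
		}
		/* try to claim the queue in one atomic step */
		if (atomic_compare_exchange_weak(&busy_poll_state, &old,
						 (unsigned long)FP_NAPI))
			return true;
		/* the failed CAS reloaded old; evaluate the new state */
	}
}

int main(void)
{
	printf("napi claim: %s\n", fp_lock_napi() ? "ok" : "deferred");
	return 0;
}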



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-04-29 17:33 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-04-29 17:33 UTC (permalink / raw)
  To: gentoo-commits

commit:     f8edf410c4ddd523917f01dfbef4378b4ad4c1b0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 29 17:21:43 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 29 17:21:43 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f8edf410

BFQ patchset for 4.0, v7r7.

 0000_README                                        |   12 +
 ...roups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch |  104 +
 ...introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1 | 6966 ++++++++++++++++++++
 ...rly-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch | 1222 ++++
 4 files changed, 8304 insertions(+)

diff --git a/0000_README b/0000_README
index 483ca42..bcce967 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,18 @@ Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
 
+Patch:  5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 1 for 4.0: Build, cgroups and kconfig bits
+
+Patch:  5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 2 for 4.0: BFQ Scheduler
+
+Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 3 for 4.0: Early Queue Merge (EQM)
+
 Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.

diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
new file mode 100644
index 0000000..468d157
--- /dev/null
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
@@ -0,0 +1,104 @@
+From 63e26848e2df36a3c29d2d38ce8b008539d64a5d Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Tue, 7 Apr 2015 13:39:12 +0200
+Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v7r7-4.0
+
+Update Kconfig.iosched and do the related Makefile changes to include
+kernel configuration options for BFQ. Also add the bfqio controller
+to the cgroups subsystem.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+---
+ block/Kconfig.iosched         | 32 ++++++++++++++++++++++++++++++++
+ block/Makefile                |  1 +
+ include/linux/cgroup_subsys.h |  4 ++++
+ 3 files changed, 37 insertions(+)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 421bef9..0ee5f0f 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
+ 	---help---
+ 	  Enable group IO scheduling in CFQ.
+ 
++config IOSCHED_BFQ
++	tristate "BFQ I/O scheduler"
++	default n
++	---help---
++	  The BFQ I/O scheduler tries to distribute bandwidth among
++	  all processes according to their weights.
++	  It aims at distributing the bandwidth as desired, independently of
++	  the disk parameters and with any workload. It also tries to
++	  guarantee low latency to interactive and soft real-time
++	  applications. If compiled built-in (saying Y here), BFQ can
++	  be configured to support hierarchical scheduling.
++
++config CGROUP_BFQIO
++	bool "BFQ hierarchical scheduling support"
++	depends on CGROUPS && IOSCHED_BFQ=y
++	default n
++	---help---
++	  Enable hierarchical scheduling in BFQ, using the cgroups
++	  filesystem interface.  The name of the subsystem will be
++	  bfqio.
++
+ choice
+ 	prompt "Default I/O scheduler"
+ 	default DEFAULT_CFQ
+@@ -52,6 +73,16 @@ choice
+ 	config DEFAULT_CFQ
+ 		bool "CFQ" if IOSCHED_CFQ=y
+ 
++	config DEFAULT_BFQ
++		bool "BFQ" if IOSCHED_BFQ=y
++		help
++		  Selects BFQ as the default I/O scheduler which will be
++		  used by default for all block devices.
++		  The BFQ I/O scheduler aims at distributing the bandwidth
++		  as desired, independently of the disk parameters and with
++		  any workload. It also tries to guarantee low latency to
++		  interactive and soft real-time applications.
++
+ 	config DEFAULT_NOOP
+ 		bool "No-op"
+ 
+@@ -61,6 +92,7 @@ config DEFAULT_IOSCHED
+ 	string
+ 	default "deadline" if DEFAULT_DEADLINE
+ 	default "cfq" if DEFAULT_CFQ
++	default "bfq" if DEFAULT_BFQ
+ 	default "noop" if DEFAULT_NOOP
+ 
+ endmenu
+diff --git a/block/Makefile b/block/Makefile
+index 00ecc97..1ed86d5 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
+ obj-$(CONFIG_IOSCHED_NOOP)	+= noop-iosched.o
+ obj-$(CONFIG_IOSCHED_DEADLINE)	+= deadline-iosched.o
+ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
++obj-$(CONFIG_IOSCHED_BFQ)	+= bfq-iosched.o
+ 
+ obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
+ obj-$(CONFIG_BLK_CMDLINE_PARSER)	+= cmdline-parser.o
+diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
+index e4a96fb..267d681 100644
+--- a/include/linux/cgroup_subsys.h
++++ b/include/linux/cgroup_subsys.h
+@@ -35,6 +35,10 @@ SUBSYS(freezer)
+ SUBSYS(net_cls)
+ #endif
+ 
++#if IS_ENABLED(CONFIG_CGROUP_BFQIO)
++SUBSYS(bfqio)
++#endif
++
+ #if IS_ENABLED(CONFIG_CGROUP_PERF)
+ SUBSYS(perf_event)
+ #endif
+-- 
+2.1.0
+
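
The cgroup_subsys.h hunk just above works through an x-macro: the header is included several times with SUBSYS() redefined each time, so the single added SUBSYS(bfqio) line generates the bfqio_cgrp_id that the scheduler code later passes to task_css(). A simplified, self-contained sketch of the trick, with the subsystem list abridged (the real expansion in include/linux/cgroup.h also generates ops tables):

#include <stdio.h>

#define CGROUP_SUBSYS_LIST \
	SUBSYS(freezer)    \
	SUBSYS(net_cls)    \
	SUBSYS(bfqio)      \
	SUBSYS(perf_event)

/* first expansion: an enum of subsystem ids */
enum cgroup_subsys_id {
#define SUBSYS(name) name ## _cgrp_id,
	CGROUP_SUBSYS_LIST
#undef SUBSYS
	CGROUP_SUBSYS_COUNT
};

/* second expansion: a parallel table of names */
static const char *cgroup_subsys_name[] = {
#define SUBSYS(name) #name,
	CGROUP_SUBSYS_LIST
#undef SUBSYS
};

int main(void)
{
	printf("%s has id %d of %d subsystems\n",
	       cgroup_subsys_name[bfqio_cgrp_id],
	       (int)bfqio_cgrp_id, (int)CGROUP_SUBSYS_COUNT);
	return 0;
}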

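The changelog of the scheduler patch below maps ioprio values to weights with the relation weight = IOPRIO_BE_NR - ioprio and shares bandwidth within a class in proportion to weight. A small arithmetic sketch of what that works out to (IOPRIO_BE_NR is 8 in include/linux/ioprio.h; the two-task example in the comment is hypothetical):

#include <stdio.h>

#define IOPRIO_BE_NR 8

int main(void)
{
	int weight[IOPRIO_BE_NR];
	int total = 0, ioprio;

	for (ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++)
		total += weight[ioprio] = IOPRIO_BE_NR - ioprio;

	/* e.g. two tasks at ioprio 0 and ioprio 4 share the disk 8:4 */
	for (ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++)
		printf("ioprio %d -> weight %d (%.1f%% with one task per level)\n",
		       ioprio, weight[ioprio], 100.0 * weight[ioprio] / total);
	return 0;
}
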
diff --git a/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1 b/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
new file mode 100644
index 0000000..a6cfc58
--- /dev/null
+++ b/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
@@ -0,0 +1,6966 @@
+From 8cdf2dae6ee87049c7bb086d34e2ce981b545813 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Thu, 9 May 2013 19:10:02 +0200
+Subject: [PATCH 2/3] block: introduce the BFQ-v7r7 I/O sched for 4.0
+
+Add the BFQ-v7r7 I/O scheduler to 4.0.
+The general structure is borrowed from CFQ, as much of the code for
+handling I/O contexts. Over time, several useful features have been
+ported from CFQ as well (details in the changelog in README.BFQ). A
+(bfq_)queue is associated to each task doing I/O on a device, and each
+time a scheduling decision has to be made a queue is selected and served
+until it expires.
+
+    - Slices are given in the service domain: tasks are assigned
+      budgets, measured in number of sectors. Once got the disk, a task
+      must however consume its assigned budget within a configurable
+      maximum time (by default, the maximum possible value of the
+      budgets is automatically computed to comply with this timeout).
+      This allows the desired latency vs "throughput boosting" tradeoff
+      to be set.
+
+    - Budgets are scheduled according to a variant of WF2Q+, implemented
+      using an augmented rb-tree to take eligibility into account while
+      preserving an O(log N) overall complexity.
+
+    - A low-latency tunable is provided; if enabled, both interactive
+      and soft real-time applications are guaranteed a very low latency.
+
+    - Latency guarantees are preserved also in the presence of NCQ.
+
+    - Also with flash-based devices, a high throughput is achieved
+      while still preserving latency guarantees.
+
+    - BFQ features Early Queue Merge (EQM), a sort of fusion of the
+      cooperating-queue-merging and the preemption mechanisms present
+      in CFQ. EQM is in fact a unified mechanism that tries to get a
+      sequential read pattern, and hence a high throughput, with any
+      set of processes performing interleaved I/O over a contiguous
+      sequence of sectors.
+
+    - BFQ supports full hierarchical scheduling, exporting a cgroups
+      interface.  Since each node has a full scheduler, each group can
+      be assigned its own weight.
+
+    - If the cgroups interface is not used, only I/O priorities can be
+      assigned to processes, with ioprio values mapped to weights
+      with the relation weight = IOPRIO_BE_NR - ioprio.
+
+    - ioprio classes are served in strict priority order, i.e., lower
+      priority queues are not served as long as there are higher
+      priority queues.  Among queues in the same class the bandwidth is
+      distributed in proportion to the weight of each queue. A very
+      thin extra bandwidth is however guaranteed to the Idle class, to
+      prevent it from starving.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+---
+ block/bfq-cgroup.c  |  936 ++++++++++++
+ block/bfq-ioc.c     |   36 +
+ block/bfq-iosched.c | 3902 +++++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c   | 1214 ++++++++++++++++
+ block/bfq.h         |  775 ++++++++++
+ 5 files changed, 6863 insertions(+)
+ create mode 100644 block/bfq-cgroup.c
+ create mode 100644 block/bfq-ioc.c
+ create mode 100644 block/bfq-iosched.c
+ create mode 100644 block/bfq-sched.c
+ create mode 100644 block/bfq.h
+
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+new file mode 100644
+index 0000000..11e2f1d
+--- /dev/null
++++ b/block/bfq-cgroup.c
+@@ -0,0 +1,936 @@
++/*
++ * BFQ: CGROUPS support.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ */
++
++#ifdef CONFIG_CGROUP_BFQIO
++
++static DEFINE_MUTEX(bfqio_mutex);
++
++static bool bfqio_is_removed(struct bfqio_cgroup *bgrp)
++{
++	return bgrp ? !bgrp->online : false;
++}
++
++static struct bfqio_cgroup bfqio_root_cgroup = {
++	.weight = BFQ_DEFAULT_GRP_WEIGHT,
++	.ioprio = BFQ_DEFAULT_GRP_IOPRIO,
++	.ioprio_class = BFQ_DEFAULT_GRP_CLASS,
++};
++
++static inline void bfq_init_entity(struct bfq_entity *entity,
++				   struct bfq_group *bfqg)
++{
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class;
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static struct bfqio_cgroup *css_to_bfqio(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct bfqio_cgroup, css) : NULL;
++}
++
++/*
++ * Search the bfq_group for bfqd into the hash table (by now only a list)
++ * of bgrp.  Must be called under rcu_read_lock().
++ */
++static struct bfq_group *bfqio_lookup_group(struct bfqio_cgroup *bgrp,
++					    struct bfq_data *bfqd)
++{
++	struct bfq_group *bfqg;
++	void *key;
++
++	hlist_for_each_entry_rcu(bfqg, &bgrp->group_data, group_node) {
++		key = rcu_dereference(bfqg->bfqd);
++		if (key == bfqd)
++			return bfqg;
++	}
++
++	return NULL;
++}
++
++static inline void bfq_group_init_entity(struct bfqio_cgroup *bgrp,
++					 struct bfq_group *bfqg)
++{
++	struct bfq_entity *entity = &bfqg->entity;
++
++	/*
++	 * If the weight of the entity has never been set via the sysfs
++	 * interface, then bgrp->weight == 0. In this case we initialize
++	 * the weight from the current ioprio value. Otherwise, the group
++	 * weight, if set, has priority over the ioprio value.
++	 */
++	if (bgrp->weight == 0) {
++		entity->new_weight = bfq_ioprio_to_weight(bgrp->ioprio);
++		entity->new_ioprio = bgrp->ioprio;
++	} else {
++		if (bgrp->weight < BFQ_MIN_WEIGHT ||
++		    bgrp->weight > BFQ_MAX_WEIGHT) {
++			printk(KERN_CRIT "bfq_group_init_entity: "
++					 "bgrp->weight %d\n", bgrp->weight);
++			BUG();
++		}
++		entity->new_weight = bgrp->weight;
++		entity->new_ioprio = bfq_weight_to_ioprio(bgrp->weight);
++	}
++	entity->orig_weight = entity->weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class = bgrp->ioprio_class;
++	entity->my_sched_data = &bfqg->sched_data;
++	bfqg->active_entities = 0;
++}
++
++static inline void bfq_group_set_parent(struct bfq_group *bfqg,
++					struct bfq_group *parent)
++{
++	struct bfq_entity *entity;
++
++	BUG_ON(parent == NULL);
++	BUG_ON(bfqg == NULL);
++
++	entity = &bfqg->entity;
++	entity->parent = parent->my_entity;
++	entity->sched_data = &parent->sched_data;
++}
++
++/**
++ * bfq_group_chain_alloc - allocate a chain of groups.
++ * @bfqd: queue descriptor.
++ * @css: the leaf cgroup_subsys_state this chain starts from.
++ *
++ * Allocate a chain of groups starting from the one belonging to
++ * @cgroup up to the root cgroup.  Stop if a cgroup on the chain
++ * to the root has already an allocated group on @bfqd.
++ */
++static struct bfq_group *bfq_group_chain_alloc(struct bfq_data *bfqd,
++					       struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp;
++	struct bfq_group *bfqg, *prev = NULL, *leaf = NULL;
++
++	for (; css != NULL; css = css->parent) {
++		bgrp = css_to_bfqio(css);
++
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		if (bfqg != NULL) {
++			/*
++			 * All the cgroups in the path from there to the
++			 * root must have a bfq_group for bfqd, so we don't
++			 * need any more allocations.
++			 */
++			break;
++		}
++
++		bfqg = kzalloc(sizeof(*bfqg), GFP_ATOMIC);
++		if (bfqg == NULL)
++			goto cleanup;
++
++		bfq_group_init_entity(bgrp, bfqg);
++		bfqg->my_entity = &bfqg->entity;
++
++		if (leaf == NULL) {
++			leaf = bfqg;
++			prev = leaf;
++		} else {
++			bfq_group_set_parent(prev, bfqg);
++			/*
++			 * Build a list of allocated nodes using the bfqd
++			 * filed, that is still unused and will be
++			 * initialized only after the node will be
++			 * connected.
++			 */
++			prev->bfqd = bfqg;
++			prev = bfqg;
++		}
++	}
++
++	return leaf;
++
++cleanup:
++	while (leaf != NULL) {
++		prev = leaf;
++		leaf = leaf->bfqd;
++		kfree(prev);
++	}
++
++	return NULL;
++}
++
++/**
++ * bfq_group_chain_link - link an allocated group chain to a cgroup
++ *                        hierarchy.
++ * @bfqd: the queue descriptor.
++ * @css: the leaf cgroup_subsys_state to start from.
++ * @leaf: the leaf group (to be associated to @cgroup).
++ *
++ * Try to link a chain of groups to a cgroup hierarchy, connecting the
++ * nodes bottom-up, so we can be sure that when we find a cgroup in the
++ * hierarchy that already as a group associated to @bfqd all the nodes
++ * in the path to the root cgroup have one too.
++ *
++ * On locking: the queue lock protects the hierarchy (there is a hierarchy
++ * per device) while the bfqio_cgroup lock protects the list of groups
++ * belonging to the same cgroup.
++ */
++static void bfq_group_chain_link(struct bfq_data *bfqd,
++				 struct cgroup_subsys_state *css,
++				 struct bfq_group *leaf)
++{
++	struct bfqio_cgroup *bgrp;
++	struct bfq_group *bfqg, *next, *prev = NULL;
++	unsigned long flags;
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++
++	for (; css != NULL && leaf != NULL; css = css->parent) {
++		bgrp = css_to_bfqio(css);
++		next = leaf->bfqd;
++
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		BUG_ON(bfqg != NULL);
++
++		spin_lock_irqsave(&bgrp->lock, flags);
++
++		rcu_assign_pointer(leaf->bfqd, bfqd);
++		hlist_add_head_rcu(&leaf->group_node, &bgrp->group_data);
++		hlist_add_head(&leaf->bfqd_node, &bfqd->group_list);
++
++		spin_unlock_irqrestore(&bgrp->lock, flags);
++
++		prev = leaf;
++		leaf = next;
++	}
++
++	BUG_ON(css == NULL && leaf != NULL);
++	if (css != NULL && prev != NULL) {
++		bgrp = css_to_bfqio(css);
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		bfq_group_set_parent(prev, bfqg);
++	}
++}
++
++/**
++ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
++ * @bfqd: queue descriptor.
++ * @cgroup: cgroup being searched for.
++ *
++ * Return a group associated to @bfqd in @cgroup, allocating one if
++ * necessary.  When a group is returned all the cgroups in the path
++ * to the root have a group associated to @bfqd.
++ *
++ * If the allocation fails, return the root group: this breaks guarantees
++ * but is a safe fallback.  If this loss becomes a problem it can be
++ * mitigated using the equivalent weight (given by the product of the
++ * weights of the groups in the path from @group to the root) in the
++ * root scheduler.
++ *
++ * We allocate all the missing nodes in the path from the leaf cgroup
++ * to the root and we connect the nodes only after all the allocations
++ * have been successful.
++ */
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++					      struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++	struct bfq_group *bfqg;
++
++	bfqg = bfqio_lookup_group(bgrp, bfqd);
++	if (bfqg != NULL)
++		return bfqg;
++
++	bfqg = bfq_group_chain_alloc(bfqd, css);
++	if (bfqg != NULL)
++		bfq_group_chain_link(bfqd, css, bfqg);
++	else
++		bfqg = bfqd->root_group;
++
++	return bfqg;
++}
++
++/**
++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
++ * @bfqd: queue descriptor.
++ * @bfqq: the queue to move.
++ * @entity: @bfqq's entity.
++ * @bfqg: the group to move to.
++ *
++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
++ * it on the new one.  Avoid putting the entity on the old group idle tree.
++ *
++ * Must be called under the queue lock; the cgroup owning @bfqg must
++ * not disappear (by now this just means that we are called under
++ * rcu_read_lock()).
++ */
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  struct bfq_entity *entity, struct bfq_group *bfqg)
++{
++	int busy, resume;
++
++	busy = bfq_bfqq_busy(bfqq);
++	resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++
++	BUG_ON(resume && !entity->on_st);
++	BUG_ON(busy && !resume && entity->on_st &&
++	       bfqq != bfqd->in_service_queue);
++
++	if (busy) {
++		BUG_ON(atomic_read(&bfqq->ref) < 2);
++
++		if (!resume)
++			bfq_del_bfqq_busy(bfqd, bfqq, 0);
++		else
++			bfq_deactivate_bfqq(bfqd, bfqq, 0);
++	} else if (entity->on_st)
++		bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++
++	/*
++	 * Here we use a reference to bfqg.  We don't need a refcounter
++	 * as the cgroup reference will not be dropped, so that its
++	 * destroy() callback will not be invoked.
++	 */
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++
++	if (busy && resume)
++		bfq_activate_bfqq(bfqd, bfqq);
++
++	if (bfqd->in_service_queue == NULL && !bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++}
++
++/**
++ * __bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bfqd: the queue descriptor.
++ * @bic: the bic to move.
++ * @cgroup: the cgroup to move to.
++ *
++ * Move bic to cgroup, assuming that bfqd->queue is locked; the caller
++ * has to make sure that the reference to cgroup is valid across the call.
++ *
++ * NOTE: an alternative approach might have been to store the current
++ * cgroup in bfqq and getting a reference to it, reducing the lookup
++ * time here, at the price of slightly more complex code.
++ */
++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++						struct bfq_io_cq *bic,
++						struct cgroup_subsys_state *css)
++{
++	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
++	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++	struct bfq_entity *entity;
++	struct bfq_group *bfqg;
++	struct bfqio_cgroup *bgrp;
++
++	bgrp = css_to_bfqio(css);
++
++	bfqg = bfq_find_alloc_group(bfqd, css);
++	if (async_bfqq != NULL) {
++		entity = &async_bfqq->entity;
++
++		if (entity->sched_data != &bfqg->sched_data) {
++			bic_set_bfqq(bic, NULL, 0);
++			bfq_log_bfqq(bfqd, async_bfqq,
++				     "bic_change_group: %p %d",
++				     async_bfqq, atomic_read(&async_bfqq->ref));
++			bfq_put_queue(async_bfqq);
++		}
++	}
++
++	if (sync_bfqq != NULL) {
++		entity = &sync_bfqq->entity;
++		if (entity->sched_data != &bfqg->sched_data)
++			bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++	}
++
++	return bfqg;
++}
++
++/**
++ * bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bic: the bic being migrated.
++ * @cgroup: the destination cgroup.
++ *
++ * When the task owning @bic is moved to @cgroup, @bic is immediately
++ * moved into its new parent group.
++ */
++static void bfq_bic_change_cgroup(struct bfq_io_cq *bic,
++				  struct cgroup_subsys_state *css)
++{
++	struct bfq_data *bfqd;
++	unsigned long uninitialized_var(flags);
++
++	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++				   &flags);
++	if (bfqd != NULL) {
++		__bfq_bic_change_cgroup(bfqd, bic, css);
++		bfq_put_bfqd_unlock(bfqd, &flags);
++	}
++}
++
++/**
++ * bfq_bic_update_cgroup - update the cgroup of @bic.
++ * @bic: the @bic to update.
++ *
++ * Make sure that @bic is enqueued in the cgroup of the current task.
++ * We need this in addition to moving bics during the cgroup attach
++ * phase because the task owning @bic could be at its first disk
++ * access or we may end up in the root cgroup as the result of a
++ * memory allocation failure and here we try to move to the right
++ * group.
++ *
++ * Must be called under the queue lock.  It is safe to use the returned
++ * value even after the rcu_read_unlock() as the migration/destruction
++ * paths act under the queue lock too.  IOW it is impossible to race with
++ * group migration/destruction and end up with an invalid group as:
++ *   a) here cgroup has not yet been destroyed, nor its destroy callback
++ *      has started execution, as current holds a reference to it,
++ *   b) if it is destroyed after rcu_read_unlock() [after current is
++ *      migrated to a different cgroup] its attach() callback will have
++ *      taken care of remove all the references to the old cgroup data.
++ */
++static struct bfq_group *bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	struct bfq_group *bfqg;
++	struct cgroup_subsys_state *css;
++
++	BUG_ON(bfqd == NULL);
++
++	rcu_read_lock();
++	css = task_css(current, bfqio_cgrp_id);
++	bfqg = __bfq_bic_change_cgroup(bfqd, bic, css);
++	rcu_read_unlock();
++
++	return bfqg;
++}
++
++/**
++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
++ * @st: the service tree being flushed.
++ */
++static inline void bfq_flush_idle_tree(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entity = st->first_idle;
++
++	for (; entity != NULL; entity = st->first_idle)
++		__bfq_deactivate_entity(entity, 0);
++}
++
++/**
++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
++ * @bfqd: the device data structure with the root group.
++ * @entity: the entity to move.
++ */
++static inline void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
++					    struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(bfqq == NULL);
++	bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++	return;
++}
++
++/**
++ * bfq_reparent_active_entities - move to the root group all active
++ *                                entities.
++ * @bfqd: the device data structure with the root group.
++ * @bfqg: the group to move from.
++ * @st: the service tree with the entities.
++ *
++ * Needs queue_lock to be taken and reference to be valid over the call.
++ */
++static inline void bfq_reparent_active_entities(struct bfq_data *bfqd,
++						struct bfq_group *bfqg,
++						struct bfq_service_tree *st)
++{
++	struct rb_root *active = &st->active;
++	struct bfq_entity *entity = NULL;
++
++	if (!RB_EMPTY_ROOT(&st->active))
++		entity = bfq_entity_of(rb_first(active));
++
++	for (; entity != NULL; entity = bfq_entity_of(rb_first(active)))
++		bfq_reparent_leaf_entity(bfqd, entity);
++
++	if (bfqg->sched_data.in_service_entity != NULL)
++		bfq_reparent_leaf_entity(bfqd,
++			bfqg->sched_data.in_service_entity);
++
++	return;
++}
++
++/**
++ * bfq_destroy_group - destroy @bfqg.
++ * @bgrp: the bfqio_cgroup containing @bfqg.
++ * @bfqg: the group being destroyed.
++ *
++ * Destroy @bfqg, making sure that it is not referenced from its parent.
++ */
++static void bfq_destroy_group(struct bfqio_cgroup *bgrp, struct bfq_group *bfqg)
++{
++	struct bfq_data *bfqd;
++	struct bfq_service_tree *st;
++	struct bfq_entity *entity = bfqg->my_entity;
++	unsigned long uninitialized_var(flags);
++	int i;
++
++	hlist_del(&bfqg->group_node);
++
++	/*
++	 * Empty all service_trees belonging to this group before
++	 * deactivating the group itself.
++	 */
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
++		st = bfqg->sched_data.service_tree + i;
++
++		/*
++		 * The idle tree may still contain bfq_queues belonging
++		 * to exited task because they never migrated to a different
++		 * cgroup from the one being destroyed now.  No one else
++		 * can access them so it's safe to act without any lock.
++		 */
++		bfq_flush_idle_tree(st);
++
++		/*
++		 * It may happen that some queues are still active
++		 * (busy) upon group destruction (if the corresponding
++		 * processes have been forced to terminate). We move
++		 * all the leaf entities corresponding to these queues
++		 * to the root_group.
++		 * Also, it may happen that the group has an entity
++		 * in service, which is disconnected from the active
++		 * tree: it must be moved, too.
++		 * There is no need to put the sync queues, as the
++		 * scheduler has taken no reference.
++		 */
++		bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++		if (bfqd != NULL) {
++			bfq_reparent_active_entities(bfqd, bfqg, st);
++			bfq_put_bfqd_unlock(bfqd, &flags);
++		}
++		BUG_ON(!RB_EMPTY_ROOT(&st->active));
++		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
++	}
++	BUG_ON(bfqg->sched_data.next_in_service != NULL);
++	BUG_ON(bfqg->sched_data.in_service_entity != NULL);
++
++	/*
++	 * We may race with device destruction, take extra care when
++	 * dereferencing bfqg->bfqd.
++	 */
++	bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++	if (bfqd != NULL) {
++		hlist_del(&bfqg->bfqd_node);
++		__bfq_deactivate_entity(entity, 0);
++		bfq_put_async_queues(bfqd, bfqg);
++		bfq_put_bfqd_unlock(bfqd, &flags);
++	}
++	BUG_ON(entity->tree != NULL);
++
++	/*
++	 * No need to defer the kfree() to the end of the RCU grace
++	 * period: we are called from the destroy() callback of our
++	 * cgroup, so we can be sure that no one is a) still using
++	 * this cgroup or b) doing lookups in it.
++	 */
++	kfree(bfqg);
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node)
++		bfq_end_wr_async_queues(bfqd, bfqg);
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++/**
++ * bfq_disconnect_groups - disconnect @bfqd from all its groups.
++ * @bfqd: the device descriptor being exited.
++ *
++ * When the device exits we just make sure that no lookup can return
++ * the now unused group structures.  They will be deallocated on cgroup
++ * destruction.
++ */
++static void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	bfq_log(bfqd, "disconnect_groups beginning");
++	hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node) {
++		hlist_del(&bfqg->bfqd_node);
++
++		__bfq_deactivate_entity(bfqg->my_entity, 0);
++
++		/*
++		 * Don't remove from the group hash, just set an
++		 * invalid key.  No lookups can race with the
++		 * assignment as bfqd is being destroyed; this
++		 * implies also that new elements cannot be added
++		 * to the list.
++		 */
++		rcu_assign_pointer(bfqg->bfqd, NULL);
++
++		bfq_log(bfqd, "disconnect_groups: put async for group %p",
++			bfqg);
++		bfq_put_async_queues(bfqd, bfqg);
++	}
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++	struct bfqio_cgroup *bgrp = &bfqio_root_cgroup;
++	struct bfq_group *bfqg = bfqd->root_group;
++
++	bfq_put_async_queues(bfqd, bfqg);
++
++	spin_lock_irq(&bgrp->lock);
++	hlist_del_rcu(&bfqg->group_node);
++	spin_unlock_irq(&bgrp->lock);
++
++	/*
++	 * No need to synchronize_rcu() here: since the device is gone
++	 * there cannot be any read-side access to its root_group.
++	 */
++	kfree(bfqg);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++	struct bfq_group *bfqg;
++	struct bfqio_cgroup *bgrp;
++	int i;
++
++	bfqg = kzalloc_node(sizeof(*bfqg), GFP_KERNEL, node);
++	if (bfqg == NULL)
++		return NULL;
++
++	bfqg->entity.parent = NULL;
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++	bgrp = &bfqio_root_cgroup;
++	spin_lock_irq(&bgrp->lock);
++	rcu_assign_pointer(bfqg->bfqd, bfqd);
++	hlist_add_head_rcu(&bfqg->group_node, &bgrp->group_data);
++	spin_unlock_irq(&bgrp->lock);
++
++	return bfqg;
++}
++
++#define SHOW_FUNCTION(__VAR)						\
++static u64 bfqio_cgroup_##__VAR##_read(struct cgroup_subsys_state *css, \
++				       struct cftype *cftype)		\
++{									\
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);			\
++	u64 ret = -ENODEV;						\
++									\
++	mutex_lock(&bfqio_mutex);					\
++	if (bfqio_is_removed(bgrp))					\
++		goto out_unlock;					\
++									\
++	spin_lock_irq(&bgrp->lock);					\
++	ret = bgrp->__VAR;						\
++	spin_unlock_irq(&bgrp->lock);					\
++									\
++out_unlock:								\
++	mutex_unlock(&bfqio_mutex);					\
++	return ret;							\
++}
++
++SHOW_FUNCTION(weight);
++SHOW_FUNCTION(ioprio);
++SHOW_FUNCTION(ioprio_class);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__VAR, __MIN, __MAX)				\
++static int bfqio_cgroup_##__VAR##_write(struct cgroup_subsys_state *css,\
++					struct cftype *cftype,		\
++					u64 val)			\
++{									\
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);			\
++	struct bfq_group *bfqg;						\
++	int ret = -EINVAL;						\
++									\
++	if (val < (__MIN) || val > (__MAX))				\
++		return ret;						\
++									\
++	ret = -ENODEV;							\
++	mutex_lock(&bfqio_mutex);					\
++	if (bfqio_is_removed(bgrp))					\
++		goto out_unlock;					\
++	ret = 0;							\
++									\
++	spin_lock_irq(&bgrp->lock);					\
++	bgrp->__VAR = (unsigned short)val;				\
++	hlist_for_each_entry(bfqg, &bgrp->group_data, group_node) {	\
++		/*							\
++		 * Setting the ioprio_changed flag of the entity        \
++		 * to 1 with new_##__VAR == ##__VAR would re-set        \
++		 * the value of the weight to its ioprio mapping.       \
++		 * Set the flag only if necessary.			\
++		 */							\
++		if ((unsigned short)val != bfqg->entity.new_##__VAR) {  \
++			bfqg->entity.new_##__VAR = (unsigned short)val; \
++			/*						\
++			 * Make sure that the above new value has been	\
++			 * stored in bfqg->entity.new_##__VAR before	\
++			 * setting the ioprio_changed flag. In fact,	\
++			 * this flag may be read asynchronously (in	\
++			 * critical sections protected by a different	\
++			 * lock than that held here), and finding this	\
++			 * flag set may cause the execution of the code	\
++			 * for updating parameters whose value may	\
++			 * depend also on bfqg->entity.new_##__VAR (in	\
++			 * __bfq_entity_update_weight_prio).		\
++			 * This barrier makes sure that the new value	\
++			 * of bfqg->entity.new_##__VAR is correctly	\
++			 * seen in that code.				\
++			 */						\
++			smp_wmb();                                      \
++			bfqg->entity.ioprio_changed = 1;                \
++		}							\
++	}								\
++	spin_unlock_irq(&bgrp->lock);					\
++									\
++out_unlock:								\
++	mutex_unlock(&bfqio_mutex);					\
++	return ret;							\
++}
++
++STORE_FUNCTION(weight, BFQ_MIN_WEIGHT, BFQ_MAX_WEIGHT);
++STORE_FUNCTION(ioprio, 0, IOPRIO_BE_NR - 1);
++STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
++#undef STORE_FUNCTION
++
++static struct cftype bfqio_files[] = {
++	{
++		.name = "weight",
++		.read_u64 = bfqio_cgroup_weight_read,
++		.write_u64 = bfqio_cgroup_weight_write,
++	},
++	{
++		.name = "ioprio",
++		.read_u64 = bfqio_cgroup_ioprio_read,
++		.write_u64 = bfqio_cgroup_ioprio_write,
++	},
++	{
++		.name = "ioprio_class",
++		.read_u64 = bfqio_cgroup_ioprio_class_read,
++		.write_u64 = bfqio_cgroup_ioprio_class_write,
++	},
++	{ },	/* terminate */
++};
++
++static struct cgroup_subsys_state *bfqio_create(struct cgroup_subsys_state
++						*parent_css)
++{
++	struct bfqio_cgroup *bgrp;
++
++	if (parent_css != NULL) {
++		bgrp = kzalloc(sizeof(*bgrp), GFP_KERNEL);
++		if (bgrp == NULL)
++			return ERR_PTR(-ENOMEM);
++	} else
++		bgrp = &bfqio_root_cgroup;
++
++	spin_lock_init(&bgrp->lock);
++	INIT_HLIST_HEAD(&bgrp->group_data);
++	bgrp->ioprio = BFQ_DEFAULT_GRP_IOPRIO;
++	bgrp->ioprio_class = BFQ_DEFAULT_GRP_CLASS;
++
++	return &bgrp->css;
++}
++
++/*
++ * We cannot support shared io contexts, as we have no means to support
++ * two tasks with the same ioc in two different groups without major rework
++ * of the main bic/bfqq data structures.  By now we allow a task to change
++ * its cgroup only if it's the only owner of its ioc; the drawback of this
++ * behavior is that a group containing a task that forked using CLONE_IO
++ * will not be destroyed until the tasks sharing the ioc die.
++ */
++static int bfqio_can_attach(struct cgroup_subsys_state *css,
++			    struct cgroup_taskset *tset)
++{
++	struct task_struct *task;
++	struct io_context *ioc;
++	int ret = 0;
++
++	cgroup_taskset_for_each(task, tset) {
++		/*
++		 * task_lock() is needed to avoid races with
++		 * exit_io_context()
++		 */
++		task_lock(task);
++		ioc = task->io_context;
++		if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
++			/*
++			 * ioc == NULL means that the task is either too
++			 * young or exiting: if it still has no ioc, the
++			 * ioc can't be shared; if the task is exiting,
++			 * the attach will fail anyway, no matter what we
++			 * return here.
++			 */
++			ret = -EINVAL;
++		task_unlock(task);
++		if (ret)
++			break;
++	}
++
++	return ret;
++}
++
++static void bfqio_attach(struct cgroup_subsys_state *css,
++			 struct cgroup_taskset *tset)
++{
++	struct task_struct *task;
++	struct io_context *ioc;
++	struct io_cq *icq;
++
++	/*
++	 * IMPORTANT NOTE: The move of more than one process at a time to a
++	 * new group has not yet been tested.
++	 */
++	cgroup_taskset_for_each(task, tset) {
++		ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
++		if (ioc) {
++			/*
++			 * Handle cgroup change here.
++			 */
++			rcu_read_lock();
++			hlist_for_each_entry_rcu(icq, &ioc->icq_list, ioc_node)
++				if (!strncmp(
++					icq->q->elevator->type->elevator_name,
++					"bfq", ELV_NAME_MAX))
++					bfq_bic_change_cgroup(icq_to_bic(icq),
++							      css);
++			rcu_read_unlock();
++			put_io_context(ioc);
++		}
++	}
++}
++
++static void bfqio_destroy(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	/*
++	 * Since we are destroying the cgroup, there are no more tasks
++	 * referencing it, and all the RCU grace periods that may have
++	 * referenced it are ended (as the destruction of the parent
++	 * cgroup is RCU-safe); bgrp->group_data will not be accessed by
++	 * anything else and we don't need any synchronization.
++	 */
++	hlist_for_each_entry_safe(bfqg, tmp, &bgrp->group_data, group_node)
++		bfq_destroy_group(bgrp, bfqg);
++
++	BUG_ON(!hlist_empty(&bgrp->group_data));
++
++	kfree(bgrp);
++}
++
++static int bfqio_css_online(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++	mutex_lock(&bfqio_mutex);
++	bgrp->online = true;
++	mutex_unlock(&bfqio_mutex);
++
++	return 0;
++}
++
++static void bfqio_css_offline(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++	mutex_lock(&bfqio_mutex);
++	bgrp->online = false;
++	mutex_unlock(&bfqio_mutex);
++}
++
++struct cgroup_subsys bfqio_cgrp_subsys = {
++	.css_alloc = bfqio_create,
++	.css_online = bfqio_css_online,
++	.css_offline = bfqio_css_offline,
++	.can_attach = bfqio_can_attach,
++	.attach = bfqio_attach,
++	.css_free = bfqio_destroy,
++	.legacy_cftypes = bfqio_files,
++};
++#else
++static inline void bfq_init_entity(struct bfq_entity *entity,
++				   struct bfq_group *bfqg)
++{
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class;
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static inline struct bfq_group *
++bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	return bfqd->root_group;
++}
++
++static inline void bfq_bfqq_move(struct bfq_data *bfqd,
++				 struct bfq_queue *bfqq,
++				 struct bfq_entity *entity,
++				 struct bfq_group *bfqg)
++{
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++	bfq_put_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++	kfree(bfqd->root_group);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++	struct bfq_group *bfqg;
++	int i;
++
++	bfqg = kzalloc_node(sizeof(*bfqg), GFP_KERNEL, node);
++	if (bfqg == NULL)
++		return NULL;
++
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++	return bfqg;
++}
++#endif
+diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
+new file mode 100644
+index 0000000..7f6b000
+--- /dev/null
++++ b/block/bfq-ioc.c
+@@ -0,0 +1,36 @@
++/*
++ * BFQ: I/O context handling.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++/**
++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
++ * @icq: the iocontext queue.
++ */
++static inline struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
++{
++	/* bic->icq is the first member, %NULL will convert to %NULL */
++	return container_of(icq, struct bfq_io_cq, icq);
++}
++
++/**
++ * bfq_bic_lookup - search into @ioc a bic associated to @bfqd.
++ * @bfqd: the lookup key.
++ * @ioc: the io_context of the process doing I/O.
++ *
++ * Queue lock must be held.
++ */
++static inline struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
++					       struct io_context *ioc)
++{
++	if (ioc)
++		return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
++	return NULL;
++}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+new file mode 100644
+index 0000000..97ee934
+--- /dev/null
++++ b/block/bfq-iosched.c
+@@ -0,0 +1,3902 @@
++/*
++ * Budget Fair Queueing (BFQ) disk scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based on
++ * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
++ * measured in number of sectors, to processes instead of time slices. The
++ * device is not granted to the in-service process for a given time slice,
++ * but until it has exhausted its assigned budget. This change from the time
++ * to the service domain allows BFQ to distribute the device throughput
++ * among processes as desired, without any distortion due to ZBR, workload
++ * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
++ * called B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated to processes. Thanks to the
++ * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
++ * I/O-bound processes issuing sequential requests (to boost the
++ * throughput), and yet guarantee a low latency to interactive and soft
++ * real-time applications.
++ *
++ * BFQ is described in [1], which also contains a reference to the
++ * initial, more theoretical paper on BFQ. The interested reader can find
++ * in the latter paper full details on the main algorithm, as well as
++ * formulas of the guarantees and formal proofs of all the properties.
++ * With respect to the version of BFQ presented in these papers, this
++ * implementation adds a few more heuristics, such as the one that
++ * guarantees a low latency to soft real-time applications, and a
++ * hierarchical extension based on H-WF2Q+.
++ *
++ * B-WF2Q+ is based on WF2Q+, that is described in [2], together with
++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
++ * complexity derives from the one introduced with EEVDF in [3].
++ *
++ * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
++ *     with the BFQ Disk I/O Scheduler'',
++ *     Proceedings of the 5th Annual International Systems and Storage
++ *     Conference (SYSTOR '12), June 2012.
++ *
++ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
++ *
++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
++ *     Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
++ *     Oct 1997.
++ *
++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
++ *
++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
++ *     First: A Flexible and Accurate Mechanism for Proportional Share
++ *     Resource Allocation,'' technical report.
++ *
++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
++ */
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/cgroup.h>
++#include <linux/elevator.h>
++#include <linux/jiffies.h>
++#include <linux/rbtree.h>
++#include <linux/ioprio.h>
++#include "bfq.h"
++#include "blk.h"
++
++/* Max number of dispatches in one round of service. */
++static const int bfq_quantum = 4;
++
++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++
++/* Maximum backwards seek, in KiB. */
++static const int bfq_back_max = 16 * 1024;
++
++/* Penalty of a backwards seek, in number of sectors. */
++static const int bfq_back_penalty = 2;
++
++/* Idling period duration, in jiffies. */
++static int bfq_slice_idle = HZ / 125;
++
++/* Default maximum budget values, in sectors and number of requests. */
++static const int bfq_default_max_budget = 16 * 1024;
++static const int bfq_max_budget_async_rq = 4;
++
++/*
++ * Async to sync throughput distribution is controlled as follows:
++ * when an async request is served, the entity is charged the number
++ * of sectors of the request, multiplied by the factor below
++ */
++static const int bfq_async_charge_factor = 10;
++
++/* Default timeout values, in jiffies, approximating CFQ defaults. */
++static const int bfq_timeout_sync = HZ / 8;
++static int bfq_timeout_async = HZ / 25;
++
++struct kmem_cache *bfq_pool;
++
++/* Below this threshold (in ms), we consider thinktime immediate. */
++#define BFQ_MIN_TT		2
++
++/* hw_tag detection: parallel requests threshold and min samples needed. */
++#define BFQ_HW_QUEUE_THRESHOLD	4
++#define BFQ_HW_QUEUE_SAMPLES	32
++
++#define BFQQ_SEEK_THR	 (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++
++/* Min samples used for peak rate estimation (for autotuning). */
++#define BFQ_PEAK_RATE_SAMPLES	32
++
++/* Shift used for peak rate fixed precision calculations. */
++#define BFQ_RATE_SHIFT		16
++
++/*
++ * By default, BFQ computes the duration of the weight raising for
++ * interactive applications automatically, using the following formula:
++ * duration = (R / r) * T, where r is the peak rate of the device, and
++ * R and T are two reference parameters.
++ * In particular, R is the peak rate of the reference device (see below),
++ * and T is a reference time: given the systems that are likely to be
++ * installed on the reference device according to its speed class, T is
++ * about the maximum time needed, under BFQ and while reading two files in
++ * parallel, to load typical large applications on these systems.
++ * In practice, the slower/faster the device at hand is, the more/less it
++ * takes to load applications with respect to the reference device.
++ * Accordingly, the longer/shorter BFQ grants weight raising to interactive
++ * applications.
++ *
++ * BFQ uses four different reference pairs (R, T), depending on:
++ * . whether the device is rotational or non-rotational;
++ * . whether the device is slow, such as old or portable HDDs, as well as
++ *   SD cards, or fast, such as newer HDDs and SSDs.
++ *
++ * The device's speed class is dynamically (re)detected in
++ * bfq_update_peak_rate() every time the estimated peak rate is updated.
++ *
++ * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
++ * are the reference values for a slow/fast rotational device, whereas
++ * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
++ * a slow/fast non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes.
++ * Both the reference peak rates and the thresholds are measured in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
++ */
++static int R_slow[2] = {1536, 10752};
++static int R_fast[2] = {17415, 34791};
++/*
++ * To improve readability, a conversion function is used to initialize the
++ * following arrays, which entails that they can be initialized only in a
++ * function.
++ */
++static int T_slow[2];
++static int T_fast[2];
++static int device_speed_thresh[2];
++
++#define BFQ_SERVICE_TREE_INIT	((struct bfq_service_tree)		\
++				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
++
++#define RQ_BIC(rq)		((struct bfq_io_cq *) (rq)->elv.priv[0])
++#define RQ_BFQQ(rq)		((rq)->elv.priv[1])
++
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd);
++
++#include "bfq-ioc.c"
++#include "bfq-sched.c"
++#include "bfq-cgroup.c"
++
++#define bfq_class_idle(bfqq)	((bfqq)->entity.ioprio_class ==\
++				 IOPRIO_CLASS_IDLE)
++#define bfq_class_rt(bfqq)	((bfqq)->entity.ioprio_class ==\
++				 IOPRIO_CLASS_RT)
++
++#define bfq_sample_valid(samples)	((samples) > 80)
++
++/*
++ * We regard a request as SYNC if it is either a read or has the SYNC bit
++ * set (in which case it could also be a direct WRITE).
++ */
++static inline int bfq_bio_sync(struct bio *bio)
++{
++	if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
++		return 1;
++
++	return 0;
++}
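++
++/*
++ * For instance, under the rule above:
++ *
++ *   bio_data_dir(bio) == READ,  bi_rw without REQ_SYNC  -> sync  (1)
++ *   bio_data_dir(bio) == WRITE, bi_rw with REQ_SYNC     -> sync  (1)
++ *   bio_data_dir(bio) == WRITE, bi_rw without REQ_SYNC  -> async (0)
++ */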
++
++/*
++ * Scheduler run of queue, if there are requests pending and no one in the
++ * driver that will restart queueing.
++ */
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd)
++{
++	if (bfqd->queued != 0) {
++		bfq_log(bfqd, "schedule dispatch");
++		kblockd_schedule_work(&bfqd->unplug_work);
++	}
++}
++
++/*
++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
++ * We choose the request that is closest to the head right now.  Distance
++ * behind the head is penalized and only allowed to a certain extent.
++ */
++static struct request *bfq_choose_req(struct bfq_data *bfqd,
++				      struct request *rq1,
++				      struct request *rq2,
++				      sector_t last)
++{
++	sector_t s1, s2, d1 = 0, d2 = 0;
++	unsigned long back_max;
++#define BFQ_RQ1_WRAP	0x01 /* request 1 wraps */
++#define BFQ_RQ2_WRAP	0x02 /* request 2 wraps */
++	unsigned wrap = 0; /* bit mask: requests behind the disk head? */
++
++	if (rq1 == NULL || rq1 == rq2)
++		return rq2;
++	if (rq2 == NULL)
++		return rq1;
++
++	if (rq_is_sync(rq1) && !rq_is_sync(rq2))
++		return rq1;
++	else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
++		return rq2;
++	if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
++		return rq1;
++	else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
++		return rq2;
++
++	s1 = blk_rq_pos(rq1);
++	s2 = blk_rq_pos(rq2);
++
++	/*
++	 * By definition, 1KiB is 2 sectors.
++	 */
++	back_max = bfqd->bfq_back_max * 2;
++
++	/*
++	 * Strict one way elevator _except_ in the case where we allow
++	 * short backward seeks which are biased as twice the cost of a
++	 * similar forward seek.
++	 */
++	if (s1 >= last)
++		d1 = s1 - last;
++	else if (s1 + back_max >= last)
++		d1 = (last - s1) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ1_WRAP;
++
++	if (s2 >= last)
++		d2 = s2 - last;
++	else if (s2 + back_max >= last)
++		d2 = (last - s2) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ2_WRAP;
++
++	/* Found required data */
++
++	/*
++	 * By doing switch() on the bit mask "wrap" we avoid having to
++	 * check two variables for all permutations: --> faster!
++	 */
++	switch (wrap) {
++	case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
++		if (d1 < d2)
++			return rq1;
++		else if (d2 < d1)
++			return rq2;
++		else {
++			if (s1 >= s2)
++				return rq1;
++			else
++				return rq2;
++		}
++
++	case BFQ_RQ2_WRAP:
++		return rq1;
++	case BFQ_RQ1_WRAP:
++		return rq2;
++	case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
++	default:
++		/*
++		 * Since both rqs are wrapped,
++		 * start with the one that's further behind head
++		 * (--> only *one* back seek required),
++		 * since back seek takes more time than forward.
++		 */
++		if (s1 <= s2)
++			return rq1;
++		else
++			return rq2;
++	}
++}
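++
++/*
++ * Worked example of the distance computation above, assuming the module
++ * defaults are in effect (bfq_back_max = 16 * 1024 KiB, so
++ * back_max = 32768 sectors, and bfq_back_penalty = 2):
++ *
++ *   last = 1000, s1 = 1100 -> d1 = 100            (forward seek)
++ *   last = 1000, s2 = 900  -> d2 = 100 * 2 = 200  (short backward seek)
++ *
++ * Neither request wraps, so the switch takes the wrap == 0 branch and
++ * rq1 is chosen, as d1 < d2.
++ */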
++
++static struct bfq_queue *
++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
++		     sector_t sector, struct rb_node **ret_parent,
++		     struct rb_node ***rb_link)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *bfqq = NULL;
++
++	parent = NULL;
++	p = &root->rb_node;
++	while (*p) {
++		struct rb_node **n;
++
++		parent = *p;
++		bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++
++		/*
++		 * Sort strictly based on sector. Smallest to the left,
++		 * largest to the right.
++		 */
++		if (sector > blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_right;
++		else if (sector < blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_left;
++		else
++			break;
++		p = n;
++		bfqq = NULL;
++	}
++
++	*ret_parent = parent;
++	if (rb_link)
++		*rb_link = p;
++
++	bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
++		(long long unsigned)sector,
++		bfqq != NULL ? bfqq->pid : 0);
++
++	return bfqq;
++}
++
++static void bfq_rq_pos_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *__bfqq;
++
++	if (bfqq->pos_root != NULL) {
++		rb_erase(&bfqq->pos_node, bfqq->pos_root);
++		bfqq->pos_root = NULL;
++	}
++
++	if (bfq_class_idle(bfqq))
++		return;
++	if (!bfqq->next_rq)
++		return;
++
++	bfqq->pos_root = &bfqd->rq_pos_tree;
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
++			blk_rq_pos(bfqq->next_rq), &parent, &p);
++	if (__bfqq == NULL) {
++		rb_link_node(&bfqq->pos_node, parent, p);
++		rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
++	} else
++		bfqq->pos_root = NULL;
++}
++
++/*
++ * Tell whether there are active queues or groups with differentiated weights.
++ */
++static inline bool bfq_differentiated_weights(struct bfq_data *bfqd)
++{
++	BUG_ON(!bfqd->hw_tag);
++	/*
++	 * For weights to differ, at least one of the trees must contain
++	 * at least two nodes.
++	 */
++	return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
++		(bfqd->queue_weights_tree.rb_node->rb_left ||
++		 bfqd->queue_weights_tree.rb_node->rb_right)
++#ifdef CONFIG_CGROUP_BFQIO
++	       ) ||
++	       (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) &&
++		(bfqd->group_weights_tree.rb_node->rb_left ||
++		 bfqd->group_weights_tree.rb_node->rb_right)
++#endif
++	       );
++}
++
++/*
++ * If the weight-counter tree passed as input contains no counter for
++ * the weight of the input entity, then add that counter; otherwise just
++ * increment the existing counter.
++ *
++ * Note that weight-counter trees contain few nodes in mostly symmetric
++ * scenarios. For example, if all queues have the same weight, then the
++ * weight-counter tree for the queues may contain at most one node.
++ * This holds even if low_latency is on, because weight-raised queues
++ * are not inserted in the tree.
++ * In most scenarios, the rate at which nodes are created/destroyed
++ * should be low too.
++ */
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root)
++{
++	struct rb_node **new = &(root->rb_node), *parent = NULL;
++
++	/*
++	 * Do not insert if:
++	 * - the device does not support queueing;
++	 * - the entity is already associated with a counter, which happens if:
++	 *   1) the entity is associated with a queue, 2) a request arrival
++	 *   has caused the queue to become both non-weight-raised (and hence
++	 *   change its weight) and backlogged; each of these two events
++	 *   causes an invocation of this function, and
++	 *   3) this is the invocation of this function caused by the second
++	 *   event. This second invocation is actually useless, and we handle
++	 *   this fact by exiting immediately. More efficient or clearer
++	 *   solutions might possibly be adopted.
++	 */
++	if (!bfqd->hw_tag || entity->weight_counter)
++		return;
++
++	while (*new) {
++		struct bfq_weight_counter *__counter = container_of(*new,
++						struct bfq_weight_counter,
++						weights_node);
++		parent = *new;
++
++		if (entity->weight == __counter->weight) {
++			entity->weight_counter = __counter;
++			goto inc_counter;
++		}
++		if (entity->weight < __counter->weight)
++			new = &((*new)->rb_left);
++		else
++			new = &((*new)->rb_right);
++	}
++
++	entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
++					 GFP_ATOMIC);
++	if (entity->weight_counter == NULL)
++		/* Allocation failed: the entity simply stays uncounted. */
++		return;
++	entity->weight_counter->weight = entity->weight;
++	rb_link_node(&entity->weight_counter->weights_node, parent, new);
++	rb_insert_color(&entity->weight_counter->weights_node, root);
++
++inc_counter:
++	entity->weight_counter->num_active++;
++}
++
++/*
++ * Decrement the weight counter associated with the entity, and, if the
++ * counter reaches 0, remove the counter from the tree.
++ * See the comments to the function bfq_weights_tree_add() for considerations
++ * about overhead.
++ */
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root)
++{
++	/*
++	 * Check whether the entity is actually associated with a counter.
++	 * In fact, the device may not be considered NCQ-capable for a while,
++	 * which implies that no insertion in the weight trees is performed,
++	 * after which the device may start to be deemed NCQ-capable, and hence
++	 * this function may start to be invoked. This may cause the function
++	 * to be invoked for entities that are not associated with any counter.
++	 */
++	if (!entity->weight_counter)
++		return;
++
++	BUG_ON(RB_EMPTY_ROOT(root));
++	BUG_ON(entity->weight_counter->weight != entity->weight);
++
++	BUG_ON(!entity->weight_counter->num_active);
++	entity->weight_counter->num_active--;
++	if (entity->weight_counter->num_active > 0)
++		goto reset_entity_pointer;
++
++	rb_erase(&entity->weight_counter->weights_node, root);
++	kfree(entity->weight_counter);
++
++reset_entity_pointer:
++	entity->weight_counter = NULL;
++}
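++
++/*
++ * Example of the weight-counter lifecycle implied by the two functions
++ * above, assuming the device is deemed NCQ-capable (bfqd->hw_tag set):
++ * with three backlogged queues of weight 100, 100 and 200, the tree
++ * holds two counters, {weight = 100, num_active = 2} and
++ * {weight = 200, num_active = 1}. When one of the weight-100 queues
++ * becomes idle, its counter just drops to num_active = 1; when the
++ * weight-200 queue becomes idle, its counter reaches zero and the node
++ * is erased from the tree.
++ */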
++
++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq,
++					struct request *last)
++{
++	struct rb_node *rbnext = rb_next(&last->rb_node);
++	struct rb_node *rbprev = rb_prev(&last->rb_node);
++	struct request *next = NULL, *prev = NULL;
++
++	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
++
++	if (rbprev != NULL)
++		prev = rb_entry_rq(rbprev);
++
++	if (rbnext != NULL)
++		next = rb_entry_rq(rbnext);
++	else {
++		rbnext = rb_first(&bfqq->sort_list);
++		if (rbnext && rbnext != &last->rb_node)
++			next = rb_entry_rq(rbnext);
++	}
++
++	return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
++}
++
++/* see the definition of bfq_async_charge_factor for details */
++static inline unsigned long bfq_serv_to_charge(struct request *rq,
++					       struct bfq_queue *bfqq)
++{
++	return blk_rq_sectors(rq) *
++		(1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
++		bfq_async_charge_factor));
++}
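++
++/*
++ * For example, with bfq_async_charge_factor = 10, a 64-sector request
++ * is charged 64 sectors if the queue is sync or weight-raised
++ * (wr_coeff > 1), and 64 * (1 + 10) = 704 sectors if the queue is
++ * async and not weight-raised.
++ */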
++
++/**
++ * bfq_updated_next_req - update the queue after a new next_rq selection.
++ * @bfqd: the device data the queue belongs to.
++ * @bfqq: the queue to update.
++ *
++ * If the first request of a queue changes we make sure that the queue
++ * has enough budget to serve at least its first request (if the
++ * request has grown).  We do this because if the queue has not enough
++ * budget for its first request, it has to go through two dispatch
++ * rounds to actually get it dispatched.
++ */
++static void bfq_updated_next_req(struct bfq_data *bfqd,
++				 struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	struct request *next_rq = bfqq->next_rq;
++	unsigned long new_budget;
++
++	if (next_rq == NULL)
++		return;
++
++	if (bfqq == bfqd->in_service_queue)
++		/*
++		 * In order not to break guarantees, budgets cannot be
++		 * changed after an entity has been selected.
++		 */
++		return;
++
++	BUG_ON(entity->tree != &st->active);
++	BUG_ON(entity == entity->sched_data->in_service_entity);
++
++	new_budget = max_t(unsigned long, bfqq->max_budget,
++			   bfq_serv_to_charge(next_rq, bfqq));
++	if (entity->budget != new_budget) {
++		entity->budget = new_budget;
++		bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
++					 new_budget);
++		bfq_activate_bfqq(bfqd, bfqq);
++	}
++}
++
++static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
++{
++	u64 dur;
++
++	if (bfqd->bfq_wr_max_time > 0)
++		return bfqd->bfq_wr_max_time;
++
++	dur = bfqd->RT_prod;
++	do_div(dur, bfqd->peak_rate);
++
++	return dur;
++}
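++
++/*
++ * A sketch of the duration = (R / r) * T formula above, assuming
++ * bfqd->RT_prod caches the product R * T for the current speed class
++ * (as the field name suggests): with R = 17415 and an estimated peak
++ * rate r equal to R, duration = (R * T) / r = T, i.e., a device exactly
++ * as fast as the reference one grants the reference weight-raising
++ * duration, while a device twice as fast grants half of it.
++ */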
++
++/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
++static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq)
++{
++	struct bfq_queue *item;
++	struct hlist_node *n;
++
++	hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
++		hlist_del_init(&item->burst_list_node);
++	hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++	bfqd->burst_size = 1;
++}
++
++/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
++static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	/* Increment burst size to take into account also bfqq */
++	bfqd->burst_size++;
++
++	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
++		struct bfq_queue *pos, *bfqq_item;
++		struct hlist_node *n;
++
++		/*
++		 * Enough queues have been activated shortly after each
++		 * other to consider this burst as large.
++		 */
++		bfqd->large_burst = true;
++
++		/*
++		 * We can now mark all queues in the burst list as
++		 * belonging to a large burst.
++		 */
++		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
++				     burst_list_node)
++			bfq_mark_bfqq_in_large_burst(bfqq_item);
++		bfq_mark_bfqq_in_large_burst(bfqq);
++
++		/*
++		 * From now on, and until the current burst finishes, any
++		 * new queue being activated shortly after the last queue
++		 * was inserted in the burst can be immediately marked as
++		 * belonging to a large burst. So the burst list is not
++		 * needed any more. Remove it.
++		 */
++		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
++					  burst_list_node)
++			hlist_del_init(&pos->burst_list_node);
++	} else /* burst not yet large: add bfqq to the burst list */
++		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++}
++
++/*
++ * If many queues happen to become active shortly after each other, then,
++ * to help the processes associated to these queues get their job done as
++ * soon as possible, it is usually better to not grant either weight-raising
++ * or device idling to these queues. In this comment we describe, firstly,
++ * the reasons why this fact holds, and, secondly, the next function, which
++ * implements the main steps needed to properly mark these queues so that
++ * they can then be treated in a different way.
++ *
++ * As for the terminology, we say that a queue becomes active, i.e.,
++ * switches from idle to backlogged, either when it is created (as a
++ * consequence of the arrival of an I/O request), or, if already existing,
++ * when a new request for the queue arrives while the queue is idle.
++ * Bursts of activations, i.e., activations of different queues occurring
++ * shortly after each other, are typically caused by services or applications
++ * that spawn or reactivate many parallel threads/processes. Examples are
++ * systemd during boot or git grep.
++ *
++ * These services or applications benefit mostly from a high throughput:
++ * the quicker the requests of the activated queues are cumulatively served,
++ * the sooner the target job of these queues gets completed. As a consequence,
++ * weight-raising any of these queues, which also implies idling the device
++ * for it, is almost always counterproductive: in most cases it just lowers
++ * throughput.
++ *
++ * On the other hand, a burst of activations may also be caused by the start
++ * of an application that does not consist of a lot of parallel I/O-bound
++ * threads. In fact, with a complex application, the burst may be just a
++ * consequence of the fact that several processes need to be executed to
++ * start up the application. To start an application as quickly as possible,
++ * the best thing to do is to privilege the I/O related to the application
++ * with respect to all other I/O. Therefore, the best strategy to start an
++ * application that causes a burst of activations as quickly as possible is
++ * to weight-raise all the queues activated during the burst. This is the
++ * exact opposite of the best strategy for the other type of bursts.
++ *
++ * In the end, to take the best action for each of the two cases, the two
++ * types of bursts need to be distinguished. Fortunately, this seems
++ * relatively easy to do, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that bursts with a larger size
++ * than that threshold are apparently caused only by services or commands
++ * such as systemd or git grep. For brevity, hereafter we call just 'large'
++ * these bursts. BFQ *does not* weight-raise queues whose activations occur
++ * in a large burst. In addition, for each of these queues BFQ performs or
++ * does not perform idling depending on which choice boosts the throughput
++ * most. The exact choice depends on the device and request pattern at
++ * hand.
++ *
++ * Turning back to the next function, it implements all the steps needed
++ * to detect the occurrence of a large burst and to properly mark all the
++ * queues belonging to it (so that they can then be treated in a different
++ * way). This goal is achieved by maintaining a special "burst list" that
++ * holds, temporarily, the queues that belong to the burst in progress. The
++ * list is then used to mark these queues as belonging to a large burst if
++ * the burst does become large. The main steps are the following.
++ *
++ * . when the very first queue is activated, the queue is inserted into the
++ *   list (as it could be the first queue in a possible burst)
++ *
++ * . if the current burst has not yet become large, and a queue Q that does
++ *   not yet belong to the burst is activated shortly after the last time
++ *   at which a new queue entered the burst list, then the function appends
++ *   Q to the burst list
++ *
++ * . if, as a consequence of the previous step, the burst size reaches
++ *   the large-burst threshold, then
++ *
++ *     . all the queues in the burst list are marked as belonging to a
++ *       large burst
++ *
++ *     . the burst list is deleted; in fact, the burst list already served
++ *       its purpose (temporarily keeping track of the queues in a burst,
++ *       so as to be able to mark them as belonging to a large burst in the
++ *       previous sub-step), and is not needed any more
++ *
++ *     . the device enters a large-burst mode
++ *
++ * . if a queue Q that does not belong to the burst is activated while
++ *   the device is in large-burst mode and shortly after the last time
++ *   at which a queue either entered the burst list or was marked as
++ *   belonging to the current large burst, then Q is immediately marked
++ *   as belonging to a large burst.
++ *
++ * . if a queue Q that does not belong to the burst is activated a while
++ *   later than the last time at which a queue either entered the burst
++ *   list or was marked as belonging to the current large burst (i.e.,
++ *   not shortly after it), then the current burst is deemed finished and:
++ *
++ *        . the large-burst mode is reset if set
++ *
++ *        . the burst list is emptied
++ *
++ *        . Q is inserted in the burst list, as Q may be the first queue
++ *          in a possible new burst (then the burst list contains just Q
++ *          after this step).
++ */
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			     bool idle_for_long_time)
++{
++	/*
++	 * If bfqq happened to be activated in a burst, but has been idle
++	 * for at least as long as an interactive queue, then we assume
++	 * that, in the overall I/O initiated in the burst, the I/O
++	 * associated to bfqq is finished. So bfqq does not need to be
++	 * treated as a queue belonging to a burst anymore. Accordingly,
++	 * we reset bfqq's in_large_burst flag if set, and remove bfqq
++	 * from the burst list if it's there. We do not, however, decrement
++	 * burst_size, because the fact that bfqq no longer needs to belong
++	 * to the burst list does not invalidate the fact that
++	 * bfqq may have been activated during the current burst.
++	 */
++	if (idle_for_long_time) {
++		hlist_del_init(&bfqq->burst_list_node);
++		bfq_clear_bfqq_in_large_burst(bfqq);
++	}
++
++	/*
++	 * If bfqq is already in the burst list or is part of a large
++	 * burst, then there is nothing else to do.
++	 */
++	if (!hlist_unhashed(&bfqq->burst_list_node) ||
++	    bfq_bfqq_in_large_burst(bfqq))
++		return;
++
++	/*
++	 * If bfqq's activation happens late enough, then the current
++	 * burst is finished, and related data structures must be reset.
++	 *
++	 * In this respect, consider the special case where bfqq is the very
++	 * first queue being activated. In this case, last_ins_in_burst is
++	 * not yet significant when we get here. But it is easy to verify
++	 * that, whether or not the following condition is true, bfqq will
++	 * end up being inserted into the burst list. In particular the
++	 * list will happen to contain only bfqq. And this is exactly what
++	 * has to happen, as bfqq may be the first queue in a possible
++	 * burst.
++	 */
++	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
++	    bfqd->bfq_burst_interval)) {
++		bfqd->large_burst = false;
++		bfq_reset_burst_list(bfqd, bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then bfqq is being activated shortly after the
++	 * last queue. So, if the current burst is also large, we can mark
++	 * bfqq as belonging to this large burst immediately.
++	 */
++	if (bfqd->large_burst) {
++		bfq_mark_bfqq_in_large_burst(bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then a large-burst state has not yet been
++	 * reached, but bfqq is being activated shortly after the last
++	 * queue. Then we add bfqq to the burst.
++	 */
++	bfq_add_to_burst(bfqd, bfqq);
++}
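++
++/*
++ * A possible timeline for the logic above, assuming a large-burst
++ * threshold of 4 (the actual value of bfqd->bfq_large_burst_thresh is
++ * set elsewhere) and activations A1..A5 each arriving within
++ * bfq_burst_interval of the previous one:
++ *
++ *   A1: burst list = {Q1}, burst_size = 1
++ *   A2, A3: queues appended, burst_size = 3
++ *   A4: burst_size reaches 4 -> large_burst = true, Q1..Q4 marked
++ *       in_large_burst, burst list emptied
++ *   A5: device still in large-burst mode -> Q5 marked immediately
++ *
++ * A later activation arriving after more than bfq_burst_interval
++ * resets large_burst and restarts the list with just the new queue.
++ */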
++
++static void bfq_add_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_data *bfqd = bfqq->bfqd;
++	struct request *next_rq, *prev;
++	unsigned long old_wr_coeff = bfqq->wr_coeff;
++	bool interactive = false;
++
++	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++	bfqq->queued[rq_is_sync(rq)]++;
++	bfqd->queued++;
++
++	elv_rb_add(&bfqq->sort_list, rq);
++
++	/*
++	 * Check if this request is a better next-serve candidate.
++	 */
++	prev = bfqq->next_rq;
++	next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
++	BUG_ON(next_rq == NULL);
++	bfqq->next_rq = next_rq;
++
++	/*
++	 * Adjust priority tree position, if next_rq changes.
++	 */
++	if (prev != bfqq->next_rq)
++		bfq_rq_pos_tree_add(bfqd, bfqq);
++
++	if (!bfq_bfqq_busy(bfqq)) {
++		bool soft_rt,
++		     idle_for_long_time = time_is_before_jiffies(
++						bfqq->budget_timeout +
++						bfqd->bfq_wr_min_idle_time);
++
++		if (bfq_bfqq_sync(bfqq)) {
++			bool already_in_burst =
++			   !hlist_unhashed(&bfqq->burst_list_node) ||
++			   bfq_bfqq_in_large_burst(bfqq);
++			bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
++			/*
++			 * If bfqq was not already in the current burst,
++			 * then, at this point, bfqq either has been
++			 * added to the current burst or has caused the
++			 * current burst to terminate. In particular, in
++			 * the second case, bfqq has become the first
++			 * queue in a possible new burst.
++			 * In both cases last_ins_in_burst needs to be
++			 * moved forward.
++			 */
++			if (!already_in_burst)
++				bfqd->last_ins_in_burst = jiffies;
++		}
++
++		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++			!bfq_bfqq_in_large_burst(bfqq) &&
++			time_is_before_jiffies(bfqq->soft_rt_next_start);
++		interactive = !bfq_bfqq_in_large_burst(bfqq) &&
++			      idle_for_long_time;
++		entity->budget = max_t(unsigned long, bfqq->max_budget,
++				       bfq_serv_to_charge(next_rq, bfqq));
++
++		if (!bfq_bfqq_IO_bound(bfqq)) {
++			if (time_before(jiffies,
++					RQ_BIC(rq)->ttime.last_end_request +
++					bfqd->bfq_slice_idle)) {
++				bfqq->requests_within_timer++;
++				if (bfqq->requests_within_timer >=
++				    bfqd->bfq_requests_within_timer)
++					bfq_mark_bfqq_IO_bound(bfqq);
++			} else
++				bfqq->requests_within_timer = 0;
++		}
++
++		if (!bfqd->low_latency)
++			goto add_bfqq_busy;
++
++		/*
++		 * If the queue is not being boosted and has been idle
++		 * for enough time, start a weight-raising period
++		 */
++		if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais starting at %lu, rais_max_time %u",
++				     jiffies,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++		} else if (old_wr_coeff > 1) {
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else if (bfq_bfqq_in_large_burst(bfqq) ||
++				 (bfqq->wr_cur_max_time ==
++				  bfqd->bfq_wr_rt_max_time &&
++				  !soft_rt)) {
++				bfqq->wr_coeff = 1;
++				bfq_log_bfqq(bfqd, bfqq,
++					"wrais ending at %lu, rais_max_time %u",
++					jiffies,
++					jiffies_to_msecs(bfqq->
++						wr_cur_max_time));
++			} else if (time_before(
++					bfqq->last_wr_start_finish +
++					bfqq->wr_cur_max_time,
++					jiffies +
++					bfqd->bfq_wr_rt_max_time) &&
++				   soft_rt) {
++				/*
++				 * The remaining weight-raising time is lower
++				 * than bfqd->bfq_wr_rt_max_time, which
++				 * means that the application is enjoying
++				 * weight raising either because deemed soft-
++				 * rt in the near past, or because deemed
++				 * interactive long ago. In both cases,
++				 * resetting now the current remaining weight-
++				 * raising time for the application to the
++				 * weight-raising duration for soft rt
++				 * applications would not cause any latency
++				 * increase for the application (as the new
++				 * duration would be higher than the remaining
++				 * time).
++				 *
++				 * In addition, the application is now meeting
++				 * the requirements for being deemed soft rt.
++				 * In the end we can correctly and safely
++				 * (re)charge the weight-raising duration for
++				 * the application with the weight-raising
++				 * duration for soft rt applications.
++				 *
++				 * In particular, doing this recharge now, i.e.,
++				 * before the weight-raising period for the
++				 * application finishes, reduces the probability
++				 * of the following negative scenario:
++				 * 1) the weight of a soft rt application is
++				 *    raised at startup (as for any newly
++				 *    created application),
++				 * 2) since the application is not interactive,
++				 *    at a certain time weight-raising is
++				 *    stopped for the application,
++				 * 3) at that time the application happens to
++				 *    still have pending requests, and hence
++				 *    is destined to not have a chance to be
++				 *    deemed soft rt before these requests are
++				 *    completed (see the comments to the
++				 *    function bfq_bfqq_softrt_next_start()
++				 *    for details on soft rt detection),
++				 * 4) these pending requests experience a high
++				 *    latency because the application is not
++				 *    weight-raised while they are pending.
++				 */
++				bfqq->last_wr_start_finish = jiffies;
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			}
++		}
++		if (old_wr_coeff != bfqq->wr_coeff)
++			entity->ioprio_changed = 1;
++add_bfqq_busy:
++		bfqq->last_idle_bklogged = jiffies;
++		bfqq->service_from_backlogged = 0;
++		bfq_clear_bfqq_softrt_update(bfqq);
++		bfq_add_bfqq_busy(bfqd, bfqq);
++	} else {
++		if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
++		    time_is_before_jiffies(
++				bfqq->last_wr_start_finish +
++				bfqd->bfq_wr_min_inter_arr_async)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++
++			bfqd->wr_busy_queues++;
++			entity->ioprio_changed = 1;
++			bfq_log_bfqq(bfqd, bfqq,
++			    "non-idle wrais starting at %lu, rais_max_time %u",
++			    jiffies,
++			    jiffies_to_msecs(bfqq->wr_cur_max_time));
++		}
++		if (prev != bfqq->next_rq)
++			bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	if (bfqd->low_latency &&
++		(old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
++		bfqq->last_wr_start_finish = jiffies;
++}
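++
++/*
++ * Summary of the weight-raising decisions taken above when an idle
++ * queue becomes busy and low_latency is set:
++ *
++ *   - not raised and (interactive or soft_rt)
++ *       -> start raising, with the interactive or the soft-rt
++ *          duration, respectively;
++ *   - already raised and interactive again
++ *       -> recharge the interactive duration;
++ *   - already raised, but in a large burst, or soft-rt raising
++ *     expired without the queue being soft_rt again
++ *       -> end raising (wr_coeff back to 1);
++ *   - already raised, soft_rt, and little raising time left
++ *       -> recharge the soft-rt duration.
++ */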
++
++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
++					  struct bio *bio)
++{
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (bic == NULL)
++		return NULL;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	if (bfqq != NULL)
++		return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
++
++	return NULL;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	bfqd->rq_in_driver++;
++	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
++		(long long unsigned)bfqd->last_position);
++}
++
++static inline void bfq_deactivate_request(struct request_queue *q,
++					  struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	BUG_ON(bfqd->rq_in_driver == 0);
++	bfqd->rq_in_driver--;
++}
++
++static void bfq_remove_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	const int sync = rq_is_sync(rq);
++
++	if (bfqq->next_rq == rq) {
++		bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
++		bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	list_del_init(&rq->queuelist);
++	BUG_ON(bfqq->queued[sync] == 0);
++	bfqq->queued[sync]--;
++	bfqd->queued--;
++	elv_rb_del(&bfqq->sort_list, rq);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++			bfq_del_bfqq_busy(bfqd, bfqq, 1);
++		/*
++		 * Remove queue from request-position tree as it is empty.
++		 */
++		if (bfqq->pos_root != NULL) {
++			rb_erase(&bfqq->pos_node, bfqq->pos_root);
++			bfqq->pos_root = NULL;
++		}
++	}
++
++	if (rq->cmd_flags & REQ_META) {
++		BUG_ON(bfqq->meta_pending == 0);
++		bfqq->meta_pending--;
++	}
++}
++
++static int bfq_merge(struct request_queue *q, struct request **req,
++		     struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct request *__rq;
++
++	__rq = bfq_find_rq_fmerge(bfqd, bio);
++	if (__rq != NULL && elv_rq_merge_ok(__rq, bio)) {
++		*req = __rq;
++		return ELEVATOR_FRONT_MERGE;
++	}
++
++	return ELEVATOR_NO_MERGE;
++}
++
++static void bfq_merged_request(struct request_queue *q, struct request *req,
++			       int type)
++{
++	if (type == ELEVATOR_FRONT_MERGE &&
++	    rb_prev(&req->rb_node) &&
++	    blk_rq_pos(req) <
++	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
++				    struct request, rb_node))) {
++		struct bfq_queue *bfqq = RQ_BFQQ(req);
++		struct bfq_data *bfqd = bfqq->bfqd;
++		struct request *prev, *next_rq;
++
++		/* Reposition request in its sort_list */
++		elv_rb_del(&bfqq->sort_list, req);
++		elv_rb_add(&bfqq->sort_list, req);
++		/* Choose next request to be served for bfqq */
++		prev = bfqq->next_rq;
++		next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req,
++					 bfqd->last_position);
++		BUG_ON(next_rq == NULL);
++		bfqq->next_rq = next_rq;
++		/*
++		 * If next_rq changes, update both the queue's budget to
++		 * fit the new request and the queue's position in its
++		 * rq_pos_tree.
++		 */
++		if (prev != bfqq->next_rq) {
++			bfq_updated_next_req(bfqd, bfqq);
++			bfq_rq_pos_tree_add(bfqd, bfqq);
++		}
++	}
++}
++
++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
++				struct request *next)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	/*
++	 * Reposition in fifo if next is older than rq.
++	 */
++	if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++	    time_before(next->fifo_time, rq->fifo_time)) {
++		list_move(&rq->queuelist, &next->queuelist);
++		rq->fifo_time = next->fifo_time;
++	}
++
++	if (bfqq->next_rq == next)
++		bfqq->next_rq = rq;
++
++	bfq_remove_request(next);
++}
++
++/* Must be called with bfqq != NULL */
++static inline void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
++{
++	BUG_ON(bfqq == NULL);
++	if (bfq_bfqq_busy(bfqq))
++		bfqq->bfqd->wr_busy_queues--;
++	bfqq->wr_coeff = 1;
++	bfqq->wr_cur_max_time = 0;
++	/* Trigger a weight change on the next activation of the queue */
++	bfqq->entity.ioprio_changed = 1;
++}
++
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			if (bfqg->async_bfqq[i][j] != NULL)
++				bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
++	if (bfqg->async_idle_bfqq != NULL)
++		bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
++}
++
++static void bfq_end_wr(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq;
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	bfq_end_wr_async(bfqd);
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++			   struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	/*
++	 * Disallow merge of a sync bio into an async request.
++	 */
++	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++		return 0;
++
++	/*
++	 * Lookup the bfqq that this bio will be queued with. Allow
++	 * merge only if rq is queued there.
++	 * Queue lock is held here.
++	 */
++	bic = bfq_bic_lookup(bfqd, current->io_context);
++	if (bic == NULL)
++		return 0;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++				       struct bfq_queue *bfqq)
++{
++	if (bfqq != NULL) {
++		bfq_mark_bfqq_must_alloc(bfqq);
++		bfq_mark_bfqq_budget_new(bfqq);
++		bfq_clear_bfqq_fifo_expire(bfqq);
++
++		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "set_in_service_queue, cur-budget = %lu",
++			     bfqq->entity.budget);
++	}
++
++	bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
++						  struct bfq_queue *bfqq)
++{
++	if (!bfqq)
++		bfqq = bfq_get_next_queue(bfqd);
++	else
++		bfq_get_next_queue_forced(bfqd, bfqq);
++
++	__bfq_set_in_service_queue(bfqd, bfqq);
++	return bfqq;
++}
++
++static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
++					  struct request *rq)
++{
++	if (blk_rq_pos(rq) >= bfqd->last_position)
++		return blk_rq_pos(rq) - bfqd->last_position;
++	else
++		return bfqd->last_position - blk_rq_pos(rq);
++}
++
++/*
++ * Return true if rq is close enough to bfqd->last_position, i.e., within
++ * the seek threshold BFQQ_SEEK_THR of the last dispatched position.
++ */
++static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++{
++	return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++}
++
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++{
++	struct rb_root *root = &bfqd->rq_pos_tree;
++	struct rb_node *parent, *node;
++	struct bfq_queue *__bfqq;
++	sector_t sector = bfqd->last_position;
++
++	if (RB_EMPTY_ROOT(root))
++		return NULL;
++
++	/*
++	 * First, if we find a request starting at the end of the last
++	 * request, choose it.
++	 */
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
++	if (__bfqq != NULL)
++		return __bfqq;
++
++	/*
++	 * If the exact sector wasn't found, the parent of the NULL leaf
++	 * will contain the closest sector (rq_pos_tree sorted by
++	 * next_request position).
++	 */
++	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++		return __bfqq;
++
++	if (blk_rq_pos(__bfqq->next_rq) < sector)
++		node = rb_next(&__bfqq->pos_node);
++	else
++		node = rb_prev(&__bfqq->pos_node);
++	if (node == NULL)
++		return NULL;
++
++	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
++	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++		return __bfqq;
++
++	return NULL;
++}
++
++/*
++ * bfqd - obvious
++ * cur_bfqq - passed in so that we don't decide that the current queue
++ *            is closely cooperating with itself.
++ *
++ * We are assuming that cur_bfqq has dispatched at least one request,
++ * and that bfqd->last_position reflects a position on the disk associated
++ * with the I/O issued by cur_bfqq.
++ */
++static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
++					      struct bfq_queue *cur_bfqq)
++{
++	struct bfq_queue *bfqq;
++
++	if (bfq_class_idle(cur_bfqq))
++		return NULL;
++	if (!bfq_bfqq_sync(cur_bfqq))
++		return NULL;
++	if (BFQQ_SEEKY(cur_bfqq))
++		return NULL;
++
++	/* If device has only one backlogged bfq_queue, don't search. */
++	if (bfqd->busy_queues == 1)
++		return NULL;
++
++	/*
++	 * We should notice if some of the queues are cooperating, e.g.
++	 * working closely on the same area of the disk. In that case,
++	 * we can group them together and don't waste time idling.
++	 */
++	bfqq = bfqq_close(bfqd);
++	if (bfqq == NULL || bfqq == cur_bfqq)
++		return NULL;
++
++	/*
++	 * Do not merge queues from different bfq_groups.
++	 */
++	if (bfqq->entity.parent != cur_bfqq->entity.parent)
++		return NULL;
++
++	/*
++	 * It only makes sense to merge sync queues.
++	 */
++	if (!bfq_bfqq_sync(bfqq))
++		return NULL;
++	if (BFQQ_SEEKY(bfqq))
++		return NULL;
++
++	/*
++	 * Do not merge queues of different priority classes.
++	 */
++	if (bfq_class_rt(bfqq) != bfq_class_rt(cur_bfqq))
++		return NULL;
++
++	return bfqq;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static inline unsigned long bfq_max_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < 194)
++		return bfq_default_max_budget;
++	else
++		return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static inline unsigned long bfq_min_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < 194)
++		return bfq_default_max_budget / 32;
++	else
++		return bfqd->bfq_max_budget / 32;
++}
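++
++/*
++ * With the default bfq_default_max_budget = 16384 sectors, the min
++ * budget used before enough budgets have been assigned is therefore
++ * 16384 / 32 = 512 sectors.
++ */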
++
++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	struct bfq_io_cq *bic;
++	unsigned long sl;
++
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Processes have exited, don't wait. */
++	bic = bfqd->in_service_bic;
++	if (bic == NULL || atomic_read(&bic->icq.ioc->active_ref) == 0)
++		return;
++
++	bfq_mark_bfqq_wait_request(bfqq);
++
++	/*
++	 * We don't want to idle for seeks, but we do want to allow
++	 * fair distribution of slice time for a process doing back-to-back
++	 * seeks. So allow a little bit of time for it to submit a new rq.
++	 *
++	 * To prevent processes with (partly) seeky workloads from
++	 * being too ill-treated, grant them a small fraction of the
++	 * assigned budget before reducing the waiting time to
++	 * BFQ_MIN_TT. In practice, this helps to reduce latency.
++	 */
++	sl = bfqd->bfq_slice_idle;
++	/*
++	 * Unless the queue is being weight-raised, grant only minimum idle
++	 * time if the queue either has been seeky for long enough or has
++	 * already proved to be constantly seeky.
++	 */
++	if (bfq_sample_valid(bfqq->seek_samples) &&
++	    ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
++				  bfq_max_budget(bfqq->bfqd) / 8) ||
++	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1)
++		sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
++	else if (bfqq->wr_coeff > 1)
++		sl = sl * 3;
++	bfqd->last_idling_start = ktime_get();
++	mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
++	bfq_log(bfqd, "arm idle: %u/%u ms",
++		jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++}
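++
++/*
++ * Example values for the idle window computed above, assuming HZ = 1000
++ * (so bfq_slice_idle = HZ / 125 = 8 ms):
++ *
++ *   - default queue:                   sl = 8 ms
++ *   - seeky, non-weight-raised queue:  sl = min(8, BFQ_MIN_TT) = 2 ms
++ *   - weight-raised queue:             sl = 8 * 3 = 24 ms
++ */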
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the disk
++ * throughput (always guaranteed with a time slice scheme as in CFQ).
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	unsigned int timeout_coeff;
++	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++		timeout_coeff = 1;
++	else
++		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++
++	bfqd->last_budget_start = ktime_get();
++
++	bfq_clear_bfqq_budget_new(bfqq);
++	bfqq->budget_timeout = jiffies +
++		bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++
++	bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++		jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
++		timeout_coeff));
++}
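++
++/*
++ * A worked example of the scaling above: whenever a queue is
++ * weight-raised, entity.weight == entity.orig_weight * wr_coeff (see
++ * the BUG_ON in bfq_update_wr_data() below), so timeout_coeff equals
++ * wr_coeff. Assuming, say, a 125 ms base sync timeout, a queue raised
++ * with wr_coeff == 3 is granted a ~375 ms budget timeout, except
++ * during the soft real-time raising period (wr_cur_max_time ==
++ * bfqd->bfq_wr_rt_max_time), where the coefficient is forced to 1.
++ */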
++
++/*
++ * Move request from internal lists to the request queue dispatch list.
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	/*
++	 * For consistency, the next instruction should have been executed
++	 * after removing the request from the queue and dispatching it.
++	 * We execute this instruction before bfq_remove_request() instead
++	 * (and hence introduce a temporary inconsistency), for efficiency.
++	 * In fact, in a forced_dispatch, this prevents the two counters
++	 * related to bfqq->dispatched from being uselessly decremented if
++	 * bfqq is not in service, and then incremented again after
++	 * incrementing bfqq->dispatched.
++	 */
++	bfqq->dispatched++;
++	bfq_remove_request(rq);
++	elv_dispatch_sort(q, rq);
++
++	if (bfq_bfqq_sync(bfqq))
++		bfqd->sync_flight++;
++}
++
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++{
++	struct request *rq = NULL;
++
++	if (bfq_bfqq_fifo_expire(bfqq))
++		return NULL;
++
++	bfq_mark_bfqq_fifo_expire(bfqq);
++
++	if (list_empty(&bfqq->fifo))
++		return NULL;
++
++	rq = rq_entry_fifo(bfqq->fifo.next);
++
++	if (time_before(jiffies, rq->fifo_time))
++		return NULL;
++
++	return rq;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++	int process_refs, io_refs;
++
++	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	BUG_ON(process_refs < 0);
++	return process_refs;
++}
++
++static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	int process_refs, new_process_refs;
++	struct bfq_queue *__bfqq;
++
++	/*
++	 * If there are no process references on the new_bfqq, then it is
++	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++	 * may have dropped their last reference (not just their last process
++	 * reference).
++	 */
++	if (!bfqq_process_refs(new_bfqq))
++		return;
++
++	/* Avoid a circular list and skip interim queue merges. */
++	while ((__bfqq = new_bfqq->new_bfqq)) {
++		if (__bfqq == bfqq)
++			return;
++		new_bfqq = __bfqq;
++	}
++
++	process_refs = bfqq_process_refs(bfqq);
++	new_process_refs = bfqq_process_refs(new_bfqq);
++	/*
++	 * If the process for the bfqq has gone away, there is no
++	 * sense in merging the queues.
++	 */
++	if (process_refs == 0 || new_process_refs == 0)
++		return;
++
++	/*
++	 * Merge in the direction of the lesser amount of work.
++	 */
++	if (new_process_refs >= process_refs) {
++		bfqq->new_bfqq = new_bfqq;
++		atomic_add(process_refs, &new_bfqq->ref);
++	} else {
++		new_bfqq->new_bfqq = bfqq;
++		atomic_add(new_process_refs, &bfqq->ref);
++	}
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++		new_bfqq->pid);
++}
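++
++/*
++ * Example of the merge direction chosen above: if bfqq has one process
++ * reference and new_bfqq has four, then new_process_refs >=
++ * process_refs, so the single process is redirected (bfqq->new_bfqq =
++ * new_bfqq) towards the busier queue, and new_bfqq picks up bfqq's one
++ * process reference. Pointing the smaller side at the larger one
++ * minimizes the number of processes that must be re-pointed when the
++ * merge is actually performed in bfq_merge_bfqqs().
++ */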
++
++static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	return entity->budget - entity->service;
++}
++
++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	__bfq_bfqd_reset_in_service(bfqd);
++
++	/*
++	 * If this bfqq is shared between multiple processes, check
++	 * to make sure that those processes are still issuing I/Os
++	 * within the mean seek distance. If not, it may be time to
++	 * break the queues apart again.
++	 */
++	if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
++		bfq_mark_bfqq_split_coop(bfqq);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * Overloading budget_timeout field to store the time
++		 * at which the queue remains with no backlog; used by
++		 * the weight-raising mechanism.
++		 */
++		bfqq->budget_timeout = jiffies;
++		bfq_del_bfqq_busy(bfqd, bfqq, 1);
++	} else {
++		bfq_activate_bfqq(bfqd, bfqq);
++		/*
++		 * Resort priority tree of potential close cooperators.
++		 */
++		bfq_rq_pos_tree_add(bfqd, bfqq);
++	}
++}
++
++/**
++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
++ * @bfqd: device data.
++ * @bfqq: queue to update.
++ * @reason: reason for expiration.
++ *
++ * Handle the feedback on @bfqq budget.  See the body for detailed
++ * comments.
++ */
++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
++				     struct bfq_queue *bfqq,
++				     enum bfqq_expiration reason)
++{
++	struct request *next_rq;
++	unsigned long budget, min_budget;
++
++	budget = bfqq->max_budget;
++	min_budget = bfq_min_budget(bfqd);
++
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %lu, budg left %lu",
++		bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %lu, min budg %lu",
++		budget, bfq_min_budget(bfqd));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
++		bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
++
++	if (bfq_bfqq_sync(bfqq)) {
++		switch (reason) {
++		/*
++		 * Caveat: in all the following cases we trade latency
++		 * for throughput.
++		 */
++		case BFQ_BFQQ_TOO_IDLE:
++			/*
++			 * This is the only case where we may reduce
++			 * the budget: if there is no request of the
++			 * process still waiting for completion, then
++			 * we assume (tentatively) that the timer has
++			 * expired because the batch of requests of
++			 * the process could have been served with a
++			 * smaller budget.  Hence, betting that
++			 * the process will behave in the same way when it
++			 * becomes backlogged again, we reduce its
++			 * next budget.  As long as we guess right,
++			 * this budget cut reduces the latency
++			 * experienced by the process.
++			 *
++			 * However, if there are still outstanding
++			 * requests, then the process may have not yet
++			 * issued its next request just because it is
++			 * still waiting for the completion of some of
++			 * the still outstanding ones.  So in this
++			 * subcase we do not reduce its budget, on the
++			 * contrary we increase it to possibly boost
++			 * the throughput, as discussed in the
++			 * comments to the BUDGET_TIMEOUT case.
++			 */
++			if (bfqq->dispatched > 0) /* still outstanding reqs */
++				budget = min(budget * 2, bfqd->bfq_max_budget);
++			else {
++				if (budget > 5 * min_budget)
++					budget -= 4 * min_budget;
++				else
++					budget = min_budget;
++			}
++			break;
++		case BFQ_BFQQ_BUDGET_TIMEOUT:
++			/*
++			 * We double the budget here because: 1) it
++			 * gives the chance to boost the throughput if
++			 * this is not a seeky process (which may have
++			 * bumped into this timeout because of, e.g.,
++			 * ZBR), 2) together with charge_full_budget
++			 * it helps give seeky processes higher
++			 * timestamps, and hence be served less
++			 * frequently.
++			 */
++			budget = min(budget * 2, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_BUDGET_EXHAUSTED:
++			/*
++			 * The process still has backlog, and did not
++			 * let either the budget timeout or the disk
++			 * idling timeout expire. Hence it is not
++			 * seeky, has a short thinktime and may be
++			 * happy with a higher budget too. So
++			 * definitely increase the budget of this good
++			 * candidate to boost the disk throughput.
++			 */
++			budget = min(budget * 4, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_NO_MORE_REQUESTS:
++		       /*
++			* Leave the budget unchanged.
++			*/
++		default:
++			return;
++		}
++	} else /* async queue */
++	    /* async queues always get the maximum possible budget
++	     * (their ability to dispatch is limited by
++	     * @bfqd->bfq_max_budget_async_rq).
++	     */
++		budget = bfqd->bfq_max_budget;
++
++	bfqq->max_budget = budget;
++
++	if (bfqd->budgets_assigned >= 194 && bfqd->bfq_user_max_budget == 0 &&
++	    bfqq->max_budget > bfqd->bfq_max_budget)
++		bfqq->max_budget = bfqd->bfq_max_budget;
++
++	/*
++	 * Make sure that we have enough budget for the next request.
++	 * Since the finish time of the bfqq must be kept in sync with
++	 * the budget, be sure to call __bfq_bfqq_expire() after the
++	 * update.
++	 */
++	next_rq = bfqq->next_rq;
++	if (next_rq != NULL)
++		bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
++					    bfq_serv_to_charge(next_rq, bfqq));
++	else
++		bfqq->entity.budget = bfqq->max_budget;
++
++	bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %lu",
++			next_rq != NULL ? blk_rq_sectors(next_rq) : 0,
++			bfqq->entity.budget);
++}
++
++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
++{
++	unsigned long max_budget;
++
++	/*
++	 * The max_budget calculated when autotuning is equal to the
++	 * amount of sectors transferred in timeout_sync at the
++	 * estimated peak rate.
++	 */
++	max_budget = (unsigned long)(peak_rate * 1000 *
++				     timeout >> BFQ_RATE_SHIFT);
++
++	return max_budget;
++}
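++
++/*
++ * Since the peak rate is stored in (sectors/usec) << BFQ_RATE_SHIFT
++ * fixed point and the timeout is in ms, the expression above reduces
++ * to rate * timeout-in-usec. As a rough numeric sketch (assuming
++ * 512-byte sectors): a device sustaining ~100 MiB/s moves ~0.2
++ * sectors/usec, so with a 125 ms sync timeout the autotuned max budget
++ * is ~0.2 * 125000 = ~25000 sectors, i.e., about 12 MiB per budget.
++ */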
++
++/*
++ * In addition to updating the peak rate, checks whether the process
++ * is "slow", and returns 1 if so. This slow flag is used, in addition
++ * to the budget timeout, to reduce the amount of service provided to
++ * seeky processes, and hence reduce their chances to lower the
++ * throughput. See the code for more details.
++ */
++static int bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				int compensate, enum bfqq_expiration reason)
++{
++	u64 bw, usecs, expected, timeout;
++	ktime_t delta;
++	int update = 0;
++
++	if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++		return 0;
++
++	if (compensate)
++		delta = bfqd->last_idling_start;
++	else
++		delta = ktime_get();
++	delta = ktime_sub(delta, bfqd->last_budget_start);
++	usecs = ktime_to_us(delta);
++
++	/* Don't trust short/unrealistic values. */
++	if (usecs < 100 || usecs >= LONG_MAX)
++		return 0;
++
++	/*
++	 * Calculate the bandwidth for the last slice.  We use a 64 bit
++	 * value to store the peak rate, in sectors per usec in fixed
++	 * point math.  We do so to have enough precision in the estimate
++	 * and to avoid overflows.
++	 */
++	bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
++	do_div(bw, (unsigned long)usecs);
++
++	timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	/*
++	 * Use only long (> 20ms) intervals to filter out spikes for
++	 * the peak rate estimation.
++	 */
++	if (usecs > 20000) {
++		if (bw > bfqd->peak_rate ||
++		   (!BFQQ_SEEKY(bfqq) &&
++		    reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
++			bfq_log(bfqd, "measured bw =%llu", bw);
++			/*
++			 * To smooth oscillations use a low-pass filter with
++			 * alpha=7/8, i.e.,
++			 * new_rate = (7/8) * old_rate + (1/8) * bw
++			 */
++			do_div(bw, 8);
++			if (bw == 0)
++				return 0;
++			bfqd->peak_rate *= 7;
++			do_div(bfqd->peak_rate, 8);
++			bfqd->peak_rate += bw;
++			update = 1;
++			bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
++		}
++
++		update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
++
++		if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
++			bfqd->peak_rate_samples++;
++
++		if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
++		    update) {
++			int dev_type = blk_queue_nonrot(bfqd->queue);
++			if (bfqd->bfq_user_max_budget == 0) {
++				bfqd->bfq_max_budget =
++					bfq_calc_max_budget(bfqd->peak_rate,
++							    timeout);
++				bfq_log(bfqd, "new max_budget=%lu",
++					bfqd->bfq_max_budget);
++			}
++			if (bfqd->device_speed == BFQ_BFQD_FAST &&
++			    bfqd->peak_rate < device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_SLOW;
++				bfqd->RT_prod = R_slow[dev_type] *
++						T_slow[dev_type];
++			} else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++			    bfqd->peak_rate > device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_FAST;
++				bfqd->RT_prod = R_fast[dev_type] *
++						T_fast[dev_type];
++			}
++		}
++	}
++
++	/*
++	 * If the process has been served for too short a time
++	 * interval to let its possible sequential accesses prevail over
++	 * the initial seek time needed to move the disk head on the
++	 * first sector it requested, then give the process a chance
++	 * and for the moment return false.
++	 */
++	if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
++		return 0;
++
++	/*
++	 * A process is considered ``slow'' (i.e., seeky, so that we
++	 * cannot treat it fairly in the service domain, as it would
++	 * slow down too much the other processes) if, when a slice
++	 * slow down the other processes too much) if, when a slice
++	 * rate that would not be high enough to complete the budget
++	 * before the budget timeout expiration.
++	 */
++	expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++
++	/*
++	 * Caveat: processes doing IO in the slower disk zones will
++	 * tend to be slow(er) even if not seeky. And the estimated
++	 * peak rate will actually be an average over the disk
++	 * surface. Hence, to not be too harsh with unlucky processes,
++	 * we keep a budget/3 margin of safety before declaring a
++	 * process slow.
++	 */
++	return expected > (4 * bfqq->entity.budget) / 3;
++}
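++
++/*
++ * Numeric sketch of the alpha = 7/8 low-pass filter used above: with a
++ * stored peak_rate of 800 (in the fixed-point unit) and a measured bw
++ * of 1600, the new estimate is 7/8 * 800 + 1/8 * 1600 = 700 + 200 =
++ * 900. Repeated identical measurements make the estimate converge
++ * geometrically towards 1600, so isolated spikes are smoothed out.
++ */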
++
++/*
++ * To be deemed as soft real-time, an application must meet two
++ * requirements. First, the application must not require an average
++ * bandwidth higher than the approximate bandwidth required to play back or
++ * record a compressed high-definition video.
++ * The next function is invoked on the completion of the last request of a
++ * batch, to compute the next-start time instant, soft_rt_next_start, such
++ * that, if the next request of the application does not arrive before
++ * soft_rt_next_start, then the above requirement on the bandwidth is met.
++ *
++ * The second requirement is that the request pattern of the application is
++ * isochronous, i.e., that, after issuing a request or a batch of requests,
++ * the application stops issuing new requests until all its pending requests
++ * have been completed. After that, the application may issue a new batch,
++ * and so on.
++ * For this reason the next function is invoked to compute
++ * soft_rt_next_start only for applications that meet this requirement,
++ * whereas soft_rt_next_start is set to infinity for applications that do
++ * not.
++ *
++ * Unfortunately, even a greedy application may happen to behave in an
++ * isochronous way if the CPU load is high. In fact, the application may
++ * stop issuing requests while the CPUs are busy serving other processes,
++ * then restart, then stop again for a while, and so on. In addition, if
++ * the disk achieves a low enough throughput with the request pattern
++ * issued by the application (e.g., because the request pattern is random
++ * and/or the device is slow), then the application may meet the above
++ * bandwidth requirement too. To prevent such a greedy application from
++ * being deemed as soft real-time, a further rule is used in the
++ * computation of soft_rt_next_start: soft_rt_next_start must be higher
++ * than the current time plus the maximum time for which the arrival of a
++ * request is waited for when a sync queue becomes idle, namely
++ * bfqd->bfq_slice_idle.
++ * This filters out greedy applications, as the latter issue instead their
++ * next request as soon as possible after the last one has been completed
++ * (in contrast, when a batch of requests is completed, a soft real-time
++ * application spends some time processing data).
++ *
++ * Unfortunately, the last filter may easily generate false positives if
++ * only bfqd->bfq_slice_idle is used as a reference time interval and one
++ * or both the following cases occur:
++ * 1) HZ is so low that the duration of a jiffy is comparable to or higher
++ *    than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with
++ *    HZ=100.
++ * 2) jiffies, instead of increasing at a constant rate, may stop increasing
++ *    for a while, then suddenly 'jump' by several units to recover the lost
++ *    increments. This seems to happen, e.g., inside virtual machines.
++ * To address this issue, we do not use as a reference time interval just
++ * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In
++ * particular we add the minimum number of jiffies for which the filter
++ * seems to be quite precise also in embedded systems and KVM/QEMU virtual
++ * machines.
++ */
++static inline unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
++						       struct bfq_queue *bfqq)
++{
++	return max(bfqq->last_idle_bklogged +
++		   HZ * bfqq->service_from_backlogged /
++		   bfqd->bfq_wr_max_softrt_rate,
++		   jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++}
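++
++/*
++ * Worked example of the bandwidth term above: if the queue has
++ * received 7000 sectors of service since it last switched from idle
++ * to backlogged (service_from_backlogged), and bfq_wr_max_softrt_rate
++ * is 7000 sectors/sec, then the first term evaluates to
++ * last_idle_bklogged + HZ: the next request must not arrive earlier
++ * than one second after the queue became backlogged, or the
++ * application would exceed the allowed soft real-time rate. The
++ * second term enforces the greedy-application filter discussed above.
++ */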
++
++/*
++ * Return the largest-possible time instant such that, for as long as possible,
++ * the current time will be lower than this time instant according to the macro
++ * time_is_before_jiffies().
++ */
++static inline unsigned long bfq_infinity_from_now(unsigned long now)
++{
++	return now + ULONG_MAX / 2;
++}
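++
++/*
++ * The ULONG_MAX / 2 offset is the largest forward distance that the
++ * wrapping jiffies comparisons (time_after() and friends, on which
++ * time_is_before_jiffies() is built) can represent: with any larger
++ * offset the signed difference those macros compute would change sign,
++ * and 'now + offset' would be treated as lying in the past.
++ */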
++
++/**
++ * bfq_bfqq_expire - expire a queue.
++ * @bfqd: device owning the queue.
++ * @bfqq: the queue to expire.
++ * @compensate: if true, compensate for the time spent idling.
++ * @reason: the reason causing the expiration.
++ *
++ * If the process associated to the queue is slow (i.e., seeky), or in
++ * case of budget timeout, or, finally, if it is async, we
++ * artificially charge it an entire budget (independently of the
++ * actual service it received). As a consequence, the queue will get
++ * higher timestamps than the correct ones upon reactivation, and
++ * hence it will be rescheduled as if it had received more service
++ * than what it actually received. In the end, this class of processes
++ * will receive less service in proportion to how slowly they consume
++ * their budgets (and hence how seriously they tend to lower the
++ * throughput).
++ *
++ * In contrast, when a queue expires because it has been idling for
++ * too long or because it exhausted its budget, we do not touch the
++ * amount of service it has received. Hence, when the queue is
++ * reactivated and its timestamps updated, the latter will be in sync
++ * with the actual service received by the queue until expiration.
++ *
++ * Charging a full budget to the first type of queues and the exact
++ * service to the others has the effect of using the WF2Q+ policy to
++ * schedule the former on a timeslice basis, without violating the
++ * service domain guarantees of the latter.
++ */
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++			    struct bfq_queue *bfqq,
++			    int compensate,
++			    enum bfqq_expiration reason)
++{
++	int slow;
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	/* Update disk peak rate for autotuning and check whether the
++	 * process is slow (see bfq_update_peak_rate).
++	 */
++	slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++
++	/*
++	 * As explained above, 'punish' slow (i.e., seeky), timed-out
++	 * and async queues, to favor sequential sync workloads.
++	 *
++	 * Processes doing I/O in the slower disk zones will tend to be
++	 * slow(er) even if not seeky. Hence, since the estimated peak
++	 * rate is actually an average over the disk surface, these
++	 * processes may timeout just for bad luck. To avoid punishing
++	 * them we do not charge a full budget to a process that
++	 * succeeded in consuming at least 2/3 of its budget.
++	 */
++	if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++		     bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3))
++		bfq_bfqq_charge_full_budget(bfqq);
++
++	bfqq->service_from_backlogged += bfqq->entity.service;
++
++	if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++	    !bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_mark_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue))
++			bfqd->const_seeky_busy_in_flight_queues++;
++	}
++
++	if (reason == BFQ_BFQQ_TOO_IDLE &&
++	    bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++		bfq_clear_bfqq_IO_bound(bfqq);
++
++	if (bfqd->low_latency && bfqq->wr_coeff == 1)
++		bfqq->last_wr_start_finish = jiffies;
++
++	if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * If we get here, and there are no outstanding requests,
++		 * then the request pattern is isochronous (see the comments
++		 * to the function bfq_bfqq_softrt_next_start()). Hence we
++		 * can compute soft_rt_next_start. If, instead, the queue
++		 * still has outstanding requests, then we have to wait
++		 * for the completion of all the outstanding requests to
++		 * discover whether the request pattern is actually
++		 * isochronous.
++		 */
++		if (bfqq->dispatched == 0)
++			bfqq->soft_rt_next_start =
++				bfq_bfqq_softrt_next_start(bfqd, bfqq);
++		else {
++			/*
++			 * The application is still waiting for the
++			 * completion of one or more requests:
++			 * prevent it from possibly being incorrectly
++			 * deemed as soft real-time by setting its
++			 * soft_rt_next_start to infinity. In fact,
++			 * without this assignment, the application
++			 * would be incorrectly deemed as soft
++			 * real-time if:
++			 * 1) it issued a new request before the
++			 *    completion of all its in-flight
++			 *    requests, and
++			 * 2) at that time, its soft_rt_next_start
++			 *    happened to be in the past.
++			 */
++			bfqq->soft_rt_next_start =
++				bfq_infinity_from_now(jiffies);
++			/*
++			 * Schedule an update of soft_rt_next_start to when
++			 * the task may be discovered to be isochronous.
++			 */
++			bfq_mark_bfqq_softrt_update(bfqq);
++		}
++	}
++
++	bfq_log_bfqq(bfqd, bfqq,
++		"expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
++		slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++
++	/*
++	 * Increase, decrease or leave budget unchanged according to
++	 * reason.
++	 */
++	__bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++	__bfq_bfqq_expire(bfqd, bfqq);
++}
++
++/*
++ * Budget timeout is not implemented through a dedicated timer, but
++ * just checked on request arrivals and completions, as well as on
++ * idle timer expirations.
++ */
++static int bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_budget_new(bfqq) ||
++	    time_before(jiffies, bfqq->budget_timeout))
++		return 0;
++	return 1;
++}
++
++/*
++ * If we expire a queue that is waiting for the arrival of a new
++ * request, we may prevent the fictitious timestamp back-shifting that
++ * allows the guarantees of the queue to be preserved (see [1] for
++ * this tricky aspect). Hence we return true only if this condition
++ * does not hold, or if the queue is slow enough to deserve only to be
++ * kicked off for preserving a high throughput.
++ */
++static inline int bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		"may_budget_timeout: wait_request %d left %d timeout %d",
++		bfq_bfqq_wait_request(bfqq),
++			bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3,
++		bfq_bfqq_budget_timeout(bfqq));
++
++	return (!bfq_bfqq_wait_request(bfqq) ||
++		bfq_bfqq_budget_left(bfqq) >=  bfqq->entity.budget / 3)
++		&&
++		bfq_bfqq_budget_timeout(bfqq);
++}
++
++/*
++ * Device idling is allowed only for the queues for which this function
++ * returns true. For this reason, the return value of this function plays a
++ * critical role for both throughput boosting and service guarantees. The
++ * return value is computed through a logical expression. In this rather
++ * long comment, we try to briefly describe all the details and motivations
++ * behind the components of this logical expression.
++ *
++ * First, the expression is false if bfqq is not sync, or if: bfqq happened
++ * to become active during a large burst of queue activations, and the
++ * pattern of requests bfqq contains boosts the throughput if bfqq is
++ * expired. In fact, queues that became active during a large burst benefit
++ * only from throughput, as discussed in the comments to bfq_handle_burst.
++ * In this respect, expiring bfqq certainly boosts the throughput on NCQ-
++ * capable flash-based devices, whereas, on rotational devices, it boosts
++ * the throughput only if bfqq contains random requests.
++ *
++ * On the opposite end, if (a) bfqq is sync, (b) the above burst-related
++ * condition does not hold, and (c) bfqq is being weight-raised, then the
++ * expression always evaluates to true, as device idling is instrumental
++ * for preserving low-latency guarantees (see [1]). If, instead, conditions
++ * (a) and (b) do hold, but (c) does not, then the expression evaluates to
++ * true only if: (1) bfqq is I/O-bound and has a non-null idle window, and
++ * (2) at least one of the following two conditions holds.
++ * The first condition is that the device is not performing NCQ, because
++ * idling the device most certainly boosts the throughput if this condition
++ * holds and bfqq is I/O-bound and has been granted a non-null idle window.
++ * The second compound condition is made of the logical AND of two components.
++ *
++ * The first component is true only if there is no weight-raised busy
++ * queue. This guarantees that the device is not idled for a sync non-
++ * weight-raised queue when there are busy weight-raised queues. The former
++ * is then expired immediately if empty. Combined with the timestamping
++ * rules of BFQ (see [1] for details), this causes sync non-weight-raised
++ * queues to get a lower number of requests served, and hence to ask for a
++ * lower number of requests from the request pool, before the busy weight-
++ * raised queues get served again.
++ *
++ * This is beneficial for the processes associated with weight-raised
++ * queues, when the request pool is saturated (e.g., in the presence of
++ * write hogs). In fact, if the processes associated with the other queues
++ * ask for requests at a lower rate, then weight-raised processes have a
++ * higher probability to get a request from the pool immediately (or at
++ * least soon) when they need one. Hence they have a higher probability to
++ * actually get a fraction of the disk throughput proportional to their
++ * high weight. This is especially true with NCQ-capable drives, which
++ * enqueue several requests in advance and further reorder internally-
++ * queued requests.
++ *
++ * In the end, mistreating non-weight-raised queues when there are busy
++ * weight-raised queues seems to mitigate starvation problems in the
++ * presence of heavy write workloads and NCQ, and hence to guarantee a
++ * higher application and system responsiveness in these hostile scenarios.
++ *
++ * If the first component of the compound condition is instead true, i.e.,
++ * there is no weight-raised busy queue, then the second component of the
++ * compound condition takes into account service-guarantee and throughput
++ * issues related to NCQ (recall that the compound condition is evaluated
++ * only if the device is detected as supporting NCQ).
++ *
++ * As for service guarantees, allowing the drive to enqueue more than one
++ * request at a time, and hence delegating de facto final scheduling
++ * decisions to the drive's internal scheduler, causes loss of control on
++ * the actual request service order. In this respect, when the drive is
++ * allowed to enqueue more than one request at a time, the service
++ * distribution enforced by the drive's internal scheduler is likely to
++ * coincide with the desired device-throughput distribution only in the
++ * following, perfectly symmetric, scenario:
++ * 1) all active queues have the same weight,
++ * 2) all active groups at the same level in the groups tree have the same
++ *    weight,
++ * 3) all active groups at the same level in the groups tree have the same
++ *    number of children.
++ *
++ * Even in such a scenario, sequential I/O may still receive a preferential
++ * treatment, but this is not likely to be a big issue with flash-based
++ * devices, because of their non-dramatic loss of throughput with random
++ * I/O. Things do differ with HDDs, for which additional care is taken, as
++ * explained after completing the discussion for flash-based devices.
++ *
++ * Unfortunately, keeping the necessary state for evaluating exactly the
++ * above symmetry conditions would be quite complex and time-consuming.
++ * Therefore BFQ evaluates instead the following stronger sub-conditions,
++ * for which it is much easier to maintain the needed state:
++ * 1) all active queues have the same weight,
++ * 2) all active groups have the same weight,
++ * 3) all active groups have at most one active child each.
++ * In particular, the last two conditions are always true if hierarchical
++ * support and the cgroups interface are not enabled, hence no state needs
++ * to be maintained in this case.
++ *
++ * According to the above considerations, the second component of the
++ * compound condition evaluates to true if any of the above symmetry
++ * sub-conditions does not hold, or the device is not flash-based. Therefore,
++ * if also the first component is true, then idling is allowed for a sync
++ * queue. These are the only sub-conditions considered if the device is
++ * flash-based, as, for such a device, it is sensible to force idling only
++ * for service-guarantee issues. In fact, as for throughput, idling
++ * NCQ-capable flash-based devices would not boost the throughput even
++ * with sequential I/O; rather it would lower the throughput in proportion
++ * to how fast the device is. In the end, (only) if all the three
++ * sub-conditions hold and the device is flash-based, the compound
++ * condition evaluates to false and therefore no idling is performed.
++ *
++ * As already said, things change with a rotational device, where idling
++ * boosts the throughput with sequential I/O (even with NCQ). Hence, for
++ * such a device the second component of the compound condition evaluates
++ * to true also if the following additional sub-condition does not hold:
++ * the queue is constantly seeky. Unfortunately, this different behavior
++ * with respect to flash-based devices causes an additional asymmetry: if
++ * some sync queues enjoy idling and some other sync queues do not, then
++ * the latter get a low share of the device throughput, simply because the
++ * former get many requests served after being set as in service, whereas
++ * the latter do not. As a consequence, to guarantee the desired throughput
++ * distribution, on HDDs the compound expression evaluates to true (and
++ * hence device idling is performed) also if the following last symmetry
++ * condition does not hold: no other queue is benefiting from idling. Also
++ * this last condition is actually replaced with a simpler-to-maintain and
++ * stronger condition: there is no busy queue which is not constantly seeky
++ * (and hence may also benefit from idling).
++ *
++ * To sum up, when all the required symmetry and throughput-boosting
++ * sub-conditions hold, the second component of the compound condition
++ * evaluates to false, and hence no idling is performed. This helps to
++ * keep the drives' internal queues full on NCQ-capable devices, and hence
++ * to boost the throughput, without causing 'almost' any loss of service
++ * guarantees. The 'almost' follows from the fact that, if the internal
++ * queue of one such device is filled while all the sub-conditions hold,
++ * but at some point in time some sub-condition ceases to hold, then it may
++ * become impossible to let requests be served in the new desired order
++ * until all the requests already queued in the device have been served.
++ */
++static inline bool bfq_bfqq_must_not_expire(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++#ifdef CONFIG_CGROUP_BFQIO
++#define symmetric_scenario	  (!bfqd->active_numerous_groups && \
++				   !bfq_differentiated_weights(bfqd))
++#else
++#define symmetric_scenario	  (!bfq_differentiated_weights(bfqd))
++#endif
++#define cond_for_seeky_on_ncq_hdd (bfq_bfqq_constantly_seeky(bfqq) && \
++				   bfqd->busy_in_flight_queues == \
++				   bfqd->const_seeky_busy_in_flight_queues)
++
++#define cond_for_expiring_in_burst	(bfq_bfqq_in_large_burst(bfqq) && \
++					 bfqd->hw_tag && \
++					 (blk_queue_nonrot(bfqd->queue) || \
++					  bfq_bfqq_constantly_seeky(bfqq)))
++
++/*
++ * Condition for expiring a non-weight-raised queue (and hence not idling
++ * the device).
++ */
++#define cond_for_expiring_non_wr  (bfqd->hw_tag && \
++				   (bfqd->wr_busy_queues > 0 || \
++				    (symmetric_scenario && \
++				     (blk_queue_nonrot(bfqd->queue) || \
++				      cond_for_seeky_on_ncq_hdd))))
++
++	return bfq_bfqq_sync(bfqq) &&
++		!cond_for_expiring_in_burst &&
++		(bfqq->wr_coeff > 1 ||
++		 (bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_idle_window(bfqq) &&
++		  !cond_for_expiring_non_wr)
++	);
++}
++
++/*
++ * If the in-service queue is empty but sync, and the function
++ * bfq_bfqq_must_not_expire returns true, then:
++ * 1) the queue must remain in service and cannot be expired, and
++ * 2) the disk must be idled to wait for the possible arrival of a new
++ *    request for the queue.
++ * See the comments to the function bfq_bfqq_must_not_expire for the reasons
++ * why performing device idling is the best choice to boost the throughput
++ * and preserve service guarantees when bfq_bfqq_must_not_expire itself
++ * returns true.
++ */
++static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	return RB_EMPTY_ROOT(&bfqq->sort_list) && bfqd->bfq_slice_idle != 0 &&
++	       bfq_bfqq_must_not_expire(bfqq);
++}
++
++/*
++ * Select a queue for service.  If we have a current queue in service,
++ * check whether to continue servicing it, or retrieve and set a new one.
++ */
++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq, *new_bfqq = NULL;
++	struct request *next_rq;
++	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++
++	bfqq = bfqd->in_service_queue;
++	if (bfqq == NULL)
++		goto new_queue;
++
++	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
++
++	/*
++	 * If another queue has a request waiting within our mean seek
++	 * distance, let it run. The expire code will check for close
++	 * cooperators and put the close queue at the front of the
++	 * service tree. If possible, merge the expiring queue with the
++	 * new bfqq.
++	 */
++	new_bfqq = bfq_close_cooperator(bfqd, bfqq);
++	if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
++		bfq_setup_merge(bfqq, new_bfqq);
++
++	if (bfq_may_expire_for_budg_timeout(bfqq) &&
++	    !timer_pending(&bfqd->idle_slice_timer) &&
++	    !bfq_bfqq_must_idle(bfqq))
++		goto expire;
++
++	next_rq = bfqq->next_rq;
++	/*
++	 * If bfqq has requests queued and it has enough budget left to
++	 * serve them, keep the queue, otherwise expire it.
++	 */
++	if (next_rq != NULL) {
++		if (bfq_serv_to_charge(next_rq, bfqq) >
++			bfq_bfqq_budget_left(bfqq)) {
++			reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
++			goto expire;
++		} else {
++			/*
++			 * The idle timer may be pending because we may
++			 * not disable disk idling even when a new request
++			 * arrives.
++			 */
++			if (timer_pending(&bfqd->idle_slice_timer)) {
++				/*
++				 * If we get here: 1) at least one new request
++				 * has arrived but we have not disabled the
++				 * timer because the request was too small,
++				 * 2) then the block layer has unplugged
++				 * the device, causing the dispatch to be
++				 * invoked.
++				 *
++				 * Since the device is unplugged, now the
++				 * requests are probably large enough to
++				 * provide a reasonable throughput.
++				 * So we disable idling.
++				 */
++				bfq_clear_bfqq_wait_request(bfqq);
++				del_timer(&bfqd->idle_slice_timer);
++			}
++			if (new_bfqq == NULL)
++				goto keep_queue;
++			else
++				goto expire;
++		}
++	}
++
++	/*
++	 * No requests pending.  If the in-service queue still has requests
++	 * in flight (possibly waiting for a completion) or is idling for a
++	 * new request, then keep it.
++	 */
++	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
++	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++		bfqq = NULL;
++		goto keep_queue;
++	} else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
++		/*
++		 * The queue is being expired because there is a close
++		 * cooperator: cancel the idle timer.
++		 */
++		bfq_clear_bfqq_wait_request(bfqq);
++		del_timer(&bfqd->idle_slice_timer);
++	}
++
++	reason = BFQ_BFQQ_NO_MORE_REQUESTS;
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, 0, reason);
++new_queue:
++	bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++	bfq_log(bfqd, "select_queue: new queue %d returned",
++		bfqq != NULL ? bfqq->pid : 0);
++keep_queue:
++	return bfqq;
++}
++
++static void bfq_update_wr_data(struct bfq_data *bfqd,
++			       struct bfq_queue *bfqq)
++{
++	if (bfqq->wr_coeff > 1) { /* queue is being boosted */
++		struct bfq_entity *entity = &bfqq->entity;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++			jiffies_to_msecs(jiffies -
++				bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time),
++			bfqq->wr_coeff,
++			bfqq->entity.weight, bfqq->entity.orig_weight);
++
++		BUG_ON(bfqq != bfqd->in_service_queue && entity->weight !=
++		       entity->orig_weight * bfqq->wr_coeff);
++		if (entity->ioprio_changed)
++			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++		/*
++		 * If the queue was activated in a burst, or
++		 * too much time has elapsed from the beginning
++		 * of this weight-raising, then end weight raising.
++		 */
++		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    time_is_before_jiffies(bfqq->last_wr_start_finish +
++					   bfqq->wr_cur_max_time)) {
++			bfqq->last_wr_start_finish = jiffies;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais ending at %lu, rais_max_time %u",
++				     bfqq->last_wr_start_finish,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++			bfq_bfqq_end_wr(bfqq);
++			__bfq_entity_update_weight_prio(
++				bfq_entity_service_tree(entity),
++				entity);
++		}
++	}
++}
++
++/*
++ * Dispatch one request from bfqq, moving it to the request queue
++ * dispatch list.
++ */
++static int bfq_dispatch_request(struct bfq_data *bfqd,
++				struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++	struct request *rq;
++	unsigned long service_to_charge;
++
++	BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Follow expired path, else get first next available. */
++	rq = bfq_check_fifo(bfqq);
++	if (rq == NULL)
++		rq = bfqq->next_rq;
++	service_to_charge = bfq_serv_to_charge(rq, bfqq);
++
++	if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
++		/*
++		 * This may happen if the next rq is chosen in fifo order
++		 * instead of sector order. The budget is properly
++		 * dimensioned to be always sufficient to serve the next
++		 * request only if it is chosen in sector order. The reason
++		 * is that it would be quite inefficient and of little use
++		 * to always make sure that the budget is large enough to
++		 * serve even the possible next rq in fifo order.
++		 * In fact, requests are seldom served in fifo order.
++		 *
++		 * Expire the queue for budget exhaustion, and make sure
++		 * that the next act_budget is enough to serve the next
++		 * request, even if it comes from the fifo expired path.
++		 */
++		bfqq->next_rq = rq;
++		/*
++		 * Since this dispatch is failed, make sure that
++		 * Since this dispatch has failed, make sure that
++		 * a new one will be performed.
++		if (!bfqd->rq_in_driver)
++			bfq_schedule_dispatch(bfqd);
++		goto expire;
++	}
++
++	/* Finally, insert request into driver dispatch list. */
++	bfq_bfqq_served(bfqq, service_to_charge);
++	bfq_dispatch_insert(bfqd->queue, rq);
++
++	bfq_update_wr_data(bfqd, bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq,
++			"dispatched %u sec req (%llu), budg left %lu",
++			blk_rq_sectors(rq),
++			(long long unsigned)blk_rq_pos(rq),
++			bfq_bfqq_budget_left(bfqq));
++
++	dispatched++;
++
++	if (bfqd->in_service_bic == NULL) {
++		atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
++		bfqd->in_service_bic = RQ_BIC(rq);
++	}
++
++	if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
++	    dispatched >= bfqd->bfq_max_budget_async_rq) ||
++	    bfq_class_idle(bfqq)))
++		goto expire;
++
++	return dispatched;
++
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_EXHAUSTED);
++	return dispatched;
++}
++
++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++
++	while (bfqq->next_rq != NULL) {
++		bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
++		dispatched++;
++	}
++
++	BUG_ON(!list_empty(&bfqq->fifo));
++	return dispatched;
++}
++
++/*
++ * Drain our current requests.
++ * Used for barriers and when switching io schedulers on-the-fly.
++ */
++static int bfq_forced_dispatch(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq, *n;
++	struct bfq_service_tree *st;
++	int dispatched = 0;
++
++	bfqq = bfqd->in_service_queue;
++	if (bfqq != NULL)
++		__bfq_bfqq_expire(bfqd, bfqq);
++
++	/*
++	 * Loop through classes, and be careful to leave the scheduler
++	 * in a consistent state, as feedback mechanisms and vtime
++	 * updates cannot be disabled during the process.
++	 */
++	list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
++		st = bfq_entity_service_tree(&bfqq->entity);
++
++		dispatched += __bfq_forced_dispatch_bfqq(bfqq);
++		bfqq->max_budget = bfq_max_budget(bfqd);
++
++		bfq_forget_idle(st);
++	}
++
++	BUG_ON(bfqd->busy_queues != 0);
++
++	return dispatched;
++}
++
++static int bfq_dispatch_requests(struct request_queue *q, int force)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq;
++	int max_dispatch;
++
++	bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++	if (bfqd->busy_queues == 0)
++		return 0;
++
++	if (unlikely(force))
++		return bfq_forced_dispatch(bfqd);
++
++	bfqq = bfq_select_queue(bfqd);
++	if (bfqq == NULL)
++		return 0;
++
++	max_dispatch = bfqd->bfq_quantum;
++	if (bfq_class_idle(bfqq))
++		max_dispatch = 1;
++
++	if (!bfq_bfqq_sync(bfqq))
++		max_dispatch = bfqd->bfq_max_budget_async_rq;
++
++	if (bfqq->dispatched >= max_dispatch) {
++		if (bfqd->busy_queues > 1)
++			return 0;
++		if (bfqq->dispatched >= 4 * max_dispatch)
++			return 0;
++	}
++
++	if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
++		return 0;
++
++	bfq_clear_bfqq_wait_request(bfqq);
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++	if (!bfq_dispatch_request(bfqd, bfqq))
++		return 0;
++
++	bfq_log_bfqq(bfqd, bfqq, "dispatched one request of %d (max_disp %d)",
++			bfqq->pid, max_dispatch);
++
++	return 1;
++}
++
++/*
++ * Task holds one reference to the queue, dropped when task exits.  Each rq
++ * in-flight on this queue also holds a reference, dropped when rq is freed.
++ *
++ * Queue lock must be held here.
++ */
++static void bfq_put_queue(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	BUG_ON(atomic_read(&bfqq->ref) <= 0);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	if (!atomic_dec_and_test(&bfqq->ref))
++		return;
++
++	BUG_ON(rb_first(&bfqq->sort_list) != NULL);
++	BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
++	BUG_ON(bfqq->entity.tree != NULL);
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqd->in_service_queue == bfqq);
++
++	if (bfq_bfqq_sync(bfqq))
++		/*
++		 * The fact that this queue is being destroyed does not
++		 * invalidate the fact that this queue may have been
++		 * activated during the current burst. As a consequence,
++		 * although the queue does not exist anymore, and hence
++		 * needs to be removed from the burst list if it is there,
++		 * the burst size must not be decremented.
++		 */
++		hlist_del_init(&bfqq->burst_list_node);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++
++	kmem_cache_free(bfq_pool, bfqq);
++}
++
++static void bfq_put_cooperator(struct bfq_queue *bfqq)
++{
++	struct bfq_queue *__bfqq, *next;
++
++	/*
++	 * If this queue was scheduled to merge with another queue, be
++	 * sure to drop the reference taken on that queue (and others in
++	 * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
++	 */
++	__bfqq = bfqq->new_bfqq;
++	while (__bfqq) {
++		if (__bfqq == bfqq)
++			break;
++		next = __bfqq->new_bfqq;
++		bfq_put_queue(__bfqq);
++		__bfqq = next;
++	}
++}
++
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	if (bfqq == bfqd->in_service_queue) {
++		__bfq_bfqq_expire(bfqd, bfqq);
++		bfq_schedule_dispatch(bfqd);
++	}
++
++	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	bfq_put_cooperator(bfqq);
++
++	bfq_put_queue(bfqq);
++}
++
++static inline void bfq_init_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++
++	bic->ttime.last_end_request = jiffies;
++}
++
++static void bfq_exit_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++	if (bic->bfqq[BLK_RW_ASYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
++		bic->bfqq[BLK_RW_ASYNC] = NULL;
++	}
++
++	if (bic->bfqq[BLK_RW_SYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
++		bic->bfqq[BLK_RW_SYNC] = NULL;
++	}
++}
++
++/*
++ * Update the entity prio values; note that the new values will not
++ * be used until the next (re)activation.
++ */
++static void bfq_init_prio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	struct task_struct *tsk = current;
++	int ioprio_class;
++
++	if (!bfq_bfqq_prio_changed(bfqq))
++		return;
++
++	ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	switch (ioprio_class) {
++	default:
++		dev_err(bfqq->bfqd->queue->backing_dev_info.dev,
++			"bfq: bad prio class %d\n", ioprio_class);
++	case IOPRIO_CLASS_NONE:
++		/*
++		 * No prio set, inherit CPU scheduling settings.
++		 */
++		bfqq->entity.new_ioprio = task_nice_ioprio(tsk);
++		bfqq->entity.new_ioprio_class = task_nice_ioclass(tsk);
++		break;
++	case IOPRIO_CLASS_RT:
++		bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_RT;
++		break;
++	case IOPRIO_CLASS_BE:
++		bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_BE;
++		break;
++	case IOPRIO_CLASS_IDLE:
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_IDLE;
++		bfqq->entity.new_ioprio = 7;
++		bfq_clear_bfqq_idle_window(bfqq);
++		break;
++	}
++
++	if (bfqq->entity.new_ioprio < 0 ||
++	    bfqq->entity.new_ioprio >= IOPRIO_BE_NR) {
++		printk(KERN_CRIT "bfq_init_prio_data: new_ioprio %d\n",
++				 bfqq->entity.new_ioprio);
++		BUG();
++	}
++
++	bfqq->entity.ioprio_changed = 1;
++
++	bfq_clear_bfqq_prio_changed(bfqq);
++}
++
++static void bfq_changed_ioprio(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd;
++	struct bfq_queue *bfqq, *new_bfqq;
++	struct bfq_group *bfqg;
++	unsigned long uninitialized_var(flags);
++	int ioprio = bic->icq.ioc->ioprio;
++
++	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++				   &flags);
++	/*
++	 * This condition may trigger on a newly created bic, be sure to
++	 * drop the lock before returning.
++	 */
++	if (unlikely(bfqd == NULL) || likely(bic->ioprio == ioprio))
++		goto out;
++
++	bfqq = bic->bfqq[BLK_RW_ASYNC];
++	if (bfqq != NULL) {
++		bfqg = container_of(bfqq->entity.sched_data, struct bfq_group,
++				    sched_data);
++		new_bfqq = bfq_get_queue(bfqd, bfqg, BLK_RW_ASYNC, bic,
++					 GFP_ATOMIC);
++		if (new_bfqq != NULL) {
++			bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "changed_ioprio: bfqq %p %d",
++				     bfqq, atomic_read(&bfqq->ref));
++			bfq_put_queue(bfqq);
++		}
++	}
++
++	bfqq = bic->bfqq[BLK_RW_SYNC];
++	if (bfqq != NULL)
++		bfq_mark_bfqq_prio_changed(bfqq);
++
++	bic->ioprio = ioprio;
++
++out:
++	bfq_put_bfqd_unlock(bfqd, &flags);
++}
++
++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  pid_t pid, int is_sync)
++{
++	RB_CLEAR_NODE(&bfqq->entity.rb_node);
++	INIT_LIST_HEAD(&bfqq->fifo);
++	INIT_HLIST_NODE(&bfqq->burst_list_node);
++
++	atomic_set(&bfqq->ref, 0);
++	bfqq->bfqd = bfqd;
++
++	bfq_mark_bfqq_prio_changed(bfqq);
++
++	if (is_sync) {
++		if (!bfq_class_idle(bfqq))
++			bfq_mark_bfqq_idle_window(bfqq);
++		bfq_mark_bfqq_sync(bfqq);
++	}
++	bfq_mark_bfqq_IO_bound(bfqq);
++
++	/* Tentative initial value to trade off between thr and lat */
++	bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
++	bfqq->pid = pid;
++
++	bfqq->wr_coeff = 1;
++	bfqq->last_wr_start_finish = 0;
++	/*
++	 * Set to the value for which bfqq will not be deemed as
++	 * soft rt when it becomes backlogged.
++	 */
++	bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
++}
++
++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
++					      struct bfq_group *bfqg,
++					      int is_sync,
++					      struct bfq_io_cq *bic,
++					      gfp_t gfp_mask)
++{
++	struct bfq_queue *bfqq, *new_bfqq = NULL;
++
++retry:
++	/* bic always exists here */
++	bfqq = bic_to_bfqq(bic, is_sync);
++
++	/*
++	 * If we originally fell back to the OOM bfqq, always retry the
++	 * allocation, since the OOM state should just be temporary.
++	 */
++	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++		bfqq = NULL;
++		if (new_bfqq != NULL) {
++			bfqq = new_bfqq;
++			new_bfqq = NULL;
++		} else if (gfp_mask & __GFP_WAIT) {
++			spin_unlock_irq(bfqd->queue->queue_lock);
++			new_bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++			spin_lock_irq(bfqd->queue->queue_lock);
++			if (new_bfqq != NULL)
++				goto retry;
++		} else {
++			bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++		}
++
++		if (bfqq != NULL) {
++			bfq_init_bfqq(bfqd, bfqq, current->pid, is_sync);
++			bfq_init_prio_data(bfqq, bic);
++			bfq_init_entity(&bfqq->entity, bfqg);
++			bfq_log_bfqq(bfqd, bfqq, "allocated");
++		} else {
++			bfqq = &bfqd->oom_bfqq;
++			bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++		}
++	}
++
++	if (new_bfqq != NULL)
++		kmem_cache_free(bfq_pool, new_bfqq);
++
++	return bfqq;
++}
++
++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
++					       struct bfq_group *bfqg,
++					       int ioprio_class, int ioprio)
++{
++	switch (ioprio_class) {
++	case IOPRIO_CLASS_RT:
++		return &bfqg->async_bfqq[0][ioprio];
++	case IOPRIO_CLASS_NONE:
++		ioprio = IOPRIO_NORM;
++		/* fall through */
++	case IOPRIO_CLASS_BE:
++		return &bfqg->async_bfqq[1][ioprio];
++	case IOPRIO_CLASS_IDLE:
++		return &bfqg->async_idle_bfqq;
++	default:
++		BUG();
++	}
++}
++
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bfq_group *bfqg, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask)
++{
++	const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++	const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	struct bfq_queue **async_bfqq = NULL;
++	struct bfq_queue *bfqq = NULL;
++
++	if (!is_sync) {
++		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
++						  ioprio);
++		bfqq = *async_bfqq;
++	}
++
++	if (bfqq == NULL)
++		bfqq = bfq_find_alloc_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++
++	/*
++	 * Pin the queue now that it's allocated, scheduler exit will
++	 * prune it.
++	 */
++	if (!is_sync && *async_bfqq == NULL) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		*async_bfqq = bfqq;
++	}
++
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	return bfqq;
++}
++
++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
++				    struct bfq_io_cq *bic)
++{
++	unsigned long elapsed = jiffies - bic->ttime.last_end_request;
++	unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++
++	bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++	bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
++	bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
++				bic->ttime.ttime_samples;
++}
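++
++/*
++ * The updates above implement an exponential average with a 7/8 decay
++ * in fixed point: ttime_samples converges to 256 (the fixed-point
++ * weight of one), ttime_total to 256 times the weighted mean think
++ * time, and the +128 in the mean rounds to nearest. For instance,
++ * after many samples with a constant think time t, ttime_total tends
++ * to 256 * t and ttime_mean to t.
++ */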
++
++static void bfq_update_io_seektime(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct request *rq)
++{
++	sector_t sdist;
++	u64 total;
++
++	if (bfqq->last_request_pos < blk_rq_pos(rq))
++		sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
++	else
++		sdist = bfqq->last_request_pos - blk_rq_pos(rq);
++
++	/*
++	 * Don't allow the seek distance to get too large from the
++	 * odd fragment, pagein, etc.
++	 */
++	if (bfqq->seek_samples == 0) /* first request, not really a seek */
++		sdist = 0;
++	else if (bfqq->seek_samples <= 60) /* second & third seek */
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
++	else
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
++
++	bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
++	bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
++	total = bfqq->seek_total + (bfqq->seek_samples/2);
++	do_div(total, bfqq->seek_samples);
++	bfqq->seek_mean = (sector_t)total;
++
++	bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
++			(u64)bfqq->seek_mean);
++}
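++
++/*
++ * seek_samples and seek_total follow the same 7/8-decay fixed-point
++ * scheme as the think-time statistics above, so seek_mean approximates
++ * an exponentially weighted mean of the recent seek distances, in
++ * sectors. The clamping above only bounds how much a single outlier
++ * (e.g., one distant pagein) can inflate the mean while few samples
++ * have been collected.
++ */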
++
++/*
++ * Disable idle window if the process thinks too long or seeks so much that
++ * it doesn't matter.
++ */
++static void bfq_update_idle_window(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct bfq_io_cq *bic)
++{
++	int enable_idle;
++
++	/* Don't idle for async or idle io prio class. */
++	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
++		return;
++
++	enable_idle = bfq_bfqq_idle_window(bfqq);
++
++	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
++	    bfqd->bfq_slice_idle == 0 ||
++		(bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
++			bfqq->wr_coeff == 1))
++		enable_idle = 0;
++	else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
++		if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
++			bfqq->wr_coeff == 1)
++			enable_idle = 0;
++		else
++			enable_idle = 1;
++	}
++	bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
++		enable_idle);
++
++	if (enable_idle)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++}
++
++/*
++ * Called when a new fs request (rq) is added to bfqq.  Check if there's
++ * something we should do about it.
++ */
++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			    struct request *rq)
++{
++	struct bfq_io_cq *bic = RQ_BIC(rq);
++
++	if (rq->cmd_flags & REQ_META)
++		bfqq->meta_pending++;
++
++	bfq_update_io_thinktime(bfqd, bic);
++	bfq_update_io_seektime(bfqd, bfqq, rq);
++	if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_clear_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
++			bfqd->const_seeky_busy_in_flight_queues--;
++		}
++	}
++	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
++	    !BFQQ_SEEKY(bfqq))
++		bfq_update_idle_window(bfqd, bfqq, bic);
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
++		     bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
++		     (long long unsigned)bfqq->seek_mean);
++
++	bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
++
++	if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) {
++		int small_req = bfqq->queued[rq_is_sync(rq)] == 1 &&
++				blk_rq_sectors(rq) < 32;
++		int budget_timeout = bfq_bfqq_budget_timeout(bfqq);
++
++		/*
++		 * There is just this request queued: if the request
++		 * is small and the queue is not to be expired, then
++		 * just exit.
++		 *
++		 * In this way, if the disk is being idled to wait for
++		 * a new request from the in-service queue, we avoid
++		 * unplugging the device and committing the disk to serve
++		 * just a small request. On the contrary, we wait for
++		 * the block layer to decide when to unplug the device:
++		 * hopefully, new requests will be merged to this one
++		 * quickly, then the device will be unplugged and
++		 * larger requests will be dispatched.
++		 */
++		if (small_req && !budget_timeout)
++			return;
++
++		/*
++		 * A large enough request arrived, or the queue is to
++		 * be expired: in both cases disk idling is to be
++		 * stopped, so clear wait_request flag and reset
++		 * timer.
++		 */
++		bfq_clear_bfqq_wait_request(bfqq);
++		del_timer(&bfqd->idle_slice_timer);
++
++		/*
++		 * The queue is not empty, because a new request just
++		 * arrived. Hence we can safely expire the queue, in
++		 * case of budget timeout, without risking that the
++		 * timestamps of the queue are not updated correctly.
++		 * See [1] for more details.
++		 */
++		if (budget_timeout)
++			bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++
++		/*
++		 * Let the request rip immediately, or let a new queue be
++		 * selected if bfqq has just been expired.
++		 */
++		__blk_run_queue(bfqd->queue);
++	}
++}
++
++static void bfq_insert_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++	bfq_init_prio_data(bfqq, RQ_BIC(rq));
++
++	bfq_add_request(rq);
++
++	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++	list_add_tail(&rq->queuelist, &bfqq->fifo);
++
++	bfq_rq_enqueued(bfqd, bfqq, rq);
++}
++
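++/*
++ * Detect whether the drive supports command queueing by observing the
++ * maximum number of requests in flight over a sampling window:
++ * bfqd->hw_tag starts at -1 (unknown) and is set to 1 once enough
++ * requests have been seen outstanding at the same time.
++ */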
++static void bfq_update_hw_tag(struct bfq_data *bfqd)
++{
++	bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
++				     bfqd->rq_in_driver);
++
++	if (bfqd->hw_tag == 1)
++		return;
++
++	/*
++	 * This sample is valid if the number of outstanding requests
++	 * is large enough to allow a queueing behavior.  Note that the
++	 * sum is not exact, as it's not taking into account deactivated
++	 * requests.
++	 */
++	if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
++		return;
++
++	if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
++		return;
++
++	bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
++	bfqd->max_rq_in_driver = 0;
++	bfqd->hw_tag_samples = 0;
++}
++
++static void bfq_completed_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	bool sync = bfq_bfqq_sync(bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
++		     blk_rq_sectors(rq), sync);
++
++	bfq_update_hw_tag(bfqd);
++
++	BUG_ON(!bfqd->rq_in_driver);
++	BUG_ON(!bfqq->dispatched);
++	bfqd->rq_in_driver--;
++	bfqq->dispatched--;
++
++	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++
++	if (sync) {
++		bfqd->sync_flight--;
++		RQ_BIC(rq)->ttime.last_end_request = jiffies;
++	}
++
++	/*
++	 * If we are waiting to discover whether the request pattern of the
++	 * task associated with the queue is actually isochronous, and
++	 * both requisites for this condition to hold are satisfied, then
++	 * compute soft_rt_next_start (see the comments to the function
++	 * bfq_bfqq_softrt_next_start()).
++	 */
++	if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list))
++		bfqq->soft_rt_next_start =
++			bfq_bfqq_softrt_next_start(bfqd, bfqq);
++
++	/*
++	 * If this is the in-service queue, check if it needs to be expired,
++	 * or if we want to idle in case it has no pending requests.
++	 */
++	if (bfqd->in_service_queue == bfqq) {
++		if (bfq_bfqq_budget_new(bfqq))
++			bfq_set_budget_timeout(bfqd);
++
++		if (bfq_bfqq_must_idle(bfqq)) {
++			bfq_arm_slice_timer(bfqd);
++			goto out;
++		} else if (bfq_may_expire_for_budg_timeout(bfqq))
++			bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++		else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
++			 (bfqq->dispatched == 0 ||
++			  !bfq_bfqq_must_not_expire(bfqq)))
++			bfq_bfqq_expire(bfqd, bfqq, 0,
++					BFQ_BFQQ_NO_MORE_REQUESTS);
++	}
++
++	if (!bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++
++out:
++	return;
++}
++
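++/*
++ * Decide whether the task may allocate a new request.  Returning
++ * ELV_MQUEUE_MUST asks the block layer to let the allocation through
++ * even under congestion: this is used for the queue the device is
++ * idling on, so that the request being waited for is not blocked.
++ */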
++static inline int __bfq_may_queue(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
++		bfq_clear_bfqq_must_alloc(bfqq);
++		return ELV_MQUEUE_MUST;
++	}
++
++	return ELV_MQUEUE_MAY;
++}
++
++static int bfq_may_queue(struct request_queue *q, int rw)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	/*
++	 * Don't force setup of a queue from here, as a call to may_queue
++	 * does not necessarily imply that a request actually will be
++	 * queued. So just lookup a possibly existing queue, or return
++	 * 'may queue' if that fails.
++	 */
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (bic == NULL)
++		return ELV_MQUEUE_MAY;
++
++	bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++	if (bfqq != NULL) {
++		bfq_init_prio_data(bfqq, bic);
++
++		return __bfq_may_queue(bfqq);
++	}
++
++	return ELV_MQUEUE_MAY;
++}
++
++/*
++ * Queue lock held here.
++ */
++static void bfq_put_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	if (bfqq != NULL) {
++		const int rw = rq_data_dir(rq);
++
++		BUG_ON(!bfqq->allocated[rw]);
++		bfqq->allocated[rw]--;
++
++		rq->elv.priv[0] = NULL;
++		rq->elv.priv[1] = NULL;
++
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++static struct bfq_queue *
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++		struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++		(unsigned long)bfqq->new_bfqq->pid);
++	bic_set_bfqq(bic, bfqq->new_bfqq, 1);
++	bfq_mark_bfqq_coop(bfqq->new_bfqq);
++	bfq_put_queue(bfqq);
++	return bic_to_bfqq(bic, 1);
++}
++
++/*
++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
++ * was the last process referring to said bfqq.
++ */
++static struct bfq_queue *
++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++	if (bfqq_process_refs(bfqq) == 1) {
++		bfqq->pid = current->pid;
++		bfq_clear_bfqq_coop(bfqq);
++		bfq_clear_bfqq_split_coop(bfqq);
++		return bfqq;
++	}
++
++	bic_set_bfqq(bic, NULL, 1);
++
++	bfq_put_cooperator(bfqq);
++
++	bfq_put_queue(bfqq);
++	return NULL;
++}
++
++/*
++ * Allocate bfq data structures associated with this request.
++ */
++static int bfq_set_request(struct request_queue *q, struct request *rq,
++			   struct bio *bio, gfp_t gfp_mask)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
++	const int rw = rq_data_dir(rq);
++	const int is_sync = rq_is_sync(rq);
++	struct bfq_queue *bfqq;
++	struct bfq_group *bfqg;
++	unsigned long flags;
++
++	might_sleep_if(gfp_mask & __GFP_WAIT);
++
++	bfq_changed_ioprio(bic);
++
++	spin_lock_irqsave(q->queue_lock, flags);
++
++	if (bic == NULL)
++		goto queue_fail;
++
++	bfqg = bfq_bic_update_cgroup(bic);
++
++new_queue:
++	bfqq = bic_to_bfqq(bic, is_sync);
++	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++		bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++		bic_set_bfqq(bic, bfqq, is_sync);
++	} else {
++		/*
++		 * If the queue was seeky for too long, break it apart.
++		 */
++		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
++			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++			bfqq = bfq_split_bfqq(bic, bfqq);
++			if (!bfqq)
++				goto new_queue;
++		}
++
++		/*
++		 * Check to see if this queue is scheduled to merge with
++		 * another closely cooperating queue. The merging of queues
++		 * happens here as it must be done in process context.
++		 * The reference on new_bfqq was taken in merge_bfqqs.
++		 */
++		if (bfqq->new_bfqq != NULL)
++			bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
++	}
++
++	bfqq->allocated[rw]++;
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	rq->elv.priv[0] = bic;
++	rq->elv.priv[1] = bfqq;
++
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 0;
++
++queue_fail:
++	bfq_schedule_dispatch(bfqd);
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 1;
++}
++
++static void bfq_kick_queue(struct work_struct *work)
++{
++	struct bfq_data *bfqd =
++		container_of(work, struct bfq_data, unplug_work);
++	struct request_queue *q = bfqd->queue;
++
++	spin_lock_irq(q->queue_lock);
++	__blk_run_queue(q);
++	spin_unlock_irq(q->queue_lock);
++}
++
++/*
++ * Handler of the expiration of the timer running if the in-service queue
++ * is idling inside its time slice.
++ */
++static void bfq_idle_slice_timer(unsigned long data)
++{
++	struct bfq_data *bfqd = (struct bfq_data *)data;
++	struct bfq_queue *bfqq;
++	unsigned long flags;
++	enum bfqq_expiration reason;
++
++	spin_lock_irqsave(bfqd->queue->queue_lock, flags);
++
++	bfqq = bfqd->in_service_queue;
++	/*
++	 * Theoretical race here: the in-service queue can be NULL or
++	 * different from the queue that was idling if the timer handler
++	 * spins on the queue_lock and a new request arrives for the
++	 * current queue and there is a full dispatch cycle that changes
++	 * the in-service queue.  This can hardly happen, but in the worst
++	 * case we just expire a queue too early.
++	 */
++	if (bfqq != NULL) {
++		bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++		if (bfq_bfqq_budget_timeout(bfqq))
++			/*
++			 * Also here the queue can be safely expired
++			 * for budget timeout without wasting
++			 * guarantees
++			 */
++			reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++		else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
++			/*
++			 * The queue may not be empty upon timer expiration,
++			 * because we may not disable the timer when the
++			 * first request of the in-service queue arrives
++			 * during disk idling.
++			 */
++			reason = BFQ_BFQQ_TOO_IDLE;
++		else
++			goto schedule_dispatch;
++
++		bfq_bfqq_expire(bfqd, bfqq, 1, reason);
++	}
++
++schedule_dispatch:
++	bfq_schedule_dispatch(bfqd);
++
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++}
++
++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
++{
++	del_timer_sync(&bfqd->idle_slice_timer);
++	cancel_work_sync(&bfqd->unplug_work);
++}
++
++static inline void __bfq_put_async_bfqq(struct bfq_data *bfqd,
++					struct bfq_queue **bfqq_ptr)
++{
++	struct bfq_group *root_group = bfqd->root_group;
++	struct bfq_queue *bfqq = *bfqq_ptr;
++
++	bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
++	if (bfqq != NULL) {
++		bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++		bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++		*bfqq_ptr = NULL;
++	}
++}
++
++/*
++ * Release all the bfqg references to its async queues.  If we are
++ * deallocating the group these queues may still contain requests, so
++ * we reparent them to the root cgroup (i.e., the only one that will
++ * exist for sure until all the requests on a device are gone).
++ */
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
++
++	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
++}
++
++static void bfq_exit_queue(struct elevator_queue *e)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	struct request_queue *q = bfqd->queue;
++	struct bfq_queue *bfqq, *n;
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	spin_lock_irq(q->queue_lock);
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++	list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
++		bfq_deactivate_bfqq(bfqd, bfqq, 0);
++
++	bfq_disconnect_groups(bfqd);
++	spin_unlock_irq(q->queue_lock);
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	synchronize_rcu();
++
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++	bfq_free_root_group(bfqd);
++	kfree(bfqd);
++}
++
++static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++{
++	struct bfq_group *bfqg;
++	struct bfq_data *bfqd;
++	struct elevator_queue *eq;
++
++	eq = elevator_alloc(q, e);
++	if (eq == NULL)
++		return -ENOMEM;
++
++	bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
++	if (bfqd == NULL) {
++		kobject_put(&eq->kobj);
++		return -ENOMEM;
++	}
++	eq->elevator_data = bfqd;
++
++	/*
++	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
++	 * Grab a permanent reference to it, so that the normal code flow
++	 * will not attempt to free it.
++	 */
++	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, 1, 0);
++	atomic_inc(&bfqd->oom_bfqq.ref);
++	bfqd->oom_bfqq.entity.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
++	bfqd->oom_bfqq.entity.new_ioprio_class = IOPRIO_CLASS_BE;
++	/*
++	 * Trigger weight initialization, according to ioprio, at the
++	 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
++	 * class won't be changed any more.
++	 */
++	bfqd->oom_bfqq.entity.ioprio_changed = 1;
++
++	bfqd->queue = q;
++
++	spin_lock_irq(q->queue_lock);
++	q->elevator = eq;
++	spin_unlock_irq(q->queue_lock);
++
++	bfqg = bfq_alloc_root_group(bfqd, q->node);
++	if (bfqg == NULL) {
++		kfree(bfqd);
++		kobject_put(&eq->kobj);
++		return -ENOMEM;
++	}
++
++	bfqd->root_group = bfqg;
++	bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
++#ifdef CONFIG_CGROUP_BFQIO
++	bfqd->active_numerous_groups = 0;
++#endif
++
++	init_timer(&bfqd->idle_slice_timer);
++	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
++	bfqd->idle_slice_timer.data = (unsigned long)bfqd;
++
++	bfqd->rq_pos_tree = RB_ROOT;
++	bfqd->queue_weights_tree = RB_ROOT;
++	bfqd->group_weights_tree = RB_ROOT;
++
++	INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
++
++	INIT_LIST_HEAD(&bfqd->active_list);
++	INIT_LIST_HEAD(&bfqd->idle_list);
++	INIT_HLIST_HEAD(&bfqd->burst_list);
++
++	bfqd->hw_tag = -1;
++
++	bfqd->bfq_max_budget = bfq_default_max_budget;
++
++	bfqd->bfq_quantum = bfq_quantum;
++	bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
++	bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
++	bfqd->bfq_back_max = bfq_back_max;
++	bfqd->bfq_back_penalty = bfq_back_penalty;
++	bfqd->bfq_slice_idle = bfq_slice_idle;
++	bfqd->bfq_class_idle_last_service = 0;
++	bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
++	bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
++	bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++
++	bfqd->bfq_coop_thresh = 2;
++	bfqd->bfq_failed_cooperations = 7000;
++	bfqd->bfq_requests_within_timer = 120;
++
++	bfqd->bfq_large_burst_thresh = 11;
++	bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++
++	bfqd->low_latency = true;
++
++	bfqd->bfq_wr_coeff = 20;
++	bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
++	bfqd->bfq_wr_max_time = 0;
++	bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
++	bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500);
++	bfqd->bfq_wr_max_softrt_rate = 7000; /*
++					      * Approximate rate required
++					      * to playback or record a
++					      * high-definition compressed
++					      * video.
++					      */
++	bfqd->wr_busy_queues = 0;
++	bfqd->busy_in_flight_queues = 0;
++	bfqd->const_seeky_busy_in_flight_queues = 0;
++
++	/*
++	 * Begin by assuming, optimistically, that the device peak rate is
++	 * equal to the highest reference rate.
++	 */
++	bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
++			T_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->device_speed = BFQ_BFQD_FAST;
++
++	return 0;
++}
++
++static void bfq_slab_kill(void)
++{
++	if (bfq_pool != NULL)
++		kmem_cache_destroy(bfq_pool);
++}
++
++static int __init bfq_slab_setup(void)
++{
++	bfq_pool = KMEM_CACHE(bfq_queue, 0);
++	if (bfq_pool == NULL)
++		return -ENOMEM;
++	return 0;
++}
++
++static ssize_t bfq_var_show(unsigned int var, char *page)
++{
++	return sprintf(page, "%u\n", var);
++}
++
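++/*
++ * On a parse error the stored value is left unchanged, but count is
++ * returned anyway, so the sysfs write still appears to succeed.
++ */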
++static ssize_t bfq_var_store(unsigned long *var, const char *page,
++			     size_t count)
++{
++	unsigned long new_val;
++	int ret = kstrtoul(page, 10, &new_val);
++
++	if (ret == 0)
++		*var = new_val;
++
++	return count;
++}
++
++static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ?
++		       jiffies_to_msecs(bfqd->bfq_wr_max_time) :
++		       jiffies_to_msecs(bfq_wr_duration(bfqd)));
++}
++
++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_queue *bfqq;
++	struct bfq_data *bfqd = e->elevator_data;
++	ssize_t num_char = 0;
++
++	num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
++			    bfqd->queued);
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	num_char += sprintf(page + num_char, "Active:\n");
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++				    "pid%d: weight %hu, nr_queued %d %d, dur %d/%u\n",
++				    bfqq->pid,
++				    bfqq->entity.weight,
++				    bfqq->queued[0],
++				    bfqq->queued[1],
++				    jiffies_to_msecs(jiffies -
++						     bfqq->last_wr_start_finish),
++				    jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	num_char += sprintf(page + num_char, "Idle:\n");
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++				    "pid%d: weight %hu, dur %d/%u\n",
++				    bfqq->pid,
++				    bfqq->entity.weight,
++				    jiffies_to_msecs(jiffies -
++						     bfqq->last_wr_start_finish),
++				    jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++
++	return num_char;
++}
++
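++/*
++ * Generate the show/store methods for the tunables exported through
++ * sysfs (under the iosched directory of the device queue).  __CONV
++ * selects whether the value is converted between the internal jiffies
++ * representation and the milliseconds shown to user space.
++ */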
++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
++static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned int __data = __VAR;					\
++	if (__CONV)							\
++		__data = jiffies_to_msecs(__data);			\
++	return bfq_var_show(__data, (page));				\
++}
++SHOW_FUNCTION(bfq_quantum_show, bfqd->bfq_quantum, 0);
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
++SHOW_FUNCTION(bfq_max_budget_async_rq_show,
++	      bfqd->bfq_max_budget_async_rq, 0);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
++SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
++SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
++SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1);
++SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
++	1);
++SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
++static ssize_t								\
++__FUNC(struct elevator_queue *e, const char *page, size_t count)	\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned long uninitialized_var(__data);			\
++	int ret = bfq_var_store(&__data, (page), count);		\
++	if (__data < (MIN))						\
++		__data = (MIN);						\
++	else if (__data > (MAX))					\
++		__data = (MAX);						\
++	if (__CONV)							\
++		*(__PTR) = msecs_to_jiffies(__data);			\
++	else								\
++		*(__PTR) = __data;					\
++	return ret;							\
++}
++STORE_FUNCTION(bfq_quantum_store, &bfqd->bfq_quantum, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
++		INT_MAX, 0);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
++		1, INT_MAX, 0);
++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
++		1);
++STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_min_inter_arr_async_store,
++		&bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
++		INT_MAX, 0);
++#undef STORE_FUNCTION
++
++/* do nothing for the moment */
++static ssize_t bfq_weights_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	return count;
++}
++
++static inline unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
++{
++	u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
++		return bfq_calc_max_budget(bfqd->peak_rate, timeout);
++	else
++		return bfq_default_max_budget;
++}
++
++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++	else {
++		if (__data > INT_MAX)
++			__data = INT_MAX;
++		bfqd->bfq_max_budget = __data;
++	}
++
++	bfqd->bfq_user_max_budget = __data;
++
++	return ret;
++}
++
++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
++				      const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data < 1)
++		__data = 1;
++	else if (__data > INT_MAX)
++		__data = INT_MAX;
++
++	bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++	if (bfqd->bfq_user_max_budget == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++
++	return ret;
++}
++
++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
++				     const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data > 1)
++		__data = 1;
++	if (__data == 0 && bfqd->low_latency != 0)
++		bfq_end_wr(bfqd);
++	bfqd->low_latency = __data;
++
++	return ret;
++}
++
++#define BFQ_ATTR(name) \
++	__ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
++
++static struct elv_fs_entry bfq_attrs[] = {
++	BFQ_ATTR(quantum),
++	BFQ_ATTR(fifo_expire_sync),
++	BFQ_ATTR(fifo_expire_async),
++	BFQ_ATTR(back_seek_max),
++	BFQ_ATTR(back_seek_penalty),
++	BFQ_ATTR(slice_idle),
++	BFQ_ATTR(max_budget),
++	BFQ_ATTR(max_budget_async_rq),
++	BFQ_ATTR(timeout_sync),
++	BFQ_ATTR(timeout_async),
++	BFQ_ATTR(low_latency),
++	BFQ_ATTR(wr_coeff),
++	BFQ_ATTR(wr_max_time),
++	BFQ_ATTR(wr_rt_max_time),
++	BFQ_ATTR(wr_min_idle_time),
++	BFQ_ATTR(wr_min_inter_arr_async),
++	BFQ_ATTR(wr_max_softrt_rate),
++	BFQ_ATTR(weights),
++	__ATTR_NULL
++};
++
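++/*
++ * The elevator operations that plug BFQ into the block layer: request
++ * merging, dispatching, per-icq lifecycle and queue init/exit.
++ */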
++static struct elevator_type iosched_bfq = {
++	.ops = {
++		.elevator_merge_fn =		bfq_merge,
++		.elevator_merged_fn =		bfq_merged_request,
++		.elevator_merge_req_fn =	bfq_merged_requests,
++		.elevator_allow_merge_fn =	bfq_allow_merge,
++		.elevator_dispatch_fn =		bfq_dispatch_requests,
++		.elevator_add_req_fn =		bfq_insert_request,
++		.elevator_activate_req_fn =	bfq_activate_request,
++		.elevator_deactivate_req_fn =	bfq_deactivate_request,
++		.elevator_completed_req_fn =	bfq_completed_request,
++		.elevator_former_req_fn =	elv_rb_former_request,
++		.elevator_latter_req_fn =	elv_rb_latter_request,
++		.elevator_init_icq_fn =		bfq_init_icq,
++		.elevator_exit_icq_fn =		bfq_exit_icq,
++		.elevator_set_req_fn =		bfq_set_request,
++		.elevator_put_req_fn =		bfq_put_request,
++		.elevator_may_queue_fn =	bfq_may_queue,
++		.elevator_init_fn =		bfq_init_queue,
++		.elevator_exit_fn =		bfq_exit_queue,
++	},
++	.icq_size =		sizeof(struct bfq_io_cq),
++	.icq_align =		__alignof__(struct bfq_io_cq),
++	.elevator_attrs =	bfq_attrs,
++	.elevator_name =	"bfq",
++	.elevator_owner =	THIS_MODULE,
++};
++
++static int __init bfq_init(void)
++{
++	/*
++	 * Can be 0 on HZ < 1000 setups.
++	 */
++	if (bfq_slice_idle == 0)
++		bfq_slice_idle = 1;
++
++	if (bfq_timeout_async == 0)
++		bfq_timeout_async = 1;
++
++	if (bfq_slab_setup())
++		return -ENOMEM;
++
++	/*
++	 * Times to load large popular applications for the typical systems
++	 * installed on the reference devices (see the comments before the
++	 * definitions of the two arrays).
++	 */
++	T_slow[0] = msecs_to_jiffies(2600);
++	T_slow[1] = msecs_to_jiffies(1000);
++	T_fast[0] = msecs_to_jiffies(5500);
++	T_fast[1] = msecs_to_jiffies(2000);
++
++	/*
++	 * Thresholds that determine the switch between speed classes (see
++	 * the comments before the definition of the array).
++	 */
++	device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
++	device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++
++	elv_register(&iosched_bfq);
++	pr_info("BFQ I/O-scheduler version: v7r7\n");
++
++	return 0;
++}
++
++static void __exit bfq_exit(void)
++{
++	elv_unregister(&iosched_bfq);
++	bfq_slab_kill();
++}
++
++module_init(bfq_init);
++module_exit(bfq_exit);
++
++MODULE_AUTHOR("Fabio Checconi, Paolo Valente");
++MODULE_LICENSE("GPL");
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+new file mode 100644
+index 0000000..2931563
+--- /dev/null
++++ b/block/bfq-sched.c
+@@ -0,0 +1,1214 @@
++/*
++ * BFQ: Hierarchical B-WF2Q+ scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
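++/*
++ * With CONFIG_CGROUP_BFQIO the scheduler is hierarchical: an entity
++ * may be a queue or a group, and updates are propagated from leaf
++ * queues up to the root group.  Without cgroups the hierarchy
++ * degenerates to a single level and these helpers reduce to no-ops.
++ */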
++#ifdef CONFIG_CGROUP_BFQIO
++#define for_each_entity(entity)	\
++	for (; entity != NULL; entity = entity->parent)
++
++#define for_each_entity_safe(entity, parent) \
++	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd);
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++	struct bfq_entity *bfqg_entity;
++	struct bfq_group *bfqg;
++	struct bfq_sched_data *group_sd;
++
++	BUG_ON(next_in_service == NULL);
++
++	group_sd = next_in_service->sched_data;
++
++	bfqg = container_of(group_sd, struct bfq_group, sched_data);
++	/*
++	 * bfq_group's my_entity field is not NULL only if the group
++	 * is not the root group. We must not touch the root entity
++	 * as it must never become an in-service entity.
++	 */
++	bfqg_entity = bfqg->my_entity;
++	if (bfqg_entity != NULL)
++		bfqg_entity->budget = next_in_service->budget;
++}
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	struct bfq_entity *next_in_service;
++
++	if (sd->in_service_entity != NULL)
++		/* will update/requeue at the end of service */
++		return 0;
++
++	/*
++	 * NOTE: this can be improved in many ways, such as returning
++	 * 1 (and thus propagating upwards the update) only when the
++	 * budget changes, or caching the bfqq that will be scheduled
++	 * next from this subtree.  For now we worry more about
++	 * correctness than about performance...
++	 */
++	next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
++	sd->next_in_service = next_in_service;
++
++	if (next_in_service != NULL)
++		bfq_update_budget(next_in_service);
++
++	return 1;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++					     struct bfq_entity *entity)
++{
++	BUG_ON(sd->next_in_service != entity);
++}
++#else
++#define for_each_entity(entity)	\
++	for (; entity != NULL; entity = NULL)
++
++#define for_each_entity_safe(entity, parent) \
++	for (parent = NULL; entity != NULL; entity = parent)
++
++static inline int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	return 0;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++					     struct bfq_entity *entity)
++{
++}
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++}
++#endif
++
++/*
++ * Shift for timestamp calculations.  This actually limits the maximum
++ * service allowed in one timestamp delta (small shift values increase it),
++ * the maximum total weight that can be used for the queues in the system
++ * (big shift values increase it), and the period of virtual time
++ * wraparounds.
++ */
++#define WFQ_SERVICE_SHIFT	22
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
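++ *
++ * The signed difference handles wraparound correctly as long as the
++ * two timestamps are less than 2^63 apart.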
++ */
++static inline int bfq_gt(u64 a, u64 b)
++{
++	return (s64)(a - b) > 0;
++}
++
++static inline struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = NULL;
++
++	BUG_ON(entity == NULL);
++
++	if (entity->my_sched_data == NULL)
++		bfqq = container_of(entity, struct bfq_queue, entity);
++
++	return bfqq;
++}
++
++
++/**
++ * bfq_delta - map service into the virtual time domain.
++ * @service: amount of service.
++ * @weight: scale factor (weight of an entity or weight sum).
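++ *
++ * Returns (@service << WFQ_SERVICE_SHIFT) / @weight, i.e., the span
++ * of virtual time that @service occupies at the given @weight.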
++ */
++static inline u64 bfq_delta(unsigned long service,
++					unsigned long weight)
++{
++	u64 d = (u64)service << WFQ_SERVICE_SHIFT;
++
++	do_div(d, weight);
++	return d;
++}
++
++/**
++ * bfq_calc_finish - assign the finish time to an entity.
++ * @entity: the entity to act upon.
++ * @service: the service to be charged to the entity.
++ */
++static inline void bfq_calc_finish(struct bfq_entity *entity,
++				   unsigned long service)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(entity->weight == 0);
++
++	entity->finish = entity->start +
++		bfq_delta(service, entity->weight);
++
++	if (bfqq != NULL) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: serv %lu, w %d",
++			service, entity->weight);
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: start %llu, finish %llu, delta %llu",
++			entity->start, entity->finish,
++			bfq_delta(service, entity->weight));
++	}
++}
++
++/**
++ * bfq_entity_of - get an entity from a node.
++ * @node: the node field of the entity.
++ *
++ * Convert a node pointer to the relative entity.  This is used only
++ * to simplify the logic of some functions and not as the generic
++ * conversion mechanism because, e.g., in the tree walking functions,
++ * the check for a %NULL value would be redundant.
++ */
++static inline struct bfq_entity *bfq_entity_of(struct rb_node *node)
++{
++	struct bfq_entity *entity = NULL;
++
++	if (node != NULL)
++		entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	return entity;
++}
++
++/**
++ * bfq_extract - remove an entity from a tree.
++ * @root: the tree root.
++ * @entity: the entity to remove.
++ */
++static inline void bfq_extract(struct rb_root *root,
++			       struct bfq_entity *entity)
++{
++	BUG_ON(entity->tree != root);
++
++	entity->tree = NULL;
++	rb_erase(&entity->rb_node, root);
++}
++
++/**
++ * bfq_idle_extract - extract an entity from the idle tree.
++ * @st: the service tree of the owning @entity.
++ * @entity: the entity being removed.
++ */
++static void bfq_idle_extract(struct bfq_service_tree *st,
++			     struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *next;
++
++	BUG_ON(entity->tree != &st->idle);
++
++	if (entity == st->first_idle) {
++		next = rb_next(&entity->rb_node);
++		st->first_idle = bfq_entity_of(next);
++	}
++
++	if (entity == st->last_idle) {
++		next = rb_prev(&entity->rb_node);
++		st->last_idle = bfq_entity_of(next);
++	}
++
++	bfq_extract(&st->idle, entity);
++
++	if (bfqq != NULL)
++		list_del(&bfqq->bfqq_list);
++}
++
++/**
++ * bfq_insert - generic tree insertion.
++ * @root: tree root.
++ * @entity: entity to insert.
++ *
++ * This is used for the idle and the active tree, since they are both
++ * ordered by finish time.
++ */
++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
++{
++	struct bfq_entity *entry;
++	struct rb_node **node = &root->rb_node;
++	struct rb_node *parent = NULL;
++
++	BUG_ON(entity->tree != NULL);
++
++	while (*node != NULL) {
++		parent = *node;
++		entry = rb_entry(parent, struct bfq_entity, rb_node);
++
++		if (bfq_gt(entry->finish, entity->finish))
++			node = &parent->rb_left;
++		else
++			node = &parent->rb_right;
++	}
++
++	rb_link_node(&entity->rb_node, parent, node);
++	rb_insert_color(&entity->rb_node, root);
++
++	entity->tree = root;
++}
++
++/**
++ * bfq_update_min - update the min_start field of a entity.
++ * @entity: the entity to update.
++ * @node: one of its children.
++ *
++ * This function is called when @entity may store an invalid value for
++ * min_start due to updates to the active tree.  The function assumes
++ * that the subtree rooted at @node (which may be its left or its right
++ * child) has a valid min_start value.
++ */
++static inline void bfq_update_min(struct bfq_entity *entity,
++				  struct rb_node *node)
++{
++	struct bfq_entity *child;
++
++	if (node != NULL) {
++		child = rb_entry(node, struct bfq_entity, rb_node);
++		if (bfq_gt(entity->min_start, child->min_start))
++			entity->min_start = child->min_start;
++	}
++}
++
++/**
++ * bfq_update_active_node - recalculate min_start.
++ * @node: the node to update.
++ *
++ * @node may have changed position or one of its children may have moved,
++ * this function updates its min_start value.  The left and right subtrees
++ * are assumed to hold a correct min_start value.
++ */
++static inline void bfq_update_active_node(struct rb_node *node)
++{
++	struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	entity->min_start = entity->start;
++	bfq_update_min(entity, node->rb_right);
++	bfq_update_min(entity, node->rb_left);
++}
++
++/**
++ * bfq_update_active_tree - update min_start for the whole active tree.
++ * @node: the starting node.
++ *
++ * @node must be the deepest modified node after an update.  This function
++ * updates its min_start using the values held by its children, assuming
++ * that they did not change, and then updates all the nodes that may have
++ * changed in the path to the root.  The only nodes that may have changed
++ * are the ones in the path or their siblings.
++ */
++static void bfq_update_active_tree(struct rb_node *node)
++{
++	struct rb_node *parent;
++
++up:
++	bfq_update_active_node(node);
++
++	parent = rb_parent(node);
++	if (parent == NULL)
++		return;
++
++	if (node == parent->rb_left && parent->rb_right != NULL)
++		bfq_update_active_node(parent->rb_right);
++	else if (parent->rb_left != NULL)
++		bfq_update_active_node(parent->rb_left);
++
++	node = parent;
++	goto up;
++}
++
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root);
++
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root);
++
++
++/**
++ * bfq_active_insert - insert an entity in the active tree of its
++ *                     group/device.
++ * @st: the service tree of the entity.
++ * @entity: the entity being inserted.
++ *
++ * The active tree is ordered by finish time, but an extra key is kept
++ * per each node, containing the minimum value for the start times of
++ * its children (and the node itself), so it's possible to search for
++ * the eligible node with the lowest finish time in logarithmic time.
++ */
++static void bfq_active_insert(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node = &entity->rb_node;
++#ifdef CONFIG_CGROUP_BFQIO
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	bfq_insert(&st->active, entity);
++
++	if (node->rb_left != NULL)
++		node = node->rb_left;
++	else if (node->rb_right != NULL)
++		node = node->rb_right;
++
++	bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq != NULL)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
++#ifdef CONFIG_CGROUP_BFQIO
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		bfqg->active_entities++;
++		if (bfqg->active_entities == 2)
++			bfqd->active_numerous_groups++;
++	}
++#endif
++}
++
++/**
++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
++ * @ioprio: the ioprio value to convert.
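++ *
++ * With IOPRIO_BE_NR equal to 8, ioprio 0 (the highest priority) maps
++ * to weight 8, and ioprio 7 (the lowest) to weight 1.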
++ */
++static inline unsigned short bfq_ioprio_to_weight(int ioprio)
++{
++	BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
++	return IOPRIO_BE_NR - ioprio;
++}
++
++/**
++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
++ * @weight: the weight value to convert.
++ *
++ * To preserve as much as possible the old only-ioprio user interface,
++ * 0 is used as an escape ioprio value for weights (numerically) equal
++ * to or larger than IOPRIO_BE_NR.
++ */
++static inline unsigned short bfq_weight_to_ioprio(int weight)
++{
++	BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
++	return IOPRIO_BE_NR - weight < 0 ? 0 : IOPRIO_BE_NR - weight;
++}
++
++static inline void bfq_get_entity(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	if (bfqq != NULL) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++	}
++}
++
++/**
++ * bfq_find_deepest - find the deepest node that an extraction can modify.
++ * @node: the node being removed.
++ *
++ * Do the first step of an extraction in an rb tree, looking for the
++ * node that will replace @node, and returning the deepest node that
++ * the following modifications to the tree can touch.  If @node is the
++ * last node in the tree return %NULL.
++ */
++static struct rb_node *bfq_find_deepest(struct rb_node *node)
++{
++	struct rb_node *deepest;
++
++	if (node->rb_right == NULL && node->rb_left == NULL)
++		deepest = rb_parent(node);
++	else if (node->rb_right == NULL)
++		deepest = node->rb_left;
++	else if (node->rb_left == NULL)
++		deepest = node->rb_right;
++	else {
++		deepest = rb_next(node);
++		if (deepest->rb_right != NULL)
++			deepest = deepest->rb_right;
++		else if (rb_parent(deepest) != node)
++			deepest = rb_parent(deepest);
++	}
++
++	return deepest;
++}
++
++/**
++ * bfq_active_extract - remove an entity from the active tree.
++ * @st: the service_tree containing the tree.
++ * @entity: the entity being removed.
++ */
++static void bfq_active_extract(struct bfq_service_tree *st,
++			       struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node;
++#ifdef CONFIG_CGROUP_BFQIO
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	node = bfq_find_deepest(&entity->rb_node);
++	bfq_extract(&st->active, entity);
++
++	if (node != NULL)
++		bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq != NULL)
++		list_del(&bfqq->bfqq_list);
++#ifdef CONFIG_CGROUP_BFQIO
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_remove(bfqd, entity,
++					&bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		BUG_ON(!bfqg->active_entities);
++		bfqg->active_entities--;
++		if (bfqg->active_entities == 1) {
++			BUG_ON(!bfqd->active_numerous_groups);
++			bfqd->active_numerous_groups--;
++		}
++	}
++#endif
++}
++
++/**
++ * bfq_idle_insert - insert an entity into the idle tree.
++ * @st: the service tree containing the tree.
++ * @entity: the entity to insert.
++ */
++static void bfq_idle_insert(struct bfq_service_tree *st,
++			    struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
++		st->first_idle = entity;
++	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
++		st->last_idle = entity;
++
++	bfq_insert(&st->idle, entity);
++
++	if (bfqq != NULL)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
++}
++
++/**
++ * bfq_forget_entity - remove an entity from the wfq trees.
++ * @st: the service tree.
++ * @entity: the entity being removed.
++ *
++ * Update the device status and forget everything about @entity, putting
++ * the device reference to it, if it is a queue.  Entities belonging to
++ * groups are not refcounted.
++ */
++static void bfq_forget_entity(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_sched_data *sd;
++
++	BUG_ON(!entity->on_st);
++
++	entity->on_st = 0;
++	st->wsum -= entity->weight;
++	if (bfqq != NULL) {
++		sd = entity->sched_data;
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++/**
++ * bfq_put_idle_entity - release the idle tree ref of an entity.
++ * @st: service tree for the entity.
++ * @entity: the entity being released.
++ */
++static void bfq_put_idle_entity(struct bfq_service_tree *st,
++				struct bfq_entity *entity)
++{
++	bfq_idle_extract(st, entity);
++	bfq_forget_entity(st, entity);
++}
++
++/**
++ * bfq_forget_idle - update the idle tree if necessary.
++ * @st: the service tree to act upon.
++ *
++ * To preserve the global O(log N) complexity we only remove one entry here;
++ * as the idle tree will not grow indefinitely this can be done safely.
++ */
++static void bfq_forget_idle(struct bfq_service_tree *st)
++{
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
++	    !bfq_gt(last_idle->finish, st->vtime)) {
++		/*
++		 * Forget the whole idle tree, increasing the vtime past
++		 * the last finish time of idle entities.
++		 */
++		st->vtime = last_idle->finish;
++	}
++
++	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
++		bfq_put_idle_entity(st, first_idle);
++}
++
++static struct bfq_service_tree *
++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
++			 struct bfq_entity *entity)
++{
++	struct bfq_service_tree *new_st = old_st;
++
++	if (entity->ioprio_changed) {
++		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++		unsigned short prev_weight, new_weight;
++		struct bfq_data *bfqd = NULL;
++		struct rb_root *root;
++#ifdef CONFIG_CGROUP_BFQIO
++		struct bfq_sched_data *sd;
++		struct bfq_group *bfqg;
++#endif
++
++		if (bfqq != NULL)
++			bfqd = bfqq->bfqd;
++#ifdef CONFIG_CGROUP_BFQIO
++		else {
++			sd = entity->my_sched_data;
++			bfqg = container_of(sd, struct bfq_group, sched_data);
++			BUG_ON(!bfqg);
++			bfqd = (struct bfq_data *)bfqg->bfqd;
++			BUG_ON(!bfqd);
++		}
++#endif
++
++		BUG_ON(old_st->wsum < entity->weight);
++		old_st->wsum -= entity->weight;
++
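++		/*
++		 * An explicit weight change takes precedence over an
++		 * ioprio change: in the first case the ioprio is
++		 * recomputed from the new weight, in the second the
++		 * weight is recomputed from the new ioprio.
++		 */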
++		if (entity->new_weight != entity->orig_weight) {
++			if (entity->new_weight < BFQ_MIN_WEIGHT ||
++			    entity->new_weight > BFQ_MAX_WEIGHT) {
++				printk(KERN_CRIT "update_weight_prio: "
++						 "new_weight %d\n",
++					entity->new_weight);
++				BUG();
++			}
++			entity->orig_weight = entity->new_weight;
++			entity->ioprio =
++				bfq_weight_to_ioprio(entity->orig_weight);
++		} else if (entity->new_ioprio != entity->ioprio) {
++			entity->ioprio = entity->new_ioprio;
++			entity->orig_weight =
++					bfq_ioprio_to_weight(entity->ioprio);
++		} else
++			entity->new_weight = entity->orig_weight =
++				bfq_ioprio_to_weight(entity->ioprio);
++
++		entity->ioprio_class = entity->new_ioprio_class;
++		entity->ioprio_changed = 0;
++
++		/*
++		 * NOTE: here we may be changing the weight too early,
++		 * this will cause unfairness.  The correct approach
++		 * would have required additional complexity to defer
++		 * weight changes to the proper time instants (i.e.,
++		 * when entity->finish <= old_st->vtime).
++		 */
++		new_st = bfq_entity_service_tree(entity);
++
++		prev_weight = entity->weight;
++		new_weight = entity->orig_weight *
++			     (bfqq != NULL ? bfqq->wr_coeff : 1);
++		/*
++		 * If the weight of the entity changes, remove the entity
++		 * from its old weight counter (if there is a counter
++		 * associated with the entity), and add it to the counter
++		 * associated with its new weight.
++		 */
++		if (prev_weight != new_weight) {
++			root = bfqq ? &bfqd->queue_weights_tree :
++				      &bfqd->group_weights_tree;
++			bfq_weights_tree_remove(bfqd, entity, root);
++		}
++		entity->weight = new_weight;
++		/*
++		 * Add the entity to its weights tree only if it is
++		 * not associated with a weight-raised queue.
++		 */
++		if (prev_weight != new_weight &&
++		    (bfqq ? bfqq->wr_coeff == 1 : 1))
++			/* If we get here, root has been initialized. */
++			bfq_weights_tree_add(bfqd, entity, root);
++
++		new_st->wsum += entity->weight;
++
++		if (new_st != old_st)
++			entity->start = new_st->vtime;
++	}
++
++	return new_st;
++}
++
++/**
++ * bfq_bfqq_served - update the scheduler status after selection for
++ *                   service.
++ * @bfqq: the queue being served.
++ * @served: bytes to transfer.
++ *
++ * NOTE: this can be optimized, as the timestamps of upper level entities
++ * are synchronized every time a new bfqq is selected for service.  For
++ * now, we keep it this way to better check consistency.
++ */
++static void bfq_bfqq_served(struct bfq_queue *bfqq, unsigned long served)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st;
++
++	for_each_entity(entity) {
++		st = bfq_entity_service_tree(entity);
++
++		entity->service += served;
++		BUG_ON(entity->service > entity->budget);
++		BUG_ON(st->wsum == 0);
++
++		st->vtime += bfq_delta(served, st->wsum);
++		bfq_forget_idle(st);
++	}
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %lu sects", served);
++}
++
++/**
++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * @bfqq: the queue that needs a service update.
++ *
++ * When it's not possible to be fair in the service domain, because
++ * a queue is not consuming its budget fast enough (the meaning of
++ * fast depends on the timeout parameter), we charge it a full
++ * budget.  In this way we should obtain a sort of time-domain
++ * fairness among all the seeky/slow queues.
++ */
++static inline void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++
++	bfq_bfqq_served(bfqq, entity->budget - entity->service);
++}
++
++/**
++ * __bfq_activate_entity - activate an entity.
++ * @entity: the entity being activated.
++ *
++ * Called whenever an entity is activated, i.e., it is not active and one
++ * of its children receives a new request, or has to be reactivated due to
++ * budget exhaustion.  It uses the current budget of the entity (and the
++ * service received if @entity is active) of the queue to calculate its
++ * timestamps.
++ */
++static void __bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++	if (entity == sd->in_service_entity) {
++		BUG_ON(entity->tree != NULL);
++		/*
++		 * If we are requeueing the current entity we have
++		 * to take care of not charging to it service it has
++		 * not received.
++		 */
++		bfq_calc_finish(entity, entity->service);
++		entity->start = entity->finish;
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active) {
++		/*
++		 * Requeueing an entity due to a change of some
++		 * next_in_service entity below it.  We reuse the
++		 * old start time.
++		 */
++		bfq_active_extract(st, entity);
++	} else if (entity->tree == &st->idle) {
++		/*
++		 * Must be on the idle tree, bfq_idle_extract() will
++		 * check for that.
++		 */
++		bfq_idle_extract(st, entity);
++		entity->start = bfq_gt(st->vtime, entity->finish) ?
++				       st->vtime : entity->finish;
++	} else {
++		/*
++		 * The finish time of the entity may be invalid, and
++		 * it is in the past for sure, otherwise the queue
++		 * would have been on the idle tree.
++		 */
++		entity->start = st->vtime;
++		st->wsum += entity->weight;
++		bfq_get_entity(entity);
++
++		BUG_ON(entity->on_st);
++		entity->on_st = 1;
++	}
++
++	st = __bfq_entity_update_weight_prio(st, entity);
++	bfq_calc_finish(entity, entity->budget);
++	bfq_active_insert(st, entity);
++}
++
++/**
++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
++ * @entity: the entity to activate.
++ *
++ * Activate @entity and all the entities on the path from it to the root.
++ */
++static void bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd;
++
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			/*
++			 * No need to propagate the activation to the
++			 * upper entities, as they will be updated when
++			 * the in-service entity is rescheduled.
++			 */
++			break;
++	}
++}
++
++/**
++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
++ * @entity: the entity to deactivate.
++ * @requeue: if false, the entity will not be put into the idle tree.
++ *
++ * Deactivate an entity, independently from its previous state.  If the
++ * entity was not on a service tree just return, otherwise if it is on
++ * any scheduler tree, extract it from that tree, and if necessary
++ * and if the caller did not specify @requeue, put it on the idle tree.
++ *
++ * Return %1 if the caller should update the entity hierarchy, i.e.,
++ * if the entity was in service or if it was the next_in_service for
++ * its sched_data; return %0 otherwise.
++ */
++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	int was_in_service = entity == sd->in_service_entity;
++	int ret = 0;
++
++	if (!entity->on_st)
++		return 0;
++
++	BUG_ON(was_in_service && entity->tree != NULL);
++
++	if (was_in_service) {
++		bfq_calc_finish(entity, entity->service);
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active)
++		bfq_active_extract(st, entity);
++	else if (entity->tree == &st->idle)
++		bfq_idle_extract(st, entity);
++	else if (entity->tree != NULL)
++		BUG();
++
++	if (was_in_service || sd->next_in_service == entity)
++		ret = bfq_update_next_in_service(sd);
++
++	if (!requeue || !bfq_gt(entity->finish, st->vtime))
++		bfq_forget_entity(st, entity);
++	else
++		bfq_idle_insert(st, entity);
++
++	BUG_ON(sd->in_service_entity == entity);
++	BUG_ON(sd->next_in_service == entity);
++
++	return ret;
++}
++
++/**
++ * bfq_deactivate_entity - deactivate an entity.
++ * @entity: the entity to deactivate.
++ * @requeue: true if the entity can be put on the idle tree
++ */
++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd;
++	struct bfq_entity *parent;
++
++	for_each_entity_safe(entity, parent) {
++		sd = entity->sched_data;
++
++		if (!__bfq_deactivate_entity(entity, requeue))
++			/*
++			 * The parent entity is still backlogged, and
++			 * we don't need to update it as it is still
++			 * in service.
++			 */
++			break;
++
++		if (sd->next_in_service != NULL)
++			/*
++			 * The parent entity is still backlogged and
++			 * the budgets on the path towards the root
++			 * need to be updated.
++			 */
++			goto update;
++
++		/*
++		 * If we get here, the parent is no longer backlogged and
++		 * we want to propagate the dequeue upwards.
++		 */
++		requeue = 1;
++	}
++
++	return;
++
++update:
++	entity = parent;
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			break;
++	}
++}
++
++/**
++ * bfq_update_vtime - update vtime if necessary.
++ * @st: the service tree to act upon.
++ *
++ * If necessary update the service tree vtime to have at least one
++ * eligible entity, skipping to its start time.  Assumes that the
++ * active tree of the device is not empty.
++ *
++ * NOTE: this hierarchical implementation updates vtimes quite often,
++ * we may end up with reactivated processes getting timestamps after a
++ * vtime skip done because we needed a ->first_active entity on some
++ * intermediate node.
++ */
++static void bfq_update_vtime(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry;
++	struct rb_node *node = st->active.rb_node;
++
++	entry = rb_entry(node, struct bfq_entity, rb_node);
++	if (bfq_gt(entry->min_start, st->vtime)) {
++		st->vtime = entry->min_start;
++		bfq_forget_idle(st);
++	}
++}
++
++/**
++ * bfq_first_active_entity - find the eligible entity with
++ *                           the smallest finish time
++ * @st: the service tree to select from.
++ *
++ * This function searches the first schedulable entity, starting from the
++ * root of the tree and going on the left every time on this side there is
++ * a subtree with at least one eligible (start >= vtime) entity. The path on
++ * the right is followed only if a) the left subtree contains no eligible
++ * entities and b) no eligible entity has been found yet.
++ */
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry, *first = NULL;
++	struct rb_node *node = st->active.rb_node;
++
++	while (node != NULL) {
++		entry = rb_entry(node, struct bfq_entity, rb_node);
++left:
++		if (!bfq_gt(entry->start, st->vtime))
++			first = entry;
++
++		BUG_ON(bfq_gt(entry->min_start, st->vtime));
++
++		if (node->rb_left != NULL) {
++			entry = rb_entry(node->rb_left,
++					 struct bfq_entity, rb_node);
++			if (!bfq_gt(entry->min_start, st->vtime)) {
++				node = node->rb_left;
++				goto left;
++			}
++		}
++		if (first != NULL)
++			break;
++		node = node->rb_right;
++	}
++
++	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
++	return first;
++}
++
++/**
++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
++ * @st: the service tree.
++ *
++ * Update the virtual time in @st and return the first eligible entity
++ * it contains.
++ */
++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
++						   bool force)
++{
++	struct bfq_entity *entity, *new_next_in_service = NULL;
++
++	if (RB_EMPTY_ROOT(&st->active))
++		return NULL;
++
++	bfq_update_vtime(st);
++	entity = bfq_first_active_entity(st);
++	BUG_ON(bfq_gt(entity->start, st->vtime));
++
++	/*
++	 * If the chosen entity does not match the sched_data's
++	 * next_in_service and we are forcibly serving the IDLE priority
++	 * class tree, bubble up budget update.
++	 */
++	if (unlikely(force && entity != entity->sched_data->next_in_service)) {
++		new_next_in_service = entity;
++		for_each_entity(new_next_in_service)
++			bfq_update_budget(new_next_in_service);
++	}
++
++	return entity;
++}
++
++/**
++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
++ * @sd: the sched_data.
++ * @extract: if true the returned entity will be also extracted from @sd.
++ *
++ * NOTE: since we cache the next_in_service entity at each level of the
++ * hierarchy, the complexity of the lookup can be decreased, with
++ * absolutely no effort, by just returning the cached next_in_service
++ * value; we prefer to do full lookups to test the consistency of the
++ * data structures.
++ */
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd)
++{
++	struct bfq_service_tree *st = sd->service_tree;
++	struct bfq_entity *entity;
++	int i = 0;
++
++	BUG_ON(sd->in_service_entity != NULL);
++
++	if (bfqd != NULL &&
++	    jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++		entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
++						  true);
++		if (entity != NULL) {
++			i = BFQ_IOPRIO_CLASSES - 1;
++			bfqd->bfq_class_idle_last_service = jiffies;
++			sd->next_in_service = entity;
++		}
++	}
++	for (; i < BFQ_IOPRIO_CLASSES; i++) {
++		entity = __bfq_lookup_next_entity(st + i, false);
++		if (entity != NULL) {
++			if (extract) {
++				bfq_check_next_in_service(sd, entity);
++				bfq_active_extract(st + i, entity);
++				sd->in_service_entity = entity;
++				sd->next_in_service = NULL;
++			}
++			break;
++		}
++	}
++
++	return entity;
++}
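++
++/*
++ * Note on the lookup order above: the service trees are scanned in
++ * class order (RT first, then BE, then IDLE), so a class is served
++ * only if all higher-priority trees are empty; the check against
++ * bfq_class_idle_last_service is the anti-starvation exception that
++ * forces an IDLE-class entity to be picked if none has been served
++ * for more than BFQ_CL_IDLE_TIMEOUT (HZ/5, i.e., 200 ms).
++ */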
++
++/*
++ * Get next queue for service.
++ */
++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
++{
++	struct bfq_entity *entity = NULL;
++	struct bfq_sched_data *sd;
++	struct bfq_queue *bfqq;
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++
++	if (bfqd->busy_queues == 0)
++		return NULL;
++
++	sd = &bfqd->root_group->sched_data;
++	for (; sd != NULL; sd = entity->my_sched_data) {
++		entity = bfq_lookup_next_entity(sd, 1, bfqd);
++		BUG_ON(entity == NULL);
++		entity->service = 0;
++	}
++
++	bfqq = bfq_entity_to_bfqq(entity);
++	BUG_ON(bfqq == NULL);
++
++	return bfqq;
++}
++
++/*
++ * Forced extraction of the given queue.
++ */
++static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
++				      struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity;
++	struct bfq_sched_data *sd;
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++
++	entity = &bfqq->entity;
++	/*
++	 * Bubble up extraction/update from the leaf to the root.
++	*/
++	for_each_entity(entity) {
++		sd = entity->sched_data;
++		bfq_update_budget(entity);
++		bfq_update_vtime(bfq_entity_service_tree(entity));
++		bfq_active_extract(bfq_entity_service_tree(entity), entity);
++		sd->in_service_entity = entity;
++		sd->next_in_service = NULL;
++		entity->service = 0;
++	}
++
++	return;
++}
++
++static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++{
++	if (bfqd->in_service_bic != NULL) {
++		put_io_context(bfqd->in_service_bic->icq.ioc);
++		bfqd->in_service_bic = NULL;
++	}
++
++	bfqd->in_service_queue = NULL;
++	del_timer(&bfqd->idle_slice_timer);
++}
++
++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				int requeue)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	if (bfqq == bfqd->in_service_queue)
++		__bfq_bfqd_reset_in_service(bfqd);
++
++	bfq_deactivate_entity(entity, requeue);
++}
++
++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_activate_entity(entity);
++}
++
++/*
++ * Called when the bfqq no longer has requests pending; remove it from
++ * the service tree.
++ */
++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			      int requeue)
++{
++	BUG_ON(!bfq_bfqq_busy(bfqq));
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	bfq_log_bfqq(bfqd, bfqq, "del from busy");
++
++	bfq_clear_bfqq_busy(bfqq);
++
++	BUG_ON(bfqd->busy_queues == 0);
++	bfqd->busy_queues--;
++
++	if (!bfqq->dispatched) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues--;
++
++	bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++}
++
++/*
++ * Called when an inactive queue receives a new request.
++ */
++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqq == bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "add to busy");
++
++	bfq_activate_bfqq(bfqd, bfqq);
++
++	bfq_mark_bfqq_busy(bfqq);
++	bfqd->busy_queues++;
++
++	if (!bfqq->dispatched) {
++		if (bfqq->wr_coeff == 1)
++			bfq_weights_tree_add(bfqd, &bfqq->entity,
++					     &bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			bfqd->busy_in_flight_queues++;
++			if (bfq_bfqq_constantly_seeky(bfqq))
++				bfqd->const_seeky_busy_in_flight_queues++;
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues++;
++}
+diff --git a/block/bfq.h b/block/bfq.h
+new file mode 100644
+index 0000000..518f2ac
+--- /dev/null
++++ b/block/bfq.h
+@@ -0,0 +1,775 @@
++/*
++ * BFQ-v7r7 for 4.0.0: data structures and common functions prototypes.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifndef _BFQ_H
++#define _BFQ_H
++
++#include <linux/blktrace_api.h>
++#include <linux/hrtimer.h>
++#include <linux/ioprio.h>
++#include <linux/rbtree.h>
++
++#define BFQ_IOPRIO_CLASSES	3
++#define BFQ_CL_IDLE_TIMEOUT	(HZ/5)
++
++#define BFQ_MIN_WEIGHT	1
++#define BFQ_MAX_WEIGHT	1000
++
++#define BFQ_DEFAULT_QUEUE_IOPRIO	4
++
++#define BFQ_DEFAULT_GRP_WEIGHT	10
++#define BFQ_DEFAULT_GRP_IOPRIO	0
++#define BFQ_DEFAULT_GRP_CLASS	IOPRIO_CLASS_BE
++
++struct bfq_entity;
++
++/**
++ * struct bfq_service_tree - per ioprio_class service tree.
++ * @active: tree for active entities (i.e., those backlogged).
++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
++ * @first_idle: idle entity with minimum F_i.
++ * @last_idle: idle entity with maximum F_i.
++ * @vtime: scheduler virtual time.
++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
++ *
++ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
++ * ioprio_class has its own independent scheduler, and so its own
++ * bfq_service_tree.  All the fields are protected by the queue lock
++ * of the containing bfqd.
++ */
++struct bfq_service_tree {
++	struct rb_root active;
++	struct rb_root idle;
++
++	struct bfq_entity *first_idle;
++	struct bfq_entity *last_idle;
++
++	u64 vtime;
++	unsigned long wsum;
++};
++
++/**
++ * struct bfq_sched_data - multi-class scheduler.
++ * @in_service_entity: entity in service.
++ * @next_in_service: head-of-the-line entity in the scheduler.
++ * @service_tree: array of service trees, one per ioprio_class.
++ *
++ * bfq_sched_data is the basic scheduler queue.  It supports three
++ * ioprio_classes, and can be used either as a toplevel queue or as
++ * an intermediate queue on a hierarchical setup.
++ * @next_in_service points to the active entity of the sched_data
++ * service trees that will be scheduled next.
++ *
++ * The supported ioprio_classes are the same as in CFQ, in descending
++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
++ * Requests from higher priority queues are served before all the
++ * requests from lower priority queues; among queues of the same
++ * priority, requests are served according to B-WF2Q+.
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_sched_data {
++	struct bfq_entity *in_service_entity;
++	struct bfq_entity *next_in_service;
++	struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++};
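++
++/*
++ * Layout sketch: with the class values defined in <linux/ioprio.h>
++ * (IOPRIO_CLASS_RT = 1, IOPRIO_CLASS_BE = 2, IOPRIO_CLASS_IDLE = 3),
++ * service_tree[0] holds the RT entities, service_tree[1] the BE ones
++ * and service_tree[2] the IDLE ones; see bfq_entity_service_tree()
++ * below, which maps ioprio_class - 1 to this index.
++ */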
++
++/**
++ * struct bfq_weight_counter - counter of the number of all active entities
++ *                             with a given weight.
++ * @weight: weight of the entities that this counter refers to.
++ * @num_active: number of active entities with this weight.
++ * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
++ *                and @group_weights_tree).
++ */
++struct bfq_weight_counter {
++	short int weight;
++	unsigned int num_active;
++	struct rb_node weights_node;
++};
++
++/**
++ * struct bfq_entity - schedulable entity.
++ * @rb_node: service_tree member.
++ * @weight_counter: pointer to the weight counter associated with this entity.
++ * @on_st: flag, true if the entity is on a tree (either the active or
++ *         the idle one of its service_tree).
++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
++ * @start: B-WF2Q+ start timestamp (aka S_i).
++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
++ * @min_start: minimum start time of the (active) subtree rooted at
++ *             this entity; used for O(log N) lookups into active trees.
++ * @service: service received during the last round of service.
++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
++ * @weight: weight of the queue
++ * @parent: parent entity, for hierarchical scheduling.
++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
++ *                 associated scheduler queue, %NULL on leaf nodes.
++ * @sched_data: the scheduler queue this entity belongs to.
++ * @ioprio: the ioprio in use.
++ * @new_weight: when a weight change is requested, the new weight value.
++ * @orig_weight: original weight, used to implement weight boosting
++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
++ * @ioprio_class: the ioprio_class in use.
++ * @new_ioprio_class: when an ioprio_class change is requested, the new
++ *                    ioprio_class value.
++ * @ioprio_changed: flag, true when the user requested a weight, ioprio or
++ *                  ioprio_class change.
++ *
++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
++ * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
++ * entity belongs to the sched_data of the parent group in the cgroup
++ * hierarchy.  Non-leaf entities have also their own sched_data, stored
++ * in @my_sched_data.
++ *
++ * Each entity stores independently its priority values; this would
++ * allow different weights on different devices, but this
++ * functionality is not exported to userspace by now.  Priorities and
++ * weights are updated lazily, first storing the new values into the
++ * new_* fields, then setting the @ioprio_changed flag.  As soon as
++ * there is a transition in the entity state that allows the priority
++ * update to take place the effective and the requested priority
++ * values are synchronized.
++ *
++ * Unless cgroups are used, the weight value is calculated from the
++ * ioprio to export the same interface as CFQ.  When dealing with
++ * ``well-behaved'' queues (i.e., queues that do not spend too much
++ * time to consume their budget and have true sequential behavior, and
++ * when there are no external factors breaking anticipation) the
++ * relative weights at each level of the cgroups hierarchy should be
++ * guaranteed.  All the fields are protected by the queue lock of the
++ * containing bfqd.
++ */
++struct bfq_entity {
++	struct rb_node rb_node;
++	struct bfq_weight_counter *weight_counter;
++
++	int on_st;
++
++	u64 finish;
++	u64 start;
++
++	struct rb_root *tree;
++
++	u64 min_start;
++
++	unsigned long service, budget;
++	unsigned short weight, new_weight;
++	unsigned short orig_weight;
++
++	struct bfq_entity *parent;
++
++	struct bfq_sched_data *my_sched_data;
++	struct bfq_sched_data *sched_data;
++
++	unsigned short ioprio, new_ioprio;
++	unsigned short ioprio_class, new_ioprio_class;
++
++	int ioprio_changed;
++};
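++
++/*
++ * A minimal sketch of the lazy-update protocol described above
++ * (illustrative only). A weight change is requested by writing the
++ * new value and raising the flag:
++ *
++ *	entity->new_weight = new_weight;
++ *	entity->ioprio_changed = 1;
++ *
++ * and takes effect at the next entity-state transition that allows
++ * the effective values to be synchronized with the requested ones.
++ */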
++
++struct bfq_group;
++
++/**
++ * struct bfq_queue - leaf schedulable entity.
++ * @ref: reference counter.
++ * @bfqd: parent bfq_data.
++ * @new_bfqq: shared bfq_queue if queue is cooperating with
++ *           one or more other queues.
++ * @pos_node: request-position tree member (see bfq_data's @rq_pos_tree).
++ * @pos_root: request-position tree root (see bfq_data's @rq_pos_tree).
++ * @sort_list: sorted list of pending requests.
++ * @next_rq: if fifo isn't expired, next request to serve.
++ * @queued: nr of requests queued in @sort_list.
++ * @allocated: currently allocated requests.
++ * @meta_pending: pending metadata requests.
++ * @fifo: fifo list of requests in sort_list.
++ * @entity: entity representing this queue in the scheduler.
++ * @max_budget: maximum budget allowed from the feedback mechanism.
++ * @budget_timeout: budget expiration (in jiffies).
++ * @dispatched: number of requests on the dispatch list or inside driver.
++ * @flags: status flags.
++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
++ * @burst_list_node: node for the device's burst list.
++ * @seek_samples: number of seeks sampled
++ * @seek_total: sum of the distances of the seeks sampled
++ * @seek_mean: mean seek distance
++ * @last_request_pos: position of the last request enqueued
++ * @requests_within_timer: number of consecutive pairs of request completion
++ *                         and arrival, such that the queue becomes idle
++ *                         after the completion, but the next request arrives
++ *                         within an idle time slice; used only if the queue's
++ *                         IO_bound has been cleared.
++ * @pid: pid of the process owning the queue, used for logging purposes.
++ * @last_wr_start_finish: start time of the current weight-raising period if
++ *                        the @bfq-queue is being weight-raised, otherwise
++ *                        finish time of the last weight-raising period
++ * @wr_cur_max_time: current max raising time for this queue
++ * @soft_rt_next_start: minimum time instant such that, only if a new
++ *                      request is enqueued after this time instant in an
++ *                      idle @bfq_queue with no outstanding requests, then
++ *                      the task associated with the queue is deemed as
++ *                      soft real-time (see the comments to the function
++ *                      bfq_bfqq_softrt_next_start()).
++ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
++ *                      idle to backlogged
++ * @service_from_backlogged: cumulative service received from the @bfq_queue
++ *                           since the last transition from idle to
++ *                           backlogged
++ *
++ * A bfq_queue is a leaf request queue; it can be associated with one or
++ * more io_contexts, if it is async or shared between cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while
++ * a bfqq still references it (mostly to avoid races between request issuing and
++ * task migration followed by cgroup destruction).
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_queue {
++	atomic_t ref;
++	struct bfq_data *bfqd;
++
++	/* fields for cooperating queues handling */
++	struct bfq_queue *new_bfqq;
++	struct rb_node pos_node;
++	struct rb_root *pos_root;
++
++	struct rb_root sort_list;
++	struct request *next_rq;
++	int queued[2];
++	int allocated[2];
++	int meta_pending;
++	struct list_head fifo;
++
++	struct bfq_entity entity;
++
++	unsigned long max_budget;
++	unsigned long budget_timeout;
++
++	int dispatched;
++
++	unsigned int flags;
++
++	struct list_head bfqq_list;
++
++	struct hlist_node burst_list_node;
++
++	unsigned int seek_samples;
++	u64 seek_total;
++	sector_t seek_mean;
++	sector_t last_request_pos;
++
++	unsigned int requests_within_timer;
++
++	pid_t pid;
++
++	/* weight-raising fields */
++	unsigned long wr_cur_max_time;
++	unsigned long soft_rt_next_start;
++	unsigned long last_wr_start_finish;
++	unsigned int wr_coeff;
++	unsigned long last_idle_bklogged;
++	unsigned long service_from_backlogged;
++};
++
++/**
++ * struct bfq_ttime - per process thinktime stats.
++ * @last_end_request: completion time of the last request, in jiffies
++ * @ttime_total: total process thinktime
++ * @ttime_samples: number of thinktime samples
++ * @ttime_mean: average process thinktime
++ */
++struct bfq_ttime {
++	unsigned long last_end_request;
++
++	unsigned long ttime_total;
++	unsigned long ttime_samples;
++	unsigned long ttime_mean;
++};
++
++/**
++ * struct bfq_io_cq - per (request_queue, io_context) structure.
++ * @icq: associated io_cq structure
++ * @bfqq: array of two process queues, the sync and the async
++ * @ttime: associated @bfq_ttime struct
++ */
++struct bfq_io_cq {
++	struct io_cq icq; /* must be the first member */
++	struct bfq_queue *bfqq[2];
++	struct bfq_ttime ttime;
++	int ioprio;
++};
++
++enum bfq_device_speed {
++	BFQ_BFQD_FAST,
++	BFQ_BFQD_SLOW,
++};
++
++/**
++ * struct bfq_data - per device data structure.
++ * @queue: request queue for the managed device.
++ * @root_group: root bfq_group for the device.
++ * @rq_pos_tree: rbtree sorted by next_request position, used when
++ *               determining if two or more queues have interleaving
++ *               requests (see bfq_close_cooperator()).
++ * @active_numerous_groups: number of bfq_groups containing more than one
++ *                          active @bfq_entity.
++ * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
++ *                      weight. Used to keep track of whether all @bfq_queues
++ *                     have the same weight. The tree contains one counter
++ *                     for each distinct weight associated to some active
++ *                     and not weight-raised @bfq_queue (see the comments to
++ *                      the functions bfq_weights_tree_[add|remove] for
++ *                     further details).
++ * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
++ *                      by weight. Used to keep track of whether all
++ *                     @bfq_groups have the same weight. The tree contains
++ *                     one counter for each distinct weight associated to
++ *                     some active @bfq_group (see the comments to the
++ *                     functions bfq_weights_tree_[add|remove] for further
++ *                     details).
++ * @busy_queues: number of bfq_queues containing requests (including the
++ *		 queue in service, even if it is idling).
++ * @busy_in_flight_queues: number of @bfq_queues containing pending or
++ *                         in-flight requests, plus the @bfq_queue in
++ *                         service, even if idle but waiting for the
++ *                         possible arrival of its next sync request. This
++ *                         field is updated only if the device is rotational,
++ *                         but used only if the device is also NCQ-capable.
++ *                         The field is updated also for non-NCQ-capable
++ *                         rotational devices because @hw_tag may be set
++ *                         only after busy_in_flight_queues has already
++ *                         needed to be incremented. Handling that window
++ *                         separately, to avoid unbalanced increments and
++ *                         decrements, would imply more overhead than just
++ *                         updating busy_in_flight_queues regardless of
++ *                         the value of @hw_tag.
++ * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
++ *                                     (that is, seeky queues that expired
++ *                                     for budget timeout at least once)
++ *                                     containing pending or in-flight
++ *                                     requests, including the in-service
++ *                                     @bfq_queue if constantly seeky. This
++ *                                     field is updated only if the device
++ *                                     is rotational, but used only if the
++ *                                     device is also NCQ-capable (see the
++ *                                     comments to @busy_in_flight_queues).
++ * @wr_busy_queues: number of weight-raised busy @bfq_queues.
++ * @queued: number of queued requests.
++ * @rq_in_driver: number of requests dispatched and waiting for completion.
++ * @sync_flight: number of sync requests in the driver.
++ * @max_rq_in_driver: max number of reqs in driver in the last
++ *                    @hw_tag_samples completed requests.
++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
++ * @budgets_assigned: number of budgets assigned.
++ * @idle_slice_timer: timer set when idling for the next sequential request
++ *                    from the queue in service.
++ * @unplug_work: delayed work to restart dispatching on the request queue.
++ * @in_service_queue: bfq_queue in service.
++ * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
++ * @last_position: on-disk position of the last served request.
++ * @last_budget_start: beginning of the last budget.
++ * @last_idling_start: beginning of the last idle slice.
++ * @peak_rate: peak transfer rate observed for a budget.
++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before
++ *                  rescheduling.
++ * @group_list: list of all the bfq_groups active on the device.
++ * @active_list: list of all the bfq_queues active on the device.
++ * @idle_list: list of all the bfq_queues idle on the device.
++ * @bfq_quantum: max number of requests dispatched per dispatch round.
++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
++ *                   requests are served in fifo order.
++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
++ * @bfq_back_max: maximum allowed backward seek.
++ * @bfq_slice_idle: maximum idling time.
++ * @bfq_user_max_budget: user-configured max budget value
++ *                       (0 for auto-tuning).
++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
++ *                           async queues.
++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used
++ *               to prevent seeky queues from imposing long latencies on
++ *               well-behaved ones (this also implies that seeky queues cannot
++ *               receive guarantees in the service domain; after a timeout
++ *               they are charged for the whole allocated budget, to try
++ *               to preserve a behavior reasonably fair among them, but
++ *               without service-domain guarantees).
++ * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
++ *                   no longer granted any weight-raising.
++ * @bfq_failed_cooperations: number of consecutive failed cooperation
++ *                           chances after which weight-raising is restored
++ *                           to a queue subject to more than bfq_coop_thresh
++ *                           queue merges.
++ * @bfq_requests_within_timer: number of consecutive requests that must be
++ *                             issued within the idle time slice to set
++ *                             again idling to a queue which was marked as
++ *                             non-I/O-bound (see the definition of the
++ *                             IO_bound flag for further details).
++ * @last_ins_in_burst: last time at which a queue entered the current
++ *                     burst of queues being activated shortly after
++ *                     each other; for more details about this and the
++ *                     following parameters related to a burst of
++ *                     activations, see the comments to the function
++ *                     @bfq_handle_burst.
++ * @bfq_burst_interval: reference time interval used to decide whether a
++ *                      queue has been activated shortly after
++ *                      @last_ins_in_burst.
++ * @burst_size: number of queues in the current burst of queue activations.
++ * @bfq_large_burst_thresh: maximum burst size above which the current
++ * 			    queue-activation burst is deemed as 'large'.
++ * @large_burst: true if a large queue-activation burst is in progress.
++ * @burst_list: head of the burst list (as for the above fields, more details
++ * 		in the comments to the function bfq_handle_burst).
++ * @low_latency: if set to true, low-latency heuristics are enabled.
++ * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
++ *                queue is multiplied.
++ * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
++ * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
++ * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
++ *			  may be reactivated for a queue (in jiffies).
++ * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
++ *				after which weight-raising may be
++ *				reactivated for an already busy queue
++ *				(in jiffies).
++ * @bfq_wr_max_softrt_rate: max service rate for a soft real-time queue,
++ *			    in sectors per second.
++ * @RT_prod: cached value of the product R*T used for computing the maximum
++ *	     duration of the weight raising automatically.
++ * @device_speed: device-speed class for the low-latency heuristic.
++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ *
++ * All the fields are protected by the @queue lock.
++ */
++struct bfq_data {
++	struct request_queue *queue;
++
++	struct bfq_group *root_group;
++	struct rb_root rq_pos_tree;
++
++#ifdef CONFIG_CGROUP_BFQIO
++	int active_numerous_groups;
++#endif
++
++	struct rb_root queue_weights_tree;
++	struct rb_root group_weights_tree;
++
++	int busy_queues;
++	int busy_in_flight_queues;
++	int const_seeky_busy_in_flight_queues;
++	int wr_busy_queues;
++	int queued;
++	int rq_in_driver;
++	int sync_flight;
++
++	int max_rq_in_driver;
++	int hw_tag_samples;
++	int hw_tag;
++
++	int budgets_assigned;
++
++	struct timer_list idle_slice_timer;
++	struct work_struct unplug_work;
++
++	struct bfq_queue *in_service_queue;
++	struct bfq_io_cq *in_service_bic;
++
++	sector_t last_position;
++
++	ktime_t last_budget_start;
++	ktime_t last_idling_start;
++	int peak_rate_samples;
++	u64 peak_rate;
++	unsigned long bfq_max_budget;
++
++	struct hlist_head group_list;
++	struct list_head active_list;
++	struct list_head idle_list;
++
++	unsigned int bfq_quantum;
++	unsigned int bfq_fifo_expire[2];
++	unsigned int bfq_back_penalty;
++	unsigned int bfq_back_max;
++	unsigned int bfq_slice_idle;
++	u64 bfq_class_idle_last_service;
++
++	unsigned int bfq_user_max_budget;
++	unsigned int bfq_max_budget_async_rq;
++	unsigned int bfq_timeout[2];
++
++	unsigned int bfq_coop_thresh;
++	unsigned int bfq_failed_cooperations;
++	unsigned int bfq_requests_within_timer;
++
++	unsigned long last_ins_in_burst;
++	unsigned long bfq_burst_interval;
++	int burst_size;
++	unsigned long bfq_large_burst_thresh;
++	bool large_burst;
++	struct hlist_head burst_list;
++
++	bool low_latency;
++
++	/* parameters of the low_latency heuristics */
++	unsigned int bfq_wr_coeff;
++	unsigned int bfq_wr_max_time;
++	unsigned int bfq_wr_rt_max_time;
++	unsigned int bfq_wr_min_idle_time;
++	unsigned long bfq_wr_min_inter_arr_async;
++	unsigned int bfq_wr_max_softrt_rate;
++	u64 RT_prod;
++	enum bfq_device_speed device_speed;
++
++	struct bfq_queue oom_bfqq;
++};
++
++enum bfqq_state_flags {
++	BFQ_BFQQ_FLAG_busy = 0,		/* has requests or is in service */
++	BFQ_BFQQ_FLAG_wait_request,	/* waiting for a request */
++	BFQ_BFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
++	BFQ_BFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
++	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
++	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
++	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
++	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
++	BFQ_BFQQ_FLAG_IO_bound,         /*
++					 * bfqq has timed-out at least once
++					 * having consumed at most 2/10 of
++					 * its budget
++					 */
++	BFQ_BFQQ_FLAG_in_large_burst,	/*
++					 * bfqq activated in a large burst,
++					 * see comments to bfq_handle_burst.
++					 */
++	BFQ_BFQQ_FLAG_constantly_seeky,	/*
++					 * bfqq has proved to be slow and
++					 * seeky until budget timeout
++					 */
++	BFQ_BFQQ_FLAG_softrt_update,    /*
++					 * may need softrt-next-start
++					 * update
++					 */
++	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
++};
++
++#define BFQ_BFQQ_FNS(name)						\
++static inline void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)		\
++{									\
++	(bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static inline void bfq_clear_bfqq_##name(struct bfq_queue *bfqq)	\
++{									\
++	(bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static inline int bfq_bfqq_##name(const struct bfq_queue *bfqq)		\
++{									\
++	return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0;	\
++}
++
++BFQ_BFQQ_FNS(busy);
++BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(must_alloc);
++BFQ_BFQQ_FNS(fifo_expire);
++BFQ_BFQQ_FNS(idle_window);
++BFQ_BFQQ_FNS(prio_changed);
++BFQ_BFQQ_FNS(sync);
++BFQ_BFQQ_FNS(budget_new);
++BFQ_BFQQ_FNS(IO_bound);
++BFQ_BFQQ_FNS(in_large_burst);
++BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(coop);
++BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(softrt_update);
++#undef BFQ_BFQQ_FNS
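++
++/*
++ * As an illustration of the macro above, BFQ_BFQQ_FNS(busy) expands to
++ * the three helpers bfq_mark_bfqq_busy(), bfq_clear_bfqq_busy() and
++ * bfq_bfqq_busy(), which respectively set, clear and test the
++ * BFQ_BFQQ_FLAG_busy bit in bfqq->flags.
++ */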
++
++/* Logging facilities. */
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++
++#define bfq_log(bfqd, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
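++
++/*
++ * Hypothetical usage of the macros above (the format string is just an
++ * example); the message ends up in the blktrace stream of the device,
++ * prefixed with the pid of the process owning the queue:
++ *
++ *	bfq_log_bfqq(bfqd, bfqq, "dispatched rq, budget left %lu",
++ *		     budget_left);
++ */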
++
++/* Expiration reasons. */
++enum bfqq_expiration {
++	BFQ_BFQQ_TOO_IDLE = 0,		/*
++					 * queue has been idling for
++					 * too long
++					 */
++	BFQ_BFQQ_BUDGET_TIMEOUT,	/* budget took too long to be used */
++	BFQ_BFQQ_BUDGET_EXHAUSTED,	/* budget consumed */
++	BFQ_BFQQ_NO_MORE_REQUESTS,	/* the queue has no more requests */
++};
++
++#ifdef CONFIG_CGROUP_BFQIO
++/**
++ * struct bfq_group - per (device, cgroup) data structure.
++ * @entity: schedulable entity to insert into the parent group sched_data.
++ * @sched_data: own sched_data, to contain child entities (they may be
++ *              both bfq_queues and bfq_groups).
++ * @group_node: node to be inserted into the bfqio_cgroup->group_data
++ *              list of the containing cgroup's bfqio_cgroup.
++ * @bfqd_node: node to be inserted into the @bfqd->group_list list
++ *             of the groups active on the same device; used for cleanup.
++ * @bfqd: the bfq_data for the device this group acts upon.
++ * @async_bfqq: array of async queues for all the tasks belonging to
++ *              the group, one queue per ioprio value per ioprio_class,
++ *              except for the idle class that has only one queue.
++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
++ *             to avoid too many special cases during group creation/
++ *             migration.
++ * @active_entities: number of active entities belonging to the group;
++ *                   unused for the root group. Used to know whether there
++ *                   are groups with more than one active @bfq_entity
++ *                   (see the comments to the function
++ *                   bfq_bfqq_must_not_expire()).
++ *
++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
++ * there is a set of bfq_groups, each one collecting the lower-level
++ * entities belonging to the group that are acting on the same device.
++ *
++ * Locking works as follows:
++ *    o @group_node is protected by the bfqio_cgroup lock, and is accessed
++ *      via RCU from its readers.
++ *    o @bfqd is protected by the queue lock; RCU is used to access it
++ *      from the readers.
++ *    o All the other fields are protected by the @bfqd queue lock.
++ */
++struct bfq_group {
++	struct bfq_entity entity;
++	struct bfq_sched_data sched_data;
++
++	struct hlist_node group_node;
++	struct hlist_node bfqd_node;
++
++	void *bfqd;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++
++	struct bfq_entity *my_entity;
++
++	int active_entities;
++};
++
++/**
++ * struct bfqio_cgroup - bfq cgroup data structure.
++ * @css: subsystem state for bfq in the containing cgroup.
++ * @online: flag marked when the subsystem is inserted.
++ * @weight: cgroup weight.
++ * @ioprio: cgroup ioprio.
++ * @ioprio_class: cgroup ioprio_class.
++ * @lock: spinlock that protects @ioprio, @ioprio_class and @group_data.
++ * @group_data: list containing the bfq_group belonging to this cgroup.
++ *
++ * @group_data is accessed using RCU, with @lock protecting the updates,
++ * @ioprio and @ioprio_class are protected by @lock.
++ */
++struct bfqio_cgroup {
++	struct cgroup_subsys_state css;
++	bool online;
++
++	unsigned short weight, ioprio, ioprio_class;
++
++	spinlock_t lock;
++	struct hlist_head group_data;
++};
++#else
++struct bfq_group {
++	struct bfq_sched_data sched_data;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++};
++#endif
++
++static inline struct bfq_service_tree *
++bfq_entity_service_tree(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sched_data = entity->sched_data;
++	unsigned int idx = entity->ioprio_class - 1;
++
++	BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
++	BUG_ON(sched_data == NULL);
++
++	return sched_data->service_tree + idx;
++}
++
++static inline struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
++					    bool is_sync)
++{
++	return bic->bfqq[is_sync];
++}
++
++static inline void bic_set_bfqq(struct bfq_io_cq *bic,
++				struct bfq_queue *bfqq, bool is_sync)
++{
++	bic->bfqq[is_sync] = bfqq;
++}
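++
++/*
++ * In the two helpers above, bic->bfqq[] is indexed by the sync flag:
++ * bic->bfqq[0] is the async queue and bic->bfqq[1] the sync one.
++ */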
++
++static inline struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
++{
++	return bic->icq.q->elevator->elevator_data;
++}
++
++/**
++ * bfq_get_bfqd_locked - get and lock a bfqd through an RCU-protected pointer.
++ * @ptr: a pointer to a bfqd.
++ * @flags: storage for the flags to be saved.
++ *
++ * This function allows bfqg->bfqd to be protected by the
++ * queue lock of the bfqd it references; the pointer is dereferenced
++ * under RCU, so the storage for bfqd is assured to be safe as long
++ * as the RCU read side critical section does not end.  After the
++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
++ * sure that no other writer accessed it.  If we raced with a writer,
++ * the function returns NULL, with the queue unlocked, otherwise it
++ * returns the dereferenced pointer, with the queue locked.
++ */
++static inline struct bfq_data *bfq_get_bfqd_locked(void **ptr,
++						   unsigned long *flags)
++{
++	struct bfq_data *bfqd;
++
++	rcu_read_lock();
++	bfqd = rcu_dereference(*(struct bfq_data **)ptr);
++
++	if (bfqd != NULL) {
++		spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
++		if (*ptr == bfqd)
++			goto out;
++		spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++	}
++
++	bfqd = NULL;
++out:
++	rcu_read_unlock();
++	return bfqd;
++}
++
++static inline void bfq_put_bfqd_unlock(struct bfq_data *bfqd,
++				       unsigned long *flags)
++{
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++}
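++
++/*
++ * A sketch of the intended usage of the two helpers above (variable
++ * names are illustrative): a caller holding only an RCU-protected
++ * pointer first tries to pin and lock the bfqd, then releases the
++ * queue lock with the flags saved on entry:
++ *
++ *	unsigned long flags;
++ *	struct bfq_data *bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++ *
++ *	if (bfqd != NULL) {
++ *		... operate on bfqd under bfqd->queue->queue_lock ...
++ *		bfq_put_bfqd_unlock(bfqd, &flags);
++ *	}
++ */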
++
++static void bfq_changed_ioprio(struct bfq_io_cq *bic);
++static void bfq_put_queue(struct bfq_queue *bfqq);
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bfq_group *bfqg, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask);
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg);
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
++
++#endif /* _BFQ_H */
+-- 
+2.1.0
+

diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
new file mode 100644
index 0000000..53267cd
--- /dev/null
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
@@ -0,0 +1,1222 @@
+From d49cf2e7913ec1c4b86a9de657140d9ec5fa8c19 Mon Sep 17 00:00:00 2001
+From: Mauro Andreolini <mauro.andreolini@unimore.it>
+Date: Thu, 18 Dec 2014 21:32:08 +0100
+Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r7 for
+ 4.0.0
+
+A set of processes may happen to perform interleaved reads, i.e., requests
+whose union would give rise to a sequential read pattern. There are two
+typical cases: in the first case, processes read fixed-size chunks of
+data at a fixed distance from each other, while in the second case processes
+may read variable-size chunks at variable distances. The latter case occurs
+for example with QEMU, which splits the I/O generated by the guest into
+multiple chunks, and lets these chunks be served by a pool of cooperating
+processes, iteratively assigning the next chunk of I/O to the first
+available process. CFQ uses actual queue merging for the first type of
+processes, whereas it uses preemption to get a sequential read pattern out
+of the read requests performed by the second type of processes. In the end
+it uses two different mechanisms to achieve the same goal: boosting the
+throughput with interleaved I/O.
+
+This patch introduces Early Queue Merge (EQM), a unified mechanism to get a
+sequential read pattern with both types of processes. The main idea is
+checking newly arrived requests against the next request of the active queue,
+both in case of actual request insert and in case of request merge. By doing
+so, both types of processes can be handled by just merging their queues.
+EQM is then simpler and more compact than the pair of mechanisms used in
+CFQ.
+
+Finally, EQM also preserves the typical low-latency properties of BFQ, by
+properly restoring the weight-raising state of a queue when it gets back to
+a non-merged state.
+
+Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+---
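+
+As an illustration of the first case (numbers are only an example): two
+processes A and B issue requests for sectors [0, 8), [16, 24), [32, 40), ...
+and [8, 16), [24, 32), [40, 48), ... respectively. Each process looks seeky
+on its own, but the union of the two request streams is perfectly
+sequential, so merging the two queues lets them be served as one
+sequential reader.
+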
+ block/bfq-iosched.c | 751 +++++++++++++++++++++++++++++++++++++---------------
+ block/bfq-sched.c   |  28 --
+ block/bfq.h         |  54 +++-
+ 3 files changed, 581 insertions(+), 252 deletions(-)
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 97ee934..328f33c 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -571,6 +571,57 @@ static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ 	return dur;
+ }
+ 
++static inline unsigned
++bfq_bfqq_cooperations(struct bfq_queue *bfqq)
++{
++	return bfqq->bic ? bfqq->bic->cooperations : 0;
++}
++
++static inline void
++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	if (bic->saved_idle_window)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++	if (bic->saved_IO_bound)
++		bfq_mark_bfqq_IO_bound(bfqq);
++	else
++		bfq_clear_bfqq_IO_bound(bfqq);
++	/* Assuming that the flag in_large_burst is already correctly set */
++	if (bic->wr_time_left && bfqq->bfqd->low_latency &&
++	    !bfq_bfqq_in_large_burst(bfqq) &&
++	    bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
++		/*
++		 * Start a weight raising period with the duration given by
++		 * the raising_time_left snapshot.
++		 */
++		if (bfq_bfqq_busy(bfqq))
++			bfqq->bfqd->wr_busy_queues++;
++		bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
++		bfqq->wr_cur_max_time = bic->wr_time_left;
++		bfqq->last_wr_start_finish = jiffies;
++		bfqq->entity.ioprio_changed = 1;
++	}
++	/*
++	 * Clear wr_time_left to prevent bfq_bfqq_save_state() from
++	 * getting confused about the queue's need of a weight-raising
++	 * period.
++	 */
++	bic->wr_time_left = 0;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++	int process_refs, io_refs;
++
++	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	BUG_ON(process_refs < 0);
++	return process_refs;
++}
++
+ /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
+ static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
+ 					struct bfq_queue *bfqq)
+@@ -815,7 +866,7 @@ static void bfq_add_request(struct request *rq)
+ 		bfq_rq_pos_tree_add(bfqd, bfqq);
+ 
+ 	if (!bfq_bfqq_busy(bfqq)) {
+-		bool soft_rt,
++		bool soft_rt, coop_or_in_burst,
+ 		     idle_for_long_time = time_is_before_jiffies(
+ 						bfqq->budget_timeout +
+ 						bfqd->bfq_wr_min_idle_time);
+@@ -839,11 +890,12 @@ static void bfq_add_request(struct request *rq)
+ 				bfqd->last_ins_in_burst = jiffies;
+ 		}
+ 
++		coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
++			bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+ 		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+-			!bfq_bfqq_in_large_burst(bfqq) &&
++			!coop_or_in_burst &&
+ 			time_is_before_jiffies(bfqq->soft_rt_next_start);
+-		interactive = !bfq_bfqq_in_large_burst(bfqq) &&
+-			      idle_for_long_time;
++		interactive = !coop_or_in_burst && idle_for_long_time;
+ 		entity->budget = max_t(unsigned long, bfqq->max_budget,
+ 				       bfq_serv_to_charge(next_rq, bfqq));
+ 
+@@ -862,11 +914,20 @@ static void bfq_add_request(struct request *rq)
+ 		if (!bfqd->low_latency)
+ 			goto add_bfqq_busy;
+ 
++		if (bfq_bfqq_just_split(bfqq))
++			goto set_ioprio_changed;
++
+ 		/*
+-		 * If the queue is not being boosted and has been idle
+-		 * for enough time, start a weight-raising period
++		 * If the queue:
++		 * - is not being boosted,
++		 * - has been idle for enough time,
++		 * - is not a sync queue or is linked to a bfq_io_cq (it is
++		 *   shared "for its nature" or it is not shared and its
++		 *   requests have not been redirected to a shared queue)
++		 * start a weight-raising period.
+ 		 */
+-		if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++		if (old_wr_coeff == 1 && (interactive || soft_rt) &&
++		    (!bfq_bfqq_sync(bfqq) || bfqq->bic != NULL)) {
+ 			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
+ 			if (interactive)
+ 				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+@@ -880,7 +941,7 @@ static void bfq_add_request(struct request *rq)
+ 		} else if (old_wr_coeff > 1) {
+ 			if (interactive)
+ 				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+-			else if (bfq_bfqq_in_large_burst(bfqq) ||
++			else if (coop_or_in_burst ||
+ 				 (bfqq->wr_cur_max_time ==
+ 				  bfqd->bfq_wr_rt_max_time &&
+ 				  !soft_rt)) {
+@@ -899,18 +960,18 @@ static void bfq_add_request(struct request *rq)
+ 				/*
+ 				 *
+ 				 * The remaining weight-raising time is lower
+-				 * than bfqd->bfq_wr_rt_max_time, which
+-				 * means that the application is enjoying
+-				 * weight raising either because deemed soft-
+-				 * rt in the near past, or because deemed
+-				 * interactive a long ago. In both cases,
+-				 * resetting now the current remaining weight-
+-				 * raising time for the application to the
+-				 * weight-raising duration for soft rt
+-				 * applications would not cause any latency
+-				 * increase for the application (as the new
+-				 * duration would be higher than the remaining
+-				 * time).
++				 * than bfqd->bfq_wr_rt_max_time, which means
++				 * that the application is enjoying weight
++				 * raising either because deemed soft-rt in
++				 * the near past, or because deemed interactive
++				 * long ago.
++				 * In both cases, resetting now the current
++				 * remaining weight-raising time for the
++				 * application to the weight-raising duration
++				 * for soft rt applications would not cause any
++				 * latency increase for the application (as the
++				 * new duration would be higher than the
++				 * remaining time).
+ 				 *
+ 				 * In addition, the application is now meeting
+ 				 * the requirements for being deemed soft rt.
+@@ -945,6 +1006,7 @@ static void bfq_add_request(struct request *rq)
+ 					bfqd->bfq_wr_rt_max_time;
+ 			}
+ 		}
++set_ioprio_changed:
+ 		if (old_wr_coeff != bfqq->wr_coeff)
+ 			entity->ioprio_changed = 1;
+ add_bfqq_busy:
+@@ -1156,90 +1218,35 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+ 	spin_unlock_irq(bfqd->queue->queue_lock);
+ }
+ 
+-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+-			   struct bio *bio)
++static inline sector_t bfq_io_struct_pos(void *io_struct, bool request)
+ {
+-	struct bfq_data *bfqd = q->elevator->elevator_data;
+-	struct bfq_io_cq *bic;
+-	struct bfq_queue *bfqq;
+-
+-	/*
+-	 * Disallow merge of a sync bio into an async request.
+-	 */
+-	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
+-		return 0;
+-
+-	/*
+-	 * Lookup the bfqq that this bio will be queued with. Allow
+-	 * merge only if rq is queued there.
+-	 * Queue lock is held here.
+-	 */
+-	bic = bfq_bic_lookup(bfqd, current->io_context);
+-	if (bic == NULL)
+-		return 0;
+-
+-	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
+-	return bfqq == RQ_BFQQ(rq);
+-}
+-
+-static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+-				       struct bfq_queue *bfqq)
+-{
+-	if (bfqq != NULL) {
+-		bfq_mark_bfqq_must_alloc(bfqq);
+-		bfq_mark_bfqq_budget_new(bfqq);
+-		bfq_clear_bfqq_fifo_expire(bfqq);
+-
+-		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
+-
+-		bfq_log_bfqq(bfqd, bfqq,
+-			     "set_in_service_queue, cur-budget = %lu",
+-			     bfqq->entity.budget);
+-	}
+-
+-	bfqd->in_service_queue = bfqq;
+-}
+-
+-/*
+- * Get and set a new queue for service.
+- */
+-static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
+-						  struct bfq_queue *bfqq)
+-{
+-	if (!bfqq)
+-		bfqq = bfq_get_next_queue(bfqd);
++	if (request)
++		return blk_rq_pos(io_struct);
+ 	else
+-		bfq_get_next_queue_forced(bfqd, bfqq);
+-
+-	__bfq_set_in_service_queue(bfqd, bfqq);
+-	return bfqq;
++		return ((struct bio *)io_struct)->bi_iter.bi_sector;
+ }
+ 
+-static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
+-					  struct request *rq)
++static inline sector_t bfq_dist_from(sector_t pos1,
++				     sector_t pos2)
+ {
+-	if (blk_rq_pos(rq) >= bfqd->last_position)
+-		return blk_rq_pos(rq) - bfqd->last_position;
++	if (pos1 >= pos2)
++		return pos1 - pos2;
+ 	else
+-		return bfqd->last_position - blk_rq_pos(rq);
++		return pos2 - pos1;
+ }
+ 
+-/*
+- * Return true if bfqq has no request pending and rq is close enough to
+- * bfqd->last_position, or if rq is closer to bfqd->last_position than
+- * bfqq->next_rq
+- */
+-static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++static inline int bfq_rq_close_to_sector(void *io_struct, bool request,
++					 sector_t sector)
+ {
+-	return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++	return bfq_dist_from(bfq_io_struct_pos(io_struct, request), sector) <=
++	       BFQQ_SEEK_THR;
+ }
+ 
+-static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd, sector_t sector)
+ {
+ 	struct rb_root *root = &bfqd->rq_pos_tree;
+ 	struct rb_node *parent, *node;
+ 	struct bfq_queue *__bfqq;
+-	sector_t sector = bfqd->last_position;
+ 
+ 	if (RB_EMPTY_ROOT(root))
+ 		return NULL;
+@@ -1258,7 +1265,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ 	 * next_request position).
+ 	 */
+ 	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
+-	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ 		return __bfqq;
+ 
+ 	if (blk_rq_pos(__bfqq->next_rq) < sector)
+@@ -1269,7 +1276,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ 		return NULL;
+ 
+ 	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
+-	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ 		return __bfqq;
+ 
+ 	return NULL;
+@@ -1278,14 +1285,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ /*
+  * bfqd - obvious
+  * cur_bfqq - passed in so that we don't decide that the current queue
+- *            is closely cooperating with itself.
+- *
+- * We are assuming that cur_bfqq has dispatched at least one request,
+- * and that bfqd->last_position reflects a position on the disk associated
+- * with the I/O issued by cur_bfqq.
++ *            is closely cooperating with itself
++ * sector - used as a reference point to search for a close queue
+  */
+ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+-					      struct bfq_queue *cur_bfqq)
++					      struct bfq_queue *cur_bfqq,
++					      sector_t sector)
+ {
+ 	struct bfq_queue *bfqq;
+ 
+@@ -1305,7 +1310,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ 	 * working closely on the same area of the disk. In that case,
+ 	 * we can group them together and don't waste time idling.
+ 	 */
+-	bfqq = bfqq_close(bfqd);
++	bfqq = bfqq_close(bfqd, sector);
+ 	if (bfqq == NULL || bfqq == cur_bfqq)
+ 		return NULL;
+ 
+@@ -1332,6 +1337,315 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ 	return bfqq;
+ }
+ 
++static struct bfq_queue *
++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	int process_refs, new_process_refs;
++	struct bfq_queue *__bfqq;
++
++	/*
++	 * If there are no process references on the new_bfqq, then it is
++	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++	 * may have dropped their last reference (not just their last process
++	 * reference).
++	 */
++	if (!bfqq_process_refs(new_bfqq))
++		return NULL;
++
++	/* Avoid a circular list and skip interim queue merges. */
++	while ((__bfqq = new_bfqq->new_bfqq)) {
++		if (__bfqq == bfqq)
++			return NULL;
++		new_bfqq = __bfqq;
++	}
++
++	process_refs = bfqq_process_refs(bfqq);
++	new_process_refs = bfqq_process_refs(new_bfqq);
++	/*
++	 * If the process for the bfqq has gone away, there is no
++	 * sense in merging the queues.
++	 */
++	if (process_refs == 0 || new_process_refs == 0)
++		return NULL;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++		new_bfqq->pid);
++
++	/*
++	 * Merging is just a redirection: the requests of the process
++	 * owning one of the two queues are redirected to the other queue.
++	 * The latter queue, in its turn, is set as shared if this is the
++	 * first time that the requests of some process are redirected to
++	 * it.
++	 *
++	 * We redirect bfqq to new_bfqq and not the opposite, because we
++	 * are in the context of the process owning bfqq, hence we have
++	 * the io_cq of this process. So we can immediately configure this
++	 * io_cq to redirect the requests of the process to new_bfqq.
++	 *
++	 * NOTE, even if new_bfqq coincides with the in-service queue, the
++	 * io_cq of new_bfqq is not available, because, if the in-service
++	 * queue is shared, bfqd->in_service_bic may not point to the
++	 * io_cq of the in-service queue.
++	 * Redirecting the requests of the process owning bfqq to the
++	 * currently in-service queue is in any case the best option, as
++	 * we feed the in-service queue with new requests close to the
++	 * last request served and, by doing so, hopefully increase the
++	 * throughput.
++	 */
++	bfqq->new_bfqq = new_bfqq;
++	atomic_add(process_refs, &new_bfqq->ref);
++	return new_bfqq;
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service queue
++ * or with a close queue among the scheduled queues.
++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * structure otherwise.
++ *
++ * The OOM queue is not allowed to participate in cooperation: in fact, since
++ * the requests temporarily redirected to the OOM queue could be redirected
++ * again to dedicated queues at any time, the state needed to correctly
++ * handle merging with the OOM queue would be quite complex and expensive
++ * to maintain. Besides, in a condition as critical as an out-of-memory one,
++ * the benefits of queue merging may be scarcely relevant, or even negligible.
++ */
++static struct bfq_queue *
++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++		     void *io_struct, bool request)
++{
++	struct bfq_queue *in_service_bfqq, *new_bfqq;
++
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
++	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++		return NULL;
++
++	in_service_bfqq = bfqd->in_service_queue;
++
++	if (in_service_bfqq == NULL || in_service_bfqq == bfqq ||
++	    !bfqd->in_service_bic ||
++	    unlikely(in_service_bfqq == &bfqd->oom_bfqq))
++		goto check_scheduled;
++
++	if (bfq_class_idle(in_service_bfqq) || bfq_class_idle(bfqq))
++		goto check_scheduled;
++
++	if (bfq_class_rt(in_service_bfqq) != bfq_class_rt(bfqq))
++		goto check_scheduled;
++
++	if (in_service_bfqq->entity.parent != bfqq->entity.parent)
++		goto check_scheduled;
++
++	if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++	    bfq_bfqq_sync(in_service_bfqq) && bfq_bfqq_sync(bfqq)) {
++		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
++		if (new_bfqq != NULL)
++			return new_bfqq; /* Merge with in-service queue */
++	}
++
++	/*
++	 * Check whether there is a cooperator among currently scheduled
++	 * queues. The only thing we need is that the bio/request is not
++	 * NULL, as we need it to establish whether a cooperator exists.
++	 */
++check_scheduled:
++	new_bfqq = bfq_close_cooperator(bfqd, bfqq,
++					bfq_io_struct_pos(io_struct, request));
++	if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq))
++		return bfq_setup_merge(bfqq, new_bfqq);
++
++	return NULL;
++}
++
++static inline void
++bfq_bfqq_save_state(struct bfq_queue *bfqq)
++{
++	/*
++	 * If bfqq->bic == NULL, the queue is already shared or its requests
++	 * have already been redirected to a shared queue; both idle window
++	 * and weight raising state have already been saved. Do nothing.
++	 */
++	if (bfqq->bic == NULL)
++		return;
++	if (bfqq->bic->wr_time_left)
++		/*
++		 * This is the queue of a just-started process, and would
++		 * deserve weight raising: we set wr_time_left to the full
++		 * weight-raising duration to trigger weight-raising when
++		 * and if the queue is split and the first request of the
++		 * queue is enqueued.
++		 */
++		bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
++	else if (bfqq->wr_coeff > 1) {
++		unsigned long wr_duration =
++			jiffies - bfqq->last_wr_start_finish;
++		/*
++		 * It may happen that a queue's weight raising period lasts
++		 * longer than its wr_cur_max_time, as weight raising is
++		 * handled only when a request is enqueued or dispatched (it
++		 * does not use any timer). If the weight raising period is
++		 * about to end, don't save it.
++		 */
++		if (bfqq->wr_cur_max_time <= wr_duration)
++			bfqq->bic->wr_time_left = 0;
++		else
++			bfqq->bic->wr_time_left =
++				bfqq->wr_cur_max_time - wr_duration;
++		/*
++		 * The bfq_queue is becoming shared or the requests of the
++		 * process owning the queue are being redirected to a shared
++		 * queue. Stop the weight raising period of the queue, as in
++		 * both cases it should not be owned by an interactive or
++		 * soft real-time application.
++		 */
++		bfq_bfqq_end_wr(bfqq);
++	} else
++		bfqq->bic->wr_time_left = 0;
++	bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++	bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++	bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++	bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++	bfqq->bic->cooperations++;
++	bfqq->bic->failed_cooperations = 0;
++}
++
++static inline void
++bfq_get_bic_reference(struct bfq_queue *bfqq)
++{
++	/*
++	 * If bfqq->bic has a non-NULL value, the bic to which it belongs
++	 * is about to begin using a shared bfq_queue.
++	 */
++	if (bfqq->bic)
++		atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
++}
++
++static void
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++		struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++		(long unsigned)new_bfqq->pid);
++	/* Save weight raising and idle window of the merged queues */
++	bfq_bfqq_save_state(bfqq);
++	bfq_bfqq_save_state(new_bfqq);
++	if (bfq_bfqq_IO_bound(bfqq))
++		bfq_mark_bfqq_IO_bound(new_bfqq);
++	bfq_clear_bfqq_IO_bound(bfqq);
++	/*
++	 * Grab a reference to the bic, to prevent it from being destroyed
++	 * before being possibly touched by a bfq_split_bfqq().
++	 */
++	bfq_get_bic_reference(bfqq);
++	bfq_get_bic_reference(new_bfqq);
++	/*
++	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
++	 */
++	bic_set_bfqq(bic, new_bfqq, 1);
++	bfq_mark_bfqq_coop(new_bfqq);
++	/*
++	 * new_bfqq now belongs to at least two bics (it is a shared queue):
++	 * set new_bfqq->bic to NULL. bfqq either:
++	 * - does not belong to any bic any more, and hence bfqq->bic must
++	 *   be set to NULL, or
++	 * - is a queue whose owning bics have already been redirected to a
++	 *   different queue, hence the queue is destined to not belong to
++	 *   any bic soon and bfqq->bic is already NULL (therefore the next
++	 *   assignment causes no harm).
++	 */
++	new_bfqq->bic = NULL;
++	bfqq->bic = NULL;
++	bfq_put_queue(bfqq);
++}
++
++static inline void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
++{
++	struct bfq_io_cq *bic = bfqq->bic;
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
++		bic->failed_cooperations++;
++		if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
++			bic->cooperations = 0;
++	}
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++			   struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq, *new_bfqq;
++
++	/*
++	 * Disallow merge of a sync bio into an async request.
++	 */
++	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++		return 0;
++
++	/*
++	 * Lookup the bfqq that this bio will be queued with. Allow
++	 * merge only if rq is queued there.
++	 * Queue lock is held here.
++	 */
++	bic = bfq_bic_lookup(bfqd, current->io_context);
++	if (bic == NULL)
++		return 0;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	/*
++	 * We take advantage of this function to perform an early merge
++	 * of the queues of possible cooperating processes.
++	 */
++	if (bfqq != NULL) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false);
++		if (new_bfqq != NULL) {
++			bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
++			/*
++			 * If we get here, the bio will be queued in the
++			 * shared queue, i.e., new_bfqq, so use new_bfqq
++			 * to decide whether bio and rq can be merged.
++			 */
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
++	return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++				       struct bfq_queue *bfqq)
++{
++	if (bfqq != NULL) {
++		bfq_mark_bfqq_must_alloc(bfqq);
++		bfq_mark_bfqq_budget_new(bfqq);
++		bfq_clear_bfqq_fifo_expire(bfqq);
++
++		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "set_in_service_queue, cur-budget = %lu",
++			     bfqq->entity.budget);
++	}
++
++	bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
++
++	__bfq_set_in_service_queue(bfqd, bfqq);
++	return bfqq;
++}
++
+ /*
+  * If enough samples have been computed, return the current max budget
+  * stored in bfqd, which is dynamically updated according to the
+@@ -1475,61 +1789,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
+ 	return rq;
+ }
+ 
+-/* Must be called with the queue_lock held. */
+-static int bfqq_process_refs(struct bfq_queue *bfqq)
+-{
+-	int process_refs, io_refs;
+-
+-	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
+-	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
+-	BUG_ON(process_refs < 0);
+-	return process_refs;
+-}
+-
+-static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+-{
+-	int process_refs, new_process_refs;
+-	struct bfq_queue *__bfqq;
+-
+-	/*
+-	 * If there are no process references on the new_bfqq, then it is
+-	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
+-	 * may have dropped their last reference (not just their last process
+-	 * reference).
+-	 */
+-	if (!bfqq_process_refs(new_bfqq))
+-		return;
+-
+-	/* Avoid a circular list and skip interim queue merges. */
+-	while ((__bfqq = new_bfqq->new_bfqq)) {
+-		if (__bfqq == bfqq)
+-			return;
+-		new_bfqq = __bfqq;
+-	}
+-
+-	process_refs = bfqq_process_refs(bfqq);
+-	new_process_refs = bfqq_process_refs(new_bfqq);
+-	/*
+-	 * If the process for the bfqq has gone away, there is no
+-	 * sense in merging the queues.
+-	 */
+-	if (process_refs == 0 || new_process_refs == 0)
+-		return;
+-
+-	/*
+-	 * Merge in the direction of the lesser amount of work.
+-	 */
+-	if (new_process_refs >= process_refs) {
+-		bfqq->new_bfqq = new_bfqq;
+-		atomic_add(process_refs, &new_bfqq->ref);
+-	} else {
+-		new_bfqq->new_bfqq = bfqq;
+-		atomic_add(new_process_refs, &bfqq->ref);
+-	}
+-	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+-		new_bfqq->pid);
+-}
+-
+ static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+@@ -2263,7 +2522,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
+  */
+ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ {
+-	struct bfq_queue *bfqq, *new_bfqq = NULL;
++	struct bfq_queue *bfqq;
+ 	struct request *next_rq;
+ 	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
+ 
+@@ -2273,17 +2532,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 
+ 	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
+ 
+-	/*
+-         * If another queue has a request waiting within our mean seek
+-         * distance, let it run. The expire code will check for close
+-         * cooperators and put the close queue at the front of the
+-         * service tree. If possible, merge the expiring queue with the
+-         * new bfqq.
+-         */
+-        new_bfqq = bfq_close_cooperator(bfqd, bfqq);
+-        if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
+-                bfq_setup_merge(bfqq, new_bfqq);
+-
+ 	if (bfq_may_expire_for_budg_timeout(bfqq) &&
+ 	    !timer_pending(&bfqd->idle_slice_timer) &&
+ 	    !bfq_bfqq_must_idle(bfqq))
+@@ -2322,10 +2570,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 				bfq_clear_bfqq_wait_request(bfqq);
+ 				del_timer(&bfqd->idle_slice_timer);
+ 			}
+-			if (new_bfqq == NULL)
+-				goto keep_queue;
+-			else
+-				goto expire;
++			goto keep_queue;
+ 		}
+ 	}
+ 
+@@ -2334,40 +2579,30 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 	 * in flight (possibly waiting for a completion) or is idling for a
+ 	 * new request, then keep it.
+ 	 */
+-	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
+-	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++	if (timer_pending(&bfqd->idle_slice_timer) ||
++	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq))) {
+ 		bfqq = NULL;
+ 		goto keep_queue;
+-	} else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
+-		/*
+-		 * Expiring the queue because there is a close cooperator,
+-		 * cancel timer.
+-		 */
+-		bfq_clear_bfqq_wait_request(bfqq);
+-		del_timer(&bfqd->idle_slice_timer);
+ 	}
+ 
+ 	reason = BFQ_BFQQ_NO_MORE_REQUESTS;
+ expire:
+ 	bfq_bfqq_expire(bfqd, bfqq, 0, reason);
+ new_queue:
+-	bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++	bfqq = bfq_set_in_service_queue(bfqd);
+ 	bfq_log(bfqd, "select_queue: new queue %d returned",
+ 		bfqq != NULL ? bfqq->pid : 0);
+ keep_queue:
+ 	return bfqq;
+ }
+ 
+-static void bfq_update_wr_data(struct bfq_data *bfqd,
+-			       struct bfq_queue *bfqq)
++static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+-	if (bfqq->wr_coeff > 1) { /* queue is being boosted */
+-		struct bfq_entity *entity = &bfqq->entity;
+-
++	struct bfq_entity *entity = &bfqq->entity;
++	if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
+ 		bfq_log_bfqq(bfqd, bfqq,
+ 			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
+-			jiffies_to_msecs(jiffies -
+-				bfqq->last_wr_start_finish),
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
+ 			jiffies_to_msecs(bfqq->wr_cur_max_time),
+ 			bfqq->wr_coeff,
+ 			bfqq->entity.weight, bfqq->entity.orig_weight);
+@@ -2376,12 +2611,16 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ 		       entity->orig_weight * bfqq->wr_coeff);
+ 		if (entity->ioprio_changed)
+ 			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++
+ 		/*
+ 		 * If the queue was activated in a burst, or
+ 		 * too much time has elapsed from the beginning
+-		 * of this weight-raising, then end weight raising.
++		 * of this weight-raising period, or the queue has
++		 * exceeded the acceptable number of cooperations,
++		 * then end weight raising.
+ 		 */
+ 		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+ 		    time_is_before_jiffies(bfqq->last_wr_start_finish +
+ 					   bfqq->wr_cur_max_time)) {
+ 			bfqq->last_wr_start_finish = jiffies;
+@@ -2390,11 +2629,13 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ 				     bfqq->last_wr_start_finish,
+ 				     jiffies_to_msecs(bfqq->wr_cur_max_time));
+ 			bfq_bfqq_end_wr(bfqq);
+-			__bfq_entity_update_weight_prio(
+-				bfq_entity_service_tree(entity),
+-				entity);
+ 		}
+ 	}
++	/* Update weight both if it must be raised and if it must be lowered */
++	if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1))
++		__bfq_entity_update_weight_prio(
++			bfq_entity_service_tree(entity),
++			entity);
+ }
+ 
+ /*
+@@ -2642,6 +2883,25 @@ static inline void bfq_init_icq(struct io_cq *icq)
+ 	struct bfq_io_cq *bic = icq_to_bic(icq);
+ 
+ 	bic->ttime.last_end_request = jiffies;
++	/*
++	 * A newly created bic indicates that the process has just
++	 * started doing I/O, and is probably mapping into memory its
++	 * executable and libraries: it definitely needs weight raising.
++	 * There is however the possibility that the process performs,
++	 * for a while, I/O close to some other process. EQM intercepts
++	 * this behavior and may merge the queue corresponding to the
++	 * process  with some other queue, BEFORE the weight of the queue
++	 * is raised. Merged queues are not weight-raised (they are assumed
++	 * to belong to processes that benefit only from high throughput).
++	 * If the merge is basically the consequence of an accident, then
++	 * the queue will be split soon and will get back its old weight.
++	 * It is then important to write down somewhere that this queue
++	 * does need weight raising, even if it did not make it to get its
++	 * weight raised before being merged. To this purpose, we overload
++	 * the field raising_time_left and assign 1 to it, to mark the queue
++	 * as needing weight raising.
++	 */
++	bic->wr_time_left = 1;
+ }
+ 
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2655,6 +2915,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+ 	}
+ 
+ 	if (bic->bfqq[BLK_RW_SYNC]) {
++		/*
++		 * If the bic is using a shared queue, put the reference
++		 * taken on the io_context when the bic started using a
++		 * shared bfq_queue.
++		 */
++		if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++			put_io_context(icq->ioc);
+ 		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+ 		bic->bfqq[BLK_RW_SYNC] = NULL;
+ 	}
+@@ -2950,6 +3217,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ 	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
+ 		return;
+ 
++	/* Idle window just restored, statistics are meaningless. */
++	if (bfq_bfqq_just_split(bfqq))
++		return;
++
+ 	enable_idle = bfq_bfqq_idle_window(bfqq);
+ 
+ 	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
+@@ -2997,6 +3268,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ 	    !BFQQ_SEEKY(bfqq))
+ 		bfq_update_idle_window(bfqd, bfqq, bic);
++	bfq_clear_bfqq_just_split(bfqq);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+ 		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+@@ -3057,13 +3329,49 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+-	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
+ 
+ 	assert_spin_locked(bfqd->queue->queue_lock);
++
++	/*
++	 * An unplug may trigger a requeue of a request from the device
++	 * driver: make sure we are in process context while trying to
++	 * merge two bfq_queues.
++	 */
++	if (!in_interrupt()) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
++		if (new_bfqq != NULL) {
++			if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
++				new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
++			/*
++			 * Release the request's reference to the old bfqq
++			 * and make sure one is taken to the shared queue.
++			 */
++			new_bfqq->allocated[rq_data_dir(rq)]++;
++			bfqq->allocated[rq_data_dir(rq)]--;
++			atomic_inc(&new_bfqq->ref);
++			bfq_put_queue(bfqq);
++			if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
++				bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
++						bfqq, new_bfqq);
++			rq->elv.priv[1] = new_bfqq;
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
+ 	bfq_init_prio_data(bfqq, RQ_BIC(rq));
+ 
+ 	bfq_add_request(rq);
+ 
++	/*
++	 * Here a newly-created bfq_queue has already started a weight-raising
++	 * period: clear raising_time_left to prevent bfq_bfqq_save_state()
++	 * from assigning it a full weight-raising period. See the detailed
++	 * comments about this field in bfq_init_icq().
++	 */
++	if (bfqq->bic != NULL)
++		bfqq->bic->wr_time_left = 0;
+ 	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ 	list_add_tail(&rq->queuelist, &bfqq->fifo);
+ 
+@@ -3228,18 +3536,6 @@ static void bfq_put_request(struct request *rq)
+ 	}
+ }
+ 
+-static struct bfq_queue *
+-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+-		struct bfq_queue *bfqq)
+-{
+-	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
+-		(long unsigned)bfqq->new_bfqq->pid);
+-	bic_set_bfqq(bic, bfqq->new_bfqq, 1);
+-	bfq_mark_bfqq_coop(bfqq->new_bfqq);
+-	bfq_put_queue(bfqq);
+-	return bic_to_bfqq(bic, 1);
+-}
+-
+ /*
+  * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
+  * was the last process referring to said bfqq.
+@@ -3248,6 +3544,9 @@ static struct bfq_queue *
+ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ {
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++
++	put_io_context(bic->icq.ioc);
++
+ 	if (bfqq_process_refs(bfqq) == 1) {
+ 		bfqq->pid = current->pid;
+ 		bfq_clear_bfqq_coop(bfqq);
+@@ -3276,6 +3575,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 	struct bfq_queue *bfqq;
+ 	struct bfq_group *bfqg;
+ 	unsigned long flags;
++	bool split = false;
+ 
+ 	might_sleep_if(gfp_mask & __GFP_WAIT);
+ 
+@@ -3293,25 +3593,26 @@ new_queue:
+ 	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
+ 		bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
+ 		bic_set_bfqq(bic, bfqq, is_sync);
++		if (split && is_sync) {
++			if ((bic->was_in_burst_list && bfqd->large_burst) ||
++			    bic->saved_in_large_burst)
++				bfq_mark_bfqq_in_large_burst(bfqq);
++			else {
++			    bfq_clear_bfqq_in_large_burst(bfqq);
++			    if (bic->was_in_burst_list)
++			       hlist_add_head(&bfqq->burst_list_node,
++				              &bfqd->burst_list);
++			}
++		}
+ 	} else {
+-		/*
+-		 * If the queue was seeky for too long, break it apart.
+-		 */
++		/* If the queue was seeky for too long, break it apart. */
+ 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
+ 			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
+ 			bfqq = bfq_split_bfqq(bic, bfqq);
++			split = true;
+ 			if (!bfqq)
+ 				goto new_queue;
+ 		}
+-
+-		/*
+-		 * Check to see if this queue is scheduled to merge with
+-		 * another closely cooperating queue. The merging of queues
+-		 * happens here as it must be done in process context.
+-		 * The reference on new_bfqq was taken in merge_bfqqs.
+-		 */
+-		if (bfqq->new_bfqq != NULL)
+-			bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
+ 	}
+ 
+ 	bfqq->allocated[rw]++;
+@@ -3322,6 +3623,26 @@ new_queue:
+ 	rq->elv.priv[0] = bic;
+ 	rq->elv.priv[1] = bfqq;
+ 
++	/*
++	 * If a bfq_queue has only one process reference, it is owned
++	 * by only one bfq_io_cq: we can set the bic field of the
++	 * bfq_queue to the address of that structure. Also, if the
++	 * queue has just been split, mark a flag so that the
++	 * information is available to the other scheduler hooks.
++	 */
++	if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++		bfqq->bic = bic;
++		if (split) {
++			bfq_mark_bfqq_just_split(bfqq);
++			/*
++			 * If the queue has just been split from a shared
++			 * queue, restore the idle window and the possible
++			 * weight raising period.
++			 */
++			bfq_bfqq_resume_state(bfqq, bic);
++		}
++	}
++
+ 	spin_unlock_irqrestore(q->queue_lock, flags);
+ 
+ 	return 0;
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+index 2931563..6764a7e 100644
+--- a/block/bfq-sched.c
++++ b/block/bfq-sched.c
+@@ -1091,34 +1091,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ 	return bfqq;
+ }
+ 
+-/*
+- * Forced extraction of the given queue.
+- */
+-static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
+-				      struct bfq_queue *bfqq)
+-{
+-	struct bfq_entity *entity;
+-	struct bfq_sched_data *sd;
+-
+-	BUG_ON(bfqd->in_service_queue != NULL);
+-
+-	entity = &bfqq->entity;
+-	/*
+-	 * Bubble up extraction/update from the leaf to the root.
+-	*/
+-	for_each_entity(entity) {
+-		sd = entity->sched_data;
+-		bfq_update_budget(entity);
+-		bfq_update_vtime(bfq_entity_service_tree(entity));
+-		bfq_active_extract(bfq_entity_service_tree(entity), entity);
+-		sd->in_service_entity = entity;
+-		sd->next_in_service = NULL;
+-		entity->service = 0;
+-	}
+-
+-	return;
+-}
+-
+ static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ {
+ 	if (bfqd->in_service_bic != NULL) {
+diff --git a/block/bfq.h b/block/bfq.h
+index 518f2ac..4f519ea 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -218,18 +218,21 @@ struct bfq_group;
+  *                      idle @bfq_queue with no outstanding requests, then
+  *                      the task associated with the queue it is deemed as
+  *                      soft real-time (see the comments to the function
+- *                      bfq_bfqq_softrt_next_start()).
++ *                      bfq_bfqq_softrt_next_start())
+  * @last_idle_bklogged: time of the last transition of the @bfq_queue from
+  *                      idle to backlogged
+  * @service_from_backlogged: cumulative service received from the @bfq_queue
+  *                           since the last transition from idle to
+  *                           backlogged
++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
++ *	 queue is shared
+  *
+- * A bfq_queue is a leaf request queue; it can be associated with an io_context
+- * or more, if it is async or shared between cooperating processes. @cgroup
+- * holds a reference to the cgroup, to be sure that it does not disappear while
+- * a bfqq still references it (mostly to avoid races between request issuing and
+- * task migration followed by cgroup destruction).
++ * A bfq_queue is a leaf request queue; it can be associated with an
++ * io_context or more, if it  is  async or shared  between  cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while a bfqq still references it (mostly to avoid
++ * races between request issuing and task migration followed by cgroup
++ * destruction).
+  * All the fields are protected by the queue lock of the containing bfqd.
+  */
+ struct bfq_queue {
+@@ -269,6 +272,7 @@ struct bfq_queue {
+ 	unsigned int requests_within_timer;
+ 
+ 	pid_t pid;
++	struct bfq_io_cq *bic;
+ 
+ 	/* weight-raising fields */
+ 	unsigned long wr_cur_max_time;
+@@ -298,12 +302,42 @@ struct bfq_ttime {
+  * @icq: associated io_cq structure
+  * @bfqq: array of two process queues, the sync and the async
+  * @ttime: associated @bfq_ttime struct
++ * @wr_time_left: snapshot of the time left before weight raising ends
++ *                for the sync queue associated to this process; this
++ *		  snapshot is taken to remember this value while the weight
++ *		  raising is suspended because the queue is merged with a
++ *		  shared queue, and is used to set @raising_cur_max_time
++ *		  when the queue is split from the shared queue and its
++ *		  weight is raised again
++ * @saved_idle_window: same purpose as the previous field for the idle
++ *                     window
++ * @saved_IO_bound: same purpose as the previous two fields for the I/O
++ *                  bound classification of a queue
++ * @saved_in_large_burst: same purpose as the previous fields for the
++ *                        value of the field keeping the queue's belonging
++ *                        to a large burst
++ * @was_in_burst_list: true if the queue belonged to a burst list
++ *                     before its merge with another cooperating queue
++ * @cooperations: counter of consecutive successful queue merges underwent
++ *                by any of the process' @bfq_queues
++ * @failed_cooperations: counter of consecutive failed queue merges of any
++ *                       of the process' @bfq_queues
+  */
+ struct bfq_io_cq {
+ 	struct io_cq icq; /* must be the first member */
+ 	struct bfq_queue *bfqq[2];
+ 	struct bfq_ttime ttime;
+ 	int ioprio;
++
++	unsigned int wr_time_left;
++	bool saved_idle_window;
++	bool saved_IO_bound;
++
++	bool saved_in_large_burst;
++	bool was_in_burst_list;
++
++	unsigned int cooperations;
++	unsigned int failed_cooperations;
+ };
+ 
+ enum bfq_device_speed {
+@@ -539,7 +573,7 @@ enum bfqq_state_flags {
+ 	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
+ 	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
+ 	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
+-	BFQ_BFQQ_FLAG_IO_bound,         /*
++	BFQ_BFQQ_FLAG_IO_bound,		/*
+ 					 * bfqq has timed-out at least once
+ 					 * having consumed at most 2/10 of
+ 					 * its budget
+@@ -552,12 +586,13 @@ enum bfqq_state_flags {
+ 					 * bfqq has proved to be slow and
+ 					 * seeky until budget timeout
+ 					 */
+-	BFQ_BFQQ_FLAG_softrt_update,    /*
++	BFQ_BFQQ_FLAG_softrt_update,	/*
+ 					 * may need softrt-next-start
+ 					 * update
+ 					 */
+ 	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
+-	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be splitted */
++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
++	BFQ_BFQQ_FLAG_just_split,	/* queue has just been split */
+ };
+ 
+ #define BFQ_BFQQ_FNS(name)						\
+@@ -587,6 +622,7 @@ BFQ_BFQQ_FNS(in_large_burst);
+ BFQ_BFQQ_FNS(constantly_seeky);
+ BFQ_BFQQ_FNS(coop);
+ BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+ 
+-- 
+2.1.0
+
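
The subtlest piece of the EQM (Early Queue Merge) changes above is the bookkeeping in bfq_bfqq_save_state(): when a queue is merged, how much of its weight-raising period is left, so that the remainder can be restored if the queue is later split. Below is a minimal standalone model of that computation, with plain integers standing in for jiffies; the names mirror the patch, everything else is simplified and is not the kernel code.

#include <stdio.h>

struct qstate {
        unsigned long now;                  /* jiffies stand-in */
        unsigned long last_wr_start_finish; /* when raising last (re)started */
        unsigned long wr_cur_max_time;      /* full raising duration */
        unsigned int wr_coeff;              /* > 1 while weight-raised */
};

/* Mirrors the wr_time_left logic in bfq_bfqq_save_state() above. */
static unsigned long saved_wr_time_left(const struct qstate *q)
{
        unsigned long elapsed = q->now - q->last_wr_start_finish;

        if (q->wr_coeff <= 1)
                return 0;               /* not being raised: nothing to save */
        if (q->wr_cur_max_time <= elapsed)
                return 0;               /* period about to end: don't save it */
        return q->wr_cur_max_time - elapsed;
}

int main(void)
{
        struct qstate q = {
                .now = 1000, .last_wr_start_finish = 400,
                .wr_cur_max_time = 800, .wr_coeff = 10,
        };

        /* 600 of 800 ticks consumed: 200 remain to restore after a split. */
        printf("wr_time_left = %lu\n", saved_wr_time_left(&q));
        return 0;
}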



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-05-03 23:55 Mike Pagano
From: Mike Pagano @ 2015-05-03 23:55 UTC
  To: gentoo-commits

commit:     a7f93abca481c4afc0d6e0c515d41f2c4aef9e41
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May  3 19:54:53 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May  3 19:54:53 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a7f93abc

Fix for lz4 compression regression. Thanks to Christian Xia. See bug #546422.

 0000_README                    |  4 ++++
 2910_lz4-compression-fix.patch | 30 ++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/0000_README b/0000_README
index bcce967..f51d299 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  2905_s2disk-resume-image-fix.patch
 From:   Al Viro <viro <at> ZenIV.linux.org.uk>
 Desc:   Do not lock when UMH is waiting on current thread spawned by linuxrc. (bug #481344)
 
+Patch:  2910_lz4-compression-fix.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=546422
+Desc:   Fix for lz4 compression regression. Thanks to Christian Xia. See bug #546422.
+
 Patch:  4200_fbcondecor-3.19.patch
 From:   http://www.mepiscommunity.org/fbcondecor
 Desc:   Bootsplash ported by Marco. (Bug #539616)

diff --git a/2910_lz4-compression-fix.patch b/2910_lz4-compression-fix.patch
new file mode 100644
index 0000000..1c55f32
--- /dev/null
+++ b/2910_lz4-compression-fix.patch
@@ -0,0 +1,30 @@
+--- a/lib/lz4/lz4_decompress.c	2015-04-13 16:20:04.896315560 +0800
++++ b/lib/lz4/lz4_decompress.c	2015-04-13 16:27:08.929317053 +0800
+@@ -139,8 +139,12 @@
+ 			/* Error: request to write beyond destination buffer */
+ 			if (cpy > oend)
+ 				goto _output_error;
++#if LZ4_ARCH64
++			if ((ref + COPYLENGTH) > oend)
++#else
+ 			if ((ref + COPYLENGTH) > oend ||
+ 					(op + COPYLENGTH) > oend)
++#endif
+ 				goto _output_error;
+ 			LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH));
+ 			while (op < cpy)
+@@ -270,7 +274,13 @@
+ 		if (cpy > oend - COPYLENGTH) {
+ 			if (cpy > oend)
+ 				goto _output_error; /* write outside of buf */
+-
++#if LZ4_ARCH64
++			if ((ref + COPYLENGTH) > oend)
++#else
++			if ((ref + COPYLENGTH) > oend ||
++			    (op + COPYLENGTH) > oend)
++#endif
++				goto _output_error;
+ 			LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH));
+ 			while (op < cpy)
+ 				*op++ = *ref++;
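
The two hunks above add the same guard: before LZ4_SECURECOPY copies in COPYLENGTH-sized chunks, the match pointer ref must not be able to run past the end of the output buffer, and on 32-bit builds (where LZ4_ARCH64 is 0) the destination pointer op needs the same check. A standalone sketch of the predicate, illustrative only and not the kernel lz4 code:

#include <stdbool.h>
#include <stdio.h>

#define COPYLENGTH 8

/* Returns true if a COPYLENGTH-granular copy could run past oend.
 * arch64 mirrors the LZ4_ARCH64 #if in the patch: on 64-bit only the
 * match pointer needs the extra check. */
static bool wildcopy_overruns(const char *ref, const char *op,
                              const char *oend, bool arch64)
{
        if (ref + COPYLENGTH > oend)
                return true;
        if (!arch64 && op + COPYLENGTH > oend)
                return true;
        return false;
}

int main(void)
{
        char out[64];

        /* A match 4 bytes from the end overruns an 8-byte chunked copy. */
        printf("%d\n", wildcopy_overruns(out + 60, out + 32,
                                         out + sizeof(out), true)); /* 1 */
        return 0;
}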



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-05-07 19:14 Mike Pagano
From: Mike Pagano @ 2015-05-07 19:14 UTC
  To: gentoo-commits

commit:     4872a00f636d31563983666718df3abeaf7e9f81
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May  7 19:14:29 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May  7 19:14:29 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4872a00f

Linux patch 4.0.2

 0000_README            |     4 +
 1001_linux-4.0.2.patch | 16857 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 16861 insertions(+)

diff --git a/0000_README b/0000_README
index f51d299..4fdafa3 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-4.0.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.1
 
+Patch:  1001_linux-4.0.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-4.0.2.patch b/1001_linux-4.0.2.patch
new file mode 100644
index 0000000..5650c4e
--- /dev/null
+++ b/1001_linux-4.0.2.patch
@@ -0,0 +1,16857 @@
+From 7bebf970047f59c16ddd5660b54562c8bcd40074 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Sebastian=20P=C3=B6hn?= <sebastian.poehn@gmail.com>
+Date: Mon, 20 Apr 2015 09:19:20 +0200
+Subject: [PATCH 001/219] ip_forward: Drop frames with attached skb->sk
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 2ab957492d13bb819400ac29ae55911d50a82a13 ]
+
+Initial discussion was:
+[FYI] xfrm: Don't lookup sk_policy for timewait sockets
+
+Forwarded frames should not have a socket attached. Especially
+tw sockets will lead to panics later-on in the stack.
+
+This was observed with TPROXY assigning a tw socket and broken
+policy routing (misconfigured). As a result frame enters
+forwarding path instead of input. We cannot solve this in
+TPROXY as it cannot know that policy routing is broken.
+
+v2:
+Remove useless comment
+
+Signed-off-by: Sebastian Poehn <sebastian.poehn@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/ipv4/ip_forward.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index d9bc28a..53bd53f 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -82,6 +82,9 @@ int ip_forward(struct sk_buff *skb)
+ 	if (skb->pkt_type != PACKET_HOST)
+ 		goto drop;
+ 
++	if (unlikely(skb->sk))
++		goto drop;
++
+ 	if (skb_warn_if_lro(skb))
+ 		goto drop;
+ 
+-- 
+2.3.6
+
+
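
Condensed into one predicate, the early checks in ip_forward() now look like the sketch below; the middle test is the one this patch adds. Field and type names are stand-ins, not the real sk_buff layout.

#include <stdbool.h>

enum { PACKET_HOST = 0, PACKET_OTHERHOST = 3 };

struct frame {
        int pkt_type;
        void *sk;      /* attached socket, if any */
        bool lro;      /* large-receive-offload aggregate */
};

/* Sketch of the early drop tests in ip_forward() after the patch. */
static bool may_forward(const struct frame *f)
{
        if (f->pkt_type != PACKET_HOST)
                return false;
        if (f->sk)      /* new: forwarded frames must carry no socket */
                return false;
        if (f->lro)
                return false;
        return true;
}

int main(void)
{
        struct frame f = { .pkt_type = PACKET_HOST, .sk = (void *)1 };

        return may_forward(&f); /* 0: dropped, a socket is attached */
}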
+From 8a6846e3226bb475db9686590da85bcc609c75a9 Mon Sep 17 00:00:00 2001
+From: Tom Herbert <tom@herbertland.com>
+Date: Mon, 20 Apr 2015 14:10:04 -0700
+Subject: [PATCH 002/219] net: add skb_checksum_complete_unset
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 4e18b9adf2f910ec4d30b811a74a5b626e6c6125 ]
+
+This function changes ip_summed to CHECKSUM_NONE if CHECKSUM_COMPLETE
+is set. This is called to discard checksum-complete when packet
+is being modified and checksum is not pulled for headers in a layer.
+
+Signed-off-by: Tom Herbert <tom@herbertland.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ include/linux/skbuff.h | 12 ++++++++++++
+ 1 file changed, 12 insertions(+)
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index f54d665..b5c204c 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3013,6 +3013,18 @@ static inline bool __skb_checksum_validate_needed(struct sk_buff *skb,
+  */
+ #define CHECKSUM_BREAK 76
+ 
++/* Unset checksum-complete
++ *
++ * Unset checksum complete can be done when packet is being modified
++ * (uncompressed for instance) and checksum-complete value is
++ * invalidated.
++ */
++static inline void skb_checksum_complete_unset(struct sk_buff *skb)
++{
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		skb->ip_summed = CHECKSUM_NONE;
++}
++
+ /* Validate (init) checksum based on checksum complete.
+  *
+  * Return values:
+-- 
+2.3.6
+
+
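
The helper's contract: CHECKSUM_COMPLETE means the driver checksummed the packet as it arrived, so any later in-place modification of the payload makes that stored value stale. A toy model of the state transition, with a hypothetical caller of the kind the next patch adds to PPP:

enum csum_state { CHECKSUM_NONE, CHECKSUM_COMPLETE };

struct pkt {
        enum csum_state ip_summed;
        /* ... payload ... */
};

/* Same shape as skb_checksum_complete_unset() above. */
static void checksum_complete_unset(struct pkt *p)
{
        if (p->ip_summed == CHECKSUM_COMPLETE)
                p->ip_summed = CHECKSUM_NONE;
}

/* Hypothetical layer that rewrites the payload in place. */
static void decompress_in_place(struct pkt *p)
{
        checksum_complete_unset(p); /* stored csum is about to go stale */
        /* ... modify payload ... */
}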
+From 5a248fca60021d0e35a9de9bd0620eff840365ca Mon Sep 17 00:00:00 2001
+From: Tom Herbert <tom@herbertland.com>
+Date: Mon, 20 Apr 2015 14:10:05 -0700
+Subject: [PATCH 003/219] ppp: call skb_checksum_complete_unset in
+ ppp_receive_frame
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 3dfb05340ec6676e6fc71a9ae87bbbe66d3c2998 ]
+
+Call checksum_complete_unset in PPP receive to discard checksum-complete
+value. PPP does not pull checksum for headers and also modifies packet
+as in VJ compression.
+
+Signed-off-by: Tom Herbert <tom@herbertland.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/ppp/ppp_generic.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index af034db..9d15566 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1716,6 +1716,7 @@ ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
+ {
+ 	/* note: a 0-length skb is used as an error indication */
+ 	if (skb->len > 0) {
++		skb_checksum_complete_unset(skb);
+ #ifdef CONFIG_PPP_MULTILINK
+ 		/* XXX do channel-level decompression here */
+ 		if (PPP_PROTO(skb) == PPP_MP)
+-- 
+2.3.6
+
+
+From e1b095eb7de9dc2235c86e15be6b9d0bff56a6ab Mon Sep 17 00:00:00 2001
+From: Eric Dumazet <edumazet@google.com>
+Date: Tue, 21 Apr 2015 18:32:24 -0700
+Subject: [PATCH 004/219] tcp: fix possible deadlock in tcp_send_fin()
+Cc: mpagano@gentoo.org
+
+[ Upstream commit d83769a580f1132ac26439f50068a29b02be535e ]
+
+Using sk_stream_alloc_skb() in tcp_send_fin() is dangerous in
+case a huge process is killed by OOM, and tcp_mem[2] is hit.
+
+To be able to free memory we need to make progress, so this
+patch allows FIN packets to not care about tcp_mem[2], if
+skb allocation succeeded.
+
+In a follow-up patch, we might abort tcp_send_fin() infinite loop
+in case TIF_MEMDIE is set on this thread, as memory allocator
+did its best getting extra memory already.
+
+This patch reverts d22e15371811 ("tcp: fix tcp fin memory accounting")
+
+Fixes: d22e15371811 ("tcp: fix tcp fin memory accounting")
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/ipv4/tcp_output.c | 20 +++++++++++++++++++-
+ 1 file changed, 19 insertions(+), 1 deletion(-)
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index d520492..f911dc2 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2751,6 +2751,21 @@ begin_fwd:
+ 	}
+ }
+ 
++/* We allow to exceed memory limits for FIN packets to expedite
++ * connection tear down and (memory) recovery.
++ * Otherwise tcp_send_fin() could loop forever.
++ */
++static void sk_forced_wmem_schedule(struct sock *sk, int size)
++{
++	int amt, status;
++
++	if (size <= sk->sk_forward_alloc)
++		return;
++	amt = sk_mem_pages(size);
++	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
++	sk_memory_allocated_add(sk, amt, &status);
++}
++
+ /* Send a fin.  The caller locks the socket for us.  This cannot be
+  * allowed to fail queueing a FIN frame under any circumstances.
+  */
+@@ -2773,11 +2788,14 @@ void tcp_send_fin(struct sock *sk)
+ 	} else {
+ 		/* Socket is locked, keep trying until memory is available. */
+ 		for (;;) {
+-			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
++			skb = alloc_skb_fclone(MAX_TCP_HEADER,
++					       sk->sk_allocation);
+ 			if (skb)
+ 				break;
+ 			yield();
+ 		}
++		skb_reserve(skb, MAX_TCP_HEADER);
++		sk_forced_wmem_schedule(sk, skb->truesize);
+ 		/* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */
+ 		tcp_init_nondata_skb(skb, tp->write_seq,
+ 				     TCPHDR_ACK | TCPHDR_FIN);
+-- 
+2.3.6
+
+
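
sk_forced_wmem_schedule() is page-granular accounting that deliberately skips the limit checks of the regular sk_mem_schedule() path. A standalone rendition, assuming SK_MEM_QUANTUM is one 4 KiB page (as on most configurations):

#define SK_MEM_QUANTUM 4096    /* assumed: one page */

static long forward_alloc;     /* per-socket pre-charged bytes */
static long memory_allocated;  /* protocol-wide pages (tcp_memory_allocated) */

/* Charge `size` bytes unconditionally, as the patch does for FIN skbs. */
static void forced_wmem_schedule(int size)
{
        int pages;

        if (size <= forward_alloc)
                return;        /* already pre-charged */
        /* equivalent of sk_mem_pages(): round up to whole quanta */
        pages = (size + SK_MEM_QUANTUM - 1) / SK_MEM_QUANTUM;
        forward_alloc += pages * SK_MEM_QUANTUM;
        memory_allocated += pages; /* no limit check: the FIN must go out */
}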
+From 7e72469760dd73a44e8cfd6105bf695b7572e246 Mon Sep 17 00:00:00 2001
+From: Eric Dumazet <edumazet@google.com>
+Date: Thu, 23 Apr 2015 10:42:39 -0700
+Subject: [PATCH 005/219] tcp: avoid looping in tcp_send_fin()
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 845704a535e9b3c76448f52af1b70e4422ea03fd ]
+
+Presence of an unbound loop in tcp_send_fin() had always been hard
+to explain when analyzing crash dumps involving gigantic dying processes
+with millions of sockets.
+
+Lets try a different strategy :
+
+In case of memory pressure, try to add the FIN flag to last packet
+in write queue, even if packet was already sent. TCP stack will
+be able to deliver this FIN after a timeout event. Note that this
+FIN being delivered by a retransmit, it also carries a Push flag
+given our current implementation.
+
+By checking sk_under_memory_pressure(), we anticipate that cooking
+many FIN packets might deplete tcp memory.
+
+In the case we could not allocate a packet, even with __GFP_WAIT
+allocation, then not sending a FIN seems quite reasonable if it allows
+to get rid of this socket, free memory, and not block the process from
+eventually doing other useful work.
+
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/ipv4/tcp_output.c | 50 +++++++++++++++++++++++++++++---------------------
+ 1 file changed, 29 insertions(+), 21 deletions(-)
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index f911dc2..9d48dc4 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2753,7 +2753,8 @@ begin_fwd:
+ 
+ /* We allow to exceed memory limits for FIN packets to expedite
+  * connection tear down and (memory) recovery.
+- * Otherwise tcp_send_fin() could loop forever.
++ * Otherwise tcp_send_fin() could be tempted to either delay FIN
++ * or even be forced to close flow without any FIN.
+  */
+ static void sk_forced_wmem_schedule(struct sock *sk, int size)
+ {
+@@ -2766,33 +2767,40 @@ static void sk_forced_wmem_schedule(struct sock *sk, int size)
+ 	sk_memory_allocated_add(sk, amt, &status);
+ }
+ 
+-/* Send a fin.  The caller locks the socket for us.  This cannot be
+- * allowed to fail queueing a FIN frame under any circumstances.
++/* Send a FIN. The caller locks the socket for us.
++ * We should try to send a FIN packet really hard, but eventually give up.
+  */
+ void tcp_send_fin(struct sock *sk)
+ {
++	struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	struct sk_buff *skb = tcp_write_queue_tail(sk);
+-	int mss_now;
+ 
+-	/* Optimization, tack on the FIN if we have a queue of
+-	 * unsent frames.  But be careful about outgoing SACKS
+-	 * and IP options.
++	/* Optimization, tack on the FIN if we have one skb in write queue and
++	 * this skb was not yet sent, or we are under memory pressure.
++	 * Note: in the latter case, FIN packet will be sent after a timeout,
++	 * as TCP stack thinks it has already been transmitted.
+ 	 */
+-	mss_now = tcp_current_mss(sk);
+-
+-	if (tcp_send_head(sk) != NULL) {
+-		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN;
+-		TCP_SKB_CB(skb)->end_seq++;
++	if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
++coalesce:
++		TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN;
++		TCP_SKB_CB(tskb)->end_seq++;
+ 		tp->write_seq++;
++		if (!tcp_send_head(sk)) {
++			/* This means tskb was already sent.
++			 * Pretend we included the FIN on previous transmit.
++			 * We need to set tp->snd_nxt to the value it would have
++			 * if FIN had been sent. This is because retransmit path
++			 * does not change tp->snd_nxt.
++			 */
++			tp->snd_nxt++;
++			return;
++		}
+ 	} else {
+-		/* Socket is locked, keep trying until memory is available. */
+-		for (;;) {
+-			skb = alloc_skb_fclone(MAX_TCP_HEADER,
+-					       sk->sk_allocation);
+-			if (skb)
+-				break;
+-			yield();
++		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
++		if (unlikely(!skb)) {
++			if (tskb)
++				goto coalesce;
++			return;
+ 		}
+ 		skb_reserve(skb, MAX_TCP_HEADER);
+ 		sk_forced_wmem_schedule(sk, skb->truesize);
+@@ -2801,7 +2809,7 @@ void tcp_send_fin(struct sock *sk)
+ 				     TCPHDR_ACK | TCPHDR_FIN);
+ 		tcp_queue_skb(sk, skb);
+ 	}
+-	__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF);
++	__tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF);
+ }
+ 
+ /* We get here when a process closes a file descriptor (either due to
+-- 
+2.3.6
+
+
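
The rewritten tcp_send_fin() is easiest to read as a small decision table. In the sketch below the bool parameters stand in for the real predicates: have_tail for a non-NULL tskb, tail_unsent for tcp_send_head(sk), mem_pressure for sk_under_memory_pressure(sk), and alloc_ok for a successful alloc_skb_fclone():

#include <stdbool.h>

enum fin_action { FIN_COALESCE, FIN_NEW_SKB, FIN_GIVE_UP };

/* Mirrors the branch structure of tcp_send_fin() after the patch. */
static enum fin_action fin_strategy(bool have_tail, bool tail_unsent,
                                    bool mem_pressure, bool alloc_ok)
{
        if (have_tail && (tail_unsent || mem_pressure))
                return FIN_COALESCE;  /* tack FIN onto the tail skb */
        if (alloc_ok)
                return FIN_NEW_SKB;   /* fresh skb + forced wmem accounting */
        if (have_tail)
                return FIN_COALESCE;  /* allocation failed: coalesce anyway */
        return FIN_GIVE_UP;           /* no skb at all: close without FIN */
}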
+From e591662c1a5fb0e9ee486bf8edbed14d0507cfb4 Mon Sep 17 00:00:00 2001
+From: Eric Dumazet <edumazet@google.com>
+Date: Wed, 22 Apr 2015 07:33:36 -0700
+Subject: [PATCH 006/219] net: do not deplete pfmemalloc reserve
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 79930f5892e134c6da1254389577fffb8bd72c66 ]
+
+build_skb() should look at the page pfmemalloc status.
+If set, this means page allocator allocated this page in the
+expectation it would help to free other pages. Networking
+stack can do that only if skb->pfmemalloc is also set.
+
+Also, we must refrain using high order pages from the pfmemalloc
+reserve, so __page_frag_refill() must also use __GFP_NOMEMALLOC for
+them. Under memory pressure, using order-0 pages is probably the best
+strategy.
+
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/core/skbuff.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 98d45fe..5ec3742 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -311,7 +311,11 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	memset(skb, 0, offsetof(struct sk_buff, tail));
+ 	skb->truesize = SKB_TRUESIZE(size);
+-	skb->head_frag = frag_size != 0;
++	if (frag_size) {
++		skb->head_frag = 1;
++		if (virt_to_head_page(data)->pfmemalloc)
++			skb->pfmemalloc = 1;
++	}
+ 	atomic_set(&skb->users, 1);
+ 	skb->head = data;
+ 	skb->data = data;
+@@ -348,7 +352,8 @@ static struct page *__page_frag_refill(struct netdev_alloc_cache *nc,
+ 	gfp_t gfp = gfp_mask;
+ 
+ 	if (order) {
+-		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
++		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
++			    __GFP_NOMEMALLOC;
+ 		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
+ 		nc->frag.size = PAGE_SIZE << (page ? order : 0);
+ 	}
+-- 
+2.3.6
+
+
+From f009181dcccd55398f872d090fa2e1780b4ca270 Mon Sep 17 00:00:00 2001
+From: Eric Dumazet <edumazet@google.com>
+Date: Fri, 24 Apr 2015 16:05:01 -0700
+Subject: [PATCH 007/219] net: fix crash in build_skb()
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 2ea2f62c8bda242433809c7f4e9eae1c52c40bbe ]
+
+When I added pfmemalloc support in build_skb(), I forgot netlink
+was using build_skb() with a vmalloc() area.
+
+In this patch I introduce __build_skb() for netlink use,
+and build_skb() is a wrapper handling both skb->head_frag and
+skb->pfmemalloc
+
+This means netlink no longer has to hack skb->head_frag
+
+[ 1567.700067] kernel BUG at arch/x86/mm/physaddr.c:26!
+[ 1567.700067] invalid opcode: 0000 [#1] PREEMPT SMP KASAN
+[ 1567.700067] Dumping ftrace buffer:
+[ 1567.700067]    (ftrace buffer empty)
+[ 1567.700067] Modules linked in:
+[ 1567.700067] CPU: 9 PID: 16186 Comm: trinity-c182 Not tainted 4.0.0-next-20150424-sasha-00037-g4796e21 #2167
+[ 1567.700067] task: ffff880127efb000 ti: ffff880246770000 task.ti: ffff880246770000
+[ 1567.700067] RIP: __phys_addr (arch/x86/mm/physaddr.c:26 (discriminator 3))
+[ 1567.700067] RSP: 0018:ffff8802467779d8  EFLAGS: 00010202
+[ 1567.700067] RAX: 000041000ed8e000 RBX: ffffc9008ed8e000 RCX: 000000000000002c
+[ 1567.700067] RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffffb3fd6049
+[ 1567.700067] RBP: ffff8802467779f8 R08: 0000000000000019 R09: ffff8801d0168000
+[ 1567.700067] R10: ffff8801d01680c7 R11: ffffed003a02d019 R12: ffffc9000ed8e000
+[ 1567.700067] R13: 0000000000000f40 R14: 0000000000001180 R15: ffffc9000ed8e000
+[ 1567.700067] FS:  00007f2a7da3f700(0000) GS:ffff8801d1000000(0000) knlGS:0000000000000000
+[ 1567.700067] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 1567.700067] CR2: 0000000000738308 CR3: 000000022e329000 CR4: 00000000000007e0
+[ 1567.700067] Stack:
+[ 1567.700067]  ffffc9000ed8e000 ffff8801d0168000 ffffc9000ed8e000 ffff8801d0168000
+[ 1567.700067]  ffff880246777a28 ffffffffad7c0a21 0000000000001080 ffff880246777c08
+[ 1567.700067]  ffff88060d302e68 ffff880246777b58 ffff880246777b88 ffffffffad9a6821
+[ 1567.700067] Call Trace:
+[ 1567.700067] build_skb (include/linux/mm.h:508 net/core/skbuff.c:316)
+[ 1567.700067] netlink_sendmsg (net/netlink/af_netlink.c:1633 net/netlink/af_netlink.c:2329)
+[ 1567.774369] ? sched_clock_cpu (kernel/sched/clock.c:311)
+[ 1567.774369] ? netlink_unicast (net/netlink/af_netlink.c:2273)
+[ 1567.774369] ? netlink_unicast (net/netlink/af_netlink.c:2273)
+[ 1567.774369] sock_sendmsg (net/socket.c:614 net/socket.c:623)
+[ 1567.774369] sock_write_iter (net/socket.c:823)
+[ 1567.774369] ? sock_sendmsg (net/socket.c:806)
+[ 1567.774369] __vfs_write (fs/read_write.c:479 fs/read_write.c:491)
+[ 1567.774369] ? get_lock_stats (kernel/locking/lockdep.c:249)
+[ 1567.774369] ? default_llseek (fs/read_write.c:487)
+[ 1567.774369] ? vtime_account_user (kernel/sched/cputime.c:701)
+[ 1567.774369] ? rw_verify_area (fs/read_write.c:406 (discriminator 4))
+[ 1567.774369] vfs_write (fs/read_write.c:539)
+[ 1567.774369] SyS_write (fs/read_write.c:586 fs/read_write.c:577)
+[ 1567.774369] ? SyS_read (fs/read_write.c:577)
+[ 1567.774369] ? __this_cpu_preempt_check (lib/smp_processor_id.c:63)
+[ 1567.774369] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2594 kernel/locking/lockdep.c:2636)
+[ 1567.774369] ? trace_hardirqs_on_thunk (arch/x86/lib/thunk_64.S:42)
+[ 1567.774369] system_call_fastpath (arch/x86/kernel/entry_64.S:261)
+
+Fixes: 79930f5892e ("net: do not deplete pfmemalloc reserve")
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Reported-by: Sasha Levin <sasha.levin@oracle.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ include/linux/skbuff.h   |  1 +
+ net/core/skbuff.c        | 31 ++++++++++++++++++++++---------
+ net/netlink/af_netlink.c |  6 ++----
+ 3 files changed, 25 insertions(+), 13 deletions(-)
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index b5c204c..bdccc4b 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -769,6 +769,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+ 
+ struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags,
+ 			    int node);
++struct sk_buff *__build_skb(void *data, unsigned int frag_size);
+ struct sk_buff *build_skb(void *data, unsigned int frag_size);
+ static inline struct sk_buff *alloc_skb(unsigned int size,
+ 					gfp_t priority)
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 5ec3742..e9f9a15 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -280,13 +280,14 @@ nodata:
+ EXPORT_SYMBOL(__alloc_skb);
+ 
+ /**
+- * build_skb - build a network buffer
++ * __build_skb - build a network buffer
+  * @data: data buffer provided by caller
+- * @frag_size: size of fragment, or 0 if head was kmalloced
++ * @frag_size: size of data, or 0 if head was kmalloced
+  *
+  * Allocate a new &sk_buff. Caller provides space holding head and
+  * skb_shared_info. @data must have been allocated by kmalloc() only if
+- * @frag_size is 0, otherwise data should come from the page allocator.
++ * @frag_size is 0, otherwise data should come from the page allocator
++ *  or vmalloc()
+  * The return is the new skb buffer.
+  * On a failure the return is %NULL, and @data is not freed.
+  * Notes :
+@@ -297,7 +298,7 @@ EXPORT_SYMBOL(__alloc_skb);
+  *  before giving packet to stack.
+  *  RX rings only contains data buffers, not full skbs.
+  */
+-struct sk_buff *build_skb(void *data, unsigned int frag_size)
++struct sk_buff *__build_skb(void *data, unsigned int frag_size)
+ {
+ 	struct skb_shared_info *shinfo;
+ 	struct sk_buff *skb;
+@@ -311,11 +312,6 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	memset(skb, 0, offsetof(struct sk_buff, tail));
+ 	skb->truesize = SKB_TRUESIZE(size);
+-	if (frag_size) {
+-		skb->head_frag = 1;
+-		if (virt_to_head_page(data)->pfmemalloc)
+-			skb->pfmemalloc = 1;
+-	}
+ 	atomic_set(&skb->users, 1);
+ 	skb->head = data;
+ 	skb->data = data;
+@@ -332,6 +328,23 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	return skb;
+ }
++
++/* build_skb() is wrapper over __build_skb(), that specifically
++ * takes care of skb->head and skb->pfmemalloc
++ * This means that if @frag_size is not zero, then @data must be backed
++ * by a page fragment, not kmalloc() or vmalloc()
++ */
++struct sk_buff *build_skb(void *data, unsigned int frag_size)
++{
++	struct sk_buff *skb = __build_skb(data, frag_size);
++
++	if (skb && frag_size) {
++		skb->head_frag = 1;
++		if (virt_to_head_page(data)->pfmemalloc)
++			skb->pfmemalloc = 1;
++	}
++	return skb;
++}
+ EXPORT_SYMBOL(build_skb);
+ 
+ struct netdev_alloc_cache {
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 05919bf..d1d7a81 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1616,13 +1616,11 @@ static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
+ 	if (data == NULL)
+ 		return NULL;
+ 
+-	skb = build_skb(data, size);
++	skb = __build_skb(data, size);
+ 	if (skb == NULL)
+ 		vfree(data);
+-	else {
+-		skb->head_frag = 0;
++	else
+ 		skb->destructor = netlink_skb_destructor;
+-	}
+ 
+ 	return skb;
+ }
+-- 
+2.3.6
+
+
+From f80e3eb94b7d4b5b9ebf999da1f50cd5b263a23d Mon Sep 17 00:00:00 2001
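
Usage-wise, the split gives callers a simple rule: page-fragment heads go through build_skb(), which marks head_frag and propagates the page's pfmemalloc bit, while vmalloc()'ed heads must use __build_skb(), because virt_to_head_page() is only meaningful for page-backed memory. Schematic, kernel-style pseudo-code (error paths trimmed, not compilable as-is):

/* Driver RX path: the head came from a page fragment. */
static struct sk_buff *rx_build(unsigned int size)
{
        void *frag = netdev_alloc_frag(size);          /* page fragment */

        return build_skb(frag, size); /* sets head_frag, copies pfmemalloc */
}

/* Netlink-style path: the head came from vmalloc(). */
static struct sk_buff *netlink_build(unsigned int size)
{
        void *data = vmalloc(size);                    /* not page-backed */
        struct sk_buff *skb = __build_skb(data, size); /* no head_frag games */

        if (!skb)
                vfree(data);  /* __build_skb() does not free on failure */
        else
                skb->destructor = netlink_skb_destructor; /* as in the hunk */
        return skb;
}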
+From: Alexey Khoroshilov <khoroshilov@ispras.ru>
+Date: Sat, 25 Apr 2015 04:07:03 +0300
+Subject: [PATCH 008/219] pxa168: fix double deallocation of managed resources
+Cc: mpagano@gentoo.org
+
+[ Upstream commit 0e03fd3e335d272bee88fe733d5fd13f5c5b7140 ]
+
+Commit 43d3ddf87a57 ("net: pxa168_eth: add device tree support") starts
+to use managed resources by adding devm_clk_get() and
+devm_ioremap_resource(), but it leaves explicit iounmap() and clock_put()
+in pxa168_eth_remove() and in failure handling code of pxa168_eth_probe().
+As a result double free can happen.
+
+The patch removes explicit resource deallocation. Also it converts
+clk_disable() to clk_disable_unprepare() to make it symmetrical with
+clk_prepare_enable().
+
+Found by Linux Driver Verification project (linuxtesting.org).
+
+Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/ethernet/marvell/pxa168_eth.c | 16 +++++-----------
+ 1 file changed, 5 insertions(+), 11 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index af829c5..7ace07d 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1508,7 +1508,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 		np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 		if (!np) {
+ 			dev_err(&pdev->dev, "missing phy-handle\n");
+-			return -EINVAL;
++			err = -EINVAL;
++			goto err_netdev;
+ 		}
+ 		of_property_read_u32(np, "reg", &pep->phy_addr);
+ 		pep->phy_intf = of_get_phy_mode(pdev->dev.of_node);
+@@ -1526,7 +1527,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 	pep->smi_bus = mdiobus_alloc();
+ 	if (pep->smi_bus == NULL) {
+ 		err = -ENOMEM;
+-		goto err_base;
++		goto err_netdev;
+ 	}
+ 	pep->smi_bus->priv = pep;
+ 	pep->smi_bus->name = "pxa168_eth smi";
+@@ -1551,13 +1552,10 @@ err_mdiobus:
+ 	mdiobus_unregister(pep->smi_bus);
+ err_free_mdio:
+ 	mdiobus_free(pep->smi_bus);
+-err_base:
+-	iounmap(pep->base);
+ err_netdev:
+ 	free_netdev(dev);
+ err_clk:
+-	clk_disable(clk);
+-	clk_put(clk);
++	clk_disable_unprepare(clk);
+ 	return err;
+ }
+ 
+@@ -1574,13 +1572,9 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	if (pep->phy)
+ 		phy_disconnect(pep->phy);
+ 	if (pep->clk) {
+-		clk_disable(pep->clk);
+-		clk_put(pep->clk);
+-		pep->clk = NULL;
++		clk_disable_unprepare(pep->clk);
+ 	}
+ 
+-	iounmap(pep->base);
+-	pep->base = NULL;
+ 	mdiobus_unregister(pep->smi_bus);
+ 	mdiobus_free(pep->smi_bus);
+ 	unregister_netdev(dev);
+-- 
+2.3.6
+
+
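
The bug class is worth restating: resources obtained through devm_* helpers are released by the driver core when the device is unbound, so pairing them with explicit iounmap()/clk_put() frees them twice. A schematic remove() in the spirit of the patched driver; struct demo_priv and its fields are assumed names, not the pxa168 ones:

static int demo_eth_remove(struct platform_device *pdev)
{
        struct demo_priv *priv = platform_get_drvdata(pdev);

        unregister_netdev(priv->netdev);
        /*
         * base was devm_ioremap_resource()d and clk was devm_clk_get()d
         * in probe: neither iounmap() nor clk_put() belongs here. Only
         * undo what devm does not track, i.e. the prepare/enable.
         */
        clk_disable_unprepare(priv->clk);
        free_netdev(priv->netdev);
        return 0;
}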
+From b32dec8a9f5834b14daaa75bd3e49f3b54272d65 Mon Sep 17 00:00:00 2001
+From: Eric Dumazet <edumazet@google.com>
+Date: Sat, 25 Apr 2015 09:35:24 -0700
+Subject: [PATCH 009/219] net: rfs: fix crash in get_rps_cpus()
+Cc: mpagano@gentoo.org
+
+[ Upstream commit a31196b07f8034eba6a3487a1ad1bb5ec5cd58a5 ]
+
+Commit 567e4b79731c ("net: rfs: add hash collision detection") had one
+mistake :
+
+RPS_NO_CPU is no longer the marker for invalid cpu in set_rps_cpu()
+and get_rps_cpu(), as @next_cpu was the result of an AND with
+rps_cpu_mask
+
+This bug showed up on a host with 72 cpus :
+next_cpu was 0x7f, and the code was trying to access percpu data of an
+non existent cpu.
+
+In a follow up patch, we might get rid of compares against nr_cpu_ids,
+if we init the tables with 0. This is silly to test for a very unlikely
+condition that exists only shortly after table initialization, as
+we got rid of rps_reset_sock_flow() and similar functions that were
+writing this RPS_NO_CPU magic value at flow dismantle : When table is
+old enough, it never contains this value anymore.
+
+Fixes: 567e4b79731c ("net: rfs: add hash collision detection")
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Cc: Tom Herbert <tom@herbertland.com>
+Cc: Ben Hutchings <ben@decadent.org.uk>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ Documentation/networking/scaling.txt |  2 +-
+ net/core/dev.c                       | 12 ++++++------
+ 2 files changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
+index 99ca40e..5c204df 100644
+--- a/Documentation/networking/scaling.txt
++++ b/Documentation/networking/scaling.txt
+@@ -282,7 +282,7 @@ following is true:
+ 
+ - The current CPU's queue head counter >= the recorded tail counter
+   value in rps_dev_flow[i]
+-- The current CPU is unset (equal to RPS_NO_CPU)
++- The current CPU is unset (>= nr_cpu_ids)
+ - The current CPU is offline
+ 
+ After this check, the packet is sent to the (possibly updated) current
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 45109b7..22a53ac 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3041,7 +3041,7 @@ static struct rps_dev_flow *
+ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 	    struct rps_dev_flow *rflow, u16 next_cpu)
+ {
+-	if (next_cpu != RPS_NO_CPU) {
++	if (next_cpu < nr_cpu_ids) {
+ #ifdef CONFIG_RFS_ACCEL
+ 		struct netdev_rx_queue *rxqueue;
+ 		struct rps_dev_flow_table *flow_table;
+@@ -3146,7 +3146,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 * If the desired CPU (where last recvmsg was done) is
+ 		 * different from current CPU (one in the rx-queue flow
+ 		 * table entry), switch if one of the following holds:
+-		 *   - Current CPU is unset (equal to RPS_NO_CPU).
++		 *   - Current CPU is unset (>= nr_cpu_ids).
+ 		 *   - Current CPU is offline.
+ 		 *   - The current CPU's queue tail has advanced beyond the
+ 		 *     last packet that was enqueued using this table entry.
+@@ -3154,14 +3154,14 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 *     have been dequeued, thus preserving in order delivery.
+ 		 */
+ 		if (unlikely(tcpu != next_cpu) &&
+-		    (tcpu == RPS_NO_CPU || !cpu_online(tcpu) ||
++		    (tcpu >= nr_cpu_ids || !cpu_online(tcpu) ||
+ 		     ((int)(per_cpu(softnet_data, tcpu).input_queue_head -
+ 		      rflow->last_qtail)) >= 0)) {
+ 			tcpu = next_cpu;
+ 			rflow = set_rps_cpu(dev, skb, rflow, next_cpu);
+ 		}
+ 
+-		if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) {
++		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
+ 			*rflowp = rflow;
+ 			cpu = tcpu;
+ 			goto done;
+@@ -3202,14 +3202,14 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
+ 	struct rps_dev_flow_table *flow_table;
+ 	struct rps_dev_flow *rflow;
+ 	bool expire = true;
+-	int cpu;
++	unsigned int cpu;
+ 
+ 	rcu_read_lock();
+ 	flow_table = rcu_dereference(rxqueue->rps_flow_table);
+ 	if (flow_table && flow_id <= flow_table->mask) {
+ 		rflow = &flow_table->flows[flow_id];
+ 		cpu = ACCESS_ONCE(rflow->cpu);
+-		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
++		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
+ 		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
+ 			   rflow->last_qtail) <
+ 		     (int)(10 * flow_table->mask)))
+-- 
+2.3.6
+
+
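
The invariant behind the fix: once set_rps_cpu() stores hash & rps_cpu_mask, the RPS_NO_CPU sentinel can never be read back intact, so "invalid" must mean "not a possible cpu number", i.e. cpu >= nr_cpu_ids. A toy reproduction of the reported case (72 cpus, 128-entry flow table); illustrative only:

#include <stdio.h>
#include <stdbool.h>

#define RPS_NO_CPU 0xffffu

static bool cpu_valid(unsigned int cpu, unsigned int nr_cpu_ids)
{
        return cpu < nr_cpu_ids;  /* the patched check */
}

int main(void)
{
        unsigned int rps_cpu_mask = 0x7f; /* 128-entry table */
        unsigned int nr_cpu_ids = 72;     /* the host in the report */
        unsigned int stored = RPS_NO_CPU & rps_cpu_mask; /* 0x7f on disk */

        /* old check: stored != RPS_NO_CPU holds, so 0x7f looked "valid" */
        printf("old check says valid: %d\n", stored != RPS_NO_CPU); /* 1 */
        /* new check: 0x7f >= 72, correctly rejected */
        printf("new check says valid: %d\n",
               cpu_valid(stored, nr_cpu_ids));                      /* 0 */
        return 0;
}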
+From 36fb8ea94764c1435bc5357057373c73f1055be9 Mon Sep 17 00:00:00 2001
+From: Amir Vadai <amirv@mellanox.com>
+Date: Mon, 27 Apr 2015 13:40:56 +0300
+Subject: [PATCH 010/219] net/mlx4_en: Prevent setting invalid RSS hash
+ function
+Cc: mpagano@gentoo.org
+
+[ Upstream commit b37069090b7c5615610a8aa6b36533d67b364d38 ]
+
+mlx4_en_check_rxfh_func() was checking for hardware support before
+setting a known RSS hash function, but didn't do any check before
+setting unknown RSS hash function. Need to make it fail on such values.
+In this occasion, moved the actual setting of the new value from the
+check function into mlx4_en_set_rxfh().
+
+Fixes: 947cbb0 ("net/mlx4_en: Support for configurable RSS hash function")
+Signed-off-by: Amir Vadai <amirv@mellanox.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 29 ++++++++++++++-----------
+ 1 file changed, 16 insertions(+), 13 deletions(-)
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index a7b58ba..3dccf01 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -981,20 +981,21 @@ static int mlx4_en_check_rxfh_func(struct net_device *dev, u8 hfunc)
+ 	struct mlx4_en_priv *priv = netdev_priv(dev);
+ 
+ 	/* check if requested function is supported by the device */
+-	if ((hfunc == ETH_RSS_HASH_TOP &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) ||
+-	    (hfunc == ETH_RSS_HASH_XOR &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)))
+-		return -EINVAL;
++	if (hfunc == ETH_RSS_HASH_TOP) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP))
++			return -EINVAL;
++		if (!(dev->features & NETIF_F_RXHASH))
++			en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
++		return 0;
++	} else if (hfunc == ETH_RSS_HASH_XOR) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))
++			return -EINVAL;
++		if (dev->features & NETIF_F_RXHASH)
++			en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
++		return 0;
++	}
+ 
+-	priv->rss_hash_fn = hfunc;
+-	if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
+-	if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key,
+@@ -1068,6 +1069,8 @@ static int mlx4_en_set_rxfh(struct net_device *dev, const u32 *ring_index,
+ 		priv->prof->rss_rings = rss_rings;
+ 	if (key)
+ 		memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE);
++	if (hfunc !=  ETH_RSS_HASH_NO_CHANGE)
++		priv->rss_hash_fn = hfunc;
+ 
+ 	if (port_up) {
+ 		err = mlx4_en_start_port(dev);
+-- 
+2.3.6
+
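+The shape of the fix, validate in a helper and commit the new value only
+at the caller, can be sketched in plain C (hypothetical names, not the
+mlx4 driver API):
+
+  #include <errno.h>
+  #include <stdio.h>
+
+  enum hash_fn { HASH_NO_CHANGE = 0, HASH_TOP, HASH_XOR };
+
+  /* Validation only: unknown values fall through to -EINVAL instead
+   * of being silently accepted. */
+  static int check_hash_fn(int hfunc, int caps_top, int caps_xor)
+  {
+      if (hfunc == HASH_TOP)
+          return caps_top ? 0 : -EINVAL;
+      if (hfunc == HASH_XOR)
+          return caps_xor ? 0 : -EINVAL;
+      return -EINVAL;
+  }
+
+  int main(void)
+  {
+      int current_fn = HASH_TOP, requested = 42; /* bogus value */
+
+      if (requested != HASH_NO_CHANGE) {
+          if (check_hash_fn(requested, 1, 1))
+              printf("rejected invalid hash function %d\n", requested);
+          else
+              current_fn = requested; /* commit only after the check */
+      }
+      printf("hash function is %d\n", current_fn);
+      return 0;
+  }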
+
+From 8336ee9076303fbdb38e89f18e921ec238d9c48c Mon Sep 17 00:00:00 2001
+From: Gu Zheng <guz.fnst@cn.fujitsu.com>
+Date: Fri, 3 Apr 2015 08:44:47 +0800
+Subject: [PATCH 011/219] md: fix md io stats accounting broken
+Cc: mpagano@gentoo.org
+
+commit 74672d069b298b03e9f657fd70915e055739882e upstream.
+
+Simon reported the md io stats accounting issue:
+"
+I'm seeing "iostat -x -k 1" print this after a RAID1 rebuild on 4.0-rc5.
+It's not abnormal other than it's 3-disk, with one being SSD (sdc) and
+the other two being write-mostly:
+
+Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
+sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
+sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
+sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
+md0               0.00     0.00    0.00    0.00     0.00     0.00     0.00   345.00    0.00    0.00    0.00   0.00 100.00
+md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00 58779.00    0.00    0.00    0.00   0.00 100.00
+md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00    12.00    0.00    0.00    0.00   0.00 100.00
+"
+The cause is that commit 18c0b223cf9901727ef3b02da6711ac930b4e5d4 switched
+the disk stats accounting from the open code to generic_start_io_acct(),
+which also increments .in_flight[rw], a counter that md never decrements.
+So we go back to the open code here to fix it.
+
+Reported-by: Simon Kirby <sim@hostway.ca>
+Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
+Signed-off-by: NeilBrown <neilb@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/md/md.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 717daad..e617878 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -249,6 +249,7 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	const int rw = bio_data_dir(bio);
+ 	struct mddev *mddev = q->queuedata;
+ 	unsigned int sectors;
++	int cpu;
+ 
+ 	if (mddev == NULL || mddev->pers == NULL
+ 	    || !mddev->ready) {
+@@ -284,7 +285,10 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	sectors = bio_sectors(bio);
+ 	mddev->pers->make_request(mddev, bio);
+ 
+-	generic_start_io_acct(rw, sectors, &mddev->gendisk->part0);
++	cpu = part_stat_lock();
++	part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
++	part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
++	part_stat_unlock();
+ 
+ 	if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
+ 		wake_up(&mddev->sb_wait);
+-- 
+2.3.6
+
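+The difference between the two accounting paths is which counters get
+touched; a rough userspace model (simplified, not the kernel's per-CPU
+part_stat machinery):
+
+  #include <stdio.h>
+
+  struct disk_stats { unsigned long ios, sectors, in_flight; };
+
+  /* Model of generic_start_io_acct(): bumps in_flight, which md
+   * never pairs with an end-of-IO decrement. */
+  static void generic_start(struct disk_stats *s, unsigned long sectors)
+  {
+      s->ios++;
+      s->sectors += sectors;
+      s->in_flight++; /* leaks for md: no matching decrement */
+  }
+
+  /* Model of the restored open-coded path: ios and sectors only. */
+  static void md_start(struct disk_stats *s, unsigned long sectors)
+  {
+      s->ios++;
+      s->sectors += sectors;
+  }
+
+  int main(void)
+  {
+      struct disk_stats a = {0}, b = {0};
+
+      generic_start(&a, 8);
+      md_start(&b, 8);
+      /* 1 vs 0: a stuck in_flight is what iostat shows as 100% util */
+      printf("in_flight: generic=%lu open-coded=%lu\n",
+             a.in_flight, b.in_flight);
+      return 0;
+  }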
+
+From bbe33d7992b2dd4a79499aeb384a4597b73451eb Mon Sep 17 00:00:00 2001
+From: Andy Lutomirski <luto@amacapital.net>
+Date: Tue, 27 Jan 2015 16:06:02 -0800
+Subject: [PATCH 012/219] x86/asm/decoder: Fix and enforce max instruction size
+ in the insn decoder
+Cc: mpagano@gentoo.org
+
+commit 91e5ed49fca09c2b83b262b9757d1376ee2b46c3 upstream.
+
+x86 instructions cannot exceed 15 bytes, and the instruction
+decoder should enforce that.  Prior to 6ba48ff46f76, the
+instruction length limit was implicitly set to 16, which was an
+approximation of 15, but there is currently no limit at all.
+
+Fix MAX_INSN_SIZE (it should be 15, not 16), and fix the decoder
+to reject instructions that exceed MAX_INSN_SIZE.
+
+Other than potentially confusing some of the decoder sanity
+checks, I'm not aware of any actual problems that omitting this
+check would cause, nor am I aware of any practical problems
+caused by the MAX_INSN_SIZE error.
+
+Signed-off-by: Andy Lutomirski <luto@amacapital.net>
+Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Fixes: 6ba48ff46f76 ("x86: Remove arbitrary instruction size limit ...
+Link: http://lkml.kernel.org/r/f8f0bc9b8c58cfd6830f7d88400bf1396cbdcd0f.1422403511.git.luto@amacapital.net
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/include/asm/insn.h | 2 +-
+ arch/x86/lib/insn.c         | 7 +++++++
+ 2 files changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
+index 47f29b1..e7814b7 100644
+--- a/arch/x86/include/asm/insn.h
++++ b/arch/x86/include/asm/insn.h
+@@ -69,7 +69,7 @@ struct insn {
+ 	const insn_byte_t *next_byte;
+ };
+ 
+-#define MAX_INSN_SIZE	16
++#define MAX_INSN_SIZE	15
+ 
+ #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
+ #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index 1313ae6..85994f5 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -52,6 +52,13 @@
+  */
+ void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
+ {
++	/*
++	 * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid
++	 * even if the input buffer is long enough to hold them.
++	 */
++	if (buf_len > MAX_INSN_SIZE)
++		buf_len = MAX_INSN_SIZE;
++
+ 	memset(insn, 0, sizeof(*insn));
+ 	insn->kaddr = kaddr;
+ 	insn->end_kaddr = kaddr + buf_len;
+-- 
+2.3.6
+
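+The clamp itself is a one-liner; the same idea as a standalone sketch (a
+decoder must never consume past the architectural 15-byte limit, however
+large the caller's buffer is):
+
+  #include <stdio.h>
+
+  #define MAX_INSN_SIZE 15 /* architectural x86 limit */
+
+  static int decoder_buf_len(int caller_buf_len)
+  {
+      /* Longer inputs are invalid instructions by definition, so
+       * cap what the decoder is allowed to look at. */
+      if (caller_buf_len > MAX_INSN_SIZE)
+          caller_buf_len = MAX_INSN_SIZE;
+      return caller_buf_len;
+  }
+
+  int main(void)
+  {
+      printf("%d %d %d\n", decoder_buf_len(8),
+             decoder_buf_len(15), decoder_buf_len(64)); /* 8 15 15 */
+      return 0;
+  }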
+
+From 3fbb83fdcd2be33c3091f2c1094c37b5054da9f8 Mon Sep 17 00:00:00 2001
+From: Marcelo Tosatti <mtosatti@redhat.com>
+Date: Mon, 23 Mar 2015 20:21:51 -0300
+Subject: [PATCH 013/219] x86: kvm: Revert "remove sched notifier for cross-cpu
+ migrations"
+Cc: mpagano@gentoo.org
+
+commit 0a4e6be9ca17c54817cf814b4b5aa60478c6df27 upstream.
+
+The following point:
+
+    2. per-CPU pvclock time info is updated if the
+       underlying CPU changes.
+
+Is not true anymore since "KVM: x86: update pvclock area conditionally,
+on cpu migration".
+
+Add task migration notification back.
+
+Problem noticed by Andy Lutomirski.
+
+Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/include/asm/pvclock.h |  1 +
+ arch/x86/kernel/pvclock.c      | 44 ++++++++++++++++++++++++++++++++++++++++++
+ arch/x86/vdso/vclock_gettime.c | 16 +++++++--------
+ include/linux/sched.h          |  8 ++++++++
+ kernel/sched/core.c            | 15 ++++++++++++++
+ 5 files changed, 76 insertions(+), 8 deletions(-)
+
+diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
+index d6b078e..25b1cc0 100644
+--- a/arch/x86/include/asm/pvclock.h
++++ b/arch/x86/include/asm/pvclock.h
+@@ -95,6 +95,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
+ 
+ struct pvclock_vsyscall_time_info {
+ 	struct pvclock_vcpu_time_info pvti;
++	u32 migrate_count;
+ } __attribute__((__aligned__(SMP_CACHE_BYTES)));
+ 
+ #define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)
+diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
+index 2f355d2..e5ecd20 100644
+--- a/arch/x86/kernel/pvclock.c
++++ b/arch/x86/kernel/pvclock.c
+@@ -141,7 +141,46 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
+ 	set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
+ }
+ 
++static struct pvclock_vsyscall_time_info *pvclock_vdso_info;
++
++static struct pvclock_vsyscall_time_info *
++pvclock_get_vsyscall_user_time_info(int cpu)
++{
++	if (!pvclock_vdso_info) {
++		BUG();
++		return NULL;
++	}
++
++	return &pvclock_vdso_info[cpu];
++}
++
++struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu)
++{
++	return &pvclock_get_vsyscall_user_time_info(cpu)->pvti;
++}
++
+ #ifdef CONFIG_X86_64
++static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l,
++			        void *v)
++{
++	struct task_migration_notifier *mn = v;
++	struct pvclock_vsyscall_time_info *pvti;
++
++	pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu);
++
++	/* this is NULL when pvclock vsyscall is not initialized */
++	if (unlikely(pvti == NULL))
++		return NOTIFY_DONE;
++
++	pvti->migrate_count++;
++
++	return NOTIFY_DONE;
++}
++
++static struct notifier_block pvclock_migrate = {
++	.notifier_call = pvclock_task_migrate,
++};
++
+ /*
+  * Initialize the generic pvclock vsyscall state.  This will allocate
+  * a/some page(s) for the per-vcpu pvclock information, set up a
+@@ -155,12 +194,17 @@ int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
+ 
+ 	WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE);
+ 
++	pvclock_vdso_info = i;
++
+ 	for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) {
+ 		__set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
+ 			     __pa(i) + (idx*PAGE_SIZE),
+ 			     PAGE_KERNEL_VVAR);
+ 	}
+ 
++
++	register_task_migration_notifier(&pvclock_migrate);
++
+ 	return 0;
+ }
+ #endif
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 9793322..3093376 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -82,18 +82,15 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 	cycle_t ret;
+ 	u64 last;
+ 	u32 version;
++	u32 migrate_count;
+ 	u8 flags;
+ 	unsigned cpu, cpu1;
+ 
+ 
+ 	/*
+-	 * Note: hypervisor must guarantee that:
+-	 * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
+-	 * 2. that per-CPU pvclock time info is updated if the
+-	 *    underlying CPU changes.
+-	 * 3. that version is increased whenever underlying CPU
+-	 *    changes.
+-	 *
++	 * When looping to get a consistent (time-info, tsc) pair, we
++	 * also need to deal with the possibility we can switch vcpus,
++	 * so make sure we always re-fetch time-info for the current vcpu.
+ 	 */
+ 	do {
+ 		cpu = __getcpu() & VGETCPU_CPU_MASK;
+@@ -104,6 +101,8 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 
+ 		pvti = get_pvti(cpu);
+ 
++		migrate_count = pvti->migrate_count;
++
+ 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
+ 
+ 		/*
+@@ -115,7 +114,8 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
+ 	} while (unlikely(cpu != cpu1 ||
+ 			  (pvti->pvti.version & 1) ||
+-			  pvti->pvti.version != version));
++			  pvti->pvti.version != version ||
++			  pvti->migrate_count != migrate_count));
+ 
+ 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+ 		*mode = VCLOCK_NONE;
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a419b65..51348f7 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -176,6 +176,14 @@ extern void get_iowait_load(unsigned long *nr_waiters, unsigned long *load);
+ extern void calc_global_load(unsigned long ticks);
+ extern void update_cpu_load_nohz(void);
+ 
++/* Notifier for when a task gets migrated to a new CPU */
++struct task_migration_notifier {
++	struct task_struct *task;
++	int from_cpu;
++	int to_cpu;
++};
++extern void register_task_migration_notifier(struct notifier_block *n);
++
+ extern unsigned long get_parent_ip(unsigned long addr);
+ 
+ extern void dump_cpu_task(int cpu);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 62671f5..3d5f6f6 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -996,6 +996,13 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
+ 		rq_clock_skip_update(rq, true);
+ }
+ 
++static ATOMIC_NOTIFIER_HEAD(task_migration_notifier);
++
++void register_task_migration_notifier(struct notifier_block *n)
++{
++	atomic_notifier_chain_register(&task_migration_notifier, n);
++}
++
+ #ifdef CONFIG_SMP
+ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ {
+@@ -1026,10 +1033,18 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ 	trace_sched_migrate_task(p, new_cpu);
+ 
+ 	if (task_cpu(p) != new_cpu) {
++		struct task_migration_notifier tmn;
++
+ 		if (p->sched_class->migrate_task_rq)
+ 			p->sched_class->migrate_task_rq(p, new_cpu);
+ 		p->se.nr_migrations++;
+ 		perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
++
++		tmn.task = p;
++		tmn.from_cpu = task_cpu(p);
++		tmn.to_cpu = new_cpu;
++
++		atomic_notifier_call_chain(&task_migration_notifier, 0, &tmn);
+ 	}
+ 
+ 	__set_task_cpu(p, new_cpu);
+-- 
+2.3.6
+
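+For readers unfamiliar with notifier chains, the pattern being restored
+can be modeled in userspace (a fixed-size chain and illustrative names,
+not the kernel's atomic notifier API):
+
+  #include <stdio.h>
+
+  struct migration_event { int from_cpu, to_cpu; };
+
+  typedef void (*notifier_fn)(const struct migration_event *);
+
+  static notifier_fn chain[4];
+  static int nchain;
+
+  static void register_notifier(notifier_fn fn) { chain[nchain++] = fn; }
+
+  /* The scheduler side: fire every callback on a CPU change. */
+  static void migrate_task(int from, int to)
+  {
+      struct migration_event ev = { .from_cpu = from, .to_cpu = to };
+      int i;
+
+      for (i = 0; i < nchain; i++)
+          chain[i](&ev);
+  }
+
+  /* The pvclock side: bump migrate_count for the source CPU's pvti. */
+  static void pvclock_cb(const struct migration_event *ev)
+  {
+      printf("bump migrate_count for CPU %d\n", ev->from_cpu);
+  }
+
+  int main(void)
+  {
+      register_notifier(pvclock_cb);
+      migrate_task(1, 5);
+      return 0;
+  }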
+
+From 82a7e6737ca5b18841f7130821dbec007d736b0b Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= <rkrcmar@redhat.com>
+Date: Thu, 2 Apr 2015 20:44:23 +0200
+Subject: [PATCH 014/219] x86: vdso: fix pvclock races with task migration
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 80f7fdb1c7f0f9266421f823964fd1962681f6ce upstream.
+
+If we were migrated right after __getcpu, but before reading the
+migration_count, we wouldn't notice that we read TSC of a different
+VCPU, nor that KVM's bug made pvti invalid, as only migration_count
+on source VCPU is increased.
+
+Change vdso instead of updating migration_count on destination.
+
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+Fixes: 0a4e6be9ca17 ("x86: kvm: Revert "remove sched notifier for cross-cpu migrations"")
+Message-Id: <1428000263-11892-1-git-send-email-rkrcmar@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/vdso/vclock_gettime.c | 20 ++++++++++++--------
+ 1 file changed, 12 insertions(+), 8 deletions(-)
+
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 3093376..40d2473 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -99,21 +99,25 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 		 * __getcpu() calls (Gleb).
+ 		 */
+ 
+-		pvti = get_pvti(cpu);
++		/* Make sure migrate_count will change if we leave the VCPU. */
++		do {
++			pvti = get_pvti(cpu);
++			migrate_count = pvti->migrate_count;
+ 
+-		migrate_count = pvti->migrate_count;
++			cpu1 = cpu;
++			cpu = __getcpu() & VGETCPU_CPU_MASK;
++		} while (unlikely(cpu != cpu1));
+ 
+ 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
+ 
+ 		/*
+ 		 * Test we're still on the cpu as well as the version.
+-		 * We could have been migrated just after the first
+-		 * vgetcpu but before fetching the version, so we
+-		 * wouldn't notice a version change.
++		 * - We must read TSC of pvti's VCPU.
++		 * - KVM doesn't follow the versioning protocol, so data could
++		 *   change before version if we left the VCPU.
+ 		 */
+-		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
+-	} while (unlikely(cpu != cpu1 ||
+-			  (pvti->pvti.version & 1) ||
++		smp_rmb();
++	} while (unlikely((pvti->pvti.version & 1) ||
+ 			  pvti->pvti.version != version ||
+ 			  pvti->migrate_count != migrate_count));
+ 
+-- 
+2.3.6
+
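+The fixed reader is a nested retry: the inner loop pins down a matching
+(cpu, migrate_count) pair, the outer loop retries if the version or the
+migration count moved. A userspace analogue with plain counters (it
+omits the real barriers and uses stand-in variables):
+
+  #include <stdio.h>
+
+  static volatile unsigned int version = 4; /* even = stable */
+  static volatile unsigned int migrate_count;
+  static volatile unsigned int cur_cpu = 2;
+
+  static unsigned int read_sample(void)
+  {
+      unsigned int cpu, cpu1, v, m, val;
+
+      do {
+          cpu = cur_cpu;
+          /* Inner loop: migrate_count must belong to the CPU we
+           * are actually on when we read it. */
+          do {
+              m = migrate_count;
+              cpu1 = cpu;
+              cpu = cur_cpu;
+          } while (cpu != cpu1);
+
+          v = version;
+          val = cpu * 1000 + v; /* stand-in for the pvclock read */
+      } while ((version & 1) || version != v || migrate_count != m);
+      return val;
+  }
+
+  int main(void)
+  {
+      printf("sample=%u\n", read_sample());
+      return 0;
+  }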
+
+From 0e625b6df5ac57968c7ab197e916ea03f70e4a24 Mon Sep 17 00:00:00 2001
+From: Len Brown <len.brown@intel.com>
+Date: Wed, 15 Jan 2014 00:37:34 -0500
+Subject: [PATCH 015/219] sched/idle/x86: Restore mwait_idle() to fix boot
+ hangs, to improve power savings and to improve performance
+Cc: mpagano@gentoo.org
+
+commit b253149b843f89cd300cbdbea27ce1f847506f99 upstream.
+
+In Linux-3.9 we removed the mwait_idle() loop:
+
+  69fb3676df33 ("x86 idle: remove mwait_idle() and "idle=mwait" cmdline param")
+
+The reasoning was that modern machines should be sufficiently
+happy during the boot process using the default_idle() HALT
+loop, until cpuidle loads and either acpi_idle or intel_idle
+invoke the newer MWAIT-with-hints idle loop.
+
+But two machines reported problems:
+
+ 1. Certain Core2-era machines support MWAIT-C1 and HALT only.
+    MWAIT-C1 is preferred for optimal power and performance.
+    But if they support just C1, cpuidle never loads and
+    so they use the boot-time default idle loop forever.
+
+ 2. Some laptops will boot-hang if HALT is used,
+    but will boot successfully if MWAIT is used.
+    This appears to be a hidden assumption in BIOS SMI,
+    that is presumably valid on the proprietary OS
+    where the BIOS was validated.
+
+       https://bugzilla.kernel.org/show_bug.cgi?id=60770
+
+So here we effectively revert the patch above, restoring
+the mwait_idle() loop.  However, we don't bother restoring
+the idle=mwait cmdline parameter, since it appears to add
+no value.
+
+Maintainer notes:
+
+  For 3.9, simply revert 69fb3676df
+  For 3.10, patch -F3 applies, fuzz needed due to __cpuinit use in context
+  For 3.11, 3.12, 3.13, this patch applies cleanly
+
+Tested-by: Mike Galbraith <bitbucket@online.de>
+Signed-off-by: Len Brown <len.brown@intel.com>
+Acked-by: Mike Galbraith <bitbucket@online.de>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: H. Peter Anvin <hpa@zytor.com>
+Cc: Ian Malone <ibmalone@gmail.com>
+Cc: Josh Boyer <jwboyer@redhat.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Mike Galbraith <efault@gmx.de>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Link: http://lkml.kernel.org/r/345254a551eb5a6a866e048d7ab570fd2193aca4.1389763084.git.len.brown@intel.com
+[ Ported to recent kernels. ]
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/include/asm/mwait.h |  8 ++++++++
+ arch/x86/kernel/process.c    | 47 ++++++++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 55 insertions(+)
+
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index a1410db..653dfa7 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -30,6 +30,14 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ 		     :: "a" (eax), "c" (ecx));
+ }
+ 
++static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
++{
++	trace_hardirqs_on();
++	/* "mwait %eax, %ecx;" */
++	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
++		     :: "a" (eax), "c" (ecx));
++}
++
+ /*
+  * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
+  * which can obviate IPI to trigger checking of need_resched.
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 046e2d6..65e1a90 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -24,6 +24,7 @@
+ #include <asm/syscalls.h>
+ #include <asm/idle.h>
+ #include <asm/uaccess.h>
++#include <asm/mwait.h>
+ #include <asm/i387.h>
+ #include <asm/fpu-internal.h>
+ #include <asm/debugreg.h>
+@@ -399,6 +400,49 @@ static void amd_e400_idle(void)
+ 		default_idle();
+ }
+ 
++/*
++ * Intel Core2 and older machines prefer MWAIT over HALT for C1.
++ * We can't rely on cpuidle installing MWAIT, because it will not load
++ * on systems that support only C1 -- so the boot default must be MWAIT.
++ *
++ * Some AMD machines are the opposite, they depend on using HALT.
++ *
++ * So for default C1, which is used during boot until cpuidle loads,
++ * use MWAIT-C1 on Intel HW that has it, else use HALT.
++ */
++static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
++{
++	if (c->x86_vendor != X86_VENDOR_INTEL)
++		return 0;
++
++	if (!cpu_has(c, X86_FEATURE_MWAIT))
++		return 0;
++
++	return 1;
++}
++
++/*
++ * MONITOR/MWAIT with no hints, used for the default C1 state.
++ * This invokes MWAIT with interrupts enabled and no flags,
++ * which is backwards compatible with the original MWAIT implementation.
++ */
++
++static void mwait_idle(void)
++{
++	if (!need_resched()) {
++		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR))
++			clflush((void *)&current_thread_info()->flags);
++
++		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		smp_mb();
++		if (!need_resched())
++			__sti_mwait(0, 0);
++		else
++			local_irq_enable();
++	} else
++		local_irq_enable();
++}
++
+ void select_idle_routine(const struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+@@ -412,6 +456,9 @@ void select_idle_routine(const struct cpuinfo_x86 *c)
+ 		/* E400: APIC timer interrupt does not wake up CPU from C1e */
+ 		pr_info("using AMD E400 aware idle routine\n");
+ 		x86_idle = amd_e400_idle;
++	} else if (prefer_mwait_c1_over_halt(c)) {
++		pr_info("using mwait in idle threads\n");
++		x86_idle = mwait_idle;
+ 	} else
+ 		x86_idle = default_idle;
+ }
+-- 
+2.3.6
+
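+The selection logic boils down to a small decision table; as a plain C
+sketch (the vendor and feature flags are stand-ins for cpuinfo_x86
+fields):
+
+  #include <stdio.h>
+
+  enum idle { IDLE_HALT, IDLE_MWAIT, IDLE_AMD_E400 };
+
+  struct cpu { int intel; int has_mwait; int amd_e400_bug; };
+
+  static enum idle pick_idle(const struct cpu *c)
+  {
+      if (c->amd_e400_bug)
+          return IDLE_AMD_E400;
+      /* Boot-time default: MWAIT-C1 on Intel HW that has it, else
+       * HALT, since cpuidle never loads on C1-only systems. */
+      if (c->intel && c->has_mwait)
+          return IDLE_MWAIT;
+      return IDLE_HALT;
+  }
+
+  int main(void)
+  {
+      struct cpu core2 = { .intel = 1, .has_mwait = 1 };
+      struct cpu old_amd = { .amd_e400_bug = 1 };
+
+      printf("%d %d\n", pick_idle(&core2), pick_idle(&old_amd)); /* 1 2 */
+      return 0;
+  }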
+
+From aaa51337c5819599af0d1f6aba6a31639dd1c0a6 Mon Sep 17 00:00:00 2001
+From: Mike Galbraith <bitbucket@online.de>
+Date: Sat, 18 Jan 2014 17:14:44 +0100
+Subject: [PATCH 016/219] sched/idle/x86: Optimize unnecessary mwait_idle()
+ resched IPIs
+Cc: mpagano@gentoo.org
+
+commit f8e617f4582995f7c25ef25b4167213120ad122b upstream.
+
+To fully take advantage of MWAIT, apparently the CLFLUSH instruction needs
+another quirk on certain CPUs: proper barriers around it on certain machines.
+
+On a Q6600 SMP system, pipe-test scheduling performance, cross core,
+improves significantly:
+
+  3.8.13                   487.2 KHz    1.000
+  3.13.0-master            415.5 KHz     .852
+  3.13.0-master+           415.2 KHz     .852     + restore mwait_idle
+  3.13.0-master++          488.5 KHz    1.002     + restore mwait_idle + IPI fix
+
+Since X86_BUG_CLFLUSH_MONITOR is already a quirk, don't create a separate
+quirk for the extra smp_mb()s.
+
+Signed-off-by: Mike Galbraith <bitbucket@online.de>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: H. Peter Anvin <hpa@zytor.com>
+Cc: Ian Malone <ibmalone@gmail.com>
+Cc: Josh Boyer <jwboyer@redhat.com>
+Cc: Len Brown <len.brown@intel.com>
+Cc: Len Brown <lenb@kernel.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Mike Galbraith <efault@gmx.de>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Link: http://lkml.kernel.org/r/1390061684.5566.4.camel@marge.simpson.net
+[ Ported to recent kernel, added comments about the quirk. ]
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/kernel/process.c | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 65e1a90..a388bb8 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -429,18 +429,22 @@ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
+ 
+ static void mwait_idle(void)
+ {
+-	if (!need_resched()) {
+-		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR))
++	if (!current_set_polling_and_test()) {
++		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
++			smp_mb(); /* quirk */
+ 			clflush((void *)&current_thread_info()->flags);
++			smp_mb(); /* quirk */
++		}
+ 
+ 		__monitor((void *)&current_thread_info()->flags, 0, 0);
+-		smp_mb();
+ 		if (!need_resched())
+ 			__sti_mwait(0, 0);
+ 		else
+ 			local_irq_enable();
+-	} else
++	} else {
+ 		local_irq_enable();
++	}
++	__current_clr_polling();
+ }
+ 
+ void select_idle_routine(const struct cpuinfo_x86 *c)
+-- 
+2.3.6
+
+
+From 6e4dd840cca3053125c3f55650df1a9313b91615 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Sat, 11 Apr 2015 12:16:22 +0200
+Subject: [PATCH 017/219] perf/x86/intel: Fix Core2,Atom,NHM,WSM cycles:pp
+ events
+Cc: mpagano@gentoo.org
+
+commit 517e6341fa123ec3a2f9ea78ad547be910529881 upstream.
+
+Ingo reported that cycles:pp didn't work for him on some machines.
+
+It turns out that in this commit:
+
+  af4bdcf675cf perf/x86/intel: Disallow flags for most Core2/Atom/Nehalem/Westmere events
+
+Andi forgot to explicitly allow that event when he
+disabled event flags for PEBS on those uarchs.
+
+Reported-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
+Cc: Jiri Olsa <jolsa@redhat.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Fixes: af4bdcf675cf ("perf/x86/intel: Disallow flags for most Core2/Atom/Nehalem/Westmere events")
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/kernel/cpu/perf_event_intel_ds.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+index 0739833..666bcf1 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+@@ -557,6 +557,8 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -564,6 +566,8 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c0, 0x1), /* INST_RETIRED.ANY */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -587,6 +591,8 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -602,6 +608,8 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+-- 
+2.3.6
+
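+The magic constant 0x108000c0 decodes per the architectural PERFEVTSEL
+layout (event select in bits 0-7, INV in bit 23, CMASK in bits 24-31);
+a quick C check:
+
+  #include <stdio.h>
+
+  int main(void)
+  {
+      unsigned int raw = 0x108000c0;
+
+      unsigned int event = raw & 0xff;         /* 0xc0 = INST_RETIRED.ANY_P */
+      unsigned int inv   = (raw >> 23) & 1;    /* invert counter mask */
+      unsigned int cmask = (raw >> 24) & 0xff; /* 16 */
+
+      /* event=0xc0, inv=1, cmask=16 is how "cycles:p" is requested. */
+      printf("event=0x%x inv=%u cmask=%u\n", event, inv, cmask);
+      return 0;
+  }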
+
+From 5c966c4f563f8b10e276e43579c0f27ea2a3cef2 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Thu, 23 Apr 2015 08:33:59 -0700
+Subject: [PATCH 018/219] x86: fix special __probe_kernel_write() tail zeroing
+ case
+Cc: mpagano@gentoo.org
+
+commit d869844bd081081bf537e806a44811884230643e upstream.
+
+Commit cae2a173fe94 ("x86: clean up/fix 'copy_in_user()' tail zeroing")
+fixed the failure case tail zeroing of one special case of the x86-64
+generic user-copy routine, namely when used for the user-to-user case
+("copy_in_user()").
+
+But in the process it broke an even more unusual case: using the user
+copy routine for kernel-to-kernel copying.
+
+Now, normally kernel-kernel copies are obviously done using memcpy(),
+but we have a couple of special cases when we use the user-copy
+functions.  One is when we pass a kernel buffer to a regular user-buffer
+routine, using set_fs(KERNEL_DS).  That's a "normal" case, and continued
+to work fine, because it never takes any faults (with the possible
+exception of a silent and successful vmalloc fault).
+
+But Jan Beulich pointed out another, very unusual, special case: when we
+use the user-copy routines not because it's a path that expects a user
+pointer, but for a couple of ftrace/kgdb cases that want to do a kernel
+copy, but do so using "unsafe" buffers, and use the user-copy routine to
+gracefully handle faults.  IOW, for probe_kernel_write().
+
+And that broke for the case of a faulting kernel destination, because we
+saw the kernel destination and wanted to try to clear the tail of the
+buffer.  Which doesn't work, since that's what faults.
+
+This only triggers for things like kgdb and ftrace users (e.g. trying
+to set a breakpoint on read-only memory), but it's definitely a bug.
+The fix is to not compare against the kernel address start (TASK_SIZE),
+but instead use the same limits "access_ok()" uses.
+
+Reported-and-tested-by: Jan Beulich <jbeulich@suse.com>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/lib/usercopy_64.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 1f33b3d..0a42327 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -82,7 +82,7 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
+ 	clac();
+ 
+ 	/* If the destination is a kernel buffer, we always clear the end */
+-	if ((unsigned long)to >= TASK_SIZE_MAX)
++	if (!__addr_ok(to))
+ 		memset(to, 0, len);
+ 	return len;
+ }
+-- 
+2.3.6
+
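+The fix swaps a "looks like a kernel address" test for the same limit
+check access_ok() uses; the distinction can be modeled in userspace
+(the address and the limit below are made-up stand-ins for the
+thread's addr_limit):
+
+  #include <stdio.h>
+
+  /* Under set_fs(KERNEL_DS) the limit covers kernel addresses too. */
+  static unsigned long long addr_limit = 0xffffffffffffffffULL;
+
+  static int addr_ok(unsigned long long to)
+  {
+      return to <= addr_limit;
+  }
+
+  int main(void)
+  {
+      unsigned long long kernel_dst = 0xffffffff81000000ULL;
+
+      if (!addr_ok(kernel_dst))
+          printf("outside the limit: clear the tail\n");
+      else
+          printf("within the limit: skip the memset "
+                 "(the probe_kernel_write() case)\n");
+      return 0;
+  }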
+
+From 47b34f8519e8a009d3ba8506ea8c5e7fe4314a6d Mon Sep 17 00:00:00 2001
+From: Nadav Amit <namit@cs.technion.ac.il>
+Date: Sun, 12 Apr 2015 21:47:15 +0300
+Subject: [PATCH 019/219] KVM: x86: Fix MSR_IA32_BNDCFGS in msrs_to_save
+Cc: mpagano@gentoo.org
+
+commit 9e9c3fe40bcd28e3f98f0ad8408435f4503f2781 upstream.
+
+kvm_init_msr_list is currently called before hardware_setup. As a result,
+vmx_mpx_supported always returns false when kvm_init_msr_list checks whether to
+save MSR_IA32_BNDCFGS.
+
+Move kvm_init_msr_list after vmx_hardware_setup is called to fix this issue.
+
+Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Message-Id: <1428864435-4732-1-git-send-email-namit@cs.technion.ac.il>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/kvm/x86.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 32bf19e..e222ba5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5775,7 +5775,6 @@ int kvm_arch_init(void *opaque)
+ 	kvm_set_mmio_spte_mask();
+ 
+ 	kvm_x86_ops = ops;
+-	kvm_init_msr_list();
+ 
+ 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
+ 			PT_DIRTY_MASK, PT64_NX_MASK, 0);
+@@ -7209,7 +7208,14 @@ void kvm_arch_hardware_disable(void)
+ 
+ int kvm_arch_hardware_setup(void)
+ {
+-	return kvm_x86_ops->hardware_setup();
++	int r;
++
++	r = kvm_x86_ops->hardware_setup();
++	if (r != 0)
++		return r;
++
++	kvm_init_msr_list();
++	return 0;
+ }
+ 
+ void kvm_arch_hardware_unsetup(void)
+-- 
+2.3.6
+
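+The ordering bug can be reduced to a two-line init model (hypothetical
+names; hw_ready stands in for whatever vmx_hardware_setup() computes):
+
+  #include <stdio.h>
+
+  static int hw_ready;
+
+  static int mpx_supported(void) { return hw_ready; }
+
+  /* Building the MSR list consults a capability that is only valid
+   * after hardware setup has run. */
+  static void build_msr_list(void)
+  {
+      printf("save BNDCFGS: %s\n", mpx_supported() ? "yes" : "no");
+  }
+
+  int main(void)
+  {
+      /* Before the fix, build_msr_list() ran here and printed "no". */
+      hw_ready = 1;      /* hardware_setup() */
+      build_msr_list();  /* now sees the real capability */
+      return 0;
+  }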
+
+From 7362dcdba904cf6a1c3791c253f25f85390d45c0 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Mon, 23 Mar 2015 14:07:40 +0000
+Subject: [PATCH 020/219] Btrfs: fix log tree corruption when fs mounted with
+ -o discard
+Cc: mpagano@gentoo.org
+
+commit dcc82f4783ad91d4ab654f89f37ae9291cdc846a upstream.
+
+While committing a transaction we free the log roots before we write the
+new super block. Freeing the log roots implies marking the disk location
+of every node/leaf (metadata extent) as pinned before the new super block
+is written. This is to prevent the disk location of log metadata extents
+from being reused before the new super block is written, otherwise we
+would have a corrupted log tree if before the new super block is written
+a crash/reboot happens and the location of any log tree metadata extent
+ended up being reused and rewritten.
+
+Even though we pinned the log tree's metadata extents, we were issuing a
+discard against them if the fs was mounted with the -o discard option,
+resulting in corruption of the log tree if a crash/reboot happened before
+writing the new super block - the next time the fs was mounted, during
+the log replay process we would find nodes/leaves of the log btree
+whose content is full of zeroes, causing the process to fail and
+requiring the btrfs-zero-log tool to wipe out the log tree (with all
+previously fsynced data lost forever).
+
+Fix this by not doing a discard when pinning an extent. The discard will
+be done later when it's safe (after the new super block is committed) at
+extent-tree.c:btrfs_finish_extent_commit().
+
+Fixes: e688b7252f78 (Btrfs: fix extent pinning bugs in the tree log)
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: Chris Mason <clm@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/btrfs/extent-tree.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b353ad..0a795c9 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6956,12 +6956,11 @@ static int __btrfs_free_reserved_extent(struct btrfs_root *root,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (btrfs_test_opt(root, DISCARD))
+-		ret = btrfs_discard_extent(root, start, len, NULL);
+-
+ 	if (pin)
+ 		pin_down_extent(root, cache, start, len, 1);
+ 	else {
++		if (btrfs_test_opt(root, DISCARD))
++			ret = btrfs_discard_extent(root, start, len, NULL);
+ 		btrfs_add_free_space(cache, start, len);
+ 		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
+ 	}
+-- 
+2.3.6
+
+
+From 1f6719c298def2c3440dc5e9ca9532053877fff7 Mon Sep 17 00:00:00 2001
+From: David Sterba <dsterba@suse.cz>
+Date: Wed, 25 Mar 2015 19:26:41 +0100
+Subject: [PATCH 021/219] btrfs: don't accept bare namespace as a valid xattr
+Cc: mpagano@gentoo.org
+
+commit 3c3b04d10ff1811a27f86684ccd2f5ba6983211d upstream.
+
+Due to an insufficient check in btrfs_is_valid_xattr, this
+unexpectedly works:
+
+ $ touch file
+ $ setfattr -n user. -v 1 file
+ $ getfattr -d file
+user.="1"
+
+ie. the missing attribute name after the namespace.
+
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=94291
+Reported-by: William Douglas <william.douglas@intel.com>
+Signed-off-by: David Sterba <dsterba@suse.cz>
+Signed-off-by: Chris Mason <clm@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/btrfs/xattr.c | 53 +++++++++++++++++++++++++++++++++++++++--------------
+ 1 file changed, 39 insertions(+), 14 deletions(-)
+
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 883b936..45ea704 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -364,22 +364,42 @@ const struct xattr_handler *btrfs_xattr_handlers[] = {
+ /*
+  * Check if the attribute is in a supported namespace.
+  *
+- * This applied after the check for the synthetic attributes in the system
++ * This is applied after the check for the synthetic attributes in the system
+  * namespace.
+  */
+-static bool btrfs_is_valid_xattr(const char *name)
++static int btrfs_is_valid_xattr(const char *name)
+ {
+-	return !strncmp(name, XATTR_SECURITY_PREFIX,
+-			XATTR_SECURITY_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) ||
+-		!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN);
++	int len = strlen(name);
++	int prefixlen = 0;
++
++	if (!strncmp(name, XATTR_SECURITY_PREFIX,
++			XATTR_SECURITY_PREFIX_LEN))
++		prefixlen = XATTR_SECURITY_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
++		prefixlen = XATTR_SYSTEM_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
++		prefixlen = XATTR_TRUSTED_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
++		prefixlen = XATTR_USER_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
++		prefixlen = XATTR_BTRFS_PREFIX_LEN;
++	else
++		return -EOPNOTSUPP;
++
++	/*
++	 * The name cannot consist of just prefix
++	 */
++	if (len <= prefixlen)
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 		       void *buffer, size_t size)
+ {
++	int ret;
++
+ 	/*
+ 	 * If this is a request for a synthetic attribute in the system.*
+ 	 * namespace use the generic infrastructure to resolve a handler
+@@ -388,8 +408,9 @@ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_getxattr(dentry, name, buffer, size);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 	return __btrfs_getxattr(dentry->d_inode, name, buffer, size);
+ }
+ 
+@@ -397,6 +418,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 		   size_t size, int flags)
+ {
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
++	int ret;
+ 
+ 	/*
+ 	 * The permission on security.* and system.* is not checked
+@@ -413,8 +435,9 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_setxattr(dentry, name, value, size, flags);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
+@@ -430,6 +453,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ int btrfs_removexattr(struct dentry *dentry, const char *name)
+ {
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
++	int ret;
+ 
+ 	/*
+ 	 * The permission on security.* and system.* is not checked
+@@ -446,8 +470,9 @@ int btrfs_removexattr(struct dentry *dentry, const char *name)
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_removexattr(dentry, name);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
+-- 
+2.3.6
+
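+The new check is a standard prefix-plus-suffix validation; a standalone
+C version of the same rule (prefix table abbreviated):
+
+  #include <errno.h>
+  #include <stdio.h>
+  #include <string.h>
+
+  static int is_valid_xattr(const char *name)
+  {
+      static const char *prefixes[] = { "user.", "trusted.",
+                                        "security.", "system.", "btrfs." };
+      size_t i;
+
+      for (i = 0; i < sizeof(prefixes) / sizeof(prefixes[0]); i++) {
+          size_t plen = strlen(prefixes[i]);
+
+          if (strncmp(name, prefixes[i], plen))
+              continue;
+          /* A bare namespace like "user." has no attribute name. */
+          return strlen(name) > plen ? 0 : -EINVAL;
+      }
+      return -EOPNOTSUPP;
+  }
+
+  int main(void)
+  {
+      printf("%d %d %d\n",
+             is_valid_xattr("user.foo"),   /* 0 */
+             is_valid_xattr("user."),      /* -EINVAL */
+             is_valid_xattr("weird.foo")); /* -EOPNOTSUPP */
+      return 0;
+  }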
+
+From 9301d5068d8732a0f2d787240270a1426d09ecf5 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Mon, 30 Mar 2015 18:23:59 +0100
+Subject: [PATCH 022/219] Btrfs: fix inode eviction infinite loop after cloning
+ into it
+Cc: mpagano@gentoo.org
+
+commit ccccf3d67294714af2d72a6fd6fd7d73b01c9329 upstream.
+
+If we attempt to clone a 0 length region into a file we can end up
+inserting a range in the inode's extent_io tree with a start offset
+that is greater then the end offset, which triggers immediately the
+following warning:
+
+[ 3914.619057] WARNING: CPU: 17 PID: 4199 at fs/btrfs/extent_io.c:435 insert_state+0x4b/0x10b [btrfs]()
+[ 3914.620886] BTRFS: end < start 4095 4096
+(...)
+[ 3914.638093] Call Trace:
+[ 3914.638636]  [<ffffffff81425fd9>] dump_stack+0x4c/0x65
+[ 3914.639620]  [<ffffffff81045390>] warn_slowpath_common+0xa1/0xbb
+[ 3914.640789]  [<ffffffffa03ca44f>] ? insert_state+0x4b/0x10b [btrfs]
+[ 3914.642041]  [<ffffffff810453f0>] warn_slowpath_fmt+0x46/0x48
+[ 3914.643236]  [<ffffffffa03ca44f>] insert_state+0x4b/0x10b [btrfs]
+[ 3914.644441]  [<ffffffffa03ca729>] __set_extent_bit+0x107/0x3f4 [btrfs]
+[ 3914.645711]  [<ffffffffa03cb256>] lock_extent_bits+0x65/0x1bf [btrfs]
+[ 3914.646914]  [<ffffffff8142b2fb>] ? _raw_spin_unlock+0x28/0x33
+[ 3914.648058]  [<ffffffffa03cbac4>] ? test_range_bit+0xcc/0xde [btrfs]
+[ 3914.650105]  [<ffffffffa03cb3c3>] lock_extent+0x13/0x15 [btrfs]
+[ 3914.651361]  [<ffffffffa03db39e>] lock_extent_range+0x3d/0xcd [btrfs]
+[ 3914.652761]  [<ffffffffa03de1fe>] btrfs_ioctl_clone+0x278/0x388 [btrfs]
+[ 3914.654128]  [<ffffffff811226dd>] ? might_fault+0x58/0xb5
+[ 3914.655320]  [<ffffffffa03e0909>] btrfs_ioctl+0xb51/0x2195 [btrfs]
+(...)
+[ 3914.669271] ---[ end trace 14843d3e2e622fc1 ]---
+
+This later makes the inode eviction handler enter an infinite loop that
+keeps dumping the following warning over and over:
+
+[ 3915.117629] WARNING: CPU: 22 PID: 4228 at fs/btrfs/extent_io.c:435 insert_state+0x4b/0x10b [btrfs]()
+[ 3915.119913] BTRFS: end < start 4095 4096
+(...)
+[ 3915.137394] Call Trace:
+[ 3915.137913]  [<ffffffff81425fd9>] dump_stack+0x4c/0x65
+[ 3915.139154]  [<ffffffff81045390>] warn_slowpath_common+0xa1/0xbb
+[ 3915.140316]  [<ffffffffa03ca44f>] ? insert_state+0x4b/0x10b [btrfs]
+[ 3915.141505]  [<ffffffff810453f0>] warn_slowpath_fmt+0x46/0x48
+[ 3915.142709]  [<ffffffffa03ca44f>] insert_state+0x4b/0x10b [btrfs]
+[ 3915.143849]  [<ffffffffa03ca729>] __set_extent_bit+0x107/0x3f4 [btrfs]
+[ 3915.145120]  [<ffffffffa038c1e3>] ? btrfs_kill_super+0x17/0x23 [btrfs]
+[ 3915.146352]  [<ffffffff811548f6>] ? deactivate_locked_super+0x3b/0x50
+[ 3915.147565]  [<ffffffffa03cb256>] lock_extent_bits+0x65/0x1bf [btrfs]
+[ 3915.148785]  [<ffffffff8142b7e2>] ? _raw_write_unlock+0x28/0x33
+[ 3915.149931]  [<ffffffffa03bc325>] btrfs_evict_inode+0x196/0x482 [btrfs]
+[ 3915.151154]  [<ffffffff81168904>] evict+0xa0/0x148
+[ 3915.152094]  [<ffffffff811689e5>] dispose_list+0x39/0x43
+[ 3915.153081]  [<ffffffff81169564>] evict_inodes+0xdc/0xeb
+[ 3915.154062]  [<ffffffff81154418>] generic_shutdown_super+0x49/0xef
+[ 3915.155193]  [<ffffffff811546d1>] kill_anon_super+0x13/0x1e
+[ 3915.156274]  [<ffffffffa038c1e3>] btrfs_kill_super+0x17/0x23 [btrfs]
+(...)
+[ 3915.167404] ---[ end trace 14843d3e2e622fc2 ]---
+
+So just bail out of the clone ioctl if the length of the region to clone
+is zero, without locking any extent range, in order to prevent this issue
+(same behaviour as a pwrite with a 0 length for example).
+
+This is trivial to reproduce. For example, the steps for the test I just
+made for fstests:
+
+  mkfs.btrfs -f $SCRATCH_DEV
+  mount $SCRATCH_DEV $SCRATCH_MNT
+
+  touch $SCRATCH_MNT/foo
+  touch $SCRATCH_MNT/bar
+
+  $CLONER_PROG -s 0 -d 4096 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar
+  umount $SCRATCH_MNT
+
+A test case for fstests follows soon.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: Omar Sandoval <osandov@osandov.com>
+Signed-off-by: Chris Mason <clm@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/btrfs/ioctl.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 74609b9..a09d3b8 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3626,6 +3626,11 @@ static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd,
+ 	if (off + len == src->i_size)
+ 		len = ALIGN(src->i_size, bs) - off;
+ 
++	if (len == 0) {
++		ret = 0;
++		goto out_unlock;
++	}
++
+ 	/* verify the end result is block aligned */
+ 	if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) ||
+ 	    !IS_ALIGNED(destoff, bs))
+-- 
+2.3.6
+
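+The invariant being protected is that a locked range [off, off + len - 1]
+must be non-empty; a tiny arithmetic check makes the failure mode obvious
+(the numbers are hypothetical):
+
+  #include <stdio.h>
+
+  int main(void)
+  {
+      unsigned long long destoff = 4096, len = 0;
+      unsigned long long start = destoff;
+      unsigned long long end = destoff + len - 1; /* 4095, before start */
+
+      /* With len == 0 the range end precedes its start, which is
+       * exactly what fired the extent_io.c "end < start" warning. */
+      if (len == 0)
+          printf("bail out early: nothing to clone\n");
+      else if (end < start)
+          printf("never reached once the early return is in place\n");
+      return 0;
+  }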
+
+From 68ea2629745f61ddf8a603970e74b294737bc5d7 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Mon, 30 Mar 2015 18:26:47 +0100
+Subject: [PATCH 023/219] Btrfs: fix inode eviction infinite loop after
+ extent_same ioctl
+Cc: mpagano@gentoo.org
+
+commit 113e8283869b9855c8b999796aadd506bbac155f upstream.
+
+If we pass a length of 0 to the extent_same ioctl, we end up locking an
+extent range with a start offset greater then its end offset (if the
+destination file's offset is greater than zero). This results in a warning
+from extent_io.c:insert_state through the following call chain:
+
+  btrfs_extent_same()
+    btrfs_double_lock()
+      lock_extent_range()
+        lock_extent(inode->io_tree, offset, offset + len - 1)
+          lock_extent_bits()
+            __set_extent_bit()
+              insert_state()
+                --> WARN_ON(end < start)
+
+This leads to an infinite loop when evicting the inode. This is the same
+problem that my previous patch titled
+"Btrfs: fix inode eviction infinite loop after cloning into it" addressed
+but for the extent_same ioctl instead of the clone ioctl.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: Omar Sandoval <osandov@osandov.com>
+Signed-off-by: Chris Mason <clm@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/btrfs/ioctl.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index a09d3b8..f23d4be 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2897,6 +2897,9 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 len,
+ 	if (src == dst)
+ 		return -EINVAL;
+ 
++	if (len == 0)
++		return 0;
++
+ 	btrfs_double_lock(src, loff, dst, dst_loff, len);
+ 
+ 	ret = extent_same_check_offsets(src, loff, len);
+-- 
+2.3.6
+
+
+From 5683056e4853891106ae0a99938c96dfdc8fa881 Mon Sep 17 00:00:00 2001
+From: Gerald Schaefer <gerald.schaefer@de.ibm.com>
+Date: Tue, 14 Apr 2015 15:42:30 -0700
+Subject: [PATCH 024/219] mm/hugetlb: use pmd_page() in follow_huge_pmd()
+Cc: mpagano@gentoo.org
+
+commit 97534127012f0e396eddea4691f4c9b170aed74b upstream.
+
+Commit 61f77eda9bbf ("mm/hugetlb: reduce arch dependent code around
+follow_huge_*") broke follow_huge_pmd() on s390, where pmd and pte
+layout differ and using pte_page() on a huge pmd will return wrong
+results.  Using pmd_page() instead fixes this.
+
+All architectures that were touched by that commit have pmd_page()
+defined, so this should not break anything on other architectures.
+
+Fixes: 61f77eda "mm/hugetlb: reduce arch dependent code around follow_huge_*"
+Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
+Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Michal Hocko <mhocko@suse.cz>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
+Acked-by: David Rientjes <rientjes@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ mm/hugetlb.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c41b2a0..caad3c5 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3735,8 +3735,7 @@ retry:
+ 	if (!pmd_huge(*pmd))
+ 		goto out;
+ 	if (pmd_present(*pmd)) {
+-		page = pte_page(*(pte_t *)pmd) +
+-			((address & ~PMD_MASK) >> PAGE_SHIFT);
++		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+ 		if (flags & FOLL_GET)
+ 			get_page(page);
+ 	} else {
+-- 
+2.3.6
+
+
+From 5cb46afa0f6d4c48714951dc856c404d79315a39 Mon Sep 17 00:00:00 2001
+From: Scott Wood <scottwood@freescale.com>
+Date: Fri, 10 Apr 2015 19:37:34 -0500
+Subject: [PATCH 025/219] powerpc/hugetlb: Call mm_dec_nr_pmds() in
+ hugetlb_free_pmd_range()
+Cc: mpagano@gentoo.org
+
+commit 50c6a665b383cb5839e45d04e36faeeefaffa052 upstream.
+
+Commit dc6c9a35b66b5 ("mm: account pmd page tables to the process")
+added a counter that is incremented whenever a PMD is allocated and
+decremented whenever a PMD is freed.  For hugepages on PPC, common code
+is used to allocated PMDs, but arch-specific code is used to free PMDs.
+
+This results in kernel output such as "BUG: non-zero nr_pmds on freeing
+mm: 1" when using hugepages.
+
+Update the PPC hugepage PMD freeing code to decrement the count, just
+as the above commit did for free_pmd_range().
+
+Fixes: dc6c9a35b66b5 ("mm: account pmd page tables to the process")
+Signed-off-by: Scott Wood <scottwood@freescale.com>
+Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/mm/hugetlbpage.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 7e408bf..cecbe00 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -581,6 +581,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+ 	pmd = pmd_offset(pud, start);
+ 	pud_clear(pud);
+ 	pmd_free_tlb(tlb, pmd, start);
++	mm_dec_nr_pmds(tlb->mm);
+ }
+ 
+ static void hugetlb_free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
+-- 
+2.3.6
+
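+The rule the fix restores is pure bookkeeping: every allocation that
+bumps nr_pmds needs a matching decrement on every free path, including
+the arch-specific hugetlb one. A minimal counter model:
+
+  #include <assert.h>
+  #include <stdio.h>
+
+  static long nr_pmds;
+
+  static void pmd_alloc(void) { nr_pmds++; } /* common code */
+  static void pmd_free(void)  { nr_pmds--; } /* every free path */
+
+  int main(void)
+  {
+      pmd_alloc(); /* common code allocates, hugepages included */
+      pmd_free();  /* arch hugetlb free path, now decrementing too */
+
+      /* Otherwise: "BUG: non-zero nr_pmds on freeing mm: 1" */
+      assert(nr_pmds == 0);
+      printf("nr_pmds balanced\n");
+      return 0;
+  }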
+
+From 9297ed24421df19f5c5085d65ee2575a63524447 Mon Sep 17 00:00:00 2001
+From: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
+Date: Tue, 3 Mar 2015 10:52:05 +0100
+Subject: [PATCH 026/219] usb: gadget: printer: enqueue printer's response for
+ setup request
+Cc: mpagano@gentoo.org
+
+commit eb132ccbdec5df46e29c9814adf76075ce83576b upstream.
+
+Function-specific setup requests should be handled in such a way that,
+apart from filling in the data buffer, the requests are also actually
+enqueued: when function-specific setup is called from composite_setup(),
+the "usb_ep_queue()" block of code in composite_setup() is skipped.
+
+The printer function lacks this part, which results in e.g. get-device-id
+requests failing: the host expects some response, the device prepares it
+but does not enqueue it for sending to the host, so the host eventually
+times out.
+
+This patch adds enqueueing the prepared responses.
+
+Fixes: 2e87edf49227: "usb: gadget: make g_printer use composite"
+Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/gadget/legacy/printer.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/drivers/usb/gadget/legacy/printer.c b/drivers/usb/gadget/legacy/printer.c
+index 9054598..6385c19 100644
+--- a/drivers/usb/gadget/legacy/printer.c
++++ b/drivers/usb/gadget/legacy/printer.c
+@@ -1031,6 +1031,15 @@ unknown:
+ 		break;
+ 	}
+ 	/* host either stalls (value < 0) or reports success */
++	if (value >= 0) {
++		req->length = value;
++		req->zero = value < wLength;
++		value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
++		if (value < 0) {
++			ERROR(dev, "%s:%d Error!\n", __func__, __LINE__);
++			req->status = 0;
++		}
++	}
+ 	return value;
+ }
+ 
+-- 
+2.3.6
+
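+The missing step follows the usual "prepare, then queue" shape of ep0
+handling; a compact model (illustrative names, not the gadget API):
+
+  #include <stdio.h>
+
+  struct req { int length; int zero; };
+
+  /* A prepared response only reaches the host if it is also queued. */
+  static int ep0_queue(struct req *r)
+  {
+      printf("queued %d byte(s), zero=%d\n", r->length, r->zero);
+      return 0;
+  }
+
+  static int handle_setup(int value, int wLength, struct req *r)
+  {
+      if (value >= 0) { /* a response was prepared */
+          r->length = value;
+          r->zero = value < wLength; /* short reply: terminate with ZLP */
+          if (ep0_queue(r) < 0)
+              return -1;
+      }
+      return value; /* value < 0 means stall */
+  }
+
+  int main(void)
+  {
+      struct req r;
+
+      handle_setup(24, 64, &r); /* e.g. a get-device-id reply */
+      return 0;
+  }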
+
+From bcdd54ffac32205938fa2cdd656604973275214b Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Date: Wed, 4 Feb 2015 15:53:42 +0100
+Subject: [PATCH 027/219] KVM: s390: fix handling of write errors in the tpi
+ handler
+Cc: mpagano@gentoo.org
+
+commit 261520dcfcba93ca5dfe671b88ffab038cd940c8 upstream.
+
+If the I/O interrupt could not be written to the guest-provided
+area (e.g. on an access exception), a program exception was injected
+into the guest but "inti" wasn't freed, resulting in a memory leak.
+
+In addition, the I/O interrupt wasn't reinjected. Therefore the dequeued
+interrupt is lost.
+
+This patch fixes the problem while cleaning up the function and making the
+cc and rc logic easier to handle.
+
+Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/s390/kvm/priv.c | 40 +++++++++++++++++++++++-----------------
+ 1 file changed, 23 insertions(+), 17 deletions(-)
+
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 3511169..767149a 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -229,18 +229,19 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 	struct kvm_s390_interrupt_info *inti;
+ 	unsigned long len;
+ 	u32 tpi_data[3];
+-	int cc, rc;
++	int rc;
+ 	u64 addr;
+ 
+-	rc = 0;
+ 	addr = kvm_s390_get_base_disp_s(vcpu);
+ 	if (addr & 3)
+ 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+-	cc = 0;
++
+ 	inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->arch.sie_block->gcr[6], 0);
+-	if (!inti)
+-		goto no_interrupt;
+-	cc = 1;
++	if (!inti) {
++		kvm_s390_set_psw_cc(vcpu, 0);
++		return 0;
++	}
++
+ 	tpi_data[0] = inti->io.subchannel_id << 16 | inti->io.subchannel_nr;
+ 	tpi_data[1] = inti->io.io_int_parm;
+ 	tpi_data[2] = inti->io.io_int_word;
+@@ -251,30 +252,35 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 		 */
+ 		len = sizeof(tpi_data) - 4;
+ 		rc = write_guest(vcpu, addr, &tpi_data, len);
+-		if (rc)
+-			return kvm_s390_inject_prog_cond(vcpu, rc);
++		if (rc) {
++			rc = kvm_s390_inject_prog_cond(vcpu, rc);
++			goto reinject_interrupt;
++		}
+ 	} else {
+ 		/*
+ 		 * Store the three-word I/O interruption code into
+ 		 * the appropriate lowcore area.
+ 		 */
+ 		len = sizeof(tpi_data);
+-		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len))
++		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len)) {
++			/* failed writes to the low core are not recoverable */
+ 			rc = -EFAULT;
++			goto reinject_interrupt;
++		}
+ 	}
++
++	/* irq was successfully handed to the guest */
++	kfree(inti);
++	kvm_s390_set_psw_cc(vcpu, 1);
++	return 0;
++reinject_interrupt:
+ 	/*
+ 	 * If we encounter a problem storing the interruption code, the
+ 	 * instruction is suppressed from the guest's view: reinject the
+ 	 * interrupt.
+ 	 */
+-	if (!rc)
+-		kfree(inti);
+-	else
+-		kvm_s390_reinject_io_int(vcpu->kvm, inti);
+-no_interrupt:
+-	/* Set condition code and we're done. */
+-	if (!rc)
+-		kvm_s390_set_psw_cc(vcpu, cc);
++	kvm_s390_reinject_io_int(vcpu->kvm, inti);
++	/* don't set the cc, a pgm irq was injected or we drop to user space */
+ 	return rc ? -EFAULT : 0;
+ }
+ 
+-- 
+2.3.6
+
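+The cleaned-up function enforces a simple ownership rule for the dequeued
+interrupt: hand it to the guest and free it, or give it back, never drop
+it. A userspace model of that rule (names are illustrative):
+
+  #include <stdio.h>
+  #include <stdlib.h>
+
+  struct irq { int data; };
+
+  static int deliver(struct irq *i, int fail)
+  {
+      (void)i;
+      return fail ? -1 : 0;
+  }
+
+  static void reinject(struct irq *i)
+  {
+      /* ownership goes back to the queue; do not free here */
+      printf("reinjected %d\n", i->data);
+  }
+
+  int main(void)
+  {
+      struct irq *inti = malloc(sizeof(*inti));
+
+      if (!inti)
+          return 1;
+      inti->data = 42;
+      if (deliver(inti, 1) == 0)
+          free(inti);      /* success: the guest has its copy */
+      else
+          reinject(inti);  /* failure: back on the queue, not leaked */
+      return 0;
+  }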
+
+From 98529eff3f93a3179a35f9ae459e21f64e8be813 Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Date: Wed, 4 Feb 2015 15:59:11 +0100
+Subject: [PATCH 028/219] KVM: s390: reinjection of irqs can fail in the tpi
+ handler
+Cc: mpagano@gentoo.org
+
+commit 15462e37ca848abac7477dece65f8af25febd744 upstream.
+
+The reinjection of an I/O interrupt can fail if the list is at the limit
+and between the dequeue and the reinjection, another I/O interrupt is
+injected (e.g. if user space floods kvm with I/O interrupts).
+
+This patch avoids this memory leak and returns -EFAULT in this special
+case. This error is not recoverable, so let's fail hard. This can later
+be avoided by not dequeuing the interrupt but working directly on the
+locked list.
+
+Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/s390/kvm/interrupt.c | 4 ++--
+ arch/s390/kvm/kvm-s390.h  | 4 ++--
+ arch/s390/kvm/priv.c      | 5 ++++-
+ 3 files changed, 8 insertions(+), 5 deletions(-)
+
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 073b5f3..e7a46e8 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -1332,10 +1332,10 @@ int kvm_s390_inject_vm(struct kvm *kvm,
+ 	return rc;
+ }
+ 
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
++int kvm_s390_reinject_io_int(struct kvm *kvm,
+ 			      struct kvm_s390_interrupt_info *inti)
+ {
+-	__inject_vm(kvm, inti);
++	return __inject_vm(kvm, inti);
+ }
+ 
+ int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index c34109a..6995a30 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -151,8 +151,8 @@ int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
+ int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
+ struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
+ 						    u64 cr6, u64 schid);
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
+-			      struct kvm_s390_interrupt_info *inti);
++int kvm_s390_reinject_io_int(struct kvm *kvm,
++			     struct kvm_s390_interrupt_info *inti);
+ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked);
+ 
+ /* implemented in intercept.c */
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 767149a..613e9f0 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -279,7 +279,10 @@ reinject_interrupt:
+ 	 * instruction is suppressed from the guest's view: reinject the
+ 	 * interrupt.
+ 	 */
+-	kvm_s390_reinject_io_int(vcpu->kvm, inti);
++	if (kvm_s390_reinject_io_int(vcpu->kvm, inti)) {
++		kfree(inti);
++		rc = -EFAULT;
++	}
+ 	/* don't set the cc, a pgm irq was injected or we drop to user space */
+ 	return rc ? -EFAULT : 0;
+ }
+-- 
+2.3.6
+
+
+From 7f1a4ebee923455bb5f50ab4ce832194dff859a7 Mon Sep 17 00:00:00 2001
+From: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
+Date: Tue, 3 Mar 2015 09:54:41 +0100
+Subject: [PATCH 029/219] KVM: s390: Zero out current VMDB of STSI before
+ including level3 data.
+Cc: mpagano@gentoo.org
+
+commit b75f4c9afac2604feb971441116c07a24ecca1ec upstream.
+
+s390 documentation requires words 0 and 10-15 to be reserved and stored as
+zeros. As we fill out all other fields, we can memset the full structure.
+
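+The idea in isolation, with a made-up structure (not the s390 VMDB
+layout):
+
+    #include <linux/string.h>
+    #include <linux/types.h>
+
+    struct entry {
+            u32 reserved0;          /* must be stored as zero */
+            u32 cpus;
+            u32 reserved10[6];      /* must be stored as zero */
+    };
+
+    static void fill_entry(struct entry *e, u32 cpus)
+    {
+            memset(e, 0, sizeof(*e));       /* reserved words become 0 */
+            e->cpus = cpus;                 /* then set defined fields */
+    }
+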
+Signed-off-by: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
+Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/s390/kvm/priv.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 613e9f0..b982fbc 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -476,6 +476,7 @@ static void handle_stsi_3_2_2(struct kvm_vcpu *vcpu, struct sysinfo_3_2_2 *mem)
+ 	for (n = mem->count - 1; n > 0 ; n--)
+ 		memcpy(&mem->vm[n], &mem->vm[n - 1], sizeof(mem->vm[0]));
+ 
++	memset(&mem->vm[0], 0, sizeof(mem->vm[0]));
+ 	mem->vm[0].cpus_total = cpus;
+ 	mem->vm[0].cpus_configured = cpus;
+ 	mem->vm[0].cpus_standby = 0;
+-- 
+2.3.6
+
+
+From 4756129f7d1bf8fa4ff6011a39f729f5d3bc64c4 Mon Sep 17 00:00:00 2001
+From: Jens Freimann <jfrei@linux.vnet.ibm.com>
+Date: Mon, 16 Mar 2015 12:17:13 +0100
+Subject: [PATCH 030/219] KVM: s390: fix get_all_floating_irqs
+Cc: mpagano@gentoo.org
+
+commit 94aa033efcac47b09db22cb561e135baf37b7887 upstream.
+
+This fixes a bug introduced with commit c05c4186bbe4 ("KVM: s390:
+add floating irq controller").
+
+get_all_floating_irqs() does copy_to_user() while holding
+a spin lock. Let's fix this by filling a temporary buffer
+first and copy it to userspace after giving up the lock.
+
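+The general shape of the fix, sketched with placeholder types
+(copy_to_user() can fault and sleep, so it must not run under a
+spinlock):
+
+    #include <linux/list.h>
+    #include <linux/spinlock.h>
+    #include <linux/uaccess.h>
+    #include <linux/vmalloc.h>
+
+    struct item { struct list_head node; u64 payload; };
+    struct list_ctx { spinlock_t lock; struct list_head list; };
+
+    static long dump_items(struct list_ctx *ctx, void __user *usrbuf,
+                           size_t len)
+    {
+            struct item *it;
+            u64 *buf;
+            long ret = 0;
+            size_t n = 0;
+
+            buf = vzalloc(len);             /* temporary kernel buffer */
+            if (!buf)
+                    return -ENOBUFS;
+
+            spin_lock(&ctx->lock);
+            list_for_each_entry(it, &ctx->list, node) {
+                    if ((n + 1) * sizeof(*buf) > len) {
+                            ret = -ENOMEM;  /* caller: retry, bigger buffer */
+                            break;
+                    }
+                    buf[n++] = it->payload; /* snapshot under the lock */
+            }
+            spin_unlock(&ctx->lock);
+
+            /* only now is it safe to fault on the user mapping */
+            if (!ret && copy_to_user(usrbuf, buf, n * sizeof(*buf)))
+                    ret = -EFAULT;
+            vfree(buf);
+            return ret ? ret : (long)n;
+    }
+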
+Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
+Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
+Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
+Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ Documentation/virtual/kvm/devices/s390_flic.txt |  3 ++
+ arch/s390/kvm/interrupt.c                       | 58 ++++++++++++++-----------
+ 2 files changed, 35 insertions(+), 26 deletions(-)
+
+diff --git a/Documentation/virtual/kvm/devices/s390_flic.txt b/Documentation/virtual/kvm/devices/s390_flic.txt
+index 4ceef53..d1ad9d5 100644
+--- a/Documentation/virtual/kvm/devices/s390_flic.txt
++++ b/Documentation/virtual/kvm/devices/s390_flic.txt
+@@ -27,6 +27,9 @@ Groups:
+     Copies all floating interrupts into a buffer provided by userspace.
+     When the buffer is too small it returns -ENOMEM, which is the indication
+     for userspace to try again with a bigger buffer.
++    -ENOBUFS is returned when the allocation of a kernelspace buffer has
++    failed.
++    -EFAULT is returned when copying data to userspace failed.
+     All interrupts remain pending, i.e. are not deleted from the list of
+     currently pending interrupts.
+     attr->addr contains the userspace address of the buffer into which all
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index e7a46e8..e7bc2fd 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -17,6 +17,7 @@
+ #include <linux/signal.h>
+ #include <linux/slab.h>
+ #include <linux/bitmap.h>
++#include <linux/vmalloc.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/uaccess.h>
+ #include <asm/sclp.h>
+@@ -1455,61 +1456,66 @@ void kvm_s390_clear_float_irqs(struct kvm *kvm)
+ 	spin_unlock(&fi->lock);
+ }
+ 
+-static inline int copy_irq_to_user(struct kvm_s390_interrupt_info *inti,
+-				   u8 *addr)
++static void inti_to_irq(struct kvm_s390_interrupt_info *inti,
++		       struct kvm_s390_irq *irq)
+ {
+-	struct kvm_s390_irq __user *uptr = (struct kvm_s390_irq __user *) addr;
+-	struct kvm_s390_irq irq = {0};
+-
+-	irq.type = inti->type;
++	irq->type = inti->type;
+ 	switch (inti->type) {
+ 	case KVM_S390_INT_PFAULT_INIT:
+ 	case KVM_S390_INT_PFAULT_DONE:
+ 	case KVM_S390_INT_VIRTIO:
+ 	case KVM_S390_INT_SERVICE:
+-		irq.u.ext = inti->ext;
++		irq->u.ext = inti->ext;
+ 		break;
+ 	case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX:
+-		irq.u.io = inti->io;
++		irq->u.io = inti->io;
+ 		break;
+ 	case KVM_S390_MCHK:
+-		irq.u.mchk = inti->mchk;
++		irq->u.mchk = inti->mchk;
+ 		break;
+-	default:
+-		return -EINVAL;
+ 	}
+-
+-	if (copy_to_user(uptr, &irq, sizeof(irq)))
+-		return -EFAULT;
+-
+-	return 0;
+ }
+ 
+-static int get_all_floating_irqs(struct kvm *kvm, __u8 *buf, __u64 len)
++static int get_all_floating_irqs(struct kvm *kvm, u8 __user *usrbuf, u64 len)
+ {
+ 	struct kvm_s390_interrupt_info *inti;
+ 	struct kvm_s390_float_interrupt *fi;
++	struct kvm_s390_irq *buf;
++	int max_irqs;
+ 	int ret = 0;
+ 	int n = 0;
+ 
++	if (len > KVM_S390_FLIC_MAX_BUFFER || len == 0)
++		return -EINVAL;
++
++	/*
++	 * We are already using -ENOMEM to signal
++	 * userspace it may retry with a bigger buffer,
++	 * so we need to use something else for this case
++	 */
++	buf = vzalloc(len);
++	if (!buf)
++		return -ENOBUFS;
++
++	max_irqs = len / sizeof(struct kvm_s390_irq);
++
+ 	fi = &kvm->arch.float_int;
+ 	spin_lock(&fi->lock);
+-
+ 	list_for_each_entry(inti, &fi->list, list) {
+-		if (len < sizeof(struct kvm_s390_irq)) {
++		if (n == max_irqs) {
+ 			/* signal userspace to try again */
+ 			ret = -ENOMEM;
+ 			break;
+ 		}
+-		ret = copy_irq_to_user(inti, buf);
+-		if (ret)
+-			break;
+-		buf += sizeof(struct kvm_s390_irq);
+-		len -= sizeof(struct kvm_s390_irq);
++		inti_to_irq(inti, &buf[n]);
+ 		n++;
+ 	}
+-
+ 	spin_unlock(&fi->lock);
++	if (!ret && n > 0) {
++		if (copy_to_user(usrbuf, buf, sizeof(struct kvm_s390_irq) * n))
++			ret = -EFAULT;
++	}
++	vfree(buf);
+ 
+ 	return ret < 0 ? ret : n;
+ }
+@@ -1520,7 +1526,7 @@ static int flic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
+ 
+ 	switch (attr->group) {
+ 	case KVM_DEV_FLIC_GET_ALL_IRQS:
+-		r = get_all_floating_irqs(dev->kvm, (u8 *) attr->addr,
++		r = get_all_floating_irqs(dev->kvm, (u8 __user *) attr->addr,
+ 					  attr->attr);
+ 		break;
+ 	default:
+-- 
+2.3.6
+
+
+From 654de1f9fd289e10a3de1daf0806051f05f57d92 Mon Sep 17 00:00:00 2001
+From: Heiko Carstens <heiko.carstens@de.ibm.com>
+Date: Wed, 25 Mar 2015 10:13:33 +0100
+Subject: [PATCH 031/219] s390/hibernate: fix save and restore of kernel text
+ section
+Cc: mpagano@gentoo.org
+
+commit d74419495633493c9cd3f2bbeb7f3529d0edded6 upstream.
+
+Sebastian reported a crash caused by a jump label mismatch after resume.
+This happens because we do not save the kernel text section during suspend
+and therefore do not restore it during resume either; instead, the resumed
+system keeps running on the kernel image that performed the restore.
+
+This means that after a suspend/resume cycle we lose all modifications made
+to the kernel text section.
+The reason is the pfn_is_nosave() function, which reports that read-only
+pages do not need to be saved. That is wrong here, since we mark the kernel
+text section read-only.
+We still need to make sure not to save and restore pages contained within
+NSS and DCSS segments.
+To fix this, add an extra case for the kernel text section and only save
+those pages if they are not contained within an NSS segment.
+
+Fixes the following crash (and the above bugs as well):
+
+Jump label code mismatch at netif_receive_skb_internal+0x28/0xd0
+Found:    c0 04 00 00 00 00
+Expected: c0 f4 00 00 00 11
+New:      c0 04 00 00 00 00
+Kernel panic - not syncing: Corrupted kernel text
+CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.19.0-01975-gb1b096e70f23 #4
+Call Trace:
+  [<0000000000113972>] show_stack+0x72/0xf0
+  [<000000000081f15e>] dump_stack+0x6e/0x90
+  [<000000000081c4e8>] panic+0x108/0x2b0
+  [<000000000081be64>] jump_label_bug.isra.2+0x104/0x108
+  [<0000000000112176>] __jump_label_transform+0x9e/0xd0
+  [<00000000001121e6>] __sm_arch_jump_label_transform+0x3e/0x50
+  [<00000000001d1136>] multi_cpu_stop+0x12e/0x170
+  [<00000000001d1472>] cpu_stopper_thread+0xb2/0x168
+  [<000000000015d2ac>] smpboot_thread_fn+0x134/0x1b0
+  [<0000000000158baa>] kthread+0x10a/0x110
+  [<0000000000824a86>] kernel_thread_starter+0x6/0xc
+
+Reported-and-tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
+Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
+Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/s390/kernel/suspend.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/arch/s390/kernel/suspend.c b/arch/s390/kernel/suspend.c
+index 1c4c5ac..d3236c9 100644
+--- a/arch/s390/kernel/suspend.c
++++ b/arch/s390/kernel/suspend.c
+@@ -138,6 +138,8 @@ int pfn_is_nosave(unsigned long pfn)
+ {
+ 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
+ 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
++	unsigned long eshared_pfn = PFN_DOWN(__pa(&_eshared)) - 1;
++	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
+ 
+ 	/* Always save lowcore pages (LC protection might be enabled). */
+ 	if (pfn <= LC_PAGES)
+@@ -145,6 +147,8 @@ int pfn_is_nosave(unsigned long pfn)
+ 	if (pfn >= nosave_begin_pfn && pfn < nosave_end_pfn)
+ 		return 1;
+ 	/* Skip memory holes and read-only pages (NSS, DCSS, ...). */
++	if (pfn >= stext_pfn && pfn <= eshared_pfn)
++		return ipl_info.type == IPL_TYPE_NSS ? 1 : 0;
+ 	if (tprot(PFN_PHYS(pfn)))
+ 		return 1;
+ 	return 0;
+-- 
+2.3.6
+
+
+From 15254fde3f5d723bd591a73d88296e9aecdd6bb7 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= <rkrcmar@redhat.com>
+Date: Wed, 8 Apr 2015 14:16:48 +0200
+Subject: [PATCH 032/219] KVM: use slowpath for cross page cached accesses
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit ca3f0874723fad81d0c701b63ae3a17a408d5f25 upstream.
+
+kvm_write_guest_cached() does not mark all written pages as dirty, and
+code comments in kvm_gfn_to_hva_cache_init() talk about a NULL memslot
+with cross-page accesses.  Fix both the easy way.
+
+The check is '<= 1' to have the same result for 'len = 0' cache anywhere
+in the page.  (nr_pages_needed is 0 on page boundary.)
+
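+The intent of the check, written out as a standalone predicate
+(hypothetical helper; PAGE_SHIFT is 12 for 4K pages):
+
+    #include <linux/types.h>
+
+    /* true if [gpa, gpa + len) stays within a single page */
+    static bool fits_in_one_page(u64 gpa, u64 len)
+    {
+            u64 first = gpa >> PAGE_SHIFT;
+            u64 last = (gpa + len - 1) >> PAGE_SHIFT;
+
+            return len == 0 || first == last;
+    }
+
+Only such accesses keep the precomputed-hva fast path; anything that
+crosses a page now falls back to the slow path, which marks every
+touched page dirty.
+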
+Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+Message-Id: <20150408121648.GA3519@potion.brq.redhat.com>
+Reviewed-by: Wanpeng Li <wanpeng.li@linux.intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ virt/kvm/kvm_main.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index cc6a25d..f8f3f5f 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1653,8 +1653,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	ghc->generation = slots->generation;
+ 	ghc->len = len;
+ 	ghc->memslot = gfn_to_memslot(kvm, start_gfn);
+-	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
+-	if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
++	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
++	if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
+ 		ghc->hva += offset;
+ 	} else {
+ 		/*
+-- 
+2.3.6
+
+
+From fb124f8c695ec8ddc72f19a8b3247b5ee872422f Mon Sep 17 00:00:00 2001
+From: Andre Przywara <andre.przywara@arm.com>
+Date: Fri, 10 Apr 2015 16:17:59 +0100
+Subject: [PATCH 033/219] KVM: arm/arm64: check IRQ number on userland
+ injection
+Cc: mpagano@gentoo.org
+
+commit fd1d0ddf2ae92fb3df42ed476939861806c5d785 upstream.
+
+When userland injects an SPI via the KVM_IRQ_LINE ioctl we currently
+only check it against a fixed limit, which historically is set
+to 127. With the new dynamic IRQ allocation the effective limit may
+actually be smaller (64).
+So when a malicious or buggy userland now injects an SPI in that
+range, we spill over into our VGIC bitmap and bytemap memory.
+I could trigger a host kernel NULL pointer dereference with current
+mainline by injecting some bogus IRQ number from a hacked kvmtool:
+-----------------
+....
+DEBUG: kvm_vgic_inject_irq(kvm, cpu=0, irq=114, level=1)
+DEBUG: vgic_update_irq_pending(kvm, cpu=0, irq=114, level=1)
+DEBUG: IRQ #114 still in the game, writing to bytemap now...
+Unable to handle kernel NULL pointer dereference at virtual address 00000000
+pgd = ffffffc07652e000
+[00000000] *pgd=00000000f658b003, *pud=00000000f658b003, *pmd=0000000000000000
+Internal error: Oops: 96000006 [#1] PREEMPT SMP
+Modules linked in:
+CPU: 1 PID: 1053 Comm: lkvm-msi-irqinj Not tainted 4.0.0-rc7+ #3027
+Hardware name: FVP Base (DT)
+task: ffffffc0774e9680 ti: ffffffc0765a8000 task.ti: ffffffc0765a8000
+PC is at kvm_vgic_inject_irq+0x234/0x310
+LR is at kvm_vgic_inject_irq+0x30c/0x310
+pc : [<ffffffc0000ae0a8>] lr : [<ffffffc0000ae180>] pstate: 80000145
+.....
+
+This patch fixes the problem by checking the SPI number against the
+actual limit. We also remove the former legacy hard limit of
+127 from the ioctl code.
+
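+The rule being enforced, as a generic sketch (names are illustrative;
+the private IRQs - SGIs and PPIs - occupy the first 32 numbers):
+
+    #include <linux/errno.h>
+
+    #define NR_PRIVATE_IRQS 32
+
+    struct vgic { unsigned int nr_irqs; /* sized at init time */ };
+
+    static int check_spi(struct vgic *vgic, unsigned int irq_num)
+    {
+            if (irq_num < NR_PRIVATE_IRQS ||  /* not an SPI at all */
+                irq_num >= vgic->nr_irqs)     /* dynamic upper bound */
+                    return -EINVAL;
+            return 0;
+    }
+
+Indexing per-IRQ bitmaps with an unchecked irq_num is exactly the
+out-of-bounds write the report above demonstrates.
+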
+Signed-off-by: Andre Przywara <andre.przywara@arm.com>
+Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
+[maz: wrap KVM_ARM_IRQ_GIC_MAX with #ifndef __KERNEL__,
+as suggested by Christopher Covington]
+Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/include/uapi/asm/kvm.h   | 8 +++++++-
+ arch/arm/kvm/arm.c                | 3 +--
+ arch/arm64/include/uapi/asm/kvm.h | 8 +++++++-
+ virt/kvm/arm/vgic.c               | 3 +++
+ 4 files changed, 18 insertions(+), 4 deletions(-)
+
+diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
+index 0db25bc..3a42ac6 100644
+--- a/arch/arm/include/uapi/asm/kvm.h
++++ b/arch/arm/include/uapi/asm/kvm.h
+@@ -195,8 +195,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
+ 
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
+ 
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
+diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
+index 5560f74..b652af5 100644
+--- a/arch/arm/kvm/arm.c
++++ b/arch/arm/kvm/arm.c
+@@ -651,8 +651,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
+ 		if (!irqchip_in_kernel(kvm))
+ 			return -ENXIO;
+ 
+-		if (irq_num < VGIC_NR_PRIVATE_IRQS ||
+-		    irq_num > KVM_ARM_IRQ_GIC_MAX)
++		if (irq_num < VGIC_NR_PRIVATE_IRQS)
+ 			return -EINVAL;
+ 
+ 		return kvm_vgic_inject_irq(kvm, 0, irq_num, level);
+diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
+index 3ef77a4..bc49a18 100644
+--- a/arch/arm64/include/uapi/asm/kvm.h
++++ b/arch/arm64/include/uapi/asm/kvm.h
+@@ -188,8 +188,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
+ 
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
+ 
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
+diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
+index c9f60f5..e5abe7c 100644
+--- a/virt/kvm/arm/vgic.c
++++ b/virt/kvm/arm/vgic.c
+@@ -1371,6 +1371,9 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ 			goto out;
+ 	}
+ 
++	if (irq_num >= kvm->arch.vgic.nr_irqs)
++		return -EINVAL;
++
+ 	vcpu_id = vgic_update_irq_pending(kvm, cpuid, irq_num, level);
+ 	if (vcpu_id >= 0) {
+ 		/* kick the specified vcpu */
+-- 
+2.3.6
+
+
+From 9656af0b6cee1496640cfd6dc321e216ff650d37 Mon Sep 17 00:00:00 2001
+From: Ben Serebrin <serebrin@google.com>
+Date: Thu, 16 Apr 2015 11:58:05 -0700
+Subject: [PATCH 034/219] KVM: VMX: Preserve host CR4.MCE value while in guest
+ mode.
+Cc: mpagano@gentoo.org
+
+commit 085e68eeafbf76e21848ad5bafaecec88a11dd64 upstream.
+
+The host's decision to enable machine check exceptions should remain
+in force during non-root mode.  KVM was writing 0 to cr4 on VCPU reset
+and passed a slightly-modified 0 to the vmcs.guest_cr4 value.
+
+Tested: builds.
+On an earlier version, tested by injecting a machine check
+while a guest was spinning.
+
+Before the change, if guest CR4.MCE==0, then the machine check is
+escalated to Catastrophic Error (CATERR) and the machine dies.
+If guest CR4.MCE==1, then the machine check causes VMEXIT and is
+handled normally by host Linux. After the change, injecting a machine
+check causes normal Linux machine check handling.
+
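+The bit-merging idiom the fix relies on, in isolation (CR4.MCE is
+bit 6; the helper name is made up):
+
+    #define X86_CR4_MCE (1UL << 6)  /* Machine Check Enable */
+
+    /* the host keeps ownership of MCE; the guest controls the rest */
+    static unsigned long merge_cr4(unsigned long host_cr4,
+                                   unsigned long guest_cr4)
+    {
+            return (host_cr4 & X86_CR4_MCE) | (guest_cr4 & ~X86_CR4_MCE);
+    }
+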
+Signed-off-by: Ben Serebrin <serebrin@google.com>
+Reviewed-by: Venkatesh Srinivas <venkateshs@google.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/x86/kvm/vmx.c | 12 ++++++++++--
+ 1 file changed, 10 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index ae4f6d3..a60bd3a 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3621,8 +3621,16 @@ static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ 
+ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+-	unsigned long hw_cr4 = cr4 | (to_vmx(vcpu)->rmode.vm86_active ?
+-		    KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
++	/*
++	 * Pass through host's Machine Check Enable value to hw_cr4, which
++	 * is in force while we are in guest mode.  Do not let guests control
++	 * this bit, even if host CR4.MCE == 0.
++	 */
++	unsigned long hw_cr4 =
++		(cr4_read_shadow() & X86_CR4_MCE) |
++		(cr4 & ~X86_CR4_MCE) |
++		(to_vmx(vcpu)->rmode.vm86_active ?
++		 KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
+ 
+ 	if (cr4 & X86_CR4_VMXE) {
+ 		/*
+-- 
+2.3.6
+
+
+From 7e5ed3d726c9333bdb3f23c3de7ff2f9e9902508 Mon Sep 17 00:00:00 2001
+From: James Hogan <james.hogan@imgtec.com>
+Date: Fri, 6 Feb 2015 11:11:56 +0000
+Subject: [PATCH 035/219] MIPS: KVM: Handle MSA Disabled exceptions from guest
+Cc: mpagano@gentoo.org
+
+commit 98119ad53376885819d93dfb8737b6a9a61ca0ba upstream.
+
+Guest user mode can generate a guest MSA Disabled exception on an MSA
+capable core by simply trying to execute an MSA instruction. Since this
+exception is unknown to KVM it will be passed on to the guest kernel.
+However, guest Linux kernels prior to v3.15 do not set up an exception
+handler for the MSA Disabled exception as they don't support any MSA
+capable cores. This results in a guest OS panic.
+
+Since an older processor ID may be emulated, and MSA support is
+not advertised to the guest, the correct behaviour is to generate a
+Reserved Instruction exception in the guest kernel so it can send the
+guest process an illegal instruction signal (SIGILL), as would happen
+with a non-MSA-capable core.
+
+Fix this as minimally as reasonably possible by preventing
+kvm_mips_check_privilege() from relaying MSA Disabled exceptions from
+guest user mode to the guest kernel, and handling the MSA Disabled
+exception by emulating a Reserved Instruction exception in the guest,
+via a new handle_msa_disabled() KVM callback.
+
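+The callback-table dispatch this adds, reduced to its essentials
+(illustrative types; T_MSADIS is exception code 21):
+
+    struct vcpu;    /* opaque here */
+
+    struct vcpu_callbacks {
+            int (*handle_msa_disabled)(struct vcpu *vcpu);
+    };
+
+    static int dispatch(const struct vcpu_callbacks *cb,
+                        struct vcpu *vcpu, int exccode)
+    {
+            switch (exccode) {
+            case 21:        /* T_MSADIS */
+                    /* emulate a Reserved Instruction exception so the
+                     * guest kernel sends SIGILL, as on a non-MSA core */
+                    return cb->handle_msa_disabled(vcpu);
+            default:
+                    return -1;      /* unhandled exception code */
+            }
+    }
+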
+Signed-off-by: James Hogan <james.hogan@imgtec.com>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Cc: Paul Burton <paul.burton@imgtec.com>
+Cc: Ralf Baechle <ralf@linux-mips.org>
+Cc: Gleb Natapov <gleb@kernel.org>
+Cc: linux-mips@linux-mips.org
+Cc: kvm@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/include/asm/kvm_host.h |  2 ++
+ arch/mips/kvm/emulate.c          |  1 +
+ arch/mips/kvm/mips.c             |  4 ++++
+ arch/mips/kvm/trap_emul.c        | 28 ++++++++++++++++++++++++++++
+ 4 files changed, 35 insertions(+)
+
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index ac4fc71..f722b05 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -322,6 +322,7 @@ enum mips_mmu_types {
+ #define T_TRAP			13	/* Trap instruction */
+ #define T_VCEI			14	/* Virtual coherency exception */
+ #define T_FPE			15	/* Floating point exception */
++#define T_MSADIS		21	/* MSA disabled exception */
+ #define T_WATCH			23	/* Watch address reference */
+ #define T_VCED			31	/* Virtual coherency data */
+ 
+@@ -578,6 +579,7 @@ struct kvm_mips_callbacks {
+ 	int (*handle_syscall)(struct kvm_vcpu *vcpu);
+ 	int (*handle_res_inst)(struct kvm_vcpu *vcpu);
+ 	int (*handle_break)(struct kvm_vcpu *vcpu);
++	int (*handle_msa_disabled)(struct kvm_vcpu *vcpu);
+ 	int (*vm_init)(struct kvm *kvm);
+ 	int (*vcpu_init)(struct kvm_vcpu *vcpu);
+ 	int (*vcpu_setup)(struct kvm_vcpu *vcpu);
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index fb3e8df..838d3a6 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -2176,6 +2176,7 @@ enum emulation_result kvm_mips_check_privilege(unsigned long cause,
+ 		case T_SYSCALL:
+ 		case T_BREAK:
+ 		case T_RES_INST:
++		case T_MSADIS:
+ 			break;
+ 
+ 		case T_COP_UNUSABLE:
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index c9eccf5..f5e7dda 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -1119,6 +1119,10 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+ 		ret = kvm_mips_callbacks->handle_break(vcpu);
+ 		break;
+ 
++	case T_MSADIS:
++		ret = kvm_mips_callbacks->handle_msa_disabled(vcpu);
++		break;
++
+ 	default:
+ 		kvm_err("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
+ 			exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
+diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
+index fd7257b..4372cc8 100644
+--- a/arch/mips/kvm/trap_emul.c
++++ b/arch/mips/kvm/trap_emul.c
+@@ -330,6 +330,33 @@ static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
+ 	return ret;
+ }
+ 
++static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
++{
++	struct kvm_run *run = vcpu->run;
++	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
++	unsigned long cause = vcpu->arch.host_cp0_cause;
++	enum emulation_result er = EMULATE_DONE;
++	int ret = RESUME_GUEST;
++
++	/* No MSA supported in guest, guest reserved instruction exception */
++	er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
++
++	switch (er) {
++	case EMULATE_DONE:
++		ret = RESUME_GUEST;
++		break;
++
++	case EMULATE_FAIL:
++		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
++		ret = RESUME_HOST;
++		break;
++
++	default:
++		BUG();
++	}
++	return ret;
++}
++
+ static int kvm_trap_emul_vm_init(struct kvm *kvm)
+ {
+ 	return 0;
+@@ -470,6 +497,7 @@ static struct kvm_mips_callbacks kvm_trap_emul_callbacks = {
+ 	.handle_syscall = kvm_trap_emul_handle_syscall,
+ 	.handle_res_inst = kvm_trap_emul_handle_res_inst,
+ 	.handle_break = kvm_trap_emul_handle_break,
++	.handle_msa_disabled = kvm_trap_emul_handle_msa_disabled,
+ 
+ 	.vm_init = kvm_trap_emul_vm_init,
+ 	.vcpu_init = kvm_trap_emul_vcpu_init,
+-- 
+2.3.6
+
+
+From facbd0f25d07e3448d472d679aafefe7580990b2 Mon Sep 17 00:00:00 2001
+From: James Hogan <james.hogan@imgtec.com>
+Date: Wed, 25 Feb 2015 13:08:05 +0000
+Subject: [PATCH 036/219] MIPS: lose_fpu(): Disable FPU when MSA enabled
+Cc: mpagano@gentoo.org
+
+commit acaf6a97d623af123314c2f8ce4cf7254f6b2fc1 upstream.
+
+The lose_fpu() function only disables the FPU in CP0_Status.CU1 if the
+FPU is in use and MSA isn't enabled.
+
+This isn't necessarily a problem, because KSTK_STATUS(current), the
+version of CP0_Status stored on the kernel stack on entry from user
+mode, always gets updated and is restored when returning to user
+mode. However, it doesn't appear to have been intended, and it is
+inconsistent with the case where only the FPU is in use. Leaving the
+FPU enabled may also mask kernel bugs where FPU operations are
+executed while the FPU might not be enabled.
+
+So let's disable the FPU in the MSA case too.
+
+Fixes: 33c771ba5c5d ("MIPS: save/disable MSA in lose_fpu")
+Signed-off-by: James Hogan <james.hogan@imgtec.com>
+Cc: Ralf Baechle <ralf@linux-mips.org>
+Cc: Paul Burton <paul.burton@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9323/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/include/asm/fpu.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h
+index dd083e9..9f26b07 100644
+--- a/arch/mips/include/asm/fpu.h
++++ b/arch/mips/include/asm/fpu.h
+@@ -170,6 +170,7 @@ static inline void lose_fpu(int save)
+ 		}
+ 		disable_msa();
+ 		clear_thread_flag(TIF_USEDMSA);
++		__disable_fpu();
+ 	} else if (is_fpu_owner()) {
+ 		if (save)
+ 			_save_fp(current);
+-- 
+2.3.6
+
+
+From 0668432d35a9e96ee500cbe1b3f7df6c4fe29b09 Mon Sep 17 00:00:00 2001
+From: Markos Chandras <markos.chandras@imgtec.com>
+Date: Fri, 27 Feb 2015 07:51:32 +0000
+Subject: [PATCH 037/219] MIPS: Malta: Detect and fix bad memsize values
+Cc: mpagano@gentoo.org
+
+commit f7f8aea4b97c4d48e42f02cb37026bee445f239f upstream.
+
+memsize denotes the amount of RAM we can access from kseg{0,1}, and
+that should be at most 256M. In case the bootloader reports a value
+higher than that (perhaps reporting all the available RAM), it's best
+to fix it ourselves and just warn the user about it. This is
+usually a problem with the bootloader and/or its environment.
+
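+The clamp-and-warn pattern in isolation (generic, not the Malta code):
+
+    #include <linux/printk.h>
+
+    #define KSEG_LIMIT (256UL << 20)  /* 256M reachable via kseg0/1 */
+
+    static unsigned long sanitize_memsize(unsigned long memsize)
+    {
+            if (memsize > KSEG_LIMIT) {
+                    pr_warn("memsize 0x%lx too big, capping to 0x%lx\n",
+                            memsize, KSEG_LIMIT);
+                    memsize = KSEG_LIMIT;
+            }
+            return memsize;
+    }
+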
+[ralf@linux-mips.org: Remove useless parens as suggested bei Sergei.
+Reformat long pr_warn statement to fit into 80 column limit.]
+
+Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9362/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/mti-malta/malta-memory.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/arch/mips/mti-malta/malta-memory.c b/arch/mips/mti-malta/malta-memory.c
+index 8fddd2cd..efe366d 100644
+--- a/arch/mips/mti-malta/malta-memory.c
++++ b/arch/mips/mti-malta/malta-memory.c
+@@ -53,6 +53,12 @@ fw_memblock_t * __init fw_getmdesc(int eva)
+ 		pr_warn("memsize not set in YAMON, set to default (32Mb)\n");
+ 		physical_memsize = 0x02000000;
+ 	} else {
++		if (memsize > (256 << 20)) { /* memsize should be capped to 256M */
++			pr_warn("Unsupported memsize value (0x%lx) detected! "
++				"Using 0x10000000 (256M) instead\n",
++				memsize);
++			memsize = 256 << 20;
++		}
+ 		/* If ememsize is set, then set physical_memsize to that */
+ 		physical_memsize = ememsize ? : memsize;
+ 	}
+-- 
+2.3.6
+
+
+From e52a20fcbf2ae06dc538b953c065bd6ae0b5f4ad Mon Sep 17 00:00:00 2001
+From: Markos Chandras <markos.chandras@imgtec.com>
+Date: Mon, 9 Mar 2015 14:54:49 +0000
+Subject: [PATCH 038/219] MIPS: asm: asm-eva: Introduce kernel load/store
+ variants
+Cc: mpagano@gentoo.org
+
+commit 60cd7e08e453bc6828ac4b539f949e4acd80f143 upstream.
+
+Introduce new macros for kernel load/store variants which will be
+used to perform regular kernel space load/store operations in EVA
+mode.
+
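+The macro style being extended, shown on a single instruction (this
+mirrors the existing user_*() helpers; read_word() is made up):
+
+    /* expands into the asm string "lw %0, 0(%1)\n" */
+    #define kernel_lw(reg, addr) "lw " reg ", " addr "\n"
+
+    static inline int read_word(const int *p)
+    {
+            int val;
+
+            __asm__ __volatile__(
+                    kernel_lw("%0", "0(%1)")  /* plain kernel-mode load */
+                    : "=r" (val)
+                    : "r" (p));
+            return val;
+    }
+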
+Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9500/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/include/asm/asm-eva.h | 137 +++++++++++++++++++++++++++-------------
+ 1 file changed, 93 insertions(+), 44 deletions(-)
+
+diff --git a/arch/mips/include/asm/asm-eva.h b/arch/mips/include/asm/asm-eva.h
+index e41c56e..1e38f0e 100644
+--- a/arch/mips/include/asm/asm-eva.h
++++ b/arch/mips/include/asm/asm-eva.h
+@@ -11,6 +11,36 @@
+ #define __ASM_ASM_EVA_H
+ 
+ #ifndef __ASSEMBLY__
++
++/* Kernel variants */
++
++#define kernel_cache(op, base)		"cache " op ", " base "\n"
++#define kernel_ll(reg, addr)		"ll " reg ", " addr "\n"
++#define kernel_sc(reg, addr)		"sc " reg ", " addr "\n"
++#define kernel_lw(reg, addr)		"lw " reg ", " addr "\n"
++#define kernel_lwl(reg, addr)		"lwl " reg ", " addr "\n"
++#define kernel_lwr(reg, addr)		"lwr " reg ", " addr "\n"
++#define kernel_lh(reg, addr)		"lh " reg ", " addr "\n"
++#define kernel_lb(reg, addr)		"lb " reg ", " addr "\n"
++#define kernel_lbu(reg, addr)		"lbu " reg ", " addr "\n"
++#define kernel_sw(reg, addr)		"sw " reg ", " addr "\n"
++#define kernel_swl(reg, addr)		"swl " reg ", " addr "\n"
++#define kernel_swr(reg, addr)		"swr " reg ", " addr "\n"
++#define kernel_sh(reg, addr)		"sh " reg ", " addr "\n"
++#define kernel_sb(reg, addr)		"sb " reg ", " addr "\n"
++
++#ifdef CONFIG_32BIT
++/*
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
++ */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		"sd " reg", " addr "\n"
++#define kernel_ld(reg, addr)		"ld " reg", " addr "\n"
++#endif /* CONFIG_32BIT */
++
+ #ifdef CONFIG_EVA
+ 
+ #define __BUILD_EVA_INSN(insn, reg, addr)				\
+@@ -41,37 +71,60 @@
+ 
+ #else
+ 
+-#define user_cache(op, base)		"cache " op ", " base "\n"
+-#define user_ll(reg, addr)		"ll " reg ", " addr "\n"
+-#define user_sc(reg, addr)		"sc " reg ", " addr "\n"
+-#define user_lw(reg, addr)		"lw " reg ", " addr "\n"
+-#define user_lwl(reg, addr)		"lwl " reg ", " addr "\n"
+-#define user_lwr(reg, addr)		"lwr " reg ", " addr "\n"
+-#define user_lh(reg, addr)		"lh " reg ", " addr "\n"
+-#define user_lb(reg, addr)		"lb " reg ", " addr "\n"
+-#define user_lbu(reg, addr)		"lbu " reg ", " addr "\n"
+-#define user_sw(reg, addr)		"sw " reg ", " addr "\n"
+-#define user_swl(reg, addr)		"swl " reg ", " addr "\n"
+-#define user_swr(reg, addr)		"swr " reg ", " addr "\n"
+-#define user_sh(reg, addr)		"sh " reg ", " addr "\n"
+-#define user_sb(reg, addr)		"sb " reg ", " addr "\n"
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
+ 
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		"sd " reg", " addr "\n"
+-#define user_ld(reg, addr)		"ld " reg", " addr "\n"
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_ld(reg, addr)
+ #endif /* CONFIG_32BIT */
+ 
+ #endif /* CONFIG_EVA */
+ 
+ #else /* __ASSEMBLY__ */
+ 
++#define kernel_cache(op, base)		cache op, base
++#define kernel_ll(reg, addr)		ll reg, addr
++#define kernel_sc(reg, addr)		sc reg, addr
++#define kernel_lw(reg, addr)		lw reg, addr
++#define kernel_lwl(reg, addr)		lwl reg, addr
++#define kernel_lwr(reg, addr)		lwr reg, addr
++#define kernel_lh(reg, addr)		lh reg, addr
++#define kernel_lb(reg, addr)		lb reg, addr
++#define kernel_lbu(reg, addr)		lbu reg, addr
++#define kernel_sw(reg, addr)		sw reg, addr
++#define kernel_swl(reg, addr)		swl reg, addr
++#define kernel_swr(reg, addr)		swr reg, addr
++#define kernel_sh(reg, addr)		sh reg, addr
++#define kernel_sb(reg, addr)		sb reg, addr
++
++#ifdef CONFIG_32BIT
++/*
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
++ */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		sd reg, addr
++#define kernel_ld(reg, addr)		ld reg, addr
++#endif /* CONFIG_32BIT */
++
+ #ifdef CONFIG_EVA
+ 
+ #define __BUILD_EVA_INSN(insn, reg, addr)			\
+@@ -101,31 +154,27 @@
+ #define user_sd(reg, addr)		user_sw(reg, addr)
+ #else
+ 
+-#define user_cache(op, base)		cache op, base
+-#define user_ll(reg, addr)		ll reg, addr
+-#define user_sc(reg, addr)		sc reg, addr
+-#define user_lw(reg, addr)		lw reg, addr
+-#define user_lwl(reg, addr)		lwl reg, addr
+-#define user_lwr(reg, addr)		lwr reg, addr
+-#define user_lh(reg, addr)		lh reg, addr
+-#define user_lb(reg, addr)		lb reg, addr
+-#define user_lbu(reg, addr)		lbu reg, addr
+-#define user_sw(reg, addr)		sw reg, addr
+-#define user_swl(reg, addr)		swl reg, addr
+-#define user_swr(reg, addr)		swr reg, addr
+-#define user_sh(reg, addr)		sh reg, addr
+-#define user_sb(reg, addr)		sb reg, addr
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
+ 
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		sd reg, addr
+-#define user_ld(reg, addr)		ld reg, addr
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_ld(reg, addr)
+ #endif /* CONFIG_32BIT */
+ 
+ #endif /* CONFIG_EVA */
+-- 
+2.3.6
+
+
+From 88a82d60a26013483a22b19035517fec54b7dee5 Mon Sep 17 00:00:00 2001
+From: Markos Chandras <markos.chandras@imgtec.com>
+Date: Mon, 9 Mar 2015 14:54:50 +0000
+Subject: [PATCH 039/219] MIPS: unaligned: Prevent EVA instructions on kernel
+ unaligned accesses
+Cc: mpagano@gentoo.org
+
+commit eeb538950367e3966cbf0237ab1a1dc30e059818 upstream.
+
+Commit c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned
+accesses for EVA") allowed unaligned accesses to be emulated for
+EVA. However, when emulating regular load/store unaligned accesses,
+we need to use the appropriate "address space" instructions for that.
+Previously, an unaligned load/store instruction in kernel space would
+have used the corresponding EVA instructions to emulate it, which led to
+segmentation faults because of the address translation that happens
+with EVA instructions. This is now fixed by using the EVA instruction
+only when emulating EVA unaligned accesses.
+
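+The mechanism is preprocessor token pasting: each macro gains a 'type'
+parameter that selects either the kernel_*() or the user_*() (EVA)
+helper. A stripped-down illustration (_load_byte() is made up):
+
+    #define _load_byte(dst, src, type) type##_lb(dst, src)
+
+    _load_byte("%0", "0(%2)", kernel) /* -> kernel_lb("%0", "0(%2)") */
+    _load_byte("%0", "0(%2)", user)   /* -> user_lb("%0", "0(%2)"), EVA */
+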
+Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
+Fixes: c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9501/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/kernel/unaligned.c | 172 +++++++++++++++++++++++--------------------
+ 1 file changed, 94 insertions(+), 78 deletions(-)
+
+diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
+index bbb6969..7a5707e 100644
+--- a/arch/mips/kernel/unaligned.c
++++ b/arch/mips/kernel/unaligned.c
+@@ -109,10 +109,10 @@ static u32 unaligned_action;
+ extern void show_registers(struct pt_regs *regs);
+ 
+ #ifdef __BIG_ENDIAN
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "0(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -130,10 +130,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -149,18 +149,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "0(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "0(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -181,11 +181,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "0(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -204,10 +204,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -224,7 +224,7 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "=&r" (value), "=r" (res)         \
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, (%2)\n"               \
+ 			"2:\tldr\t%0, 7(%2)\n\t"            \
+@@ -243,18 +243,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #else
+/* MIPSR6 has no lwl and ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "0(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "0(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -274,7 +274,7 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "=&r" (value), "=r" (res)	    \
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -323,12 +323,12 @@ extern void show_registers(struct pt_regs *regs);
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "1(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
+ 			"srl\t$1, %1, 0x8\n"                \
+-			"2:\t"user_sb("$1", "0(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "0(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -345,10 +345,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (value), "r" (addr), "i" (-EFAULT));
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "(%2)")"\n"    \
+-			"2:\t"user_swr("%1", "3(%2)")"\n\t" \
++			"1:\t"type##_swl("%1", "(%2)")"\n"  \
++			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -363,7 +363,7 @@ extern void show_registers(struct pt_regs *regs);
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1,(%2)\n"                \
+ 			"2:\tsdr\t%1, 7(%2)\n\t"            \
+@@ -382,17 +382,17 @@ extern void show_registers(struct pt_regs *regs);
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "3(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "3(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "2(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "1(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "0(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "0(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -456,10 +456,10 @@ extern void show_registers(struct pt_regs *regs);
+ 
+ #else /* __BIG_ENDIAN */
+ 
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "1(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -477,10 +477,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -496,18 +496,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "3(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "3(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -529,11 +529,11 @@ extern void show_registers(struct pt_regs *regs);
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "1(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -552,10 +552,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -572,7 +572,7 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "=&r" (value), "=r" (res)         \
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, 7(%2)\n"              \
+ 			"2:\tldr\t%0, (%2)\n\t"             \
+@@ -591,18 +591,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #else
+/* MIPSR6 has no lwl and ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "3(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "3(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -622,7 +622,7 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "=&r" (value), "=r" (res)	    \
+ 			: "r" (addr), "i" (-EFAULT));
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -670,12 +670,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "r" (addr), "i" (-EFAULT));
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "0(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
+ 			"srl\t$1,%1, 0x8\n"                 \
+-			"2:\t"user_sb("$1", "1(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "1(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -691,10 +691,10 @@ extern void show_registers(struct pt_regs *regs);
+ 			: "=r" (res)                        \
+ 			: "r" (value), "r" (addr), "i" (-EFAULT));
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "3(%2)")"\n"   \
+-			"2:\t"user_swr("%1", "(%2)")"\n\t"  \
++			"1:\t"type##_swl("%1", "3(%2)")"\n" \
++			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -709,7 +709,7 @@ extern void show_registers(struct pt_regs *regs);
+ 		: "=r" (res)                                \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1, 7(%2)\n"              \
+ 			"2:\tsdr\t%1, (%2)\n\t"             \
+@@ -728,17 +728,17 @@ extern void show_registers(struct pt_regs *regs);
+ 		: "r" (value), "r" (addr), "i" (-EFAULT));
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "0(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "0(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "1(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "2(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "3(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "3(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -757,7 +757,7 @@ extern void show_registers(struct pt_regs *regs);
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+ 		: "memory");
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -801,6 +801,22 @@ extern void show_registers(struct pt_regs *regs);
+ #endif /* CONFIG_CPU_MIPSR6 */
+ #endif
+ 
++#define LoadHWU(addr, value, res)	_LoadHWU(addr, value, res, kernel)
++#define LoadHWUE(addr, value, res)	_LoadHWU(addr, value, res, user)
++#define LoadWU(addr, value, res)	_LoadWU(addr, value, res, kernel)
++#define LoadWUE(addr, value, res)	_LoadWU(addr, value, res, user)
++#define LoadHW(addr, value, res)	_LoadHW(addr, value, res, kernel)
++#define LoadHWE(addr, value, res)	_LoadHW(addr, value, res, user)
++#define LoadW(addr, value, res)		_LoadW(addr, value, res, kernel)
++#define LoadWE(addr, value, res)	_LoadW(addr, value, res, user)
++#define LoadDW(addr, value, res)	_LoadDW(addr, value, res)
++
++#define StoreHW(addr, value, res)	_StoreHW(addr, value, res, kernel)
++#define StoreHWE(addr, value, res)	_StoreHW(addr, value, res, user)
++#define StoreW(addr, value, res)	_StoreW(addr, value, res, kernel)
++#define StoreWE(addr, value, res)	_StoreW(addr, value, res, user)
++#define StoreDW(addr, value, res)	_StoreDW(addr, value, res)
++
+ static void emulate_load_store_insn(struct pt_regs *regs,
+ 	void __user *addr, unsigned int __user *pc)
+ {
+@@ -872,7 +888,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHW(addr, value, res);
++			LoadHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -885,7 +901,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-				LoadW(addr, value, res);
++				LoadWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -898,7 +914,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHWU(addr, value, res);
++			LoadHWUE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -913,7 +929,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreHW(addr, value, res);
++			StoreHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -926,7 +942,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreW(addr, value, res);
++			StoreWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+-- 
+2.3.6
+
+
+From ae0a145ca5b6c135e068a08f859e3f10ad2242d9 Mon Sep 17 00:00:00 2001
+From: Markos Chandras <markos.chandras@imgtec.com>
+Date: Mon, 9 Mar 2015 14:54:51 +0000
+Subject: [PATCH 040/219] MIPS: unaligned: Surround load/store macros in do {}
+ while statements
+Cc: mpagano@gentoo.org
+
+commit 3563c32d6532ece53c9dd8905a8e41983ef9952f upstream.
+
+It's best to surround such complex macros with do {} while statements
+so they can appear as independent logical blocks when used within other
+control blocks.
+
+Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9502/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/kernel/unaligned.c | 116 +++++++++++++++++++++++++++++++++----------
+ 1 file changed, 90 insertions(+), 26 deletions(-)
+
+diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
+index 7a5707e..ab47590 100644
+--- a/arch/mips/kernel/unaligned.c
++++ b/arch/mips/kernel/unaligned.c
+@@ -110,6 +110,7 @@ extern void show_registers(struct pt_regs *regs);
+ 
+ #ifdef __BIG_ENDIAN
+ #define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+ 			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
+ 			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+@@ -127,10 +128,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
+ 			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+@@ -146,10 +149,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has no lwl instruction */
+ #define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+@@ -178,10 +184,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ #define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+ 			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
+@@ -201,10 +210,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
+ 			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+@@ -222,9 +233,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, (%2)\n"               \
+ 			"2:\tldr\t%0, 7(%2)\n\t"            \
+@@ -240,10 +253,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has not lwl and ldl instructions */
+ #define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -272,9 +288,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -319,11 +337,14 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+ #define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+ 			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
+@@ -342,10 +363,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_swl("%1", "(%2)")"\n"  \
+ 			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
+@@ -361,9 +384,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
+ 
+ #define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1,(%2)\n"                \
+ 			"2:\tsdr\t%1, 7(%2)\n\t"            \
+@@ -379,10 +404,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
++
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+ #define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -409,9 +437,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
+ 
+ #define     StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -451,12 +481,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ #else /* __BIG_ENDIAN */
+ 
+ #define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+ 			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
+ 			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+@@ -474,10 +507,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
+ 			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+@@ -493,10 +528,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has no lwl instruction */
+ #define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+@@ -525,11 +563,14 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+ #define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+ 			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
+@@ -549,10 +590,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
+ 			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+@@ -570,9 +613,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, 7(%2)\n"              \
+ 			"2:\tldr\t%0, (%2)\n\t"             \
+@@ -588,10 +633,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has not lwl and ldl instructions */
+ #define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -620,9 +668,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -667,10 +717,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ #define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+ 			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
+@@ -689,9 +741,12 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
++
+ #ifndef CONFIG_CPU_MIPSR6
+ #define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\t"type##_swl("%1", "3(%2)")"\n" \
+ 			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
+@@ -707,9 +762,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
+ 
+ #define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1, 7(%2)\n"              \
+ 			"2:\tsdr\t%1, (%2)\n\t"             \
+@@ -725,10 +782,13 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
++
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+ #define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -755,9 +815,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
+ 
+ #define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -797,7 +859,9 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ #endif
+ 
+-- 
+2.3.6
+
+
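To see why the do {} while (0) wrapping above matters, here is a standalone illustration (not part of the patch): a bare brace-block macro breaks when used as the body of a braceless if/else, while the wrapped form composes like a single statement.

/* Standalone sketch (not from the patch): why do {} while (0) matters. */
#include <stdio.h>

/* Wrapped: expands to a single statement, so a trailing ';' is fine. */
#define SWAP(a, b) do { int t = (a); (a) = (b); (b) = t; } while (0)

int main(void)
{
	int x = 2, y = 1;

	if (x > y)
		SWAP(x, y);	/* with a bare {...} block, the ';' here would
				 * end the if and make the else a syntax error */
	else
		printf("already ordered\n");

	printf("x=%d y=%d\n", x, y);	/* prints x=1 y=2 */
	return 0;
}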
+From e239cb24f08477d187a5bb831088de60f70e3ade Mon Sep 17 00:00:00 2001
+From: Markos Chandras <markos.chandras@imgtec.com>
+Date: Mon, 9 Mar 2015 14:54:52 +0000
+Subject: [PATCH 041/219] MIPS: unaligned: Fix regular load/store instruction
+ emulation for EVA
+Cc: mpagano@gentoo.org
+
+commit 6eae35485b26f9e51ab896eb8a936bed9908fdf6 upstream.
+
+When emulating a regular lh/lw/lhu/sh/sw we need to use the appropriate
+instruction if we are in EVA mode. This is necessary for userspace
+applications which trigger alignment exceptions. In such case, the
+userspace load/store instruction needs to be emulated with the correct
+eva/non-eva instruction by the kernel emulator.
+
+Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
+Fixes: c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
+Cc: linux-mips@linux-mips.org
+Patchwork: https://patchwork.linux-mips.org/patch/9503/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/kernel/unaligned.c | 52 +++++++++++++++++++++++++++++++++++++++-----
+ 1 file changed, 47 insertions(+), 5 deletions(-)
+
+diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
+index ab47590..7659da2 100644
+--- a/arch/mips/kernel/unaligned.c
++++ b/arch/mips/kernel/unaligned.c
+@@ -1023,7 +1023,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHW(addr, value, res);
++			else
++				LoadHWE(addr, value, res);
++		} else {
++			LoadHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -1034,7 +1042,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 4))
+ 			goto sigbus;
+ 
+-		LoadW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadW(addr, value, res);
++			else
++				LoadWE(addr, value, res);
++		} else {
++			LoadW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -1045,7 +1061,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHWU(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHWU(addr, value, res);
++			else
++				LoadHWUE(addr, value, res);
++		} else {
++			LoadHWU(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -1104,7 +1128,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreHW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreHW(addr, value, res);
++			else
++				StoreHWE(addr, value, res);
++		} else {
++			StoreHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
+@@ -1115,7 +1148,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreW(addr, value, res);
++			else
++				StoreWE(addr, value, res);
++		} else {
++			StoreW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
+-- 
+2.3.6
+
+
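The dispatch the patch repeats for each opcode can be summarized in one sketch; LOAD_KERNEL and LOAD_USER below are stand-ins for the real LoadW/LoadWE pairs, not names from the commit:

/*
 * Sketch of the per-opcode selection: under EVA, an access that came
 * from user context (fs != kernel ds) must be emulated with the
 * user-mode (E-suffixed) accessor.
 */
#define EMULATE_LOAD(addr, value, res)			\
do {							\
	if (config_enabled(CONFIG_EVA) &&		\
	    !segment_eq(get_fs(), get_ds()))		\
		LOAD_USER(addr, value, res);		\
	else						\
		LOAD_KERNEL(addr, value, res);		\
} while (0)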
+From 9da8705189d48b9d74724d5ae37c5a3a486fcfef Mon Sep 17 00:00:00 2001
+From: Huacai Chen <chenhc@lemote.com>
+Date: Thu, 12 Mar 2015 11:51:06 +0800
+Subject: [PATCH 042/219] MIPS: Loongson-3: Add IRQF_NO_SUSPEND to Cascade
+ irqaction
+Cc: mpagano@gentoo.org
+
+commit 0add9c2f1cff9f3f1f2eb7e9babefa872a9d14b9 upstream.
+
+HPET irq is routed to i8259 and then to the MIPS CPU irq (cascade). After
+commit a3e6c1eff5 (MIPS: IRQ: Fix disable_irq on CPU IRQs), HPET
+interrupts are lost during suspend if cascade_irqaction lacks
+IRQF_NO_SUSPEND. As a result, the machine cannot be woken up.
+
+Signed-off-by: Huacai Chen <chenhc@lemote.com>
+Cc: Steven J. Hill <Steven.Hill@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Cc: Fuxin Zhang <zhangfx@lemote.com>
+Cc: Zhangjin Wu <wuzhangjin@gmail.com>
+Patchwork: https://patchwork.linux-mips.org/patch/9528/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/loongson/loongson-3/irq.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/mips/loongson/loongson-3/irq.c b/arch/mips/loongson/loongson-3/irq.c
+index 21221ed..0f75b6b 100644
+--- a/arch/mips/loongson/loongson-3/irq.c
++++ b/arch/mips/loongson/loongson-3/irq.c
+@@ -44,6 +44,7 @@ void mach_irq_dispatch(unsigned int pending)
+ 
+ static struct irqaction cascade_irqaction = {
+ 	.handler = no_action,
++	.flags = IRQF_NO_SUSPEND,
+ 	.name = "cascade",
+ };
+ 
+-- 
+2.3.6
+
+
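For drivers that use request_irq() rather than filling in a struct irqaction directly, the equivalent fix looks like the sketch below; the irq number and handler names are hypothetical:

#include <linux/interrupt.h>

static irqreturn_t cascade_handler(int irq, void *dev_id)
{
	/* a real handler would demultiplex the i8259 here */
	return IRQ_HANDLED;
}

static int cascade_setup(unsigned int irq)
{
	/*
	 * IRQF_NO_SUSPEND keeps this line enabled across
	 * suspend_device_irqs(), so interrupts routed through the
	 * cascade (HPET in this case) are not lost during suspend.
	 */
	return request_irq(irq, cascade_handler, IRQF_NO_SUSPEND,
			   "cascade", NULL);
}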
+From 6fbe5c7cd4d50582ba22c0a979131e347ec7b132 Mon Sep 17 00:00:00 2001
+From: Huacai Chen <chenhc@lemote.com>
+Date: Sun, 29 Mar 2015 10:54:05 +0800
+Subject: [PATCH 043/219] MIPS: Hibernate: flush TLB entries earlier
+Cc: mpagano@gentoo.org
+
+commit a843d00d038b11267279e3b5388222320f9ddc1d upstream.
+
+We found that a TLB mismatch not only happens after kernel resume, but
+also happens during snapshot restore. So move the flush to the beginning
+of swsusp_arch_resume().
+
+Signed-off-by: Huacai Chen <chenhc@lemote.com>
+Cc: Steven J. Hill <Steven.Hill@imgtec.com>
+Cc: linux-mips@linux-mips.org
+Cc: Fuxin Zhang <zhangfx@lemote.com>
+Cc: Zhangjin Wu <wuzhangjin@gmail.com>
+Patchwork: https://patchwork.linux-mips.org/patch/9621/
+Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/mips/power/hibernate.S | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/mips/power/hibernate.S b/arch/mips/power/hibernate.S
+index 32a7c82..e7567c8 100644
+--- a/arch/mips/power/hibernate.S
++++ b/arch/mips/power/hibernate.S
+@@ -30,6 +30,8 @@ LEAF(swsusp_arch_suspend)
+ END(swsusp_arch_suspend)
+ 
+ LEAF(swsusp_arch_resume)
++	/* Avoid TLB mismatch during and after kernel resume */
++	jal local_flush_tlb_all
+ 	PTR_L t0, restore_pblist
+ 0:
+ 	PTR_L t1, PBE_ADDRESS(t0)   /* source */
+@@ -43,7 +45,6 @@ LEAF(swsusp_arch_resume)
+ 	bne t1, t3, 1b
+ 	PTR_L t0, PBE_NEXT(t0)
+ 	bnez t0, 0b
+-	jal local_flush_tlb_all /* Avoid TLB mismatch after kernel resume */
+ 	PTR_LA t0, saved_regs
+ 	PTR_L ra, PT_R31(t0)
+ 	PTR_L sp, PT_R29(t0)
+-- 
+2.3.6
+
+
+From f0ce3bf7fa069f614101c819576cb0344076e95c Mon Sep 17 00:00:00 2001
+From: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
+Date: Tue, 24 Mar 2015 16:29:32 +0530
+Subject: [PATCH 044/219] staging: panel: fix lcd type
+Cc: mpagano@gentoo.org
+
+commit 2c20d92dad5db6440cfa88d811b69fd605240ce4 upstream.
+
+The lcd type as defined in Kconfig does not match the code; as a
+result the rs, rw and en pins were getting interchanged. Kconfig
+defines the value of PANEL_LCD to be 1 if the custom configuration is
+selected, but in the code LCD_TYPE_CUSTOM is defined as 5.
+
+My hardware is LCD_TYPE_CUSTOM, but the pins were assigned to it as
+the pins of LCD_TYPE_OLD, and it was not working. The values are now
+corrected with reference to the values defined in Kconfig, and it
+works. Checked on a JHD204A lcd with the LCD_TYPE_CUSTOM configuration.
+
+Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
+Acked-by: Willy Tarreau <w@1wt.eu>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/staging/panel/panel.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/staging/panel/panel.c b/drivers/staging/panel/panel.c
+index 6ed35b6..04fc217 100644
+--- a/drivers/staging/panel/panel.c
++++ b/drivers/staging/panel/panel.c
+@@ -335,11 +335,11 @@ static unsigned char lcd_bits[LCD_PORTS][LCD_BITS][BIT_STATES];
+  * LCD types
+  */
+ #define LCD_TYPE_NONE		0
+-#define LCD_TYPE_OLD		1
+-#define LCD_TYPE_KS0074		2
+-#define LCD_TYPE_HANTRONIX	3
+-#define LCD_TYPE_NEXCOM		4
+-#define LCD_TYPE_CUSTOM		5
++#define LCD_TYPE_CUSTOM		1
++#define LCD_TYPE_OLD		2
++#define LCD_TYPE_KS0074		3
++#define LCD_TYPE_HANTRONIX	4
++#define LCD_TYPE_NEXCOM		5
+ 
+ /*
+  * keypad types
+@@ -502,7 +502,7 @@ MODULE_PARM_DESC(keypad_type,
+ static int lcd_type = NOT_SET;
+ module_param(lcd_type, int, 0000);
+ MODULE_PARM_DESC(lcd_type,
+-		 "LCD type: 0=none, 1=old //, 2=serial ks0074, 3=hantronix //, 4=nexcom //, 5=compiled-in");
++		 "LCD type: 0=none, 1=compiled-in, 2=old, 3=serial ks0074, 4=hantronix, 5=nexcom");
+ 
+ static int lcd_height = NOT_SET;
+ module_param(lcd_height, int, 0000);
+-- 
+2.3.6
+
+
+From da01c0cfb196bef048fcb16727d646138d257ce3 Mon Sep 17 00:00:00 2001
+From: Alistair Strachan <alistair.strachan@imgtec.com>
+Date: Tue, 24 Mar 2015 14:51:31 -0700
+Subject: [PATCH 045/219] staging: android: sync: Fix memory corruption in
+ sync_timeline_signal().
+Cc: mpagano@gentoo.org
+
+commit 8e43c9c75faf2902955bd2ecd7a50a8cc41cb00a upstream.
+
+The android_fence_release() function checks for active sync points
+by calling list_empty() on the list head embedded on the sync
+point. However, it is only valid to use list_empty() on nodes that
+have been initialized with INIT_LIST_HEAD() or list_del_init().
+
+Because the list entry has likely been removed from the active list
+by sync_timeline_signal(), there is a good chance that this
+WARN_ON_ONCE() will be hit due to dangling pointers pointing at
+freed memory (even though the sync drivers did nothing wrong)
+and memory corruption will ensue as the list entry is removed for
+a second time, corrupting the active list.
+
+This problem can be reproduced quite easily with CONFIG_DEBUG_LIST=y
+and fences with more than one sync point.
+
+Signed-off-by: Alistair Strachan <alistair.strachan@imgtec.com>
+Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
+Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Colin Cross <ccross@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/staging/android/sync.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
+index 7bdb62b..f83e00c 100644
+--- a/drivers/staging/android/sync.c
++++ b/drivers/staging/android/sync.c
+@@ -114,7 +114,7 @@ void sync_timeline_signal(struct sync_timeline *obj)
+ 	list_for_each_entry_safe(pt, next, &obj->active_list_head,
+ 				 active_list) {
+ 		if (fence_is_signaled_locked(&pt->base))
+-			list_del(&pt->active_list);
++			list_del_init(&pt->active_list);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+-- 
+2.3.6
+
+
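The difference between the two list primitives deserves a minimal sketch (not the sync driver's code): list_del() poisons the entry's pointers, so a later list_empty() on it reads garbage and a second deletion corrupts the list, while list_del_init() leaves the entry self-linked and safe to test or unlink again.

#include <linux/list.h>
#include <linux/types.h>

struct pt_like {
	struct list_head active_list;
};

static void signal_pt(struct pt_like *pt)
{
	/* unlink, but leave the node pointing at itself ... */
	list_del_init(&pt->active_list);
}

static bool pt_is_active(struct pt_like *pt)
{
	/* ... so this is well defined and returns false after signal_pt() */
	return !list_empty(&pt->active_list);
}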
+From c373916a7434a49607ece05dbf0f60c697ad7291 Mon Sep 17 00:00:00 2001
+From: Malcolm Priestley <tvboxspy@gmail.com>
+Date: Wed, 1 Apr 2015 22:32:52 +0100
+Subject: [PATCH 046/219] staging: vt6655: use ieee80211_tx_info to select
+ packet type.
+Cc: mpagano@gentoo.org
+
+commit a6388e68321a1e0a0f408379c2a36396807745b3 upstream.
+
+The information needed to select the packet type is in ieee80211_tx_info:
+
+band IEEE80211_BAND_5GHZ selects PK_TYPE_11A;
+
+IEEE80211_TX_RC_USE_CTS_PROTECT in the tx_rate flags selects PK_TYPE_11GB.
+
+This ensures that the packet is always the right type.
+
+Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/staging/vt6655/rxtx.c | 14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/staging/vt6655/rxtx.c b/drivers/staging/vt6655/rxtx.c
+index 07ce3fd..fdf5c56 100644
+--- a/drivers/staging/vt6655/rxtx.c
++++ b/drivers/staging/vt6655/rxtx.c
+@@ -1308,10 +1308,18 @@ int vnt_generate_fifo_header(struct vnt_private *priv, u32 dma_idx,
+ 			    priv->hw->conf.chandef.chan->hw_value);
+ 	}
+ 
+-	if (current_rate > RATE_11M)
+-		pkt_type = (u8)priv->byPacketType;
+-	else
++	if (current_rate > RATE_11M) {
++		if (info->band == IEEE80211_BAND_5GHZ) {
++			pkt_type = PK_TYPE_11A;
++		} else {
++			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
++				pkt_type = PK_TYPE_11GB;
++			else
++				pkt_type = PK_TYPE_11GA;
++		}
++	} else {
+ 		pkt_type = PK_TYPE_11B;
++	}
+ 
+ 	/*Set fifo controls */
+ 	if (pkt_type == PK_TYPE_11A)
+-- 
+2.3.6
+
+
+From a89d16cbd3a2838b54e404d7f8dd0af60667fa21 Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.de>
+Date: Fri, 10 Apr 2015 13:19:04 +1000
+Subject: [PATCH 047/219] md/raid0: fix bug with chunksize not a power of 2.
+Cc: mpagano@gentoo.org
+
+commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd upstream.
+
+Since commit 20d0189b1012a37d2533a87fb451f7852f2418d1
+in v3.14-rc1 RAID0 has performed incorrect calculations
+when the chunksize is not a power of 2.
+
+This happens because "sector_div()" modifies its first argument, but
+this wasn't taken into account in the patch.
+
+So restore that first arg before re-using the variable.
+
+Reported-by: Joe Landman <joe.landman@gmail.com>
+Reported-by: Dave Chinner <david@fromorbit.com>
+Fixes: 20d0189b1012a37d2533a87fb451f7852f2418d1
+Signed-off-by: NeilBrown <neilb@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/md/raid0.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 3ed9f42..3b5d7f7 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -313,7 +313,7 @@ static struct strip_zone *find_zone(struct r0conf *conf,
+ 
+ /*
+  * remaps the bio to the target device. we separate two flows.
+- * power 2 flow and a general flow for the sake of perfromance
++ * power 2 flow and a general flow for the sake of performance
+ */
+ static struct md_rdev *map_sector(struct mddev *mddev, struct strip_zone *zone,
+ 				sector_t sector, sector_t *sector_offset)
+@@ -524,6 +524,7 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			split = bio;
+ 		}
+ 
++		sector = bio->bi_iter.bi_sector;
+ 		zone = find_zone(mddev->private, &sector);
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+ 		split->bi_bdev = tmp_dev->bdev;
+-- 
+2.3.6
+
+
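The subtlety here is that sector_div(n, base) divides n in place and returns the remainder, destroying its first argument. A hedged sketch of the idiom (helper name hypothetical):

#include <linux/types.h>
#include <linux/blkdev.h>	/* sector_div() */

/* Split an LBA into (chunk number, offset within chunk). */
static sector_t split_lba(sector_t lba, unsigned int chunk_sectors,
			  u32 *offset)
{
	sector_t chunk = lba;	/* work on a copy ... */

	*offset = sector_div(chunk, chunk_sectors);
	/* ... because sector_div() replaced 'chunk' with the quotient.
	 * 'lba' itself is untouched and may still be used, which is the
	 * invariant the raid0 fix restores by reloading bi_sector. */
	return chunk;
}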
+From a3ec48fa3f64ea293bfe691a02c17c0a7d2887e1 Mon Sep 17 00:00:00 2001
+From: Christoph Hellwig <hch@infradead.org>
+Date: Wed, 15 Apr 2015 09:44:37 -0700
+Subject: [PATCH 048/219] megaraid_sas: use raw_smp_processor_id()
+Cc: mpagano@gentoo.org
+
+commit 16b8528d20607925899b1df93bfd8fbab98d267c upstream.
+
+We only want to steer the I/O completion towards a queue, but don't
+actually access any per-CPU data, so the raw_ version is fine to use
+and avoids the warnings when using smp_processor_id().
+
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+Reported-by: Andy Lutomirski <luto@kernel.org>
+Tested-by: Andy Lutomirski <luto@kernel.org>
+Acked-by: Sumit Saxena <sumit.saxena@avagotech.com>
+Signed-off-by: James Bottomley <JBottomley@Odin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/scsi/megaraid/megaraid_sas_fusion.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 675b5e7..5a0800d 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -1584,11 +1584,11 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
+ 			fp_possible = io_info.fpOkForIo;
+ 	}
+ 
+-	/* Use smp_processor_id() for now until cmd->request->cpu is CPU
++	/* Use raw_smp_processor_id() for now until cmd->request->cpu is CPU
+ 	   id by default, not CPU group id, otherwise all MSI-X queues won't
+ 	   be utilized */
+ 	cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
+-		smp_processor_id() % instance->msix_vectors : 0;
++		raw_smp_processor_id() % instance->msix_vectors : 0;
+ 
+ 	if (fp_possible) {
+ 		megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
+@@ -1693,7 +1693,10 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
+ 			<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
+ 		cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
+ 		cmd->request_desc->SCSIIO.MSIxIndex =
+-			instance->msix_vectors ? smp_processor_id() % instance->msix_vectors : 0;
++			instance->msix_vectors ?
++				raw_smp_processor_id() %
++					instance->msix_vectors :
++				0;
+ 		os_timeout_value = scmd->request->timeout / HZ;
+ 
+ 		if (instance->secure_jbod_support &&
+-- 
+2.3.6
+
+
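A minimal sketch of the distinction (function name hypothetical): smp_processor_id() complains under CONFIG_DEBUG_PREEMPT when the caller is preemptible, because the CPU may change immediately afterwards; when any answer is acceptable, as with this queue-steering hint, raw_smp_processor_id() states that intent.

#include <linux/smp.h>

static unsigned int pick_reply_queue(unsigned int nr_queues)
{
	/*
	 * We might migrate right after reading the id, but every queue
	 * is functionally correct -- the value only spreads completions,
	 * so the raw_ variant is safe here and silences DEBUG_PREEMPT.
	 */
	return nr_queues ? raw_smp_processor_id() % nr_queues : 0;
}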
+From e654ded279c44285d07a31fe6d6c6fb74a9b5465 Mon Sep 17 00:00:00 2001
+From: Sudeep Holla <sudeep.holla@arm.com>
+Date: Tue, 17 Mar 2015 17:28:46 +0000
+Subject: [PATCH 049/219] drivers/base: cacheinfo: validate device node for all
+ the caches
+Cc: mpagano@gentoo.org
+
+commit 8a7d95f95c95f396decbd4cda6d4903fc4664946 upstream.
+
+On architectures that depend on DT for obtaining the cache hierarchy, we
+need to validate the device node for all the cache indices; failing to do
+so might result in wrong information being exposed to userspace.
+
+This is quite possible on initial/incomplete versions of the device
+trees. In such cases, it's better to bail out if not all of the required
+device nodes are present.
+
+This patch adds checks validating the device node for all the caches
+and doesn't initialise the cacheinfo if there's any error.
+
+Reported-by: Mark Rutland <mark.rutland@arm.com>
+Acked-by: Mark Rutland <mark.rutland@arm.com>
+Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/base/cacheinfo.c | 13 +++++++++++--
+ 1 file changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 6e64563..9c2ba1c 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -62,15 +62,21 @@ static int cache_setup_of_node(unsigned int cpu)
+ 		return -ENOENT;
+ 	}
+ 
+-	while (np && index < cache_leaves(cpu)) {
++	while (index < cache_leaves(cpu)) {
+ 		this_leaf = this_cpu_ci->info_list + index;
+ 		if (this_leaf->level != 1)
+ 			np = of_find_next_cache_node(np);
+ 		else
+ 			np = of_node_get(np);/* cpu node itself */
++		if (!np)
++			break;
+ 		this_leaf->of_node = np;
+ 		index++;
+ 	}
++
++	if (index != cache_leaves(cpu)) /* not all OF nodes populated */
++		return -ENOENT;
++
+ 	return 0;
+ }
+ 
+@@ -189,8 +195,11 @@ static int detect_cache_attributes(unsigned int cpu)
+ 	 * will be set up here only if they are not populated already
+ 	 */
+ 	ret = cache_shared_cpu_map_setup(cpu);
+-	if (ret)
++	if (ret) {
++		pr_warn("Unable to detect cache hierarcy from DT for CPU %d\n",
++			cpu);
+ 		goto free_ci;
++	}
+ 	return 0;
+ 
+ free_ci:
+-- 
+2.3.6
+
+
+From 766f84104c3a294da5c4f1660589b3d167c5b1c6 Mon Sep 17 00:00:00 2001
+From: Oliver Neukum <oneukum@suse.de>
+Date: Fri, 20 Mar 2015 14:29:34 +0100
+Subject: [PATCH 050/219] cdc-wdm: fix endianness bug in debug statements
+Cc: mpagano@gentoo.org
+
+commit 323ece54e0761198946ecd0c2091f1d2bfdfcb64 upstream.
+
+Values directly from descriptors given in debug statements
+must be converted to native endianness.
+
+Signed-off-by: Oliver Neukum <oneukum@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/class/cdc-wdm.c | 12 +++++++-----
+ 1 file changed, 7 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a051a7a..a81f9dd 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -245,7 +245,7 @@ static void wdm_int_callback(struct urb *urb)
+ 	case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
+ 		dev_dbg(&desc->intf->dev,
+ 			"NOTIFY_RESPONSE_AVAILABLE received: index %d len %d",
+-			dr->wIndex, dr->wLength);
++			le16_to_cpu(dr->wIndex), le16_to_cpu(dr->wLength));
+ 		break;
+ 
+ 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
+@@ -262,7 +262,9 @@ static void wdm_int_callback(struct urb *urb)
+ 		clear_bit(WDM_POLL_RUNNING, &desc->flags);
+ 		dev_err(&desc->intf->dev,
+ 			"unknown notification %d received: index %d len %d\n",
+-			dr->bNotificationType, dr->wIndex, dr->wLength);
++			dr->bNotificationType,
++			le16_to_cpu(dr->wIndex),
++			le16_to_cpu(dr->wLength));
+ 		goto exit;
+ 	}
+ 
+@@ -408,7 +410,7 @@ static ssize_t wdm_write
+ 			     USB_RECIP_INTERFACE);
+ 	req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ 	req->wValue = 0;
+-	req->wIndex = desc->inum;
++	req->wIndex = desc->inum; /* already converted */
+ 	req->wLength = cpu_to_le16(count);
+ 	set_bit(WDM_IN_USE, &desc->flags);
+ 	desc->outbuf = buf;
+@@ -422,7 +424,7 @@ static ssize_t wdm_write
+ 		rv = usb_translate_errors(rv);
+ 	} else {
+ 		dev_dbg(&desc->intf->dev, "Tx URB has been submitted index=%d",
+-			req->wIndex);
++			le16_to_cpu(req->wIndex));
+ 	}
+ out:
+ 	usb_autopm_put_interface(desc->intf);
+@@ -820,7 +822,7 @@ static int wdm_create(struct usb_interface *intf, struct usb_endpoint_descriptor
+ 	desc->irq->bRequestType = (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ 	desc->irq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
+ 	desc->irq->wValue = 0;
+-	desc->irq->wIndex = desc->inum;
++	desc->irq->wIndex = desc->inum; /* already converted */
+ 	desc->irq->wLength = cpu_to_le16(desc->wMaxCommand);
+ 
+ 	usb_fill_control_urb(
+-- 
+2.3.6
+
+
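The underlying rule: multi-byte USB descriptor fields are little-endian (__le16) on the wire, so they must pass through le16_to_cpu() before being printed or compared on the host, while fields already stored with cpu_to_le16() (or copied from such a field, like desc->inum above) must not be converted a second time. A minimal sketch:

#include <linux/printk.h>
#include <linux/usb/ch9.h>
#include <asm/byteorder.h>

static void show_setup(const struct usb_ctrlrequest *dr)
{
	/* wIndex/wLength are __le16: convert to native endianness first */
	pr_debug("index %u len %u\n",
		 le16_to_cpu(dr->wIndex), le16_to_cpu(dr->wLength));
}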
+From 7df0c5a403d2e9a1698a6ebdcf6e37a0639aad85 Mon Sep 17 00:00:00 2001
+From: Geert Uytterhoeven <geert+renesas@glider.be>
+Date: Wed, 18 Feb 2015 17:34:59 +0100
+Subject: [PATCH 051/219] mmc: tmio: Remove bogus un-initialization in
+ tmio_mmc_host_free()
+Cc: mpagano@gentoo.org
+
+commit 13a6a2ed1f5e77ae47c2b1a8e3bf22b2fa2d56ba upstream.
+
+If CONFIG_DEBUG_SLAB=y:
+
+    sh_mobile_sdhi ee100000.sd: Got CD GPIO
+    sh_mobile_sdhi ee100000.sd: Got WP GPIO
+    platform ee100000.sd: Driver sh_mobile_sdhi requests probe deferral
+    ...
+    Slab corruption (Not tainted): kmalloc-1024 start=ed8b3c00, len=1024
+    2d0: 00 00 00 00 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  ....kkkkkkkkkkkk
+    Prev obj: start=ed8b3800, len=1024
+    000: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+    010: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
+
+Struct tmio_mmc_host is embedded inside struct mmc_host, and thus is
+freed by the call to mmc_free_host(). Hence it must not be written to
+afterwards, as that will corrupt freed (and perhaps already reused)
+memory.
+
+Fixes: 94b110aff8679b14 ("mmc: tmio: add tmio_mmc_host_alloc/free()")
+Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mmc/host/tmio_mmc_pio.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
+index a31c357..dba7e1c 100644
+--- a/drivers/mmc/host/tmio_mmc_pio.c
++++ b/drivers/mmc/host/tmio_mmc_pio.c
+@@ -1073,8 +1073,6 @@ EXPORT_SYMBOL(tmio_mmc_host_alloc);
+ void tmio_mmc_host_free(struct tmio_mmc_host *host)
+ {
+ 	mmc_free_host(host->mmc);
+-
+-	host->mmc = NULL;
+ }
+ EXPORT_SYMBOL(tmio_mmc_host_free);
+ 
+-- 
+2.3.6
+
+
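The bug class generalizes: when one structure is embedded in another allocation, freeing the container frees the member too, and any later store through the member is a write to freed memory. A generic sketch (names hypothetical):

#include <linux/slab.h>

struct inner {
	int state;
};

struct outer {
	long other_fields;
	struct inner member;	/* embedded, not separately allocated */
};

static void release(struct outer *o)
{
	kfree(o);		/* frees 'member' along with the container */
	/*
	 * o->member.state = 0;  -- would scribble on freed memory, just
	 * like the 'host->mmc = NULL' removed by this patch.
	 */
}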
+From 85895968a9444e810f96cc951c6b5fc7dd183296 Mon Sep 17 00:00:00 2001
+From: Chen-Yu Tsai <wens@csie.org>
+Date: Tue, 3 Mar 2015 09:44:40 +0800
+Subject: [PATCH 052/219] mmc: sunxi: Use devm_reset_control_get_optional() for
+ reset control
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 9e71c589e44ddf2b86f361c81e360c6b0d0354b1 upstream.
+
+The reset control for the sunxi mmc controller is optional. Some
+newer platforms (sun6i, sun8i, sun9i) have it, while older ones
+(sun4i, sun5i, sun7i) don't.
+
+Use the properly stubbed _optional version so the driver does not
+fail to compile when RESET_CONTROLLER=n.
+
+This patch also adds a check for deferred probing on the reset
+control.
+
+Signed-off-by: Chen-Yu Tsai <wens@csie.org>
+Acked-by: David Lanzendörfer <david.lanzendoerfer@o2s.ch>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mmc/host/sunxi-mmc.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index e8a4218..459ed1b 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -930,7 +930,9 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
+ 		return PTR_ERR(host->clk_sample);
+ 	}
+ 
+-	host->reset = devm_reset_control_get(&pdev->dev, "ahb");
++	host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
++	if (PTR_ERR(host->reset) == -EPROBE_DEFER)
++		return PTR_ERR(host->reset);
+ 
+ 	ret = clk_prepare_enable(host->clk_ahb);
+ 	if (ret) {
+-- 
+2.3.6
+
+
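The general shape of handling an optional resource in probe(), sketched with a hypothetical wrapper: only -EPROBE_DEFER is treated as fatal-for-now, while "not present" lets the probe continue.

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

static int get_optional_reset(struct platform_device *pdev,
			      struct reset_control **rst)
{
	*rst = devm_reset_control_get_optional(&pdev->dev, "ahb");
	if (PTR_ERR(*rst) == -EPROBE_DEFER)
		return -EPROBE_DEFER;	/* provider not ready; retry probe */

	/* Any other error means no reset line: proceed without one. */
	return 0;
}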
+From 662552a3bf88447e8985bdad78fc7e548487416b Mon Sep 17 00:00:00 2001
+From: Lucas Stach <l.stach@pengutronix.de>
+Date: Wed, 1 Apr 2015 10:46:15 +0200
+Subject: [PATCH 053/219] spi: imx: read back the RX/TX watermark levels
+ earlier
+Cc: mpagano@gentoo.org
+
+commit f511ab09dfb0fe7b2335eccac51ff9f001a32e4a upstream.
+
+They are used to decide if the controller can do DMA on a buffer
+of a specific length and thus are needed before any transfer is attempted.
+
+This fixes a memory leak where the SPI core uses the driver's can_dma()
+callback to determine if a buffer needs to be mapped. As the watermark
+levels aren't correct at that point, the driver falsely claims to be able
+to DMA the buffer when in fact it isn't.
+After the transfer has been done, the core uses the same callback to
+determine if it needs to unmap the buffers. As the driver now correctly
+claims not to be able to DMA the buffer, the core doesn't attempt to
+unmap it, which leaves the SGT leaking.
+
+Fixes: f62caccd12c17e4 (spi: spi-imx: add DMA support)
+Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/spi/spi-imx.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6fea4af..aea3a67 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -370,8 +370,6 @@ static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx,
+ 	if (spi_imx->dma_is_inited) {
+ 		dma = readl(spi_imx->base + MX51_ECSPI_DMA);
+ 
+-		spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+-		spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		spi_imx->rxt_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		rx_wml_cfg = spi_imx->rx_wml << MX51_ECSPI_DMA_RX_WML_OFFSET;
+ 		tx_wml_cfg = spi_imx->tx_wml << MX51_ECSPI_DMA_TX_WML_OFFSET;
+@@ -868,6 +866,8 @@ static int spi_imx_sdma_init(struct device *dev, struct spi_imx_data *spi_imx,
+ 	master->max_dma_len = MAX_SDMA_BD_BYTES;
+ 	spi_imx->bitbang.master->flags = SPI_MASTER_MUST_RX |
+ 					 SPI_MASTER_MUST_TX;
++	spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
++	spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 	spi_imx->dma_is_inited = 1;
+ 
+ 	return 0;
+-- 
+2.3.6
+
+
+From 721669bff3eaa852476783845293dca50431ce5b Mon Sep 17 00:00:00 2001
+From: Ian Abbott <abbotti@mev.co.uk>
+Date: Mon, 23 Mar 2015 17:50:27 +0000
+Subject: [PATCH 054/219] spi: spidev: fix possible arithmetic overflow for
+ multi-transfer message
+Cc: mpagano@gentoo.org
+
+commit f20fbaad7620af2df36a1f9d1c9ecf48ead5b747 upstream.
+
+`spidev_message()` sums the lengths of the individual SPI transfers to
+determine the overall SPI message length.  It restricts the total
+length, returning an error if too long, but it does not check for
+arithmetic overflow.  For example, if the SPI message consisted of two
+transfers and the first has a length of 10 and the second has a length
+of (__u32)(-1), the total length would be seen as 9, even though the
+second transfer is actually very long.  If the second transfer specifies
+a null `rx_buf` and a non-null `tx_buf`, the `copy_from_user()` could
+overrun the spidev's pre-allocated tx buffer before it reaches an
+invalid user memory address.  Fix it by checking that neither the total
+nor the individual transfer lengths exceed the maximum allowed value.
+
+Thanks to Dan Carpenter for reporting the potential integer overflow.
+
+Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/spi/spidev.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 4eb7a98..7bf5186 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -245,7 +245,10 @@ static int spidev_message(struct spidev_data *spidev,
+ 		k_tmp->len = u_tmp->len;
+ 
+ 		total += k_tmp->len;
+-		if (total > bufsiz) {
++		/* Check total length of transfers.  Also check each
++		 * transfer length to avoid arithmetic overflow.
++		 */
++		if (total > bufsiz || k_tmp->len > bufsiz) {
+ 			status = -EMSGSIZE;
+ 			goto done;
+ 		}
+-- 
+2.3.6
+
+
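The overflow is easy to reproduce outside the kernel; this standalone sketch (limit value invented) shows 10 + (u32)-1 wrapping to 9 and slipping past a total-only check, and how the added per-transfer test catches it:

#include <stdint.h>
#include <stdio.h>

#define LIMIT 4096u	/* stands in for the spidev buffer size */

static int lengths_ok(const uint32_t *len, unsigned int n)
{
	uint32_t total = 0;

	for (unsigned int i = 0; i < n; i++) {
		total += len[i];		/* may wrap around */
		if (total > LIMIT || len[i] > LIMIT)
			return 0;		/* per-term test catches the wrap */
	}
	return 1;
}

int main(void)
{
	uint32_t xfers[] = { 10, UINT32_MAX };	/* total wraps to 9 */

	printf("%s\n", lengths_ok(xfers, 2) ? "accepted" : "rejected");
	return 0;	/* prints "rejected" */
}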
+From 855715fa0e283d4ff8280c79ac2c531116bc3290 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Thu, 12 Mar 2015 08:43:59 +0100
+Subject: [PATCH 055/219] compal-laptop: Fix leaking hwmon device
+Cc: mpagano@gentoo.org
+
+commit ad774702f1705c04e5fa492b793d8d477a504fa6 upstream.
+
+Commit c2be45f09bb0 ("compal-laptop: Use
+devm_hwmon_device_register_with_groups") wanted to change the
+registration of the hwmon device to the resource-managed version. It
+mostly did so, except for the main thing: it forgot to call the
+devm-like function, so the hwmon device leaked after device removal or
+probe failure.
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: c2be45f09bb0 ("compal-laptop: Use devm_hwmon_device_register_with_groups")
+Acked-by: Guenter Roeck <linux@roeck-us.net>
+Acked-by: Darren Hart <dvhart@linux.intel.com>
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/platform/x86/compal-laptop.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
+index 15c0fab..eb9885e 100644
+--- a/drivers/platform/x86/compal-laptop.c
++++ b/drivers/platform/x86/compal-laptop.c
+@@ -1026,9 +1026,9 @@ static int compal_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
+-						      "compal", data,
+-						      compal_hwmon_groups);
++	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
++							   "compal", data,
++							   compal_hwmon_groups);
+ 	if (IS_ERR(hwmon_dev)) {
+ 		err = PTR_ERR(hwmon_dev);
+ 		goto remove;
+-- 
+2.3.6
+
+
+From 7d91365ba6ce7256b1afb1197aecf3dd0dca6e65 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Thu, 12 Mar 2015 08:44:00 +0100
+Subject: [PATCH 056/219] compal-laptop: Check return value of
+ power_supply_register
+Cc: mpagano@gentoo.org
+
+commit 1915a718b1872edffcb13e5436a9f7302d3d36f0 upstream.
+
+The return value of power_supply_register() call was not checked and
+even on error probe() function returned 0. If registering failed then
+during unbind the driver tried to unregister power supply which was not
+actually registered.
+
+This could lead to memory corruption because power_supply_unregister()
+unconditionally cleans up given power supply.
+
+Fix this by checking return status of power_supply_register() call. In
+case of failure, clean up sysfs entries and fail the probe.
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: 9be0fcb5ed46 ("compal-laptop: add JHL90, battery & hwmon interface")
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/platform/x86/compal-laptop.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
+index eb9885e..bceb30b 100644
+--- a/drivers/platform/x86/compal-laptop.c
++++ b/drivers/platform/x86/compal-laptop.c
+@@ -1036,7 +1036,9 @@ static int compal_probe(struct platform_device *pdev)
+ 
+ 	/* Power supply */
+ 	initialize_power_supply_data(data);
+-	power_supply_register(&compal_device->dev, &data->psy);
++	err = power_supply_register(&compal_device->dev, &data->psy);
++	if (err < 0)
++		goto remove;
+ 
+ 	platform_set_drvdata(pdev, data);
+ 
+-- 
+2.3.6
+
+
+From 676ee802b67bf6ea0287ab5b25ae3f551cf27f74 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt@goodmis.org>
+Date: Tue, 17 Mar 2015 10:40:38 -0400
+Subject: [PATCH 057/219] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
+Cc: mpagano@gentoo.org
+
+commit 80a9b64e2c156b6523e7a01f2ba6e5d86e722814 upstream.
+
+It has come to my attention that this_cpu_read/write are horrible on
+architectures other than x86. Worse yet, they actually disable
+preemption or interrupts! This caused some unexpected tracing results
+on ARM.
+
+   101.356868: preempt_count_add <-ring_buffer_lock_reserve
+   101.356870: preempt_count_sub <-ring_buffer_lock_reserve
+
+The ring_buffer_lock_reserve has recursion protection that requires
+accessing a per cpu variable. But since preempt_disable() is traced, it
+too got traced while accessing the variable that is supposed to prevent
+recursion like this.
+
+The generic version of this_cpu_read() and write() are:
+
+ #define this_cpu_generic_read(pcp)					\
+ ({	typeof(pcp) ret__;						\
+	preempt_disable();						\
+	ret__ = *this_cpu_ptr(&(pcp));					\
+	preempt_enable();						\
+	ret__;								\
+ })
+
+ #define this_cpu_generic_to_op(pcp, val, op)				\
+ do {									\
+	unsigned long flags;						\
+	raw_local_irq_save(flags);					\
+	*__this_cpu_ptr(&(pcp)) op val;					\
+	raw_local_irq_restore(flags);					\
+ } while (0)
+
+Which is unacceptable for locations that know they are within preempt
+disabled or interrupt disabled locations.
+
+Paul McKenney stated that __this_cpu_() versions produce much better code on
+other architectures than this_cpu_() does, if we know that the call is done in
+a preempt disabled location.
+
+I also changed the recursive_unlock() to use two local variables instead
+of accessing the per_cpu variable twice.
+
+Link: http://lkml.kernel.org/r/20150317114411.GE3589@linux.vnet.ibm.com
+Link: http://lkml.kernel.org/r/20150317104038.312e73d1@gandalf.local.home
+
+Acked-by: Christoph Lameter <cl@linux.com>
+Reported-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
+Tested-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
+Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ kernel/trace/ring_buffer.c | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5040d44..922048a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2679,7 +2679,7 @@ static DEFINE_PER_CPU(unsigned int, current_context);
+ 
+ static __always_inline int trace_recursive_lock(void)
+ {
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 	int bit;
+ 
+ 	if (in_interrupt()) {
+@@ -2696,18 +2696,17 @@ static __always_inline int trace_recursive_lock(void)
+ 		return 1;
+ 
+ 	val |= (1 << bit);
+-	this_cpu_write(current_context, val);
++	__this_cpu_write(current_context, val);
+ 
+ 	return 0;
+ }
+ 
+ static __always_inline void trace_recursive_unlock(void)
+ {
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 
+-	val--;
+-	val &= this_cpu_read(current_context);
+-	this_cpu_write(current_context, val);
++	val &= val & (val - 1);
++	__this_cpu_write(current_context, val);
+ }
+ 
+ #else
+-- 
+2.3.6
+
+
+From 85020c092b437aaceec966678ec5fd9f7792b547 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Fri, 20 Feb 2015 14:32:22 +0100
+Subject: [PATCH 058/219] power_supply: twl4030_madc: Check return value of
+ power_supply_register
+Cc: mpagano@gentoo.org
+
+commit 68c3ed6fa7e0d69529ced772d650ab128916a81d upstream.
+
+The return value of the power_supply_register() call was not checked,
+and even on error probe() returned 0. If registration failed, then
+during unbind the driver tried to unregister a power supply that was
+not actually registered.
+
+This could lead to memory corruption because power_supply_unregister()
+unconditionally cleans up given power supply.
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: da0a00ebc239 ("power: Add twl4030_madc battery driver.")
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/power/twl4030_madc_battery.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/power/twl4030_madc_battery.c b/drivers/power/twl4030_madc_battery.c
+index 7ef445a..cf90760 100644
+--- a/drivers/power/twl4030_madc_battery.c
++++ b/drivers/power/twl4030_madc_battery.c
+@@ -192,6 +192,7 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
+ {
+ 	struct twl4030_madc_battery *twl4030_madc_bat;
+ 	struct twl4030_madc_bat_platform_data *pdata = pdev->dev.platform_data;
++	int ret = 0;
+ 
+ 	twl4030_madc_bat = kzalloc(sizeof(*twl4030_madc_bat), GFP_KERNEL);
+ 	if (!twl4030_madc_bat)
+@@ -216,9 +217,11 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
+ 
+ 	twl4030_madc_bat->pdata = pdata;
+ 	platform_set_drvdata(pdev, twl4030_madc_bat);
+-	power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	ret = power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	if (ret < 0)
++		kfree(twl4030_madc_bat);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int twl4030_madc_battery_remove(struct platform_device *pdev)
+-- 
+2.3.6
+
+
+From e7b8d14c9be1ddb14796569a636807647e30724c Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Fri, 20 Feb 2015 14:32:25 +0100
+Subject: [PATCH 059/219] power_supply: lp8788-charger: Fix leaked power supply
+ on probe fail
+Cc: mpagano@gentoo.org
+
+commit a7117f81e8391e035c49b3440792f7e6cea28173 upstream.
+
+The driver forgot to unregister the charger power supply if registering
+the battery supply failed in probe(). In that case the memory associated
+with the power supply leaked.
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: 98a276649358 ("power_supply: Add new lp8788 charger driver")
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/power/lp8788-charger.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/power/lp8788-charger.c b/drivers/power/lp8788-charger.c
+index 21fc233..176dab2 100644
+--- a/drivers/power/lp8788-charger.c
++++ b/drivers/power/lp8788-charger.c
+@@ -417,8 +417,10 @@ static int lp8788_psy_register(struct platform_device *pdev,
+ 	pchg->battery.num_properties = ARRAY_SIZE(lp8788_battery_prop);
+ 	pchg->battery.get_property = lp8788_battery_get_property;
+ 
+-	if (power_supply_register(&pdev->dev, &pchg->battery))
++	if (power_supply_register(&pdev->dev, &pchg->battery)) {
++		power_supply_unregister(&pchg->charger);
+ 		return -EPERM;
++	}
+ 
+ 	return 0;
+ }
+-- 
+2.3.6
+
+
+From a8cb866f5168eaec313528f7059b0025b859cccf Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Fri, 20 Feb 2015 14:32:23 +0100
+Subject: [PATCH 060/219] power_supply: ipaq_micro_battery: Fix leaking
+ workqueue
+Cc: mpagano@gentoo.org
+
+commit f852ec461e24504690445e7d281cbe806df5ccef upstream.
+
+The driver allocates a singlethread workqueue in probe() but never
+destroys it during removal.
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: 00a588f9d27f ("power: add driver for battery reading on iPaq h3xxx")
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/power/ipaq_micro_battery.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
+index 9d69460..698cf16 100644
+--- a/drivers/power/ipaq_micro_battery.c
++++ b/drivers/power/ipaq_micro_battery.c
+@@ -251,6 +251,7 @@ static int micro_batt_remove(struct platform_device *pdev)
+ 	power_supply_unregister(&micro_ac_power);
+ 	power_supply_unregister(&micro_batt_power);
+ 	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
+ 
+ 	return 0;
+ }
+-- 
+2.3.6
+
+
+From 640e9bd83b3a3bc313eb0ade22effbab5c135a76 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Fri, 20 Feb 2015 14:32:24 +0100
+Subject: [PATCH 061/219] power_supply: ipaq_micro_battery: Check return values
+ in probe
+Cc: mpagano@gentoo.org
+
+commit a2c1d531854c4319610f1d83351213b47a633969 upstream.
+
+The return values of the create_singlethread_workqueue() and
+power_supply_register() calls were not checked, so even on error the
+probe() function returned 0.
+
+1. If allocation of workqueue failed (returning NULL) then further
+   accesses could lead to NULL pointer dereference. The
+   queue_delayed_work() expects workqueue to be non-NULL.
+
+2. If registration of power supply failed then during unbind the driver
+   tried to unregister power supply which was not actually registered.
+   This could lead to memory corruption because
+   power_supply_unregister() unconditionally cleans up given power
+   supply.
+
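+A generic sketch of the resulting unwind shape (illustrative names, not
+the driver's): each error label undoes, in reverse order, exactly the
+steps that had already succeeded.
+
+	wq = create_singlethread_workqueue("example-wq");
+	if (!wq)
+		return -ENOMEM;		/* nothing to undo yet */
+
+	ret = register_first();
+	if (ret < 0)
+		goto err_wq;
+
+	ret = register_second();
+	if (ret < 0)
+		goto err_first;
+
+	return 0;
+
+err_first:
+	unregister_first();
+err_wq:
+	destroy_workqueue(wq);
+	return ret;
+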
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Fixes: 00a588f9d27f ("power: add driver for battery reading on iPaq h3xxx")
+Signed-off-by: Sebastian Reichel <sre@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/power/ipaq_micro_battery.c | 21 +++++++++++++++++++--
+ 1 file changed, 19 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
+index 698cf16..96b15e0 100644
+--- a/drivers/power/ipaq_micro_battery.c
++++ b/drivers/power/ipaq_micro_battery.c
+@@ -226,6 +226,7 @@ static struct power_supply micro_ac_power = {
+ static int micro_batt_probe(struct platform_device *pdev)
+ {
+ 	struct micro_battery *mb;
++	int ret;
+ 
+ 	mb = devm_kzalloc(&pdev->dev, sizeof(*mb), GFP_KERNEL);
+ 	if (!mb)
+@@ -233,14 +234,30 @@ static int micro_batt_probe(struct platform_device *pdev)
+ 
+ 	mb->micro = dev_get_drvdata(pdev->dev.parent);
+ 	mb->wq = create_singlethread_workqueue("ipaq-battery-wq");
++	if (!mb->wq)
++		return -ENOMEM;
++
+ 	INIT_DELAYED_WORK(&mb->update, micro_battery_work);
+ 	platform_set_drvdata(pdev, mb);
+ 	queue_delayed_work(mb->wq, &mb->update, 1);
+-	power_supply_register(&pdev->dev, &micro_batt_power);
+-	power_supply_register(&pdev->dev, &micro_ac_power);
++
++	ret = power_supply_register(&pdev->dev, &micro_batt_power);
++	if (ret < 0)
++		goto batt_err;
++
++	ret = power_supply_register(&pdev->dev, &micro_ac_power);
++	if (ret < 0)
++		goto ac_err;
+ 
+ 	dev_info(&pdev->dev, "iPAQ micro battery driver\n");
+ 	return 0;
++
++ac_err:
++	power_supply_unregister(&micro_ac_power);
++batt_err:
++	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
++	return ret;
+ }
+ 
+ static int micro_batt_remove(struct platform_device *pdev)
+-- 
+2.3.6
+
+
+From 4fc2e2c56db0c05c62444ed7bc8d285704155386 Mon Sep 17 00:00:00 2001
+From: Oliver Neukum <oneukum@suse.de>
+Date: Wed, 25 Mar 2015 15:13:36 +0100
+Subject: [PATCH 062/219] HID: add HP OEM mouse to quirk ALWAYS_POLL
+Cc: mpagano@gentoo.org
+
+commit 7a8e53c414c8183e8735e3b08d9a776200e6e665 upstream.
+
+This mouse needs QUIRK_ALWAYS_POLL.
+
+Signed-off-by: Oliver Neukum <oneukum@suse.de>
+Signed-off-by: Jiri Kosina <jkosina@suse.cz>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hid/hid-ids.h           | 3 +++
+ drivers/hid/usbhid/hid-quirks.c | 1 +
+ 2 files changed, 4 insertions(+)
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9c47867..7ace715 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -459,6 +459,9 @@
+ #define USB_DEVICE_ID_UGCI_FLYING	0x0020
+ #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 
++#define USB_VENDOR_ID_HP		0x03f0
++#define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
++
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
+ 
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index a821277..fe6c60d 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -78,6 +78,7 @@ static const struct hid_blacklist {
+ 	{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+ 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
+-- 
+2.3.6
+
+
+From 66997b1d6c47e793556da41877262f5ac92e8d4d Mon Sep 17 00:00:00 2001
+From: Oliver Neukum <oneukum@suse.de>
+Date: Wed, 25 Mar 2015 15:38:31 +0100
+Subject: [PATCH 063/219] HID: add quirk for PIXART OEM mouse used by HP
+Cc: mpagano@gentoo.org
+
+commit b70b82580248b5393241c986082842ec05a2b7d7 upstream.
+
+This mouse is also known under other IDs. It needs the quirk, or it will
+disconnect in runlevel 1 or 3.
+
+Signed-off-by: Oliver Neukum <oneukum@suse.de>
+Signed-off-by: Jiri Kosina <jkosina@suse.cz>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hid/hid-ids.h           | 1 +
+ drivers/hid/usbhid/hid-quirks.c | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 7ace715..7fe5590 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -461,6 +461,7 @@
+ 
+ #define USB_VENDOR_ID_HP		0x03f0
+ #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
++#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
+ 
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index fe6c60d..4e3ae9f 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -79,6 +79,7 @@ static const struct hid_blacklist {
+ 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+ 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
+-- 
+2.3.6
+
+
+From 3bc3783ea692a04256e2cf027bfd98bf7b8d82a6 Mon Sep 17 00:00:00 2001
+From: Andrew Elble <aweits@rit.edu>
+Date: Mon, 23 Feb 2015 08:51:24 -0500
+Subject: [PATCH 064/219] NFS: fix BUG() crash in notify_change() with patch to
+ chown_common()
+Cc: mpagano@gentoo.org
+
+commit c1b8940b42bb6487b10f2267a96b486276ce9ff7 upstream.
+
+We have observed a BUG() crash in fs/attr.c:notify_change(). The crash
+occurs during an rsync into a filesystem that is exported via NFS.
+
+1.) fs/attr.c:notify_change() modifies the caller's version of attr.
+2.) 6de0ec00ba8d ("VFS: make notify_change pass ATTR_KILL_S*ID to
+    setattr operations") introduced a BUG() restriction such that "no
+    function will ever call notify_change() with both ATTR_MODE and
+    ATTR_KILL_S*ID set". Under some circumstances though, it will have
+    assisted in setting the caller's version of attr to this very
+    combination.
+3.) 27ac0ffeac80 ("locks: break delegations on any attribute
+    modification") introduced code to handle breaking
+    delegations. This can result in notify_change() being re-called. attr
+    _must_ be explicitly reset to avoid triggering the BUG() established
+    in #2.
+4.) The path that triggers this is via fs/open.c:chown_common().
+    The combination of attr flags set here and in the first call to
+    notify_change() along with a later failed break_deleg_wait()
+    results in notify_change() being called again via retry_deleg
+    without resetting attr.
+
+The solution is to move the retry_deleg label in chown_common() a bit
+further up, so that attr is completely rebuilt on every retry.
+
+There are other places where this could seemingly occur, such as
+fs/utimes.c:utimes_common(), but there the attr flags are not initially
+set in such a way as to trigger this.
+
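+A condensed sketch of the fixed flow in chown_common() (paraphrasing the
+diff below, not a verbatim copy of it):
+
+retry_deleg:
+	newattrs.ia_valid = ATTR_CTIME | ...;	/* rebuilt on every pass */
+	...
+	error = notify_change(path->dentry, &newattrs, &delegated_inode);
+	/* notify_change() may have morphed ATTR_KILL_S*ID into ATTR_MODE
+	 * inside newattrs; reusing it unmodified on the retry is what
+	 * used to trip the BUG(). */
+	if (delegated_inode) {
+		error = break_deleg_wait(&delegated_inode);
+		if (!error)
+			goto retry_deleg;	/* attr is rebuilt above */
+	}
+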
+Fixes: 27ac0ffeac80 ("locks: break delegations on any attribute modification")
+Reported-by: Eric Meddaugh <etmsys@rit.edu>
+Tested-by: Eric Meddaugh <etmsys@rit.edu>
+Signed-off-by: Andrew Elble <aweits@rit.edu>
+Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/open.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/open.c b/fs/open.c
+index 33f9cbf..44a3be1 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -570,6 +570,7 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	uid = make_kuid(current_user_ns(), user);
+ 	gid = make_kgid(current_user_ns(), group);
+ 
++retry_deleg:
+ 	newattrs.ia_valid =  ATTR_CTIME;
+ 	if (user != (uid_t) -1) {
+ 		if (!uid_valid(uid))
+@@ -586,7 +587,6 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	if (!S_ISDIR(inode->i_mode))
+ 		newattrs.ia_valid |=
+ 			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
+-retry_deleg:
+ 	mutex_lock(&inode->i_mutex);
+ 	error = security_path_chown(path, uid, gid);
+ 	if (!error)
+-- 
+2.3.6
+
+
+From 46d09e1c86167373dcb343cfd6c901c78624ff01 Mon Sep 17 00:00:00 2001
+From: Russell King <rmk+kernel@arm.linux.org.uk>
+Date: Wed, 1 Apr 2015 16:20:39 +0100
+Subject: [PATCH 065/219] ARM: fix broken hibernation
+Cc: mpagano@gentoo.org
+
+commit 767bf7e7a1e82a81c59778348d156993d0a6175d upstream.
+
+Normally, when a CPU wants to clear a cache line to zero in the external
+L2 cache, it would generate bus cycles to write each word as it would do
+with any other data access.
+
+However, a Cortex A9 connected to a L2C-310 has a specific feature where
+the CPU can detect this operation, and signal that it wants to zero an
+entire cache line.  This feature, known as Full Line of Zeros (FLZ),
+involves a non-standard AXI signalling mechanism which only the L2C-310
+can properly interpret.
+
+There are separate enable bits in both the L2C-310 and the Cortex A9 -
+the L2C-310 needs to be enabled and have the FLZ enable bit set in the
+auxiliary control register before the Cortex A9 has this feature
+enabled.
+
+Unfortunately, the suspend code was not respecting this - it's not
+obvious from the code:
+
+swsusp_arch_suspend()
+ cpu_suspend() /* saves the Cortex A9 auxiliary control register */
+  arch_save_image()
+  soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
+   cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
+
+At this point, we end up with the L2C disabled, but the Cortex A9 with
+FLZ enabled - which means any memset() or zeroing of a full cache line
+will fail to take effect.
+
+A similar issue exists in the resume path, but it's slightly more
+complex:
+
+swsusp_arch_suspend()
+ cpu_suspend() /* saves the Cortex A9 auxiliary control register */
+  arch_save_image() /* image with A9 auxcr saved */
+...
+swsusp_arch_resume()
+ call_with_stack()
+  arch_restore_image() /* restores image with A9 auxcr saved above */
+  soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
+   cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
+
+Again, here we end up with the L2C disabled, but Cortex A9 FLZ enabled.
+
+There's no need to turn off the L2C in either of these two paths; there
+are benefits from not doing so - for example, the page copies will be
+faster with the L2C enabled.
+
+Hence, fix this by providing a variant of soft_restart() which can be
+used without turning the L2 cache controller off, and use it in both
+of these paths to keep the L2C enabled across the respective resume
+transitions.
+
+Fixes: 8ef418c7178f ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
+Reported-by: Sean Cross <xobs@kosagi.com>
+Tested-by: Sean Cross <xobs@kosagi.com>
+Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/kernel/hibernate.c |  5 +++--
+ arch/arm/kernel/process.c   | 10 ++++++++--
+ arch/arm/kernel/reboot.h    |  6 ++++++
+ 3 files changed, 17 insertions(+), 4 deletions(-)
+ create mode 100644 arch/arm/kernel/reboot.h
+
+diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c
+index c4cc50e..cfb354f 100644
+--- a/arch/arm/kernel/hibernate.c
++++ b/arch/arm/kernel/hibernate.c
+@@ -22,6 +22,7 @@
+ #include <asm/suspend.h>
+ #include <asm/memory.h>
+ #include <asm/sections.h>
++#include "reboot.h"
+ 
+ int pfn_is_nosave(unsigned long pfn)
+ {
+@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused)
+ 
+ 	ret = swsusp_save();
+ 	if (ret == 0)
+-		soft_restart(virt_to_phys(cpu_resume));
++		_soft_restart(virt_to_phys(cpu_resume), false);
+ 	return ret;
+ }
+ 
+@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused)
+ 	for (pbe = restore_pblist; pbe; pbe = pbe->next)
+ 		copy_page(pbe->orig_address, pbe->address);
+ 
+-	soft_restart(virt_to_phys(cpu_resume));
++	_soft_restart(virt_to_phys(cpu_resume), false);
+ }
+ 
+ static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index fdfa3a7..2bf1a16 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -41,6 +41,7 @@
+ #include <asm/system_misc.h>
+ #include <asm/mach/time.h>
+ #include <asm/tls.h>
++#include "reboot.h"
+ 
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ #include <linux/stackprotector.h>
+@@ -95,7 +96,7 @@ static void __soft_restart(void *addr)
+ 	BUG();
+ }
+ 
+-void soft_restart(unsigned long addr)
++void _soft_restart(unsigned long addr, bool disable_l2)
+ {
+ 	u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
+ 
+@@ -104,7 +105,7 @@ void soft_restart(unsigned long addr)
+ 	local_fiq_disable();
+ 
+ 	/* Disable the L2 if we're the last man standing. */
+-	if (num_online_cpus() == 1)
++	if (disable_l2)
+ 		outer_disable();
+ 
+ 	/* Change to the new stack and continue with the reset. */
+@@ -114,6 +115,11 @@ void soft_restart(unsigned long addr)
+ 	BUG();
+ }
+ 
++void soft_restart(unsigned long addr)
++{
++	_soft_restart(addr, num_online_cpus() == 1);
++}
++
+ /*
+  * Function pointers to optional machine specific functions
+  */
+diff --git a/arch/arm/kernel/reboot.h b/arch/arm/kernel/reboot.h
+new file mode 100644
+index 0000000..c87f058
+--- /dev/null
++++ b/arch/arm/kernel/reboot.h
+@@ -0,0 +1,6 @@
++#ifndef REBOOT_H
++#define REBOOT_H
++
++extern void _soft_restart(unsigned long addr, bool disable_l2);
++
++#endif
+-- 
+2.3.6
+
+
+From c5528d2a0edcbbc3ceba739ec70133e2594486c4 Mon Sep 17 00:00:00 2001
+From: Andrey Ryabinin <a.ryabinin@samsung.com>
+Date: Fri, 20 Mar 2015 15:42:27 +0100
+Subject: [PATCH 066/219] ARM: 8320/1: fix integer overflow in ELF_ET_DYN_BASE
+Cc: mpagano@gentoo.org
+
+commit 8defb3367fcd19d1af64c07792aade0747b54e0f upstream.
+
+Usually ELF_ET_DYN_BASE is 2/3 of TASK_SIZE. With 3G/1G user/kernel
+split this is not so, because 2*TASK_SIZE overflows 32 bits,
+so the actual value of ELF_ET_DYN_BASE is:
+	(2 * TASK_SIZE / 3) = 0x2a000000
+
+When ASLR is disabled PIE binaries will load at ELF_ET_DYN_BASE address.
+On 32-bit platforms AddressSanitizer uses addresses [0x20000000 - 0x40000000]
+for shadow memory [1], so ASan doesn't work for PIE binaries when ASLR is
+disabled, as it fails to map its shadow memory.
+Also, after Kees's 'split ET_DYN ASLR from mmap ASLR' patchset, PIE binaries
+have a high chance of loading somewhere in [0x2a000000 - 0x40000000]
+even with ASLR enabled. This makes ASan absolutely incompatible with PIE.
+
+Fix overflow by dividing TASK_SIZE prior to multiplying.
+After this patch ELF_ET_DYN_BASE equals to (for CONFIG_VMSPLIT_3G=y):
+	(TASK_SIZE / 3 * 2) = 0x7f555554
+
+[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerAlgorithm#Mapping
+
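+The arithmetic can be checked with a small userspace program (an
+editorial sketch, assuming 32-bit unsigned arithmetic and the
+CONFIG_VMSPLIT_3G value TASK_SIZE = 0xbf000000):
+
+	#include <inttypes.h>
+	#include <stdio.h>
+
+	int main(void)
+	{
+		uint32_t task_size = 0xbf000000u;	/* 3G/1G split */
+
+		/* 2 * task_size = 0x17e000000 truncates to 0x7e000000 */
+		printf("old: 0x%" PRIx32 "\n", 2 * task_size / 3);
+		printf("new: 0x%" PRIx32 "\n", task_size / 3 * 2);
+		/* prints old: 0x2a000000, new: 0x7f555554 */
+		return 0;
+	}
+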
+Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
+Reported-by: Maria Guseva <m.guseva@samsung.com>
+Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/include/asm/elf.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
+index afb9caf..674d03f 100644
+--- a/arch/arm/include/asm/elf.h
++++ b/arch/arm/include/asm/elf.h
+@@ -115,7 +115,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
+    the loader.  We need to make sure that it is out of the way of the program
+    that it will "exec", and that there is sufficient room for the brk.  */
+ 
+-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
++#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
+ 
+ /* When the program starts, a1 contains a pointer to a function to be 
+    registered with atexit, as per the SVR4 ABI.  A value of 0 means we 
+-- 
+2.3.6
+
+
+From 6ec6b63f4e9d59f78b61944f8c533d9ff029f46f Mon Sep 17 00:00:00 2001
+From: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Date: Fri, 30 Jan 2015 12:34:25 +0100
+Subject: [PATCH 067/219] ARM: mvebu: Disable CPU Idle on Armada 38x
+Cc: mpagano@gentoo.org
+
+commit 548ae94c1cc7fc120848757249b9a542b1080ffb upstream.
+
+On Armada 38x SoCs, under heavy I/O load, the system hangs when CPU
+Idle is enabled. Until a solution to this issue is found, this patch
+disables CPU Idle support for this SoC.
+
+As CPU hotplug support also uses some of the CPU Idle functions, it is
+affected by the same issue. This patch therefore disables it for the
+Armada 38x SoCs as well.
+
+Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/mach-mvebu/pmsu.c | 16 +++++++++++++++-
+ 1 file changed, 15 insertions(+), 1 deletion(-)
+
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 8b9f5e2..4f4e222 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -415,6 +415,9 @@ static __init int armada_38x_cpuidle_init(void)
+ 	void __iomem *mpsoc_base;
+ 	u32 reg;
+ 
++	pr_warn("CPU idle is currently broken on Armada 38x: disabling");
++	return 0;
++
+ 	np = of_find_compatible_node(NULL, NULL,
+ 				     "marvell,armada-380-coherency-fabric");
+ 	if (!np)
+@@ -476,6 +479,16 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return 0;
+ 	of_node_put(np);
+ 
++	/*
++	 * Currently the CPU idle support for Armada 38x is broken, as
++	 * the CPU hotplug uses some of the CPU idle functions it is
++	 * broken too, so let's disable it
++	 */
++	if (of_machine_is_compatible("marvell,armada380")) {
++		cpu_hotplug_disable();
++		pr_warn("CPU hotplug support is currently broken on Armada 38x: disabling");
++	}
++
+ 	if (of_machine_is_compatible("marvell,armadaxp"))
+ 		ret = armada_xp_cpuidle_init();
+ 	else if (of_machine_is_compatible("marvell,armada370"))
+@@ -489,7 +502,8 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return ret;
+ 
+ 	mvebu_v7_pmsu_enable_l2_powerdown_onidle();
+-	platform_device_register(&mvebu_v7_cpuidle_device);
++	if (mvebu_v7_cpuidle_device.name)
++		platform_device_register(&mvebu_v7_cpuidle_device);
+ 	cpu_pm_register_notifier(&mvebu_v7_cpu_pm_notifier);
+ 
+ 	return 0;
+-- 
+2.3.6
+
+
+From 3c9d536953582615eb9054c38a5e4de6c711ccb5 Mon Sep 17 00:00:00 2001
+From: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
+Date: Fri, 27 Mar 2015 01:58:08 +0900
+Subject: [PATCH 068/219] ARM: S3C64XX: Use fixed IRQ bases to avoid conflicts
+ on Cragganmore
+Cc: mpagano@gentoo.org
+
+commit 4e330ae4ab2915444f1e6dca1358a910aa259362 upstream.
+
+There are two PMICs on Cragganmore; currently one dynamically assigns
+its IRQ base and the other uses a fixed base. It is possible for the
+statically assigned PMIC to fail if its IRQ range is taken by the
+dynamically assigned one. Fix this by statically assigning both IRQ bases.
+
+Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
+Signed-off-by: Kukjin Kim <kgene@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/mach-s3c64xx/crag6410.h      | 1 +
+ arch/arm/mach-s3c64xx/mach-crag6410.c | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/arch/arm/mach-s3c64xx/crag6410.h b/arch/arm/mach-s3c64xx/crag6410.h
+index 7bc6668..dcbe17f 100644
+--- a/arch/arm/mach-s3c64xx/crag6410.h
++++ b/arch/arm/mach-s3c64xx/crag6410.h
+@@ -14,6 +14,7 @@
+ #include <mach/gpio-samsung.h>
+ 
+ #define GLENFARCLAS_PMIC_IRQ_BASE	IRQ_BOARD_START
++#define BANFF_PMIC_IRQ_BASE		(IRQ_BOARD_START + 64)
+ 
+ #define PCA935X_GPIO_BASE		GPIO_BOARD_START
+ #define CODEC_GPIO_BASE			(GPIO_BOARD_START + 8)
+diff --git a/arch/arm/mach-s3c64xx/mach-crag6410.c b/arch/arm/mach-s3c64xx/mach-crag6410.c
+index 10b913b..65c426b 100644
+--- a/arch/arm/mach-s3c64xx/mach-crag6410.c
++++ b/arch/arm/mach-s3c64xx/mach-crag6410.c
+@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = {
+ 
+ static struct wm831x_pdata crag_pmic_pdata = {
+ 	.wm831x_num = 1,
++	.irq_base = BANFF_PMIC_IRQ_BASE,
+ 	.gpio_base = BANFF_PMIC_GPIO_BASE,
+ 	.soft_shutdown = true,
+ 
+-- 
+2.3.6
+
+
+From 64d90ab58af7a385a7955061e0a319f7f939ddff Mon Sep 17 00:00:00 2001
+From: Nicolas Ferre <nicolas.ferre@atmel.com>
+Date: Tue, 31 Mar 2015 10:56:10 +0200
+Subject: [PATCH 069/219] ARM: at91/dt: sama5d3 xplained: add phy address for
+ macb1
+Cc: mpagano@gentoo.org
+
+commit 98b80987c940956da48f0c703f60340128bb8521 upstream.
+
+After 57a38effa598 (net: phy: micrel: disable broadcast for KSZ8081/KSZ8091)
+the macb1 interface refuses to work properly because it tries
+to cling to PHY address 0, which cannot communicate with the MAC in
+broadcast mode anymore. The micrel phy on the board is actually configured
+to show up at address 1.
+Adding the phy node with its real address fixes the issue.
+
+Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
+Cc: Johan Hovold <johan@kernel.org>
+Signed-off-by: Olof Johansson <olof@lixom.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/boot/dts/at91-sama5d3_xplained.dts | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index fec1fca..6c4bc53 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -167,7 +167,13 @@
+ 
+ 			macb1: ethernet@f802c000 {
+ 				phy-mode = "rmii";
++				#address-cells = <1>;
++				#size-cells = <0>;
+ 				status = "okay";
++
++				ethernet-phy@1 {
++					reg = <0x1>;
++				};
+ 			};
+ 
+ 			dbgu: serial@ffffee00 {
+-- 
+2.3.6
+
+
+From 5b126c3890f31b1b0e2bbfd94aace90169664e69 Mon Sep 17 00:00:00 2001
+From: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
+Date: Tue, 17 Feb 2015 19:52:04 +0100
+Subject: [PATCH 070/219] ARM: dts: dove: Fix uart[23] reg property
+Cc: mpagano@gentoo.org
+
+commit a74cd13b807029397f7232449df929bac11fb228 upstream.
+
+Fix the register addresses of Dove's uart2 and uart3 nodes, which appear
+to have been broken for ages due to a copy-and-paste error.
+
+Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
+Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/boot/dts/dove.dtsi | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
+index a5441d5..3cc8b83 100644
+--- a/arch/arm/boot/dts/dove.dtsi
++++ b/arch/arm/boot/dts/dove.dtsi
+@@ -154,7 +154,7 @@
+ 
+ 			uart2: serial@12200 {
+ 				compatible = "ns16550a";
+-				reg = <0x12000 0x100>;
++				reg = <0x12200 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <9>;
+ 				clocks = <&core_clk 0>;
+@@ -163,7 +163,7 @@
+ 
+ 			uart3: serial@12300 {
+ 				compatible = "ns16550a";
+-				reg = <0x12100 0x100>;
++				reg = <0x12300 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <10>;
+ 				clocks = <&core_clk 0>;
+-- 
+2.3.6
+
+
+From 422be9a5e09ea7d6e84ad2c3d05dfdf01e4a7a3f Mon Sep 17 00:00:00 2001
+From: Andreas Faerber <afaerber@suse.de>
+Date: Wed, 18 Mar 2015 01:25:18 +0900
+Subject: [PATCH 071/219] ARM: dts: fix mmc node updates for exynos5250-spring
+Cc: mpagano@gentoo.org
+
+commit 7e9e20b1faab02357501553d7f4e3efec1b4cfd3 upstream.
+
+Resolve a merge conflict with mmc refactoring aaa25a5a33cb ("ARM: dts:
+unuse the slot-node and deprecate the supports-highspeed for dw-mmc in
+exynos") by dropping the slot@0 nodes, moving its bus-width property to
+the mmc node and replacing supports-highspeed with cap-{mmc,sd}-highspeed,
+matching exynos5250-snow.
+
+Cc: Jaehoon Chung <jh80.chung@samsung.com>
+Fixes: 53dd4138bb0a ("ARM: dts: Add exynos5250-spring device tree")
+Signed-off-by: Andreas Faerber <afaerber@suse.de>
+Reviewed-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
+Signed-off-by: Kukjin Kim <kgene@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm/boot/dts/exynos5250-spring.dts | 16 ++++------------
+ 1 file changed, 4 insertions(+), 12 deletions(-)
+
+diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
+index f027754..c41600e 100644
+--- a/arch/arm/boot/dts/exynos5250-spring.dts
++++ b/arch/arm/boot/dts/exynos5250-spring.dts
+@@ -429,7 +429,6 @@
+ &mmc_0 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -437,11 +436,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_cd &sd0_bus4 &sd0_bus8>;
+-
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <8>;
+-	};
++	bus-width = <8>;
++	cap-mmc-highspeed;
+ };
+ 
+ /*
+@@ -451,7 +447,6 @@
+ &mmc_1 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -459,11 +454,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd1_clk &sd1_cmd &sd1_cd &sd1_bus4>;
+-
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <4>;
+-	};
++	bus-width = <4>;
++	cap-sd-highspeed;
+ };
+ 
+ &pinctrl_0 {
+-- 
+2.3.6
+
+
+From 55db0145ac65aec05c736cddb3a6717b83619d7e Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Mon, 30 Dec 2013 12:33:53 -0600
+Subject: [PATCH 072/219] usb: musb: core: fix TX/RX endpoint order
+Cc: mpagano@gentoo.org
+
+commit e3c93e1a3f35be4cf1493d3ccfb0c6d9209e4922 upstream.
+
+As per Mentor Graphics' documentation, we should
+always handle TX endpoints before RX endpoints.
+
+This patch fixes that error while also updating
+some hard-to-read comments which were scattered
+around musb_interrupt().
+
+This patch should be backported as far back as
+possible, since this error has been in the driver
+since its conception.
+
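+Schematically, the endpoint handling after this patch is two passes over
+the pending-interrupt bitmasks, TX first (a condensed editorial sketch of
+the diff below; handle_tx/handle_rx are illustrative stand-ins for the
+host/peripheral dispatch):
+
+	u16 reg;
+	unsigned int ep_num;
+
+	/* pass 1: TX endpoints 1-15; bit 0 (EP0) was handled above */
+	for (reg = musb->int_tx >> 1, ep_num = 1; reg; reg >>= 1, ep_num++)
+		if (reg & 1)
+			handle_tx(musb, ep_num);
+
+	/* pass 2: RX endpoints 1-15, only after all TX work is done */
+	for (reg = musb->int_rx >> 1, ep_num = 1; reg; reg >>= 1, ep_num++)
+		if (reg & 1)
+			handle_rx(musb, ep_num);
+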
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/musb/musb_core.c | 44 ++++++++++++++++++++++++++------------------
+ 1 file changed, 26 insertions(+), 18 deletions(-)
+
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 067920f..461bfe8 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -1597,16 +1597,30 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 		is_host_active(musb) ? "host" : "peripheral",
+ 		musb->int_usb, musb->int_tx, musb->int_rx);
+ 
+-	/* the core can interrupt us for multiple reasons; docs have
+-	 * a generic interrupt flowchart to follow
++	/**
++	 * According to Mentor Graphics' documentation, flowchart on page 98,
++	 * IRQ should be handled as follows:
++	 *
++	 * . Resume IRQ
++	 * . Session Request IRQ
++	 * . VBUS Error IRQ
++	 * . Suspend IRQ
++	 * . Connect IRQ
++	 * . Disconnect IRQ
++	 * . Reset/Babble IRQ
++	 * . SOF IRQ (we're not using this one)
++	 * . Endpoint 0 IRQ
++	 * . TX Endpoints
++	 * . RX Endpoints
++	 *
++	 * We will be following that flowchart in order to avoid any problems
++	 * that might arise with internal Finite State Machine.
+ 	 */
++
+ 	if (musb->int_usb)
+ 		retval |= musb_stage0_irq(musb, musb->int_usb,
+ 				devctl);
+ 
+-	/* "stage 1" is handling endpoint irqs */
+-
+-	/* handle endpoint 0 first */
+ 	if (musb->int_tx & 1) {
+ 		if (is_host_active(musb))
+ 			retval |= musb_h_ep0_irq(musb);
+@@ -1614,37 +1628,31 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 			retval |= musb_g_ep0_irq(musb);
+ 	}
+ 
+-	/* RX on endpoints 1-15 */
+-	reg = musb->int_rx >> 1;
++	reg = musb->int_tx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval = ep->rx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_rx(musb, ep_num);
++				musb_host_tx(musb, ep_num);
+ 			else
+-				musb_g_rx(musb, ep_num);
++				musb_g_tx(musb, ep_num);
+ 		}
+-
+ 		reg >>= 1;
+ 		ep_num++;
+ 	}
+ 
+-	/* TX on endpoints 1-15 */
+-	reg = musb->int_tx >> 1;
++	reg = musb->int_rx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval |= ep->tx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_tx(musb, ep_num);
++				musb_host_rx(musb, ep_num);
+ 			else
+-				musb_g_tx(musb, ep_num);
++				musb_g_rx(musb, ep_num);
+ 		}
++
+ 		reg >>= 1;
+ 		ep_num++;
+ 	}
+-- 
+2.3.6
+
+
+From 968986cb57477f487045baa184eee0cf7a82b2e3 Mon Sep 17 00:00:00 2001
+From: Axel Lin <axel.lin@ingics.com>
+Date: Thu, 12 Mar 2015 09:15:28 +0800
+Subject: [PATCH 073/219] usb: phy: Find the right match in devm_usb_phy_match
+Cc: mpagano@gentoo.org
+
+commit 869aee0f31429fa9d94d5aef539602b73ae0cf4b upstream.
+
+The res parameter passed to devm_usb_phy_match() is the location where the
+pointer to the usb_phy is stored, hence it needs to be dereferenced before
+comparing to the match data in order to find the correct match.
+
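+Background (condensed from how devm_usb_get_phy() sets up its devres
+slot; error handling omitted): the devres data area stores a pointer to
+the phy, so the match callback receives a struct usb_phy **.
+
+	struct usb_phy **ptr;
+
+	ptr = devres_alloc(devm_usb_phy_release, sizeof(*ptr), GFP_KERNEL);
+	*ptr = phy;		/* the slot holds a pointer, not the phy */
+	devres_add(dev, ptr);
+
+	/* ...so the match callback must compare the pointee: */
+	struct usb_phy **slot = res;
+	return *slot == match_data;
+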
+Fixes: 410219dcd2ba ("usb: otg: utils: devres: Add API's to associate a device with the phy")
+Signed-off-by: Axel Lin <axel.lin@ingics.com>
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/phy/phy.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c
+index 2f9735b..d1cd6b5 100644
+--- a/drivers/usb/phy/phy.c
++++ b/drivers/usb/phy/phy.c
+@@ -81,7 +81,9 @@ static void devm_usb_phy_release(struct device *dev, void *res)
+ 
+ static int devm_usb_phy_match(struct device *dev, void *res, void *match_data)
+ {
+-	return res == match_data;
++	struct usb_phy **phy = res;
++
++	return *phy == match_data;
+ }
+ 
+ /**
+-- 
+2.3.6
+
+
+From c3f787950225dc61f2a4342601d78d1052d0f8ef Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:34:25 -0600
+Subject: [PATCH 074/219] usb: define a generic USB_RESUME_TIMEOUT macro
+Cc: mpagano@gentoo.org
+
+commit 62f0342de1f012f3e90607d39e20fce811391169 upstream.
+
+Every USB Host controller should use this new
+macro to define for how long resume signalling
+should be driven on the bus.
+
+Currently, almost every single USB controller
+is using a 20ms timeout for resume signalling.
+
+That's problematic for two reasons:
+
+a) sometimes that 20ms timer expires a little
+before 20ms, which makes us fail certification
+
+b) some (many) devices actually need more than
+20ms resume signalling.
+
+Sure, in case of (b) we can state that the device
+is against the USB spec, but the fact is that
+we have no control over which device the certification
+lab will use. We also have no control over which host
+they will use. Most likely they'll be using a Windows
+PC which, again, we have no control over how that
+USB stack is written and how long resume signalling
+they are using.
+
+At the end of the day, we must make sure Linux passes
+electrical compliance when working as Host or as Device,
+and currently we don't pass compliance as host because
+we're driving resume signalling for exactly 20ms, and
+that confuses the certification test setup, resulting in
+certification failure.
+
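+The conversions in the follow-up patches of this series all reduce to
+one of two idioms (schematic, not tied to any one controller):
+
+	/* blocking resume path */
+	msleep(USB_RESUME_TIMEOUT);
+
+	/* timer/jiffies based resume path */
+	reset_done = jiffies + msecs_to_jiffies(USB_RESUME_TIMEOUT);
+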
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Acked-by: Peter Chen <peter.chen@freescale.com>
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ include/linux/usb.h | 26 ++++++++++++++++++++++++++
+ 1 file changed, 26 insertions(+)
+
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 7ee1b5c..447fe29 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -205,6 +205,32 @@ void usb_put_intf(struct usb_interface *intf);
+ #define USB_MAXINTERFACES	32
+ #define USB_MAXIADS		(USB_MAXINTERFACES/2)
+ 
++/*
++ * USB Resume Timer: Every Host controller driver should drive the resume
++ * signalling on the bus for the amount of time defined by this macro.
++ *
++ * That way we will have a 'stable' behavior among all HCDs supported by Linux.
++ *
++ * Note that the USB Specification states we should drive resume for *at least*
++ * 20 ms, but it doesn't give an upper bound. This creates two possible
++ * situations which we want to avoid:
++ *
++ * (a) sometimes an msleep(20) might expire slightly before 20 ms, which causes
++ * us to fail USB Electrical Tests, thus failing Certification
++ *
++ * (b) Some (many) devices actually need more than 20 ms of resume signalling,
++ * and while we can argue that's against the USB Specification, we don't have
++ * control over which devices a certification laboratory will be using for
++ * certification. If CertLab uses a device which was tested against Windows and
++ * that happens to have relaxed resume signalling rules, we might fall into
++ * situations where we fail interoperability and electrical tests.
++ *
++ * In order to avoid both conditions, we're using a 40 ms resume timeout, which
++ * should cope with both LPJ calibration errors and devices not following every
++ * detail of the USB Specification.
++ */
++#define USB_RESUME_TIMEOUT	40 /* ms */
++
+ /**
+  * struct usb_interface_cache - long-term representation of a device interface
+  * @num_altsetting: number of altsettings defined.
+-- 
+2.3.6
+
+
+From 913916432e9f24d403a51dae54b905b07e509dd9 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:46:27 -0600
+Subject: [PATCH 075/219] usb: musb: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 309be239369609929d5d3833ee043f7c5afc95d1 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Based on original work by Bin Liu <b-liu@ti.com>
+
+Cc: Bin Liu <b-liu@ti.com>
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/musb/musb_core.c    | 7 ++++---
+ drivers/usb/musb/musb_virthub.c | 2 +-
+ 2 files changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 461bfe8..ec0ee3b 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -99,6 +99,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
+ #include <linux/dma-mapping.h>
++#include <linux/usb.h>
+ 
+ #include "musb_core.h"
+ 
+@@ -562,7 +563,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,
+ 						(USB_PORT_STAT_C_SUSPEND << 16)
+ 						| MUSB_PORT_STAT_RESUME;
+ 				musb->rh_timer = jiffies
+-						 + msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				musb->need_finish_resume = 1;
+ 
+ 				musb->xceiv->otg->state = OTG_STATE_A_HOST;
+@@ -2471,7 +2472,7 @@ static int musb_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ 
+ 	/*
+@@ -2514,7 +2515,7 @@ static int musb_runtime_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				msecs_to_jiffies(20));
++				msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index 294e159..5428ed1 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -136,7 +136,7 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
+ 		/* later, GetPortStatus will stop RESUME signaling */
+ 		musb->port1_status |= MUSB_PORT_STAT_RESUME;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ }
+ 
+-- 
+2.3.6
+
+
+From 0e33853a595e4947e416e86c966a2f532084b3ae Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:57:54 -0600
+Subject: [PATCH 076/219] usb: host: oxu210hp: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 84c0d178eb9f3a3ae4d63dc97a440266cf17f7f5 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/oxu210hp-hcd.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index ef7efb2..28a2866 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -2500,11 +2500,12 @@ static irqreturn_t oxu210_hcd_irq(struct usb_hcd *hcd)
+ 					|| oxu->reset_done[i] != 0)
+ 				continue;
+ 
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
++			/* start USB_RESUME_TIMEOUT resume signaling from this
++			 * port, and make hub_wq collect PORT_STAT_C_SUSPEND to
+ 			 * stop that signaling.
+ 			 */
+-			oxu->reset_done[i] = jiffies + msecs_to_jiffies(20);
++			oxu->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			oxu_dbg(oxu, "port %d remote wakeup\n", i + 1);
+ 			mod_timer(&hcd->rh_timer, oxu->reset_done[i]);
+ 		}
+-- 
+2.3.6
+
+
+From 9aeb024dc65fa1c9520c655a36d52d48e4285ab1 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:55:34 -0600
+Subject: [PATCH 077/219] usb: host: fusbh200: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 595227db1f2d98bfc33f02a55842f268e12b247d upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/fusbh200-hcd.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/drivers/usb/host/fusbh200-hcd.c b/drivers/usb/host/fusbh200-hcd.c
+index a83eefe..ba77e2e 100644
+--- a/drivers/usb/host/fusbh200-hcd.c
++++ b/drivers/usb/host/fusbh200-hcd.c
+@@ -1550,10 +1550,9 @@ static int fusbh200_hub_control (
+ 			if ((temp & PORT_PE) == 0)
+ 				goto error;
+ 
+-			/* resume signaling for 20 msec */
+ 			fusbh200_writel(fusbh200, temp | PORT_RESUME, status_reg);
+ 			fusbh200->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fusbh200->port_c_suspend);
+-- 
+2.3.6
+
+
+From c8d7235af46783ee3e312ea5c877ac73de8c435d Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:44:17 -0600
+Subject: [PATCH 078/219] usb: host: uhci: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit b8fb6f79f76f478acbbffccc966daa878f172a0a upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/uhci-hub.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/host/uhci-hub.c b/drivers/usb/host/uhci-hub.c
+index 19ba5ea..7b3d1af 100644
+--- a/drivers/usb/host/uhci-hub.c
++++ b/drivers/usb/host/uhci-hub.c
+@@ -166,7 +166,7 @@ static void uhci_check_ports(struct uhci_hcd *uhci)
+ 				/* Port received a wakeup request */
+ 				set_bit(port, &uhci->resuming_ports);
+ 				uhci->ports_timeout = jiffies +
+-						msecs_to_jiffies(25);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				usb_hcd_start_port_resume(
+ 						&uhci_to_hcd(uhci)->self, port);
+ 
+@@ -338,7 +338,8 @@ static int uhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			uhci_finish_suspend(uhci, port, port_addr);
+ 
+ 			/* USB v2.0 7.1.7.5 */
+-			uhci->ports_timeout = jiffies + msecs_to_jiffies(50);
++			uhci->ports_timeout = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			/* UHCI has no power switching */
+-- 
+2.3.6
+
+
+From fb4655758ba685c5aa07b9af45b18895e3df2a26 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:54:38 -0600
+Subject: [PATCH 079/219] usb: host: fotg210: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 7e136bb71a08e8b8be3bc492f041d9b0bea3856d upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/fotg210-hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 475b21f..7a6681f 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -1595,7 +1595,7 @@ static int fotg210_hub_control(
+ 			/* resume signaling for 20 msec */
+ 			fotg210_writel(fotg210, temp | PORT_RESUME, status_reg);
+ 			fotg210->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fotg210->port_c_suspend);
+-- 
+2.3.6
+
+
+From 14c69a53b6c0640d94796b04762ed943e9cf3918 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:58:53 -0600
+Subject: [PATCH 080/219] usb: host: r8a66597: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 7a606ac29752a3e571b83f9b3fceb1eaa1d37781 upstream.
+
+While this driver was already using a 50ms resume
+timeout, let's make sure everybody uses the same
+macro so it's easy to fix later should anything
+go wrong.
+
+It also gives a more "stable" expectation to Linux
+users.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/r8a66597-hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c
+index bdc82fe..54a4170 100644
+--- a/drivers/usb/host/r8a66597-hcd.c
++++ b/drivers/usb/host/r8a66597-hcd.c
+@@ -2301,7 +2301,7 @@ static int r8a66597_bus_resume(struct usb_hcd *hcd)
+ 		rh->port &= ~USB_PORT_STAT_SUSPEND;
+ 		rh->port |= USB_PORT_STAT_C_SUSPEND << 16;
+ 		r8a66597_mdfy(r8a66597, RESUME, RESUME | UACT, dvstctr_reg);
+-		msleep(50);
++		msleep(USB_RESUME_TIMEOUT);
+ 		r8a66597_mdfy(r8a66597, UACT, RESUME | UACT, dvstctr_reg);
+ 	}
+ 
+-- 
+2.3.6
+
+
+From 34f698795e94955800a8ba8acdea4a725211a20a Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:50:10 -0600
+Subject: [PATCH 081/219] usb: host: isp116x: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 8c0ae6574ccfd3d619876a65829aad74c9d22ba5 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/isp116x-hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
+index 113d0cc..9ef5644 100644
+--- a/drivers/usb/host/isp116x-hcd.c
++++ b/drivers/usb/host/isp116x-hcd.c
+@@ -1490,7 +1490,7 @@ static int isp116x_bus_resume(struct usb_hcd *hcd)
+ 	spin_unlock_irq(&isp116x->lock);
+ 
+ 	hcd->state = HC_STATE_RESUMING;
+-	msleep(20);
++	msleep(USB_RESUME_TIMEOUT);
+ 
+ 	/* Go operational */
+ 	spin_lock_irq(&isp116x->lock);
+-- 
+2.3.6
+
+
+From 9a0a677ad3526bf0914aecab14423c761e5af9e7 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:39:13 -0600
+Subject: [PATCH 082/219] usb: host: xhci: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit b9e451885deb6262dbaf5cd14aa77d192d9ac759 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Acked-by: Mathias Nyman <mathias.nyman@linux.intel.com>
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/xhci-ring.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 73485fa..eeedde8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1574,7 +1574,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		} else {
+ 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+ 			bus_state->resume_done[faked_port_index] = jiffies +
+-				msecs_to_jiffies(20);
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(faked_port_index, &bus_state->resuming_ports);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[faked_port_index]);
+-- 
+2.3.6
+
+
+From 426c93ea979c24f4f011351af58d5f5319514493 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 14:42:25 -0600
+Subject: [PATCH 083/219] usb: host: ehci: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit ea16328f80ca8d74434352157f37ef60e2f55ce2 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/ehci-hcd.c | 10 +++++-----
+ drivers/usb/host/ehci-hub.c |  9 ++++++---
+ 2 files changed, 11 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 85e56d1..f4d88df 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -792,12 +792,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 					ehci->reset_done[i] == 0))
+ 				continue;
+ 
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
+-			 * stop that signaling.  Use 5 ms extra for safety,
+-			 * like usb_port_resume() does.
++			/* start USB_RESUME_TIMEOUT msec resume signaling from
++			 * this port, and make hub_wq collect
++			 * PORT_STAT_C_SUSPEND to stop that signaling.
+ 			 */
+-			ehci->reset_done[i] = jiffies + msecs_to_jiffies(25);
++			ehci->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(i, &ehci->resuming_ports);
+ 			ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
+ 			usb_hcd_start_port_resume(&hcd->self, i);
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index 87cf86f..7354d01 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -471,10 +471,13 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ 		ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
+ 	}
+ 
+-	/* msleep for 20ms only if code is trying to resume port */
++	/*
++	 * msleep for USB_RESUME_TIMEOUT ms only if code is trying to resume
++	 * port
++	 */
+ 	if (resume_needed) {
+ 		spin_unlock_irq(&ehci->lock);
+-		msleep(20);
++		msleep(USB_RESUME_TIMEOUT);
+ 		spin_lock_irq(&ehci->lock);
+ 		if (ehci->shutdown)
+ 			goto shutdown;
+@@ -942,7 +945,7 @@ int ehci_hub_control(
+ 			temp &= ~PORT_WAKE_BITS;
+ 			ehci_writel(ehci, temp | PORT_RESUME, status_reg);
+ 			ehci->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(wIndex, &ehci->resuming_ports);
+ 			usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			break;
+-- 
+2.3.6
+
+
+From 6a0ecbeea7d077ae4e49c3a1ef03a38bb91c5218 Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 15:00:38 -0600
+Subject: [PATCH 084/219] usb: host: sl811: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 08debfb13b199716da6153940c31968c556b195d upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/host/sl811-hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
+index 4f4ba1e..9118cd8 100644
+--- a/drivers/usb/host/sl811-hcd.c
++++ b/drivers/usb/host/sl811-hcd.c
+@@ -1259,7 +1259,7 @@ sl811h_hub_control(
+ 			sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1);
+ 
+ 			mod_timer(&sl811->timer, jiffies
+-					+ msecs_to_jiffies(20));
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			port_power(sl811, 0);
+-- 
+2.3.6
+
+
+From 8271acf33346951d281a428ae8a40f20750e789f Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 15:03:13 -0600
+Subject: [PATCH 085/219] usb: dwc2: hcd: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 74bd7b69801819707713b88e9d0bc074efa2f5e7 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/dwc2/hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index c78c874..758b7e0 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -1521,7 +1521,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ 			dev_dbg(hsotg->dev,
+ 				"ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
+ 			writel(0, hsotg->regs + PCGCTL);
+-			usleep_range(20000, 40000);
++			msleep(USB_RESUME_TIMEOUT);
+ 
+ 			hprt0 = dwc2_read_hprt0(hsotg);
+ 			hprt0 |= HPRT0_RES;
+-- 
+2.3.6
+
+
+From b6053a1546ea879b47c346628cf40401bcf9e27e Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 15:04:06 -0600
+Subject: [PATCH 086/219] usb: isp1760: hcd: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit 59c9904cce77b55892e15f40791f1e66e4d3a1e6 upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/isp1760/isp1760-hcd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
+index 3cb98b1..7911b6b 100644
+--- a/drivers/usb/isp1760/isp1760-hcd.c
++++ b/drivers/usb/isp1760/isp1760-hcd.c
+@@ -1869,7 +1869,7 @@ static int isp1760_hub_control(struct usb_hcd *hcd, u16 typeReq,
+ 				reg_write32(hcd->regs, HC_PORTSC1,
+ 							temp | PORT_RESUME);
+ 				priv->reset_done = jiffies +
+-					msecs_to_jiffies(20);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			}
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+-- 
+2.3.6
+
+
+From 1eeba7304a3e8070983c3a9f757a6b51236a64de Mon Sep 17 00:00:00 2001
+From: Felipe Balbi <balbi@ti.com>
+Date: Fri, 13 Feb 2015 15:38:33 -0600
+Subject: [PATCH 087/219] usb: core: hub: use new USB_RESUME_TIMEOUT
+Cc: mpagano@gentoo.org
+
+commit bbc78c07a51f6fd29c227b1220a9016e585358ba upstream.
+
+Make sure we're using the new macro, so our
+resume signaling will always pass certification.
+
+Signed-off-by: Felipe Balbi <balbi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/usb/core/hub.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index d7c3d5a..3b71516 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3406,10 +3406,10 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 	if (status) {
+ 		dev_dbg(&port_dev->dev, "can't resume, status %d\n", status);
+ 	} else {
+-		/* drive resume for at least 20 msec */
++		/* drive resume for USB_RESUME_TIMEOUT msec */
+ 		dev_dbg(&udev->dev, "usb %sresume\n",
+ 				(PMSG_IS_AUTO(msg) ? "auto-" : ""));
+-		msleep(25);
++		msleep(USB_RESUME_TIMEOUT);
+ 
+ 		/* Virtual root hubs can trigger on GET_PORT_STATUS to
+ 		 * stop resume signaling.  Then finish the resume
+-- 
+2.3.6
+
+
+From f5a652339c3ff18b6184d0ee02f7f0eef2ebe681 Mon Sep 17 00:00:00 2001
+From: Boris Brezillon <boris.brezillon@free-electrons.com>
+Date: Sun, 29 Mar 2015 03:45:33 +0200
+Subject: [PATCH 088/219] clk: at91: usb: propagate rate modification to the
+ parent clk
+Cc: mpagano@gentoo.org
+
+commit 4591243102faa8de92da320edea47219901461e9 upstream.
+
+The at91sam9n12 and at91sam9x5 usb clocks do not propagate rate
+modification requests to their parents.
+This causes a bug when the PLLB is left uninitialized by the bootloader
+(PLL multiplier set to 0, or in other words, PLL rate = 0 Hz).
+
+Implement the determine_rate method and propagate the rate change
+request to the parent clk.
+
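+Schematically, the determine_rate implementation added below is a
+brute-force search over all (parent, divisor) pairs (condensed from the
+diff; the best-rate bookkeeping is omitted):
+
+	for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
+		parent = clk_get_parent_by_index(hw->clk, i);
+		for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
+			tmp_parent_rate = __clk_round_rate(parent, rate * div);
+			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
+			/* keep the pair whose tmp_rate is closest to rate */
+		}
+	}
+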
+Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
+Reported-by: Bo Shen <voice.shen@atmel.com>
+Tested-by: Bo Shen <voice.shen@atmel.com>
+Signed-off-by: Michael Turquette <mturquette@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/at91/clk-usb.c | 64 +++++++++++++++++++++++++++++++++++-----------
+ 1 file changed, 49 insertions(+), 15 deletions(-)
+
+diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
+index a23ac0c..0b7c3e8 100644
+--- a/drivers/clk/at91/clk-usb.c
++++ b/drivers/clk/at91/clk-usb.c
+@@ -56,22 +56,55 @@ static unsigned long at91sam9x5_clk_usb_recalc_rate(struct clk_hw *hw,
+ 	return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1));
+ }
+ 
+-static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate,
+-					  unsigned long *parent_rate)
++static long at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
++					      unsigned long rate,
++					      unsigned long min_rate,
++					      unsigned long max_rate,
++					      unsigned long *best_parent_rate,
++					      struct clk_hw **best_parent_hw)
+ {
+-	unsigned long div;
++	struct clk *parent = NULL;
++	long best_rate = -EINVAL;
++	unsigned long tmp_rate;
++	int best_diff = -1;
++	int tmp_diff;
++	int i;
+ 
+-	if (!rate)
+-		return -EINVAL;
++	for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
++		int div;
+ 
+-	if (rate >= *parent_rate)
+-		return *parent_rate;
++		parent = clk_get_parent_by_index(hw->clk, i);
++		if (!parent)
++			continue;
++
++		for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
++			unsigned long tmp_parent_rate;
++
++			tmp_parent_rate = rate * div;
++			tmp_parent_rate = __clk_round_rate(parent,
++							   tmp_parent_rate);
++			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
++			if (tmp_rate < rate)
++				tmp_diff = rate - tmp_rate;
++			else
++				tmp_diff = tmp_rate - rate;
++
++			if (best_diff < 0 || best_diff > tmp_diff) {
++				best_rate = tmp_rate;
++				best_diff = tmp_diff;
++				*best_parent_rate = tmp_parent_rate;
++				*best_parent_hw = __clk_get_hw(parent);
++			}
++
++			if (!best_diff || tmp_rate < rate)
++				break;
++		}
+ 
+-	div = DIV_ROUND_CLOSEST(*parent_rate, rate);
+-	if (div > SAM9X5_USB_MAX_DIV + 1)
+-		div = SAM9X5_USB_MAX_DIV + 1;
++		if (!best_diff)
++			break;
++	}
+ 
+-	return DIV_ROUND_CLOSEST(*parent_rate, div);
++	return best_rate;
+ }
+ 
+ static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index)
+@@ -121,7 +154,7 @@ static int at91sam9x5_clk_usb_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ static const struct clk_ops at91sam9x5_usb_ops = {
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.get_parent = at91sam9x5_clk_usb_get_parent,
+ 	.set_parent = at91sam9x5_clk_usb_set_parent,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
+@@ -159,7 +192,7 @@ static const struct clk_ops at91sam9n12_usb_ops = {
+ 	.disable = at91sam9n12_clk_usb_disable,
+ 	.is_enabled = at91sam9n12_clk_usb_is_enabled,
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
+ };
+ 
+@@ -179,7 +212,8 @@ at91sam9x5_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9x5_usb_ops;
+ 	init.parent_names = parent_names;
+ 	init.num_parents = num_parents;
+-	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE |
++		     CLK_SET_RATE_PARENT;
+ 
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
+@@ -207,7 +241,7 @@ at91sam9n12_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9n12_usb_ops;
+ 	init.parent_names = &parent_name;
+ 	init.num_parents = 1;
+-	init.flags = CLK_SET_RATE_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_RATE_PARENT;
+ 
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
+-- 
+2.3.6
+
+
+From ffa5893889612e5d65e456c0b433d0160d46c4eb Mon Sep 17 00:00:00 2001
+From: Yves-Alexis Perez <corsac@debian.org>
+Date: Sat, 11 Apr 2015 09:31:35 +0200
+Subject: [PATCH 089/219] ALSA: hda - Add dock support for ThinkPad X250
+ (17aa:2226)
+Cc: mpagano@gentoo.org
+
+commit c0278669fb61596cc1a10ab8686d27c37269c37b upstream.
+
+This model uses the same dock port as the previous generation.
+
+Signed-off-by: Yves-Alexis Perez <corsac@debian.org>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/hda/patch_realtek.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f9d12c0..3ad85c7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5047,6 +5047,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+-- 
+2.3.6
+
+
+From 0b586ed327f10ed037bf84381cbdb16754d7bdfd Mon Sep 17 00:00:00 2001
+From: Adam Honse <calcprogrammer1@gmail.com>
+Date: Sun, 12 Apr 2015 01:03:07 -0500
+Subject: [PATCH 090/219] ALSA: usb-audio: Don't attempt to get Microsoft
+ Lifecam Cinema sample rate
+Cc: mpagano@gentoo.org
+
+commit eef0342cf32689f77d78ee3302999e5caaa6a8f3 upstream.
+
+Adds Microsoft LifeCam Cinema USB ID to the snd_usb_get_sample_rate_quirk list as the Lifecam Cinema does not appear to support getting the sample rate.
+
+Fixes the issue where the LifeCam Cinema would wait for a USB timeout and log the message "cannot get freq at ep 0x82" when accessed.
+
+Addresses bug report https://bugzilla.kernel.org/show_bug.cgi?id=95961.
+
+Signed-off-by: Adam Honse <calcprogrammer1@gmail.com>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/usb/quirks.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 9a28365..32631a8 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1115,6 +1115,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ {
+ 	/* devices which do not support reading the sample rate. */
+ 	switch (chip->usb_id) {
++	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ 		return true;
+-- 
+2.3.6
+
+
+From 15c97265c67f27eef7d92262964a43e0aff8df61 Mon Sep 17 00:00:00 2001
+From: Michael Gernoth <michael@gernoth.net>
+Date: Thu, 9 Apr 2015 23:42:15 +0200
+Subject: [PATCH 091/219] ALSA: emu10k1: don't deadlock in proc-functions
+Cc: mpagano@gentoo.org
+
+commit 91bf0c2dcb935a87e5c0795f5047456b965fd143 upstream.
+
+The proc functions snd_emu10k1_proc_spdif_read and
+snd_emu_proc_emu1010_reg_read acquire the emu_lock before accessing
+the FPGA. The function used to access the FPGA (snd_emu1010_fpga_read)
+also tries to take the emu_lock, which causes a deadlock.
+Remove the outer locking in the proc-functions (guarding only the
+already safe fpga read) to prevent this deadlock.
+
+[removed superfluous flags variables too -- tiwai]
+
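+The pattern being removed looks like this in the hunks below: an outer
+acquisition wrapped around a helper that takes the same (non-recursive)
+spinlock internally:
+
+	spin_lock_irqsave(&emu->emu_lock, flags);
+	snd_emu1010_fpga_read(emu, 0x38, &value);	/* takes emu_lock again */
+	spin_unlock_irqrestore(&emu->emu_lock, flags);
+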
+Signed-off-by: Michael Gernoth <michael@gernoth.net>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/emu10k1/emuproc.c | 12 ------------
+ 1 file changed, 12 deletions(-)
+
+diff --git a/sound/pci/emu10k1/emuproc.c b/sound/pci/emu10k1/emuproc.c
+index 2ca9f2e..53745f4 100644
+--- a/sound/pci/emu10k1/emuproc.c
++++ b/sound/pci/emu10k1/emuproc.c
+@@ -241,31 +241,22 @@ static void snd_emu10k1_proc_spdif_read(struct snd_info_entry *entry,
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+ 	u32 value2;
+-	unsigned long flags;
+ 	u32 rate;
+ 
+ 	if (emu->card_capabilities->emu_model) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x38, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x1) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x2a, &value);
+ 			snd_emu1010_fpga_read(emu, 0x2b, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "ADAT Locked : %u\n", rate);
+ 		} else {
+ 			snd_iprintf(buffer, "ADAT Unlocked\n");
+ 		}
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x20, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x4) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x28, &value);
+ 			snd_emu1010_fpga_read(emu, 0x29, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "SPDIF Locked : %d\n", rate);
+ 		} else {
+@@ -410,14 +401,11 @@ static void snd_emu_proc_emu1010_reg_read(struct snd_info_entry *entry,
+ {
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+-	unsigned long flags;
+ 	int i;
+ 	snd_iprintf(buffer, "EMU1010 Registers:\n\n");
+ 
+ 	for(i = 0; i < 0x40; i+=1) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, i, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		snd_iprintf(buffer, "%02X: %08X, %02X\n", i, value, (value >> 8) & 0x7f);
+ 	}
+ }
+-- 
+2.3.6
+
+
+From 0933e9dd839f4d37d408d9365266940928a73a8c Mon Sep 17 00:00:00 2001
+From: Jo-Philipp Wich <jow@openwrt.org>
+Date: Mon, 13 Apr 2015 12:47:26 +0200
+Subject: [PATCH 092/219] ALSA: hda/realtek - Enable the ALC292 dock fixup on
+ the Thinkpad T450
+Cc: mpagano@gentoo.org
+
+commit f2aa111041ce36b94e651d882458dea502e76721 upstream.
+
+The Lenovo Thinkpad T450 requires the ALC292_FIXUP_TPT440_DOCK as well in
+order to get working sound output on the docking station's headphone jack.
+
+Patch tested on a Thinkpad T450 (20BVCTO1WW) using kernel 4.0-rc7 in
+conjunction with a ThinkPad Ultradock.
+
+Signed-off-by: Jo-Philipp Wich <jow@openwrt.org>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/hda/patch_realtek.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3ad85c7..f37e4ea 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5054,6 +5054,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+-- 
+2.3.6
+
+
+From cb927a0ae496171966921e084eb7f6c2dc04e43b Mon Sep 17 00:00:00 2001
+From: David Henningsson <david.henningsson@canonical.com>
+Date: Tue, 21 Apr 2015 10:48:46 +0200
+Subject: [PATCH 093/219] ALSA: hda - fix "num_steps = 0" error on ALC256
+Cc: mpagano@gentoo.org
+
+commit 7d1b6e29327428993ba568bdd8c66734070f45e0 upstream.
+
+The ALC256 does not have a mixer nid at 0x0b, and there's no
+loopback path (the output pins are directly connected to the DACs).
+
+This commit fixes a "num_steps = 0 for NID=0xb (ctl = Beep Playback Volume)"
+error (and as a result, problems with amixer/alsamixer).
+
+If there's pcbeep functionality, it certainly isn't controlled by setting an
+amp on 0x0b, so disable beep functionality (at least for now).
+
+BugLink: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1446517
+Signed-off-by: David Henningsson <david.henningsson@canonical.com>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/hda/patch_realtek.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f37e4ea..b46bb84 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5565,6 +5565,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 		break;
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
++		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
+ 		break;
+ 	}
+ 
+@@ -5578,8 +5579,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto error;
+ 
+-	if (!spec->gen.no_analog && spec->gen.beep_nid)
+-		set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT);
++	if (!spec->gen.no_analog && spec->gen.beep_nid && spec->gen.mixer_nid)
++		set_beep_amp(spec, spec->gen.mixer_nid, 0x04, HDA_INPUT);
+ 
+ 	codec->patch_ops = alc_patch_ops;
+ #ifdef CONFIG_PM
+-- 
+2.3.6
+
+
+From c7a98726965179726bbd105e5ff6465c1d09ace1 Mon Sep 17 00:00:00 2001
+From: Kailang Yang <kailang@realtek.com>
+Date: Thu, 23 Apr 2015 15:10:53 +0800
+Subject: [PATCH 094/219] ALSA: hda/realtek - Fix Headphone Mic not
+ recording for ALC256
+Cc: mpagano@gentoo.org
+
+commit d32b66668c702aed0e330dc5ca186afbadcdacf8 upstream.
+
+Switch default pcbeep path to Line in path.
+
+Signed-off-by: Kailang Yang <kailang@realtek.com>
+Tested-by: Hui Wang <hui.wang@canonical.com>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/hda/patch_realtek.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b46bb84..2210e1b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5566,6 +5566,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
+ 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
++		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
+ 		break;
+ 	}
+ 
+-- 
+2.3.6
+
+
+From ca7d80c841febeb3688d5ed57660d37b4baedad5 Mon Sep 17 00:00:00 2001
+From: Hui Wang <hui.wang@canonical.com>
+Date: Fri, 24 Apr 2015 13:39:59 +0800
+Subject: [PATCH 095/219] ALSA: hda - fix headset mic detection problem for one
+ more machine
+Cc: mpagano@gentoo.org
+
+commit e8191a8e475551b277d85cd76c3f0f52fdf42e86 upstream.
+
+We have two machines with the alc256 codec in the pin quirk table, so
+we move the common pins into ALC256_STANDARD_PINS.
+
+BugLink: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1447909
+Signed-off-by: Hui Wang <hui.wang@canonical.com>
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/pci/hda/patch_realtek.c | 24 +++++++++++++++---------
+ 1 file changed, 15 insertions(+), 9 deletions(-)
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2210e1b..2fd490b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5144,6 +5144,16 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{0x1b, 0x411111f0}, \
+ 	{0x1e, 0x411111f0}
+ 
++#define ALC256_STANDARD_PINS \
++	{0x12, 0x90a60140}, \
++	{0x14, 0x90170110}, \
++	{0x19, 0x411111f0}, \
++	{0x1a, 0x411111f0}, \
++	{0x1b, 0x411111f0}, \
++	{0x1d, 0x40700001}, \
++	{0x1e, 0x411111f0}, \
++	{0x21, 0x02211020}
++
+ #define ALC282_STANDARD_PINS \
+ 	{0x14, 0x90170110}, \
+ 	{0x18, 0x411111f0}, \
+@@ -5237,15 +5247,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1d, 0x40700001},
+ 		{0x21, 0x02211050}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		{0x12, 0x90a60140},
+-		{0x13, 0x40000000},
+-		{0x14, 0x90170110},
+-		{0x19, 0x411111f0},
+-		{0x1a, 0x411111f0},
+-		{0x1b, 0x411111f0},
+-		{0x1d, 0x40700001},
+-		{0x1e, 0x411111f0},
+-		{0x21, 0x02211020}),
++		ALC256_STANDARD_PINS,
++		{0x13, 0x40000000}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC256_STANDARD_PINS,
++		{0x13, 0x411111f0}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4,
+ 		{0x12, 0x90a60130},
+ 		{0x13, 0x40000000},
+-- 
+2.3.6
+
+
+From 53c20b74579ec9bb7b45b2208fce79df09e8bdfb Mon Sep 17 00:00:00 2001
+From: Ulrik De Bie <ulrik.debie-os@e2big.org>
+Date: Mon, 6 Apr 2015 15:35:38 -0700
+Subject: [PATCH 096/219] Input: elantech - fix absolute mode setting on some
+ ASUS laptops
+Cc: mpagano@gentoo.org
+
+commit bd884149aca61de269fd9bad83fe2a4232ffab21 upstream.
+
+On ASUS TP500LN and X750JN, the touchpad absolute mode is reset each
+time set_rate is done.
+
+In order to fix this, we will verify the firmware version, and if it
+matches the one in those laptops, the set_rate function is overloaded
+with a function elantech_set_rate_restore_reg_07 that performs the
+set_rate with the original function, followed by a restore of reg_07
+(the register that sets the absolute mode on elantech v4 hardware).
+
+Also the ASUS TP500LN and X750JN firmware version, capabilities, and
+button constellation are added to elantech.c
+
+Reported-and-tested-by: George Moutsopoulos <gmoutso@yahoo.co.uk>
+Signed-off-by: Ulrik De Bie <ulrik.debie-os@e2big.org>
+Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/input/mouse/elantech.c | 22 ++++++++++++++++++++++
+ drivers/input/mouse/elantech.h |  1 +
+ 2 files changed, 23 insertions(+)
+
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 6e22682..991dc6b 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -893,6 +893,21 @@ static psmouse_ret_t elantech_process_byte(struct psmouse *psmouse)
+ }
+ 
+ /*
++ * This writes the reg_07 value again to the hardware at the end of every
++ * set_rate call because the register loses its value. reg_07 allows setting
++ * absolute mode on v4 hardware
++ */
++static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
++		unsigned int rate)
++{
++	struct elantech_data *etd = psmouse->private;
++
++	etd->original_set_rate(psmouse, rate);
++	if (elantech_write_reg(psmouse, 0x07, etd->reg_07))
++		psmouse_err(psmouse, "restoring reg_07 failed\n");
++}
++
++/*
+  * Put the touchpad into absolute mode
+  */
+ static int elantech_set_absolute_mode(struct psmouse *psmouse)
+@@ -1094,6 +1109,8 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse,
+  * Asus K53SV              0x450f01        78, 15, 0c      2 hw buttons
+  * Asus G46VW              0x460f02        00, 18, 0c      2 hw buttons
+  * Asus G750JX             0x360f00        00, 16, 0c      2 hw buttons
++ * Asus TP500LN            0x381f17        10, 14, 0e      clickpad
++ * Asus X750JN             0x381f17        10, 14, 0e      clickpad
+  * Asus UX31               0x361f00        20, 15, 0e      clickpad
+  * Asus UX32VD             0x361f02        00, 15, 0e      clickpad
+  * Avatar AVIU-145A2       0x361f00        ?               clickpad
+@@ -1635,6 +1652,11 @@ int elantech_init(struct psmouse *psmouse)
+ 		goto init_fail;
+ 	}
+ 
++	if (etd->fw_version == 0x381f17) {
++		etd->original_set_rate = psmouse->set_rate;
++		psmouse->set_rate = elantech_set_rate_restore_reg_07;
++	}
++
+ 	if (elantech_set_input_params(psmouse)) {
+ 		psmouse_err(psmouse, "failed to query touchpad range.\n");
+ 		goto init_fail;
+diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h
+index 6f3afec..f965d15 100644
+--- a/drivers/input/mouse/elantech.h
++++ b/drivers/input/mouse/elantech.h
+@@ -142,6 +142,7 @@ struct elantech_data {
+ 	struct finger_pos mt[ETP_MAX_FINGERS];
+ 	unsigned char parity[256];
+ 	int (*send_cmd)(struct psmouse *psmouse, unsigned char c, unsigned char *param);
++	void (*original_set_rate)(struct psmouse *psmouse, unsigned int rate);
+ };
+ 
+ #ifdef CONFIG_MOUSE_PS2_ELANTECH
+-- 
+2.3.6
+
+
+From 93ab611572eae4cb426cf006c70a7c7216603cfe Mon Sep 17 00:00:00 2001
+From: Hans de Goede <hdegoede@redhat.com>
+Date: Wed, 8 Apr 2015 09:26:42 -0700
+Subject: [PATCH 097/219] Input: alps - fix touchpad buttons getting stuck when
+ used with trackpoint
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 6bcca19f5dcedc3a006ca0bcc3699a437cadee74 upstream.
+
+When the left touchpad button gets pressed, and then the trackpoint is
+moved, and then the button is released, the following happens:
+
+1) touchpad packet is received, touchpad evdev node reports BTN_LEFT 1
+
+2) pointing stick packet is received, the hw will report a BTN_LEFT 1 in
+   this packet because when the trackstick is active it communicates the
+   combined touchpad + pointing stick buttons in the trackstick packet,
+   since alps_report_bare_ps2_packet passes NULL (*) for the dev2 parameter
+   to alps_report_buttons the combining is not detected and the
+   pointing stick evdev node will also report BTN_LEFT 1
+
+3) on release of the button a pointing stick packet with BTN_LEFT 0 is
+   received and the pointing stick evdev node will report BTN_LEFT 0
+
+Note how, because NULL is passed for dev2, the touchpad evdev node will
+never send BTN_LEFT 0 in this scenario, leading to a stuck mouse button.
+
+This is a regression in 4.0 introduced by commit 04aae283ba6a8
+("Input: ALPS - do not mix trackstick and external PS/2 mouse data")
+
+This commit fixes this by passing in the touchpad evdev as dev2 parameter
+when calling alps_report_buttons for the pointingstick on alps v2 devices,
+so that alps_report_buttons correctly detects that we're already reporting
+the button as pressed via the touchpad evdev node, and will also send the
+release event there.
+
+Reported-by: Hans de Bruin <jmdebruin@xmsnet.nl>
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+Acked-by: Pali Rohár <pali.rohar@gmail.com>
+Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/input/mouse/alps.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index 27bcdbc..ea6cb64 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1159,13 +1159,14 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
+ 					bool report_buttons)
+ {
+ 	struct alps_data *priv = psmouse->private;
+-	struct input_dev *dev;
++	struct input_dev *dev, *dev2 = NULL;
+ 
+ 	/* Figure out which device to use to report the bare packet */
+ 	if (priv->proto_version == ALPS_PROTO_V2 &&
+ 	    (priv->flags & ALPS_DUALPOINT)) {
+ 		/* On V2 devices the DualPoint Stick reports bare packets */
+ 		dev = priv->dev2;
++		dev2 = psmouse->dev;
+ 	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
+ 		/* Register dev3 mouse if we received PS/2 packet first time */
+ 		if (!IS_ERR(priv->dev3))
+@@ -1177,7 +1178,7 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
+ 	}
+ 
+ 	if (report_buttons)
+-		alps_report_buttons(dev, NULL,
++		alps_report_buttons(dev, dev2,
+ 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
+ 
+ 	input_report_rel(dev, REL_X,
+-- 
+2.3.6
+
+
+From 9a7fcd609f2e3eaf2d661ee26ab7601e450cd7a2 Mon Sep 17 00:00:00 2001
+From: Johan Hovold <johan@kernel.org>
+Date: Wed, 25 Mar 2015 12:07:05 +0100
+Subject: [PATCH 098/219] mfd: core: Fix platform-device name collisions
+Cc: mpagano@gentoo.org
+
+commit a77c50b44cfb663ad03faba9800fec19bdf83577 upstream.
+
+Since commit 6e3f62f0793e ("mfd: core: Fix platform-device id
+generation") we honour PLATFORM_DEVID_AUTO and PLATFORM_DEVID_NONE when
+registering mfd-devices.
+
+Unfortunately, some mfd-drivers rely on the old behaviour of generating
+platform-device ids by adding the cell id also to the special value of
+PLATFORM_DEVID_NONE. The resulting platform ids are not only used to
+generate device-unique names, but are also used instead of the cell id
+to identify cells when probing subdevices.
+
+These drivers should be updated to use PLATFORM_DEVID_AUTO, which would
+also allow more than one device to be registered without resorting to
+hacks (see for example wm831x), but let's fix the regression first by
+partially reverting the above mentioned commit with respect to
+PLATFORM_DEVID_NONE.
+
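+For reference, the two special ids involved are defined in
+include/linux/platform_device.h as:
+
+	#define PLATFORM_DEVID_NONE	(-1)
+	#define PLATFORM_DEVID_AUTO	(-2)
+
+The old "id < 0" test below matched both; the fix passes only
+PLATFORM_DEVID_AUTO through unchanged, restoring the "id + cell->id"
+behaviour for PLATFORM_DEVID_NONE.
+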
+Fixes: 6e3f62f0793e ("mfd: core: Fix platform-device id generation")
+Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Signed-off-by: Lee Jones <lee.jones@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mfd/mfd-core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 2a87f69..1aed3b7 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -128,7 +128,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 	int platform_id;
+ 	int r;
+ 
+-	if (id < 0)
++	if (id == PLATFORM_DEVID_AUTO)
+ 		platform_id = id;
+ 	else
+ 		platform_id = id + cell->id;
+-- 
+2.3.6
+
+
+From 671ea8186b4d894fef503c13745152d9827d7a1b Mon Sep 17 00:00:00 2001
+From: Michael Davidson <md@google.com>
+Date: Tue, 14 Apr 2015 15:47:38 -0700
+Subject: [PATCH 099/219] fs/binfmt_elf.c: fix bug in loading of PIE binaries
+Cc: mpagano@gentoo.org
+
+commit a87938b2e246b81b4fb713edb371a9fa3c5c3c86 upstream.
+
+With CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE enabled, and a normal top-down
+address allocation strategy, load_elf_binary() will attempt to map a PIE
+binary into an address range immediately below mm->mmap_base.
+
+Unfortunately, load_elf_binary() does not take account of the need to
+allocate sufficient space for the entire binary which means that, while
+the first PT_LOAD segment is mapped below mm->mmap_base, the subsequent
+PT_LOAD segment(s) end up being mapped above mm->mmap_base into the area
+that is supposed to be the "gap" between the stack and the binary.
+
+Since the size of the "gap" on x86_64 is only guaranteed to be 128MB this
+means that binaries with large data segments > 128MB can end up mapping
+part of their data segment over their stack resulting in corruption of the
+stack (and the data segment once the binary starts to run).
+
+Any PIE binary with a data segment > 128MB is vulnerable to this although
+address randomization means that the actual gap between the stack and the
+end of the binary is normally greater than 128MB.  The larger the data
+segment of the binary the higher the probability of failure.
+
+Fix this by calculating the total size of the binary in the same way as
+load_elf_interp().
+
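+For reference, total_mapping_size() (already used by load_elf_interp())
+computes the span from the first PT_LOAD segment's page-aligned start
+to the last one's end, roughly:
+
+	static unsigned long total_mapping_size(struct elf_phdr *cmds, int nr)
+	{
+		int i, first_idx = -1, last_idx = -1;
+
+		for (i = 0; i < nr; i++)
+			if (cmds[i].p_type == PT_LOAD) {
+				last_idx = i;
+				if (first_idx == -1)
+					first_idx = i;
+			}
+		if (first_idx == -1)
+			return 0;
+
+		return cmds[last_idx].p_vaddr + cmds[last_idx].p_memsz -
+				ELF_PAGESTART(cmds[first_idx].p_vaddr);
+	}
+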
+Signed-off-by: Michael Davidson <md@google.com>
+Cc: Alexander Viro <viro@zeniv.linux.org.uk>
+Cc: Jiri Kosina <jkosina@suse.cz>
+Cc: Kees Cook <keescook@chromium.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/binfmt_elf.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 995986b..d925f55 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -862,6 +862,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 	    i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+ 		int elf_prot = 0, elf_flags;
+ 		unsigned long k, vaddr;
++		unsigned long total_size = 0;
+ 
+ 		if (elf_ppnt->p_type != PT_LOAD)
+ 			continue;
+@@ -924,10 +925,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ #else
+ 			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
+ #endif
++			total_size = total_mapping_size(elf_phdata,
++							loc->elf_ex.e_phnum);
++			if (!total_size) {
++				error = -EINVAL;
++				goto out_free_dentry;
++			}
+ 		}
+ 
+ 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
+-				elf_prot, elf_flags, 0);
++				elf_prot, elf_flags, total_size);
+ 		if (BAD_ADDR(error)) {
+ 			retval = IS_ERR((void *)error) ?
+ 				PTR_ERR((void*)error) : -EINVAL;
+-- 
+2.3.6
+
+
+From 12ea13bf83f15c5cf59b4039295f98b0d7a83881 Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Thu, 16 Apr 2015 12:47:29 -0700
+Subject: [PATCH 100/219] ptrace: fix race between ptrace_resume() and
+ wait_task_stopped()
+Cc: mpagano@gentoo.org
+
+commit b72c186999e689cb0b055ab1c7b3cd8fffbeb5ed upstream.
+
+ptrace_resume() is called when the tracee is still __TASK_TRACED.  We set
+tracee->exit_code and then wake_up_state() changes tracee->state.  If the
+tracer's sub-thread does wait() in between, task_stopped_code(ptrace => T)
+wrongly looks like another report from tracee.
+
+This confuses debugger, and since wait_task_stopped() clears ->exit_code
+the tracee can miss a signal.
+
+Test-case:
+
+	#include <stdio.h>
+	#include <unistd.h>
+	#include <signal.h>
+	#include <sys/wait.h>
+	#include <sys/ptrace.h>
+	#include <pthread.h>
+	#include <assert.h>
+
+	int pid;
+
+	void *waiter(void *arg)
+	{
+		int stat;
+
+		for (;;) {
+			assert(pid == wait(&stat));
+			assert(WIFSTOPPED(stat));
+			if (WSTOPSIG(stat) == SIGHUP)
+				continue;
+
+			assert(WSTOPSIG(stat) == SIGCONT);
+			printf("ERR! extra/wrong report:%x\n", stat);
+		}
+	}
+
+	int main(void)
+	{
+		pthread_t thread;
+
+		pid = fork();
+		if (!pid) {
+			assert(ptrace(PTRACE_TRACEME, 0,0,0) == 0);
+			for (;;)
+				kill(getpid(), SIGHUP);
+		}
+
+		assert(pthread_create(&thread, NULL, waiter, NULL) == 0);
+
+		for (;;)
+			ptrace(PTRACE_CONT, pid, 0, SIGCONT);
+
+		return 0;
+	}
+
+Note for stable: the bug is very old, but without 9899d11f6544 "ptrace:
+ensure arch_ptrace/ptrace_request can never race with SIGKILL" the fix
+should use lock_task_sighand(child).
+
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Reported-by: Pavel Labath <labath@google.com>
+Tested-by: Pavel Labath <labath@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ kernel/ptrace.c | 20 ++++++++++++++++++++
+ 1 file changed, 20 insertions(+)
+
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 227fec3..9a34bd8 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -697,6 +697,8 @@ static int ptrace_peek_siginfo(struct task_struct *child,
+ static int ptrace_resume(struct task_struct *child, long request,
+ 			 unsigned long data)
+ {
++	bool need_siglock;
++
+ 	if (!valid_signal(data))
+ 		return -EIO;
+ 
+@@ -724,8 +726,26 @@ static int ptrace_resume(struct task_struct *child, long request,
+ 		user_disable_single_step(child);
+ 	}
+ 
++	/*
++	 * Change ->exit_code and ->state under siglock to avoid the race
++	 * with wait_task_stopped() in between; a non-zero ->exit_code will
++	 * wrongly look like another report from tracee.
++	 *
++	 * Note that we need siglock even if ->exit_code == data and/or this
++	 * status was not reported yet, the new status must not be cleared by
++	 * wait_task_stopped() after resume.
++	 *
++	 * If data == 0 we do not care if wait_task_stopped() reports the old
++	 * status and clears the code too; this can't race with the tracee, it
++	 * takes siglock after resume.
++	 */
++	need_siglock = data && !thread_group_empty(current);
++	if (need_siglock)
++		spin_lock_irq(&child->sighand->siglock);
+ 	child->exit_code = data;
+ 	wake_up_state(child, __TASK_TRACED);
++	if (need_siglock)
++		spin_unlock_irq(&child->sighand->siglock);
+ 
+ 	return 0;
+ }
+-- 
+2.3.6
+
+
+From 64b22d90114136c3f66fef541c844bc2deb539c5 Mon Sep 17 00:00:00 2001
+From: Len Brown <len.brown@intel.com>
+Date: Tue, 24 Mar 2015 23:23:20 -0400
+Subject: [PATCH 101/219] intel_idle: Update support for Silvermont Core in
+ Baytrail SOC
+Cc: mpagano@gentoo.org
+
+commit d7ef76717322c8e2df7d4360b33faa9466cb1a0d upstream.
+
+On some Silvermont-Core/Baytrail-SOC systems,
+C1E latency is higher than originally specified.
+Although C1E is still enumerated in CPUID.MWAIT.EDX,
+we delete the state from intel_idle to avoid latency impact.
+
+Under some conditions, the latency of the C6N-BYT and C6S-BYT states
+may exceed the specified values of 40 and 140 usec, respectively.
+Increase those values to 300 and 500 usec to ensure
+that the hardware does not violate constraints that may be set
+by the Linux PM_QOS sub-system.
+
+Also increase the C7-BYT target residency to 4.0 ms from 1.5 ms.
+
+Signed-off-by: Len Brown <len.brown@intel.com>
+Cc: Kumar P Mahesh <mahesh.kumar.p@intel.com>
+Cc: Alan Cox <alan@linux.intel.com>
+Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/idle/intel_idle.c | 14 +++-----------
+ 1 file changed, 3 insertions(+), 11 deletions(-)
+
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index b0e5852..44d1d79 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -218,18 +218,10 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+-		.name = "C1E-BYT",
+-		.desc = "MWAIT 0x01",
+-		.flags = MWAIT2flg(0x01),
+-		.exit_latency = 15,
+-		.target_residency = 30,
+-		.enter = &intel_idle,
+-		.enter_freeze = intel_idle_freeze, },
+-	{
+ 		.name = "C6N-BYT",
+ 		.desc = "MWAIT 0x58",
+ 		.flags = MWAIT2flg(0x58) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 40,
++		.exit_latency = 300,
+ 		.target_residency = 275,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -237,7 +229,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.name = "C6S-BYT",
+ 		.desc = "MWAIT 0x52",
+ 		.flags = MWAIT2flg(0x52) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 140,
++		.exit_latency = 500,
+ 		.target_residency = 560,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -246,7 +238,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.desc = "MWAIT 0x60",
+ 		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+ 		.exit_latency = 1200,
+-		.target_residency = 1500,
++		.target_residency = 4000,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+-- 
+2.3.6
+
+
+From 6181a6b2238de82fed39b0568645ea6a1ff2c6fd Mon Sep 17 00:00:00 2001
+From: Nicolas Ferre <nicolas.ferre@atmel.com>
+Date: Tue, 31 Mar 2015 15:02:05 +0200
+Subject: [PATCH 102/219] net/macb: fix the peripheral version test
+Cc: mpagano@gentoo.org
+
+commit 361918970b7426bba97a64678ef2b2679c37199b upstream.
+
+We currently need two checks of the peripheral version in the MACB_MID register.
+One of them got out of sync after modification by 8a013a9c71b2 (net: macb:
+Include multi queue support for xilinx ZynqMP ethernet version).
+Fix this in macb_configure_caps() so that xilinx ZynqMP will be considered
+as a GEM flavor.
+
+Fixes: 8a013a9c71b2 ("net: macb: Include multi queue support for xilinx ZynqMP
+ethernet version")
+
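+The check in question extracts the IDNUM bitfield from the MID register
+to decide whether the peripheral is a GEM flavour; the ZynqMP variant
+reports a higher id, which the old equality test missed:
+
+	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2)
+		bp->caps |= MACB_CAPS_MACB_IS_GEM;
+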
+Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
+Cc: Michal Simek <michal.simek@xilinx.com>
+Cc: Punnaiah Choudary Kalluri <punnaia@xilinx.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/ethernet/cadence/macb.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
+index 81d4153..77bf133 100644
+--- a/drivers/net/ethernet/cadence/macb.c
++++ b/drivers/net/ethernet/cadence/macb.c
+@@ -2165,7 +2165,7 @@ static void macb_configure_caps(struct macb *bp)
+ 		}
+ 	}
+ 
+-	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) == 0x2)
++	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2)
+ 		bp->caps |= MACB_CAPS_MACB_IS_GEM;
+ 
+ 	if (macb_is_gem(bp)) {
+-- 
+2.3.6
+
+
+From 95df5a6b8698921ca30cd55853446016a2acb891 Mon Sep 17 00:00:00 2001
+From: Christophe Ricard <christophe.ricard@gmail.com>
+Date: Tue, 31 Mar 2015 08:02:15 +0200
+Subject: [PATCH 103/219] NFC: st21nfcb: Retry i2c_master_send if it returns a
+ negative value
+Cc: mpagano@gentoo.org
+
+commit d4a41d10b2cb5890aeda6b2912973b2a754b05b1 upstream.
+
+i2c_master_send may return negative values other than -EREMOTEIO.
+For example, when an i2c transaction is NACK'ed on a Raspberry Pi B+
+running kernel 3.18, -EIO is generated instead.
+
+Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
+Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/nfc/st21nfcb/i2c.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/nfc/st21nfcb/i2c.c b/drivers/nfc/st21nfcb/i2c.c
+index eb88693..7b53a5c 100644
+--- a/drivers/nfc/st21nfcb/i2c.c
++++ b/drivers/nfc/st21nfcb/i2c.c
+@@ -109,7 +109,7 @@ static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb)
+ 		return phy->ndlc->hard_fault;
+ 
+ 	r = i2c_master_send(client, skb->data, skb->len);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_send(client, skb->data, skb->len);
+ 	}
+@@ -148,7 +148,7 @@ static int st21nfcb_nci_i2c_read(struct st21nfcb_i2c_phy *phy,
+ 	struct i2c_client *client = phy->i2c_dev;
+ 
+ 	r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+ 	}
+-- 
+2.3.6
+
+
+From 9e2d43e521a469a50ef03b55cef24e7d260bbdbb Mon Sep 17 00:00:00 2001
+From: Larry Finger <Larry.Finger@lwfinger.net>
+Date: Mon, 23 Mar 2015 18:14:10 -0500
+Subject: [PATCH 104/219] rtlwifi: rtl8192cu: Add new USB ID
+Cc: mpagano@gentoo.org
+
+commit 2f92b314f4daff2117847ac5343c54d3d041bf78 upstream.
+
+USB ID 2001:330d is used for a D-Link DWA-131.
+
+Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
+Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/wireless/rtlwifi/rtl8192cu/sw.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+index 90a714c..6fde250 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+@@ -377,6 +377,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
+ 	{RTL_USB_DEVICE(0x2001, 0x3309, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
+ 	{RTL_USB_DEVICE(0x2001, 0x330a, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
++	{RTL_USB_DEVICE(0x2001, 0x330d, rtl92cu_hal_cfg)}, /*D-Link DWA-131 */
+ 	{RTL_USB_DEVICE(0x2019, 0xab2b, rtl92cu_hal_cfg)}, /*Planex -Abocom*/
+ 	{RTL_USB_DEVICE(0x20f4, 0x624d, rtl92cu_hal_cfg)}, /*TRENDNet*/
+ 	{RTL_USB_DEVICE(0x2357, 0x0100, rtl92cu_hal_cfg)}, /*TP-Link WN8200ND*/
+-- 
+2.3.6
+
+
+From a9fe1b9caf0ea4ccada73ce243b23fd6a7e896d3 Mon Sep 17 00:00:00 2001
+From: Marek Vasut <marex@denx.de>
+Date: Thu, 26 Mar 2015 02:16:06 +0100
+Subject: [PATCH 105/219] rtlwifi: rtl8192cu: Add new device ID
+Cc: mpagano@gentoo.org
+
+commit 9374e7d2fdcad3c36dafc8d3effd554bc702c4b6 upstream.
+
+Add new ID for ASUS N10 WiFi dongle.
+
+Signed-off-by: Marek Vasut <marex@denx.de>
+Tested-by: Marek Vasut <marex@denx.de>
+Cc: Larry Finger <Larry.Finger@lwfinger.net>
+Cc: John W. Linville <linville@tuxdriver.com>
+Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
+Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/wireless/rtlwifi/rtl8192cu/sw.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+index 6fde250..23806c2 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+@@ -321,6 +321,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
+ 	{RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
+ 	{RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
++	{RTL_USB_DEVICE(0x0b05, 0x17ba, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
+ 	{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+-- 
+2.3.6
+
+
+From 3536e283ea6797daac8054aebea238cafe9a464c Mon Sep 17 00:00:00 2001
+From: Lukas Czerner <lczerner@redhat.com>
+Date: Fri, 3 Apr 2015 10:46:58 -0400
+Subject: [PATCH 106/219] ext4: make fsync to sync parent dir in no-journal for
+ real this time
+Cc: mpagano@gentoo.org
+
+commit e12fb97222fc41e8442896934f76d39ef99b590a upstream.
+
+Previously, commit 14ece1028b3ed53ffec1b1213ffc6acaf79ad77c added
+support for syncing the parent directory of newly created inodes to
+make sure that the inode is not lost after a power failure in
+no-journal mode.
+
+However, this does not work in the majority of cases, namely:
+ - if the directory has inline data
+ - if the directory is already indexed
+ - if the directory already has at least one block and:
+	- the new entry fits into it
+	- or we've successfully converted it to indexed
+
+So in those cases we might lose the inode entirely even after fsync in
+the no-journal mode. This also includes ext2 default mode obviously.
+
+I've noticed this while running xfstest generic/321 and even though the
+test should fail (we need to run fsck after a crash in no-journal mode)
+I could not find newly created entries even when they had been fsynced
+beforehand.
+
+Fix this by adjusting the ext4_add_entry() successful exit paths to set
+the inode EXT4_STATE_NEWENTRY so that fsync has the chance to fsync the
+parent directory as well.
+
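+For context, a hedged summary of the no-journal fsync path in
+fs/ext4/fsync.c (not part of this patch): when there is no journal,
+fsync falls back to syncing the parent directory of inodes flagged
+with EXT4_STATE_NEWENTRY, roughly:
+
+	if (!journal) {
+		ret = __generic_file_fsync(file, start, end, datasync);
+		if (!ret)
+			ret = ext4_sync_parent(inode);	/* acts on EXT4_STATE_NEWENTRY */
+	}
+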
+Signed-off-by: Lukas Czerner <lczerner@redhat.com>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Reviewed-by: Jan Kara <jack@suse.cz>
+Cc: Frank Mayhar <fmayhar@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/ext4/namei.c | 20 +++++++++++---------
+ 1 file changed, 11 insertions(+), 9 deletions(-)
+
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 28fe71a..aae7011 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1865,7 +1865,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			  struct inode *inode)
+ {
+ 	struct inode *dir = dentry->d_parent->d_inode;
+-	struct buffer_head *bh;
++	struct buffer_head *bh = NULL;
+ 	struct ext4_dir_entry_2 *de;
+ 	struct ext4_dir_entry_tail *t;
+ 	struct super_block *sb;
+@@ -1889,14 +1889,14 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return retval;
+ 		if (retval == 1) {
+ 			retval = 0;
+-			return retval;
++			goto out;
+ 		}
+ 	}
+ 
+ 	if (is_dx(dir)) {
+ 		retval = ext4_dx_add_entry(handle, dentry, inode);
+ 		if (!retval || (retval != ERR_BAD_DX_DIR))
+-			return retval;
++			goto out;
+ 		ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ 		dx_fallback++;
+ 		ext4_mark_inode_dirty(handle, dir);
+@@ -1908,14 +1908,15 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return PTR_ERR(bh);
+ 
+ 		retval = add_dirent_to_buf(handle, dentry, inode, NULL, bh);
+-		if (retval != -ENOSPC) {
+-			brelse(bh);
+-			return retval;
+-		}
++		if (retval != -ENOSPC)
++			goto out;
+ 
+ 		if (blocks == 1 && !dx_fallback &&
+-		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX))
+-			return make_indexed_dir(handle, dentry, inode, bh);
++		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX)) {
++			retval = make_indexed_dir(handle, dentry, inode, bh);
++			bh = NULL; /* make_indexed_dir releases bh */
++			goto out;
++		}
+ 		brelse(bh);
+ 	}
+ 	bh = ext4_append(handle, dir, &block);
+@@ -1931,6 +1932,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 	}
+ 
+ 	retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
++out:
+ 	brelse(bh);
+ 	if (retval == 0)
+ 		ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
+-- 
+2.3.6
+
+
+From 1527fbabfa4fdb32f66b47dd48518572fb4e0eaa Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Wed, 24 Dec 2014 07:20:01 -0600
+Subject: [PATCH 107/219] mnt: Improve the umount_tree flags
+Cc: mpagano@gentoo.org
+
+commit e819f152104c9f7c9fe50e1aecce6f5d4bf06d65 upstream.
+
+- Remove the unneeded declaration from pnode.h
+- Mark umount_tree static as it has no callers outside of namespace.c
+- Define an enumeration of umount_tree's flags.
+- Pass umount_tree's flags in by name
+
+This removes the magic numbers 0, 1 and 2, making the code a little
+clearer, and makes it possible to have lazy unmounts that don't
+propagate, which is what __detach_mounts actually wants, for example.
+
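+For reference, the conversions in the hunks below map the old numeric
+argument onto the new flags as follows:
+
+	umount_tree(mnt, 0)  ->  umount_tree(mnt, UMOUNT_SYNC)
+	umount_tree(mnt, 1)  ->  umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC)
+	umount_tree(mnt, 2)  ->  umount_tree(mnt, UMOUNT_PROPAGATE)
+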
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 31 ++++++++++++++++---------------
+ fs/pnode.h     |  1 -
+ 2 files changed, 16 insertions(+), 16 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 82ef140..712b3c5 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1319,14 +1319,15 @@ static inline void namespace_lock(void)
+ 	down_write(&namespace_sem);
+ }
+ 
++enum umount_tree_flags {
++	UMOUNT_SYNC = 1,
++	UMOUNT_PROPAGATE = 2,
++};
+ /*
+  * mount_lock must be held
+  * namespace_sem must be held for write
+- * how = 0 => just this tree, don't propagate
+- * how = 1 => propagate; we know that nobody else has reference to any victims
+- * how = 2 => lazy umount
+  */
+-void umount_tree(struct mount *mnt, int how)
++static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ {
+ 	HLIST_HEAD(tmp_list);
+ 	struct mount *p;
+@@ -1339,7 +1340,7 @@ void umount_tree(struct mount *mnt, int how)
+ 	hlist_for_each_entry(p, &tmp_list, mnt_hash)
+ 		list_del_init(&p->mnt_child);
+ 
+-	if (how)
++	if (how & UMOUNT_PROPAGATE)
+ 		propagate_umount(&tmp_list);
+ 
+ 	while (!hlist_empty(&tmp_list)) {
+@@ -1349,7 +1350,7 @@ void umount_tree(struct mount *mnt, int how)
+ 		list_del_init(&p->mnt_list);
+ 		__touch_mnt_namespace(p->mnt_ns);
+ 		p->mnt_ns = NULL;
+-		if (how < 2)
++		if (how & UMOUNT_SYNC)
+ 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ 
+ 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
+@@ -1447,14 +1448,14 @@ static int do_umount(struct mount *mnt, int flags)
+ 
+ 	if (flags & MNT_DETACH) {
+ 		if (!list_empty(&mnt->mnt_list))
+-			umount_tree(mnt, 2);
++			umount_tree(mnt, UMOUNT_PROPAGATE);
+ 		retval = 0;
+ 	} else {
+ 		shrink_submounts(mnt);
+ 		retval = -EBUSY;
+ 		if (!propagate_mount_busy(mnt, 2)) {
+ 			if (!list_empty(&mnt->mnt_list))
+-				umount_tree(mnt, 1);
++				umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 			retval = 0;
+ 		}
+ 	}
+@@ -1486,7 +1487,7 @@ void __detach_mounts(struct dentry *dentry)
+ 	lock_mount_hash();
+ 	while (!hlist_empty(&mp->m_list)) {
+ 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
+-		umount_tree(mnt, 2);
++		umount_tree(mnt, UMOUNT_PROPAGATE);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+@@ -1648,7 +1649,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ out:
+ 	if (res) {
+ 		lock_mount_hash();
+-		umount_tree(res, 0);
++		umount_tree(res, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ 	return q;
+@@ -1672,7 +1673,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ {
+ 	namespace_lock();
+ 	lock_mount_hash();
+-	umount_tree(real_mount(mnt), 0);
++	umount_tree(real_mount(mnt), UMOUNT_SYNC);
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+ }
+@@ -1855,7 +1856,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+  out_cleanup_ids:
+ 	while (!hlist_empty(&tree_list)) {
+ 		child = hlist_entry(tree_list.first, struct mount, mnt_hash);
+-		umount_tree(child, 0);
++		umount_tree(child, UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	cleanup_group_ids(source_mnt, NULL);
+@@ -2035,7 +2036,7 @@ static int do_loopback(struct path *path, const char *old_name,
+ 	err = graft_tree(mnt, parent, mp);
+ 	if (err) {
+ 		lock_mount_hash();
+-		umount_tree(mnt, 0);
++		umount_tree(mnt, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ out2:
+@@ -2406,7 +2407,7 @@ void mark_mounts_for_expiry(struct list_head *mounts)
+ 	while (!list_empty(&graveyard)) {
+ 		mnt = list_first_entry(&graveyard, struct mount, mnt_expire);
+ 		touch_mnt_namespace(mnt->mnt_ns);
+-		umount_tree(mnt, 1);
++		umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+@@ -2477,7 +2478,7 @@ static void shrink_submounts(struct mount *mnt)
+ 			m = list_first_entry(&graveyard, struct mount,
+ 						mnt_expire);
+ 			touch_mnt_namespace(m->mnt_ns);
+-			umount_tree(m, 1);
++			umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 		}
+ 	}
+ }
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 4a24635..16afc3d 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -47,7 +47,6 @@ int get_dominating_id(struct mount *mnt, const struct path *root);
+ unsigned int mnt_get_count(struct mount *mnt);
+ void mnt_set_mountpoint(struct mount *, struct mountpoint *,
+ 			struct mount *);
+-void umount_tree(struct mount *, int);
+ struct mount *copy_tree(struct mount *, struct dentry *, int);
+ bool is_path_reachable(struct mount *, struct dentry *,
+ 			 const struct path *root);
+-- 
+2.3.6
+
+
+From a15f7b5e276d1b8f71d3d64d7f3f509e77bee5e4 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Wed, 24 Dec 2014 07:35:10 -0600
+Subject: [PATCH 108/219] mnt: Don't propagate umounts in __detach_mounts
+Cc: mpagano@gentoo.org
+
+commit 8318e667f176f7ea34451a1a530634e293f216ac upstream.
+
+Invoking mount propagation from __detach_mounts is inefficient and
+wrong.
+
+It is inefficient because __detach_mounts already walks the list of
+mounts that where something needs to be done, and mount propagation
+walks some subset of those mounts again.
+
+It is actively wrong because if the dentry that is passed to
+__detach_mounts is not part of the path to a mount that mount should
+not be affected.
+
+change_mnt_propagation(p, MS_PRIVATE) modifies the mount propagation
+tree of a master mount so that its slaves are connected to another
+master if possible.  This means that even removing a mount from the
+middle of a mount tree with __detach_mounts will not deprive any mount
+of propagated mount events.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 712b3c5..616a694 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1487,7 +1487,7 @@ void __detach_mounts(struct dentry *dentry)
+ 	lock_mount_hash();
+ 	while (!hlist_empty(&mp->m_list)) {
+ 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
+-		umount_tree(mnt, UMOUNT_PROPAGATE);
++		umount_tree(mnt, 0);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+-- 
+2.3.6
+
+
+From 953bab2cb35f8f6f2a0183c1b27ff7466f72bccc Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Thu, 18 Dec 2014 13:10:48 -0600
+Subject: [PATCH 109/219] mnt: In umount_tree reuse mnt_list instead of
+ mnt_hash
+Cc: mpagano@gentoo.org
+
+commit c003b26ff98ca04a180ff34c38c007a3998d62f9 upstream.
+
+umount_tree builds a list of mounts that need to be unmounted.
+Utilize mnt_list for this purpose instead of mnt_hash.  This begins to
+allow keeping a mount on the mnt_hash after it is unmounted, which is
+necessary for a properly functioning MNT_LOCKED implementation.
+
+The fact that mnt_list is an ordinary list, making list_move
+available, is a nice bonus.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 20 +++++++++++---------
+ fs/pnode.c     |  6 +++---
+ fs/pnode.h     |  2 +-
+ 3 files changed, 15 insertions(+), 13 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 616a694..18df0af 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1329,23 +1329,25 @@ enum umount_tree_flags {
+  */
+ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ {
+-	HLIST_HEAD(tmp_list);
++	LIST_HEAD(tmp_list);
+ 	struct mount *p;
+ 
+-	for (p = mnt; p; p = next_mnt(p, mnt)) {
+-		hlist_del_init_rcu(&p->mnt_hash);
+-		hlist_add_head(&p->mnt_hash, &tmp_list);
+-	}
++	/* Gather the mounts to umount */
++	for (p = mnt; p; p = next_mnt(p, mnt))
++		list_move(&p->mnt_list, &tmp_list);
+ 
+-	hlist_for_each_entry(p, &tmp_list, mnt_hash)
++	/* Hide the mounts from lookup_mnt and mnt_mounts */
++	list_for_each_entry(p, &tmp_list, mnt_list) {
++		hlist_del_init_rcu(&p->mnt_hash);
+ 		list_del_init(&p->mnt_child);
++	}
+ 
++	/* Add propogated mounts to the tmp_list */
+ 	if (how & UMOUNT_PROPAGATE)
+ 		propagate_umount(&tmp_list);
+ 
+-	while (!hlist_empty(&tmp_list)) {
+-		p = hlist_entry(tmp_list.first, struct mount, mnt_hash);
+-		hlist_del_init_rcu(&p->mnt_hash);
++	while (!list_empty(&tmp_list)) {
++		p = list_first_entry(&tmp_list, struct mount, mnt_list);
+ 		list_del_init(&p->mnt_expire);
+ 		list_del_init(&p->mnt_list);
+ 		__touch_mnt_namespace(p->mnt_ns);
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 260ac8f..bf012af 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -384,7 +384,7 @@ static void __propagate_umount(struct mount *mnt)
+ 		if (child && list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+ 			hlist_del_init_rcu(&child->mnt_hash);
+-			hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
++			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ 		}
+ 	}
+ }
+@@ -396,11 +396,11 @@ static void __propagate_umount(struct mount *mnt)
+  *
+  * vfsmount lock must be held for write
+  */
+-int propagate_umount(struct hlist_head *list)
++int propagate_umount(struct list_head *list)
+ {
+ 	struct mount *mnt;
+ 
+-	hlist_for_each_entry(mnt, list, mnt_hash)
++	list_for_each_entry(mnt, list, mnt_list)
+ 		__propagate_umount(mnt);
+ 	return 0;
+ }
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 16afc3d..aa6d65d 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -40,7 +40,7 @@ static inline void set_mnt_shared(struct mount *mnt)
+ void change_mnt_propagation(struct mount *, int);
+ int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
+ 		struct hlist_head *);
+-int propagate_umount(struct hlist_head *);
++int propagate_umount(struct list_head *);
+ int propagate_mount_busy(struct mount *, int);
+ void mnt_release_group_id(struct mount *);
+ int get_dominating_id(struct mount *mnt, const struct path *root);
+-- 
+2.3.6
+
+
+From 7052e71b2d085f76800115d4a212dcaf82b86262 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Mon, 22 Dec 2014 18:30:08 -0600
+Subject: [PATCH 110/219] mnt: Add MNT_UMOUNT flag
+Cc: mpagano@gentoo.org
+
+commit 590ce4bcbfb4e0462a720a4ad901e84416080bba upstream.
+
+In some instances it is necessary to know if the unmounting
+process has begun on a mount.  Add MNT_UMOUNT to make that reliably
+testable.
+
+This flag is later used in fixing locked mounts with MNT_DETACH.
+
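+With the flag in place, "has unmounting begun on this mount?" becomes
+a reliable one-line test, e.g. (a sketch of what the next patch does
+in __lookup_mnt_last):
+
+	if (p->mnt.mnt_flags & MNT_UMOUNT)
+		continue;	/* this mount is already being torn down */
+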
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c        | 4 +++-
+ fs/pnode.c            | 1 +
+ include/linux/mount.h | 1 +
+ 3 files changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 18df0af..9f3c7e5 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1333,8 +1333,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 	struct mount *p;
+ 
+ 	/* Gather the mounts to umount */
+-	for (p = mnt; p; p = next_mnt(p, mnt))
++	for (p = mnt; p; p = next_mnt(p, mnt)) {
++		p->mnt.mnt_flags |= MNT_UMOUNT;
+ 		list_move(&p->mnt_list, &tmp_list);
++	}
+ 
+ 	/* Hide the mounts from lookup_mnt and mnt_mounts */
+ 	list_for_each_entry(p, &tmp_list, mnt_list) {
+diff --git a/fs/pnode.c b/fs/pnode.c
+index bf012af..ac3aa0d 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -384,6 +384,7 @@ static void __propagate_umount(struct mount *mnt)
+ 		if (child && list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+ 			hlist_del_init_rcu(&child->mnt_hash);
++			child->mnt.mnt_flags |= MNT_UMOUNT;
+ 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ 		}
+ 	}
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c2c561d..564beee 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -61,6 +61,7 @@ struct mnt_namespace;
+ #define MNT_DOOMED		0x1000000
+ #define MNT_SYNC_UMOUNT		0x2000000
+ #define MNT_MARKED		0x4000000
++#define MNT_UMOUNT		0x8000000
+ 
+ struct vfsmount {
+ 	struct dentry *mnt_root;	/* root of the mounted tree */
+-- 
+2.3.6
+
+
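MNT_UMOUNT joins the other mnt_flags bits, so "unmounting has begun" becomes testable without consulting any list. A tiny sketch of the flag-bit pattern; the two values are copied from the header hunk above, the struct is illustrative only.

#include <stdio.h>

#define MNT_DOOMED 0x1000000
#define MNT_UMOUNT 0x8000000

struct vfsmount { int mnt_flags; };

int main(void)
{
    struct vfsmount mnt = { .mnt_flags = MNT_DOOMED };

    mnt.mnt_flags |= MNT_UMOUNT;        /* mark unmount as begun */

    if (mnt.mnt_flags & MNT_UMOUNT)     /* now reliably testable */
        printf("unmounting has begun (flags=%#x)\n", mnt.mnt_flags);
    return 0;
}
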
+From 7a9742a65c02e30a62ae42c765eb4dff26b51cc9 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Mon, 22 Dec 2014 19:12:07 -0600
+Subject: [PATCH 111/219] mnt: Delay removal from the mount hash.
+Cc: mpagano@gentoo.org
+
+commit 411a938b5abc9cb126c41cccf5975ae464fe0f3e upstream.
+
+- Modify __lookup_mnt_last to ignore mounts that have MNT_UMOUNT set.
+- Don't remove mounts from the mount hash table in propagate_umount.
+- Don't remove mounts from the mount hash table in umount_tree before
+  the entire list of mounts to be umounted is selected.
+- Remove mounts from the mount hash table as the last thing that
+  happens in the case where a mount has a parent in umount_tree.
+  Mounts without parents are not hashed (by definition).
+
+This paves the way for delaying removal from the mount hash table even
+further and fixing the MNT_LOCKED vs MNT_DETACH issue.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 13 ++++++++-----
+ fs/pnode.c     |  1 -
+ 2 files changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 9f3c7e5..6c477be 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -632,14 +632,17 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
+  */
+ struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
+ {
+-	struct mount *p, *res;
+-	res = p = __lookup_mnt(mnt, dentry);
++	struct mount *p, *res = NULL;
++	p = __lookup_mnt(mnt, dentry);
+ 	if (!p)
+ 		goto out;
++	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++		res = p;
+ 	hlist_for_each_entry_continue(p, mnt_hash) {
+ 		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+ 			break;
+-		res = p;
++		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++			res = p;
+ 	}
+ out:
+ 	return res;
+@@ -1338,9 +1341,8 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 		list_move(&p->mnt_list, &tmp_list);
+ 	}
+ 
+-	/* Hide the mounts from lookup_mnt and mnt_mounts */
++	/* Hide the mounts from mnt_mounts */
+ 	list_for_each_entry(p, &tmp_list, mnt_list) {
+-		hlist_del_init_rcu(&p->mnt_hash);
+ 		list_del_init(&p->mnt_child);
+ 	}
+ 
+@@ -1367,6 +1369,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 			p->mnt_mountpoint = p->mnt.mnt_root;
+ 			p->mnt_parent = p;
+ 			p->mnt_mp = NULL;
++			hlist_del_init_rcu(&p->mnt_hash);
+ 		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
+ 	}
+diff --git a/fs/pnode.c b/fs/pnode.c
+index ac3aa0d..c27ae38 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -383,7 +383,6 @@ static void __propagate_umount(struct mount *mnt)
+ 		 */
+ 		if (child && list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+-			hlist_del_init_rcu(&child->mnt_hash);
+ 			child->mnt.mnt_flags |= MNT_UMOUNT;
+ 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ 		}
+-- 
+2.3.6
+
+
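With removal from the hash delayed, __lookup_mnt_last() has to skip entries already marked MNT_UMOUNT rather than trust that the chain only holds live mounts. A userspace sketch of that selection rule, using a plain array as an illustrative stand-in for the hash-chain walk:

#include <stdio.h>

#define MNT_UMOUNT 0x8000000

struct mount { int id; int flags; };

static struct mount *lookup_last(struct mount *v, int n)
{
    struct mount *res = NULL;
    int i;

    for (i = 0; i < n; i++)
        if (!(v[i].flags & MNT_UMOUNT))
            res = &v[i];        /* keep the last live candidate */
    return res;
}

int main(void)
{
    struct mount chain[] = {
        { 1, 0 }, { 2, MNT_UMOUNT }, { 3, 0 }, { 4, MNT_UMOUNT },
    };
    struct mount *m = lookup_last(chain, 4);

    printf("last visible mount: %d\n", m ? m->id : -1);  /* prints 3 */
    return 0;
}
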
+From 397dd1fc1225b478824134ddd5540f889b13809d Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Sat, 3 Jan 2015 05:39:35 -0600
+Subject: [PATCH 112/219] mnt: On an unmount propagate clearing of MNT_LOCKED
+Cc: mpagano@gentoo.org
+
+commit 5d88457eb5b86b475422dc882f089203faaeedb5 upstream.
+
+A prerequisite of calling umount_tree is that the point where the tree
+is mounted is valid to unmount.
+
+If we are propagating the effect of the unmount, clear MNT_LOCKED in
+every instance where the same filesystem is mounted on the same
+mountpoint in the mount tree, as we know (by virtue of the fact
+that umount_tree was called) that it is safe to reveal what
+is at that mountpoint.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c |  3 +++
+ fs/pnode.c     | 20 ++++++++++++++++++++
+ fs/pnode.h     |  1 +
+ 3 files changed, 24 insertions(+)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 6c477be..7d9a69d 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1335,6 +1335,9 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 	LIST_HEAD(tmp_list);
+ 	struct mount *p;
+ 
++	if (how & UMOUNT_PROPAGATE)
++		propagate_mount_unlock(mnt);
++
+ 	/* Gather the mounts to umount */
+ 	for (p = mnt; p; p = next_mnt(p, mnt)) {
+ 		p->mnt.mnt_flags |= MNT_UMOUNT;
+diff --git a/fs/pnode.c b/fs/pnode.c
+index c27ae38..8989029 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -362,6 +362,26 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
+ }
+ 
+ /*
++ * Clear MNT_LOCKED when it can be shown to be safe.
++ *
++ * mount_lock lock must be held for write
++ */
++void propagate_mount_unlock(struct mount *mnt)
++{
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m, *child;
++
++	BUG_ON(parent == mnt);
++
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
++		if (child)
++			child->mnt.mnt_flags &= ~MNT_LOCKED;
++	}
++}
++
++/*
+  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
+  * parent propagates to.
+  */
+diff --git a/fs/pnode.h b/fs/pnode.h
+index aa6d65d..af47d4b 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -42,6 +42,7 @@ int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
+ 		struct hlist_head *);
+ int propagate_umount(struct list_head *);
+ int propagate_mount_busy(struct mount *, int);
++void propagate_mount_unlock(struct mount *);
+ void mnt_release_group_id(struct mount *);
+ int get_dominating_id(struct mount *mnt, const struct path *root);
+ unsigned int mnt_get_count(struct mount *mnt);
+-- 
+2.3.6
+
+
+From 928116b22b1eb446c59a0fb93857d7a6d80930af Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Mon, 5 Jan 2015 13:38:04 -0600
+Subject: [PATCH 113/219] mnt: Don't propagate unmounts to locked mounts
+Cc: mpagano@gentoo.org
+
+commit 0c56fe31420ca599c90240315f7959bf1b4eb6ce upstream.
+
+If the first mount in a shared subtree is locked, don't unmount the
+shared subtree.
+
+This is ensured by walking through the mounts' parents before children
+and marking a mount as unmountable if it is not locked, or if it is
+locked but its parent is marked.
+
+This allows recursive mount detach to propagate through a set of
+mounts when unmounting them would not reveal what is under any locked
+mount.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/pnode.c | 32 +++++++++++++++++++++++++++++---
+ fs/pnode.h |  1 +
+ 2 files changed, 30 insertions(+), 3 deletions(-)
+
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 8989029..6367e1e 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -382,6 +382,26 @@ void propagate_mount_unlock(struct mount *mnt)
+ }
+ 
+ /*
++ * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
++ */
++static void mark_umount_candidates(struct mount *mnt)
++{
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m;
++
++	BUG_ON(parent == mnt);
++
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		struct mount *child = __lookup_mnt_last(&m->mnt,
++						mnt->mnt_mountpoint);
++		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
++			SET_MNT_MARK(child);
++		}
++	}
++}
++
++/*
+  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
+  * parent propagates to.
+  */
+@@ -398,10 +418,13 @@ static void __propagate_umount(struct mount *mnt)
+ 		struct mount *child = __lookup_mnt_last(&m->mnt,
+ 						mnt->mnt_mountpoint);
+ 		/*
+-		 * umount the child only if the child has no
+-		 * other children
++		 * umount the child only if the child has no children
++		 * and the child is marked safe to unmount.
+ 		 */
+-		if (child && list_empty(&child->mnt_mounts)) {
++		if (!child || !IS_MNT_MARKED(child))
++			continue;
++		CLEAR_MNT_MARK(child);
++		if (list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+ 			child->mnt.mnt_flags |= MNT_UMOUNT;
+ 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+@@ -420,6 +443,9 @@ int propagate_umount(struct list_head *list)
+ {
+ 	struct mount *mnt;
+ 
++	list_for_each_entry_reverse(mnt, list, mnt_list)
++		mark_umount_candidates(mnt);
++
+ 	list_for_each_entry(mnt, list, mnt_list)
+ 		__propagate_umount(mnt);
+ 	return 0;
+diff --git a/fs/pnode.h b/fs/pnode.h
+index af47d4b..0fcdbe7 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -19,6 +19,7 @@
+ #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
+ #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
+ #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
++#define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
+ 
+ #define CL_EXPIRE    		0x01
+ #define CL_SLAVE     		0x02
+-- 
+2.3.6
+
+
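The patch works in two passes: a reverse walk marks each propagated child that is either unlocked, or locked under a parent that is itself going away, and the forward walk then unmounts only marked children. Below is a small sketch of just that decision rule; the three cases are invented for illustration, not taken from the kernel tree.

#include <stdio.h>

/* Each entry is a child reached by umount propagation, paired with whether
 * its own parent is already marked for unmount. */
struct child { const char *name; int locked; int parent_marked; };

int main(void)
{
    struct child c[] = {
        { "plain child",               0, 0 },
        { "locked, parent going away", 1, 1 },
        { "locked, parent staying",    1, 0 },
    };
    int i;

    for (i = 0; i < 3; i++) {
        /* the rule: unlocked, or locked under a marked parent */
        int marked = !c[i].locked || c[i].parent_marked;

        printf("%-26s -> %s\n", c[i].name,
               marked ? "unmount" : "keep (would reveal data)");
    }
    return 0;
}
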
+From 92e35ac5954f9f7829ad88066930a4b2b58fe4dd Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Mon, 29 Dec 2014 13:03:41 -0600
+Subject: [PATCH 114/219] mnt: Factor out unhash_mnt from detach_mnt and
+ umount_tree
+Cc: mpagano@gentoo.org
+
+commit 7bdb11de8ee4f4ae195e2fa19efd304e0b36c63b upstream.
+
+Create a function unhash_mnt that contains the common code between
+detach_mnt and umount_tree, and use unhash_mnt in place of the common
+code.  This adds an unnecessary list_del_init(mnt->mnt_child) into
+umount_tree, but given that mnt_child is already empty this extra
+line is a no-op.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 21 ++++++++++++---------
+ 1 file changed, 12 insertions(+), 9 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 7d9a69d..0e95c84 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -798,10 +798,8 @@ static void __touch_mnt_namespace(struct mnt_namespace *ns)
+ /*
+  * vfsmount lock must be held for write
+  */
+-static void detach_mnt(struct mount *mnt, struct path *old_path)
++static void unhash_mnt(struct mount *mnt)
+ {
+-	old_path->dentry = mnt->mnt_mountpoint;
+-	old_path->mnt = &mnt->mnt_parent->mnt;
+ 	mnt->mnt_parent = mnt;
+ 	mnt->mnt_mountpoint = mnt->mnt.mnt_root;
+ 	list_del_init(&mnt->mnt_child);
+@@ -814,6 +812,16 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
+ /*
+  * vfsmount lock must be held for write
+  */
++static void detach_mnt(struct mount *mnt, struct path *old_path)
++{
++	old_path->dentry = mnt->mnt_mountpoint;
++	old_path->mnt = &mnt->mnt_parent->mnt;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
+ void mnt_set_mountpoint(struct mount *mnt,
+ 			struct mountpoint *mp,
+ 			struct mount *child_mnt)
+@@ -1364,15 +1372,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 
+ 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
+ 		if (mnt_has_parent(p)) {
+-			hlist_del_init(&p->mnt_mp_list);
+-			put_mountpoint(p->mnt_mp);
+ 			mnt_add_count(p->mnt_parent, -1);
+ 			/* old mountpoint will be dropped when we can do that */
+ 			p->mnt_ex_mountpoint = p->mnt_mountpoint;
+-			p->mnt_mountpoint = p->mnt.mnt_root;
+-			p->mnt_parent = p;
+-			p->mnt_mp = NULL;
+-			hlist_del_init_rcu(&p->mnt_hash);
++			unhash_mnt(p);
+ 		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
+ 	}
+-- 
+2.3.6
+
+
+From 2db706971b3f28b3d59a9af231578803da85def8 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Thu, 15 Jan 2015 22:58:33 -0600
+Subject: [PATCH 115/219] mnt: Factor umount_mnt from umount_tree
+Cc: mpagano@gentoo.org
+
+commit 6a46c5735c29175da55b2fa9d53775182422cdd7 upstream.
+
+For future use factor out a function umount_mnt from umount_tree.
+This function unhashes a mount and remembers where the mount
+was mounted so that eventually when the code makes it to a
+sleeping context the mountpoint can be dput.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 0e95c84..c905e48 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -822,6 +822,16 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
+ /*
+  * vfsmount lock must be held for write
+  */
++static void umount_mnt(struct mount *mnt)
++{
++	/* old mountpoint will be dropped when we can do that */
++	mnt->mnt_ex_mountpoint = mnt->mnt_mountpoint;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
+ void mnt_set_mountpoint(struct mount *mnt,
+ 			struct mountpoint *mp,
+ 			struct mount *child_mnt)
+@@ -1373,9 +1383,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
+ 		if (mnt_has_parent(p)) {
+ 			mnt_add_count(p->mnt_parent, -1);
+-			/* old mountpoint will be dropped when we can do that */
+-			p->mnt_ex_mountpoint = p->mnt_mountpoint;
+-			unhash_mnt(p);
++			umount_mnt(p);
+ 		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
+ 	}
+-- 
+2.3.6
+
+
+From 20e62ee6fa3da23a792ca31d4b68069060317260 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Tue, 23 Dec 2014 21:37:03 -0600
+Subject: [PATCH 116/219] mnt: Honor MNT_LOCKED when detaching mounts
+Cc: mpagano@gentoo.org
+
+commit ce07d891a0891d3c0d0c2d73d577490486b809e1 upstream.
+
+Modify umount(MNT_DETACH) to keep mounts in the hash table that are
+locked to their parent mounts, when the parent is lazily unmounted.
+
+In mntput_no_expire detach the children from the hash table, depending
+on mnt_pin_kill in cleanup_mnt to decrement the mnt_count of the children.
+
+In __detach_mounts, if there are any mounts that have been unmounted
+but are still on the list of mounts of a mountpoint, remove their
+children from the mount hash table and move those children to the unmounted
+list so they won't linger potentially indefinitely waiting for their
+final mntput, now that the mounts serve no purpose.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 29 ++++++++++++++++++++++++++---
+ fs/pnode.h     |  2 ++
+ 2 files changed, 28 insertions(+), 3 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index c905e48..24de1e9 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1099,6 +1099,13 @@ static void mntput_no_expire(struct mount *mnt)
+ 	rcu_read_unlock();
+ 
+ 	list_del(&mnt->mnt_instance);
++
++	if (unlikely(!list_empty(&mnt->mnt_mounts))) {
++		struct mount *p, *tmp;
++		list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++			umount_mnt(p);
++		}
++	}
+ 	unlock_mount_hash();
+ 
+ 	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
+@@ -1372,6 +1379,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 		propagate_umount(&tmp_list);
+ 
+ 	while (!list_empty(&tmp_list)) {
++		bool disconnect;
+ 		p = list_first_entry(&tmp_list, struct mount, mnt_list);
+ 		list_del_init(&p->mnt_expire);
+ 		list_del_init(&p->mnt_list);
+@@ -1380,10 +1388,18 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 		if (how & UMOUNT_SYNC)
+ 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ 
+-		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
++		disconnect = !IS_MNT_LOCKED_AND_LAZY(p);
++
++		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
++				 disconnect ? &unmounted : NULL);
+ 		if (mnt_has_parent(p)) {
+ 			mnt_add_count(p->mnt_parent, -1);
+-			umount_mnt(p);
++			if (!disconnect) {
++				/* Don't forget about p */
++				list_add_tail(&p->mnt_child, &p->mnt_parent->mnt_mounts);
++			} else {
++				umount_mnt(p);
++			}
+ 		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
+ 	}
+@@ -1508,7 +1524,14 @@ void __detach_mounts(struct dentry *dentry)
+ 	lock_mount_hash();
+ 	while (!hlist_empty(&mp->m_list)) {
+ 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
+-		umount_tree(mnt, 0);
++		if (mnt->mnt.mnt_flags & MNT_UMOUNT) {
++			struct mount *p, *tmp;
++			list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++				hlist_add_head(&p->mnt_umount.s_list, &unmounted);
++				umount_mnt(p);
++			}
++		}
++		else umount_tree(mnt, 0);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 0fcdbe7..7114ce6 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -20,6 +20,8 @@
+ #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
+ #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
+ #define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
++#define IS_MNT_LOCKED_AND_LAZY(m) \
++	(((m)->mnt.mnt_flags & (MNT_LOCKED|MNT_SYNC_UMOUNT)) == MNT_LOCKED)
+ 
+ #define CL_EXPIRE    		0x01
+ #define CL_SLAVE     		0x02
+-- 
+2.3.6
+
+
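IS_MNT_LOCKED_AND_LAZY() packs two conditions into one mask compare: MNT_LOCKED must be set and MNT_SYNC_UMOUNT must be clear. A sketch of that test, with illustrative flag values rather than the kernel's:

#include <stdio.h>

#define MNT_LOCKED      0x1     /* illustrative values */
#define MNT_SYNC_UMOUNT 0x2

/* two conditions in one compare: LOCKED set AND SYNC_UMOUNT clear */
#define IS_MNT_LOCKED_AND_LAZY(f) \
    (((f) & (MNT_LOCKED | MNT_SYNC_UMOUNT)) == MNT_LOCKED)

int main(void)
{
    printf("%d\n", IS_MNT_LOCKED_AND_LAZY(MNT_LOCKED));              /* 1 */
    printf("%d\n", IS_MNT_LOCKED_AND_LAZY(MNT_LOCKED |
                                          MNT_SYNC_UMOUNT));         /* 0 */
    printf("%d\n", IS_MNT_LOCKED_AND_LAZY(0));                       /* 0 */
    return 0;
}
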
+From c076cbf218f3cb83dffe6982587d2b9751318962 Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Mon, 19 Jan 2015 11:48:45 -0600
+Subject: [PATCH 117/219] mnt: Fix the error check in __detach_mounts
+Cc: mpagano@gentoo.org
+
+commit f53e57975151f54ad8caa1b0ac8a78091cd5700a upstream.
+
+lookup_mountpoint can return either NULL or an error value.
+Update the test in __detach_mounts to test for an error value
+to avoid pathological cases causing a NULL pointer dereference.
+
+The callers of __detach_mounts should prevent it from ever being
+called on an unlinked dentry, but don't take any chances.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 24de1e9..9e33895 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1518,7 +1518,7 @@ void __detach_mounts(struct dentry *dentry)
+ 
+ 	namespace_lock();
+ 	mp = lookup_mountpoint(dentry);
+-	if (!mp)
++	if (IS_ERR_OR_NULL(mp))
+ 		goto out_unlock;
+ 
+ 	lock_mount_hash();
+-- 
+2.3.6
+
+
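The fix relies on the kernel's error-pointer convention: a lookup can return NULL ("nothing there") or ERR_PTR(-errno) ("the lookup itself failed"), and only IS_ERR_OR_NULL() rejects both. A userspace sketch with simplified stand-ins for <linux/err.h>; lookup() and its modes are illustrative:

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long err)      { return (void *)err; }
static long  PTR_ERR(const void *p) { return (long)p; }
static int   IS_ERR(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
static int IS_ERR_OR_NULL(const void *p) { return !p || IS_ERR(p); }

static int found;               /* dummy object a successful lookup returns */

static void *lookup(int mode)   /* illustrative, not the kernel's lookup */
{
    if (mode == 0)
        return NULL;            /* no mountpoint here */
    if (mode == 1)
        return ERR_PTR(-ENOMEM);/* the lookup itself failed */
    return &found;
}

int main(void)
{
    int mode;

    for (mode = 0; mode < 3; mode++) {
        void *mp = lookup(mode);

        if (IS_ERR_OR_NULL(mp)) {       /* a bare !mp misses mode 1 */
            printf("mode %d: bail out (err %ld)\n", mode,
                   IS_ERR(mp) ? PTR_ERR(mp) : 0L);
            continue;
        }
        printf("mode %d: got a mountpoint\n", mode);
    }
    return 0;
}
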
+From 84b78514033ff22c443473214ab6d0508394cf7a Mon Sep 17 00:00:00 2001
+From: "Eric W. Biederman" <ebiederm@xmission.com>
+Date: Wed, 1 Apr 2015 18:30:06 -0500
+Subject: [PATCH 118/219] mnt: Update detach_mounts to leave mounts connected
+Cc: mpagano@gentoo.org
+
+commit e0c9c0afd2fc958ffa34b697972721d81df8a56f upstream.
+
+Now that it is possible to lazily unmount an entire mount tree and
+leave the individual mounts connected to each other add a new flag
+UMOUNT_CONNECTED to umount_tree to force this behavior and use
+this flag in detach_mounts.
+
+This closes a bug where the deletion of a file or directory could
+trigger an unmount and reveal data under a mount point.
+
+Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namespace.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 9e33895..4622ee3 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1350,6 +1350,7 @@ static inline void namespace_lock(void)
+ enum umount_tree_flags {
+ 	UMOUNT_SYNC = 1,
+ 	UMOUNT_PROPAGATE = 2,
++	UMOUNT_CONNECTED = 4,
+ };
+ /*
+  * mount_lock must be held
+@@ -1388,7 +1389,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ 		if (how & UMOUNT_SYNC)
+ 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ 
+-		disconnect = !IS_MNT_LOCKED_AND_LAZY(p);
++		disconnect = !(((how & UMOUNT_CONNECTED) &&
++				mnt_has_parent(p) &&
++				(p->mnt_parent->mnt.mnt_flags & MNT_UMOUNT)) ||
++			       IS_MNT_LOCKED_AND_LAZY(p));
+ 
+ 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
+ 				 disconnect ? &unmounted : NULL);
+@@ -1531,7 +1535,7 @@ void __detach_mounts(struct dentry *dentry)
+ 				umount_mnt(p);
+ 			}
+ 		}
+-		else umount_tree(mnt, 0);
++		else umount_tree(mnt, UMOUNT_CONNECTED);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+-- 
+2.3.6
+
+
+From 85c75cd8131b5aa9fe4efc6400ae1d0631497720 Mon Sep 17 00:00:00 2001
+From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
+Date: Wed, 18 Mar 2015 08:17:14 +0200
+Subject: [PATCH 119/219] tpm: fix: sanitized code paths in tpm_chip_register()
+Cc: mpagano@gentoo.org
+
+commit 34d47b6322087665be33ca3aa81775b143a4d7ac upstream.
+
+I started to work with the PPI interface so that it would be available
+under the character device sysfs directory and realized that chip
+registration was still too messy.
+
+In TPM 1.x, in some rare scenarios (errors that almost never occur), the
+wrong order of deinitialization steps was taken in teardown. I
+reproduced these scenarios by manually inserting error codes in the
+place of the corresponding function calls.
+
+The key problem is that the teardown is messy with two separate code
+paths (this was inherited when moving code from tpm-interface.c).
+
+Moved TPM 1.x specific register/unregister functionality to its own helper
+functions and added a single code path for teardown in tpm_chip_register().
+Now the code paths have been fixed and it should be easier to review
+this part of the code later on.
+
+Fixes: 7a1d7e6dd76a ("tpm: TPM 2.0 baseline support")
+Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
+Tested-by: Scot Doyle <lkml14@scotdoyle.com>
+Reviewed-by: Peter Huewe <peterhuewe@gmx.de>
+Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/char/tpm/tpm-chip.c | 66 ++++++++++++++++++++++++++++-----------------
+ 1 file changed, 42 insertions(+), 24 deletions(-)
+
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index e096e9c..283f00a 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -170,6 +170,41 @@ static void tpm_dev_del_device(struct tpm_chip *chip)
+ 	device_unregister(&chip->dev);
+ }
+ 
++static int tpm1_chip_register(struct tpm_chip *chip)
++{
++	int rc;
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return 0;
++
++	rc = tpm_sysfs_add_device(chip);
++	if (rc)
++		return rc;
++
++	rc = tpm_add_ppi(chip);
++	if (rc) {
++		tpm_sysfs_del_device(chip);
++		return rc;
++	}
++
++	chip->bios_dir = tpm_bios_log_setup(chip->devname);
++
++	return 0;
++}
++
++static void tpm1_chip_unregister(struct tpm_chip *chip)
++{
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return;
++
++	if (chip->bios_dir)
++		tpm_bios_log_teardown(chip->bios_dir);
++
++	tpm_remove_ppi(chip);
++
++	tpm_sysfs_del_device(chip);
++}
++
+ /*
+  * tpm_chip_register() - create a character device for the TPM chip
+  * @chip: TPM chip to use.
+@@ -185,22 +220,13 @@ int tpm_chip_register(struct tpm_chip *chip)
+ {
+ 	int rc;
+ 
+-	/* Populate sysfs for TPM1 devices. */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		rc = tpm_sysfs_add_device(chip);
+-		if (rc)
+-			goto del_misc;
+-
+-		rc = tpm_add_ppi(chip);
+-		if (rc)
+-			goto del_sysfs;
+-
+-		chip->bios_dir = tpm_bios_log_setup(chip->devname);
+-	}
++	rc = tpm1_chip_register(chip);
++	if (rc)
++		return rc;
+ 
+ 	rc = tpm_dev_add_device(chip);
+ 	if (rc)
+-		return rc;
++		goto out_err;
+ 
+ 	/* Make the chip available. */
+ 	spin_lock(&driver_lock);
+@@ -210,10 +236,8 @@ int tpm_chip_register(struct tpm_chip *chip)
+ 	chip->flags |= TPM_CHIP_FLAG_REGISTERED;
+ 
+ 	return 0;
+-del_sysfs:
+-	tpm_sysfs_del_device(chip);
+-del_misc:
+-	tpm_dev_del_device(chip);
++out_err:
++	tpm1_chip_unregister(chip);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_register);
+@@ -238,13 +262,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 	spin_unlock(&driver_lock);
+ 	synchronize_rcu();
+ 
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		if (chip->bios_dir)
+-			tpm_bios_log_teardown(chip->bios_dir);
+-		tpm_remove_ppi(chip);
+-		tpm_sysfs_del_device(chip);
+-	}
+-
++	tpm1_chip_unregister(chip);
+ 	tpm_dev_del_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+-- 
+2.3.6
+
+
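The shape of the cleanup is a pair of symmetric helpers: register everything in one helper, tear everything down in another, so tpm_chip_register() needs only a single error label that runs the same teardown the unregister path uses. A sketch of that structure under invented names; the device-add failure is simulated:

#include <stdio.h>

static int  add_tpm1(void) { puts("tpm1 parts added");   return 0; }
static void del_tpm1(void) { puts("tpm1 parts removed"); }
static int  add_dev(void)  { puts("chardev add failed"); return -1; }

static int chip_register(void)
{
    int rc = add_tpm1();

    if (rc)
        return rc;
    rc = add_dev();
    if (rc)
        goto out_err;
    return 0;
out_err:
    del_tpm1();     /* the same helper the normal unregister path uses */
    return rc;
}

int main(void)
{
    if (chip_register())
        puts("register failed; teardown ran through one path");
    return 0;
}
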
+From b0566aa080d2ab7f5810f5bdea53c02dfc78ff16 Mon Sep 17 00:00:00 2001
+From: Vinson Lee <vlee@twitter.com>
+Date: Mon, 9 Feb 2015 16:29:37 -0800
+Subject: [PATCH 120/219] perf symbols: Define STT_GNU_IFUNC for glibc 2.9 and
+ older.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 4e31050f482c02c822b150d71cf1ea5be7c9d6e4 upstream.
+
+The token STT_GNU_IFUNC is not available with glibc 2.9 and older.
+Define this token if it is not already defined.
+
+This patch fixes these build errors with older versions of glibc.
+
+  CC       util/symbol-elf.o
+util/symbol-elf.c: In function ‘elf_sym__is_function’:
+util/symbol-elf.c:75: error: ‘STT_GNU_IFUNC’ undeclared (first use in this function)
+util/symbol-elf.c:75: error: (Each undeclared identifier is reported only once
+util/symbol-elf.c:75: error: for each function it appears in.)
+make: *** [util/symbol-elf.o] Error 1
+
+Signed-off-by: Vinson Lee <vlee@twitter.com>
+Acked-by: Namhyung Kim <namhyung@kernel.org>
+Cc: Adrian Hunter <adrian.hunter@intel.com>
+Cc: Anton Blanchard <anton@samba.org>
+Cc: Avi Kivity <avi@cloudius-systems.com>
+Cc: Jiri Olsa <jolsa@redhat.com>
+Cc: Paul Mackerras <paulus@samba.org>
+Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
+Cc: Stephane Eranian <eranian@google.com>
+Cc: Waiman Long <Waiman.Long@hp.com>
+Link: http://lkml.kernel.org/r/1423528286-13630-1-git-send-email-vlee@twopensource.com
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ tools/perf/util/symbol-elf.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 33b7a2a..9bdf007 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -74,6 +74,10 @@ static inline uint8_t elf_sym__type(const GElf_Sym *sym)
+ 	return GELF_ST_TYPE(sym->st_info);
+ }
+ 
++#ifndef STT_GNU_IFUNC
++#define STT_GNU_IFUNC 10
++#endif
++
+ static inline int elf_sym__is_function(const GElf_Sym *sym)
+ {
+ 	return (elf_sym__type(sym) == STT_FUNC ||
+-- 
+2.3.6
+
+
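The guard is the standard header-compatibility pattern: prefer the toolchain's definition when present, otherwise supply the value fixed by the ABI so the code still compiles on old glibc. A minimal sketch:

#include <stdio.h>

#ifndef STT_GNU_IFUNC
#define STT_GNU_IFUNC 10    /* value fixed by the ELF gABI extension */
#endif

int main(void)
{
    printf("STT_GNU_IFUNC = %d\n", STT_GNU_IFUNC);
    return 0;
}
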
+From eefadbaae8af748e25d6fb903b56c6d3e38215b8 Mon Sep 17 00:00:00 2001
+From: "H.J. Lu" <hjl.tools@gmail.com>
+Date: Tue, 17 Mar 2015 15:27:48 -0700
+Subject: [PATCH 121/219] perf tools: Fix perf-read-vdsox32 not building and
+ lib64 install dir
+Cc: mpagano@gentoo.org
+
+commit 76aea7731e7050c066943a1d7456ec6510702601 upstream.
+
+Commit:
+
+  c6e5e9fbc3ea ("perf tools: Fix building error in x86_64 when dwarf unwind is on")
+
+removed the definition of IS_X86_64 but not all places using it, with
+the consequence that perf-read-vdsox32 would not be built anymore, and
+the default lib install directory was 'lib' instead of 'lib64'.
+
+Also needs to go to v3.19.
+
+Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
+Acked-by: Adrian Hunter <adrian.hunter@intel.com>
+Acked-by: Jiri Olsa <jolsa@kernel.org>
+Link: http://lkml.kernel.org/r/CAMe9rOqpGVq3D88w+D15ef7sv6G6k57ZeTvxBm46=WFgzo9p1w@mail.gmail.com
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ tools/perf/config/Makefile | 4 ++--
+ tools/perf/tests/make      | 2 +-
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
+index cc22408..0884d31 100644
+--- a/tools/perf/config/Makefile
++++ b/tools/perf/config/Makefile
+@@ -651,7 +651,7 @@ ifeq (${IS_64_BIT}, 1)
+       NO_PERF_READ_VDSO32 := 1
+     endif
+   endif
+-  ifneq (${IS_X86_64}, 1)
++  ifneq ($(ARCH), x86)
+     NO_PERF_READ_VDSOX32 := 1
+   endif
+   ifndef NO_PERF_READ_VDSOX32
+@@ -699,7 +699,7 @@ sysconfdir = $(prefix)/etc
+ ETC_PERFCONFIG = etc/perfconfig
+ endif
+ ifndef lib
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+diff --git a/tools/perf/tests/make b/tools/perf/tests/make
+index 75709d2..bff8532 100644
+--- a/tools/perf/tests/make
++++ b/tools/perf/tests/make
+@@ -5,7 +5,7 @@ include config/Makefile.arch
+ 
+ # FIXME looks like x86 is the only arch running tests ;-)
+ # we need some IS_(32/64) flag to make this generic
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+-- 
+2.3.6
+
+
+From a245448568a6f791b7d4617e622adf6e7118d174 Mon Sep 17 00:00:00 2001
+From: Vinson Lee <vlee@twitter.com>
+Date: Mon, 23 Mar 2015 12:09:16 -0700
+Subject: [PATCH 122/219] perf tools: Work around lack of sched_getcpu in glibc
+ < 2.6.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit e1e455f4f4d35850c30235747620d0d078fe9f64 upstream.
+
+This patch fixes this build error with glibc < 2.6.
+
+  CC       util/cloexec.o
+cc1: warnings being treated as errors
+util/cloexec.c: In function ‘perf_flag_probe’:
+util/cloexec.c:24: error: implicit declaration of function
+‘sched_getcpu’
+util/cloexec.c:24: error: nested extern declaration of ‘sched_getcpu’
+make: *** [util/cloexec.o] Error 1
+
+Signed-off-by: Vinson Lee <vlee@twitter.com>
+Acked-by: Jiri Olsa <jolsa@kernel.org>
+Acked-by: Namhyung Kim <namhyung@kernel.org>
+Cc: Adrian Hunter <adrian.hunter@intel.com>
+Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
+Cc: Paul Mackerras <paulus@samba.org>
+Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
+Cc: Yann Droneaud <ydroneaud@opteya.com>
+Link: http://lkml.kernel.org/r/1427137761-16119-1-git-send-email-vlee@twopensource.com
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ tools/perf/util/cloexec.c | 6 ++++++
+ tools/perf/util/cloexec.h | 6 ++++++
+ 2 files changed, 12 insertions(+)
+
+diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
+index 6da965b..85b5238 100644
+--- a/tools/perf/util/cloexec.c
++++ b/tools/perf/util/cloexec.c
+@@ -7,6 +7,12 @@
+ 
+ static unsigned long flag = PERF_FLAG_FD_CLOEXEC;
+ 
++int __weak sched_getcpu(void)
++{
++	errno = ENOSYS;
++	return -1;
++}
++
+ static int perf_flag_probe(void)
+ {
+ 	/* use 'safest' configuration as used in perf_evsel__fallback() */
+diff --git a/tools/perf/util/cloexec.h b/tools/perf/util/cloexec.h
+index 94a5a7d..68888c2 100644
+--- a/tools/perf/util/cloexec.h
++++ b/tools/perf/util/cloexec.h
+@@ -3,4 +3,10 @@
+ 
+ unsigned long perf_event_open_cloexec_flag(void);
+ 
++#ifdef __GLIBC_PREREQ
++#if !__GLIBC_PREREQ(2, 6)
++extern int sched_getcpu(void) __THROW;
++#endif
++#endif
++
+ #endif /* __PERF_CLOEXEC_H */
+-- 
+2.3.6
+
+
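The weak-symbol trick lets the file always define sched_getcpu(): a strong definition elsewhere (for example in a static libc that has the function) overrides it, while on old glibc the ENOSYS fallback keeps the link working. A sketch, assuming GCC or Clang for __attribute__((weak)):

#include <stdio.h>
#include <errno.h>

__attribute__((weak)) int sched_getcpu(void)
{
    errno = ENOSYS;
    return -1;
}

int main(void)
{
    int cpu = sched_getcpu();

    if (cpu < 0)
        perror("sched_getcpu");     /* fallback: Function not implemented */
    else
        printf("running on cpu %d\n", cpu);
    return 0;
}
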
+From beda5943f15926783dc6768e8f821266ae6e8fb3 Mon Sep 17 00:00:00 2001
+From: Anton Blanchard <anton@samba.org>
+Date: Tue, 14 Apr 2015 07:51:03 +1000
+Subject: [PATCH 123/219] powerpc/perf: Cap 64bit userspace backtraces to
+ PERF_MAX_STACK_DEPTH
+Cc: mpagano@gentoo.org
+
+commit 9a5cbce421a283e6aea3c4007f141735bf9da8c3 upstream.
+
+We cap 32bit userspace backtraces to PERF_MAX_STACK_DEPTH
+(currently 127), but we forgot to do the same for 64bit backtraces.
+
+Signed-off-by: Anton Blanchard <anton@samba.org>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/perf/callchain.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
+index 2396dda..ead5535 100644
+--- a/arch/powerpc/perf/callchain.c
++++ b/arch/powerpc/perf/callchain.c
+@@ -243,7 +243,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
+ 	sp = regs->gpr[1];
+ 	perf_callchain_store(entry, next_ip);
+ 
+-	for (;;) {
++	while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ 		fp = (unsigned long __user *) sp;
+ 		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+ 			return;
+-- 
+2.3.6
+
+
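The fix simply bounds a walk over user-controlled frame data, so a corrupt or cyclic stack chain can no longer spin forever. A trivial sketch of the change in shape; the runaway frame chain is simulated:

#include <stdio.h>

#define PERF_MAX_STACK_DEPTH 127

int main(void)
{
    unsigned int nr = 0;

    /* simulate a frame chain that never reaches a terminating value */
    while (nr < PERF_MAX_STACK_DEPTH)   /* was: for (;;) */
        nr++;                           /* perf_callchain_store() stand-in */

    printf("stored %u entries, then gave up\n", nr);
    return 0;
}
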
+From f0289e90ac96271337d6d0f9c9a6ceb2aea62a05 Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
+Date: Tue, 24 Mar 2015 09:57:55 -0400
+Subject: [PATCH 124/219] tools lib traceevent kbuffer: Remove extra update to
+ data pointer in PADDING
+Cc: mpagano@gentoo.org
+
+commit c5e691928bf166ac03430e957038b60adba3cf6c upstream.
+
+When an event PADDING is hit (a deleted event that is still in the ring
+buffer), translate_data() sets the length of the padding and also updates
+the data pointer which is passed back to the caller.
+
+This is unneeded because the caller also updates the data pointer with
+the passed back length. translate_data() should not update the pointer,
+only set the length.
+
+Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Jiri Olsa <jolsa@redhat.com>
+Cc: Namhyung Kim <namhyung@kernel.org>
+Link: http://lkml.kernel.org/r/20150324135923.461431960@goodmis.org
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ tools/lib/traceevent/kbuffer-parse.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/tools/lib/traceevent/kbuffer-parse.c b/tools/lib/traceevent/kbuffer-parse.c
+index dcc6652..deb3569 100644
+--- a/tools/lib/traceevent/kbuffer-parse.c
++++ b/tools/lib/traceevent/kbuffer-parse.c
+@@ -372,7 +372,6 @@ translate_data(struct kbuffer *kbuf, void *data, void **rptr,
+ 	switch (type_len) {
+ 	case KBUFFER_TYPE_PADDING:
+ 		*length = read_4(kbuf, data);
+-		data += *length;
+ 		break;
+ 
+ 	case KBUFFER_TYPE_TIME_EXTEND:
+-- 
+2.3.6
+
+
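The contract the fix restores: translate_data() reports a length and the caller advances the cursor by it, so the callee must not advance as well or the padding gets skipped twice. A sketch with an illustrative read_len() and byte stream:

#include <stdio.h>

static const char stream[] = "PPPPdata";    /* 4 padding bytes, then data */

static unsigned int read_len(const char *p)
{
    (void)p;
    return 4;   /* pretend the padding record says "4 bytes" */
}

/* correct: set *length, do NOT touch the cursor */
static const char *translate(const char *data, unsigned int *length)
{
    *length = read_len(data);
    /* the buggy version also did: data += *length; */
    return data;
}

int main(void)
{
    const char *data = stream;
    unsigned int len;

    translate(data, &len);
    data += len;                    /* the caller is the one who advances */
    printf("payload: %s\n", data);  /* prints "data", not past it */
    return 0;
}
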
+From e5e82af52cd373fed10be67faba90cd2eed6fb17 Mon Sep 17 00:00:00 2001
+From: Thomas D <whissi@whissi.de>
+Date: Mon, 5 Jan 2015 21:37:23 +0100
+Subject: [PATCH 125/219] tools/power turbostat: Use $(CURDIR) instead of
+ $(PWD) and add support for O= option in Makefile
+Cc: mpagano@gentoo.org
+
+commit f82263c6989c31ae9b94cecddffb29dcbec38710 upstream.
+
+Since commit ee0778a30153
+("tools/power: turbostat: make Makefile a bit more capable")
+turbostat's Makefile is using
+
+  [...]
+  BUILD_OUTPUT    := $(PWD)
+  [...]
+
+which obviously causes trouble when building "turbostat" with
+
+  make -C /usr/src/linux/tools/power/x86/turbostat ARCH=x86 turbostat
+
+because GNU make neither updates nor guarantees that $PWD is set.
+
+This patch changes the Makefile to use $CURDIR instead, which GNU make
+guarantees to set and update (i.e. when using "make -C ...") and also
+adds support for the O= option (see "make help" in the root of your
+kernel source tree for more details).
+
+Link: https://bugs.gentoo.org/show_bug.cgi?id=533918
+Fixes: ee0778a30153 ("tools/power: turbostat: make Makefile a bit more capable")
+Signed-off-by: Thomas D. <whissi@whissi.de>
+Cc: Mark Asselstine <mark.asselstine@windriver.com>
+Signed-off-by: Len Brown <len.brown@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ tools/power/x86/turbostat/Makefile | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
+index d1b3a36..4039854 100644
+--- a/tools/power/x86/turbostat/Makefile
++++ b/tools/power/x86/turbostat/Makefile
+@@ -1,8 +1,12 @@
+ CC		= $(CROSS_COMPILE)gcc
+-BUILD_OUTPUT	:= $(PWD)
++BUILD_OUTPUT	:= $(CURDIR)
+ PREFIX		:= /usr
+ DESTDIR		:=
+ 
++ifeq ("$(origin O)", "command line")
++	BUILD_OUTPUT := $(O)
++endif
++
+ turbostat : turbostat.c
+ CFLAGS +=	-Wall
+ CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/uapi/asm/msr-index.h"'
+-- 
+2.3.6
+
+
+From 67e9563f2e494959696ff3128cf9d5fb1b3dbad7 Mon Sep 17 00:00:00 2001
+From: Brian Norris <computersforpeace@gmail.com>
+Date: Sat, 28 Feb 2015 02:23:25 -0800
+Subject: [PATCH 126/219] UBI: account for bitflips in both the VID header and
+ data
+Cc: mpagano@gentoo.org
+
+commit 8eef7d70f7c6772c3490f410ee2bceab3b543fa1 upstream.
+
+We are completely discarding the earlier value of 'bitflips', which
+could reflect a bitflip found in ubi_io_read_vid_hdr(). Let's use the
+bitwise OR of header and data 'bitflip' statuses instead.
+
+Coverity CID #1226856
+
+Signed-off-by: Brian Norris <computersforpeace@gmail.com>
+Signed-off-by: Richard Weinberger <richard@nod.at>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mtd/ubi/attach.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index 9d2e16f..b5e1548 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -410,7 +410,7 @@ int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb,
+ 		second_is_newer = !second_is_newer;
+ 	} else {
+ 		dbg_bld("PEB %d CRC is OK", pnum);
+-		bitflips = !!err;
++		bitflips |= !!err;
+ 	}
+ 	mutex_unlock(&ubi->buf_mutex);
+ 
+-- 
+2.3.6
+
+
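The one-character fix turns an overwrite into an accumulation: 'bitflips' ORs together evidence from two reads (VID header, then data), and !!err collapses "some bitflips" into a 0/1 flag. A sketch:

#include <stdio.h>

int main(void)
{
    int hdr_err  = 1;   /* bitflip seen while reading the VID header */
    int data_err = 0;   /* data read back clean */

    int bitflips = !!hdr_err;
    bitflips |= !!data_err; /* was: bitflips = !!data_err; (loses hdr_err) */

    printf("scrub needed: %d\n", bitflips); /* 1 with |=, 0 with = */
    return 0;
}
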
+From 921b47c10b2b18b3562152aa0eacc1b2e56c6996 Mon Sep 17 00:00:00 2001
+From: Brian Norris <computersforpeace@gmail.com>
+Date: Sat, 28 Feb 2015 02:23:26 -0800
+Subject: [PATCH 127/219] UBI: fix out of bounds write
+Cc: mpagano@gentoo.org
+
+commit d74adbdb9abf0d2506a6c4afa534d894f28b763f upstream.
+
+If aeb->lnum >= vol->reserved_pebs, we should not be writing aeb into the
+PEB->LEB mapping.
+
+Caught by Coverity, CID #711212.
+
+Signed-off-by: Brian Norris <computersforpeace@gmail.com>
+Signed-off-by: Richard Weinberger <richard@nod.at>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mtd/ubi/eba.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 16e34b3..8c9a710 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -1419,7 +1419,8 @@ int ubi_eba_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 				 * during re-size.
+ 				 */
+ 				ubi_move_aeb_to_list(av, aeb, &ai->erase);
+-			vol->eba_tbl[aeb->lnum] = aeb->pnum;
++			else
++				vol->eba_tbl[aeb->lnum] = aeb->pnum;
+ 		}
+ 	}
+ 
+-- 
+2.3.6
+
+
+From 5a156e848f96a0f0024ef94a3e19979f8f7e9dbc Mon Sep 17 00:00:00 2001
+From: Brian Norris <computersforpeace@gmail.com>
+Date: Sat, 28 Feb 2015 02:23:27 -0800
+Subject: [PATCH 128/219] UBI: initialize LEB number variable
+Cc: mpagano@gentoo.org
+
+commit f16db8071ce18819fbd705ddcc91c6f392fb61f8 upstream.
+
+In some of the 'out_not_moved' error paths, lnum may be used
+uninitialized. Don't ignore the warning; let's fix it.
+
+This uninitialized variable doesn't have much visible effect in the end,
+since we just schedule the PEB for erasure, and its LEB number doesn't
+really matter (it just gets printed in debug messages). But let's get it
+straight anyway.
+
+Coverity CID #113449
+
+Signed-off-by: Brian Norris <computersforpeace@gmail.com>
+Signed-off-by: Richard Weinberger <richard@nod.at>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mtd/ubi/wl.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 8f7bde6..0bd92d8 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1002,7 +1002,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 				int shutdown)
+ {
+ 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
+-	int vol_id = -1, uninitialized_var(lnum);
++	int vol_id = -1, lnum = -1;
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ 	int anchor = wrk->anchor;
+ #endif
+-- 
+2.3.6
+
+
+From 075831830ff0277572a93633cce3807394955358 Mon Sep 17 00:00:00 2001
+From: Brian Norris <computersforpeace@gmail.com>
+Date: Sat, 28 Feb 2015 02:23:28 -0800
+Subject: [PATCH 129/219] UBI: fix check for "too many bytes"
+Cc: mpagano@gentoo.org
+
+commit 299d0c5b27346a77a0777c993372bf8777d4f2e5 upstream.
+
+The comparison from the previous line seems to have been erroneously
+(partially) copied-and-pasted onto the next. The second line should be
+checking req.bytes, not req.lnum.
+
+Coverity CID #139400
+
+Signed-off-by: Brian Norris <computersforpeace@gmail.com>
+[rw: Fixed comparison]
+Signed-off-by: Richard Weinberger <richard@nod.at>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/mtd/ubi/cdev.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
+index d647e50..d16fccf 100644
+--- a/drivers/mtd/ubi/cdev.c
++++ b/drivers/mtd/ubi/cdev.c
+@@ -455,7 +455,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
+ 		/* Validate the request */
+ 		err = -EINVAL;
+ 		if (req.lnum < 0 || req.lnum >= vol->reserved_pebs ||
+-		    req.bytes < 0 || req.lnum >= vol->usable_leb_size)
++		    req.bytes < 0 || req.bytes > vol->usable_leb_size)
+ 			break;
+ 
+ 		err = get_exclusive(desc);
+-- 
+2.3.6
+
+
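The repaired check validates each request field against its own limit; the copy-paste had tested lnum a second time where bytes belonged. A sketch with illustrative limits:

#include <stdio.h>

struct req { int lnum; long bytes; };

static int validate(const struct req *r, int reserved_pebs, long leb_size)
{
    if (r->lnum < 0 || r->lnum >= reserved_pebs)
        return -1;
    if (r->bytes < 0 || r->bytes > leb_size)  /* was: r->lnum >= leb_size */
        return -1;
    return 0;
}

int main(void)
{
    struct req bad = { .lnum = 1, .bytes = 1L << 20 }; /* bytes too big */

    printf("valid: %d\n", validate(&bad, 128, 4096) == 0); /* prints 0 */
    return 0;
}
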
+From 1d05935b31efb2e398e1772b76a6513b9484574a Mon Sep 17 00:00:00 2001
+From: "K. Y. Srinivasan" <kys@microsoft.com>
+Date: Fri, 27 Mar 2015 00:27:18 -0700
+Subject: [PATCH 130/219] scsi: storvsc: Fix a bug in copy_from_bounce_buffer()
+Cc: mpagano@gentoo.org
+
+commit 8de580742fee8bc34d116f57a20b22b9a5f08403 upstream.
+
+We may exit this function without properly freeing up the mappings
+we may have acquired. Fix the bug.
+
+Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
+Reviewed-by: Long Li <longli@microsoft.com>
+Signed-off-by: James Bottomley <JBottomley@Odin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/scsi/storvsc_drv.c | 15 ++++++++-------
+ 1 file changed, 8 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index efc6e44..bf8c5c1 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -746,21 +746,22 @@ static unsigned int copy_to_bounce_buffer(struct scatterlist *orig_sgl,
+ 			if (bounce_sgl[j].length == PAGE_SIZE) {
+ 				/* full..move to next entry */
+ 				sg_kunmap_atomic(bounce_addr);
++				bounce_addr = 0;
+ 				j++;
++			}
+ 
+-				/* if we need to use another bounce buffer */
+-				if (srclen || i != orig_sgl_count - 1)
+-					bounce_addr = sg_kmap_atomic(bounce_sgl,j);
++			/* if we need to use another bounce buffer */
++			if (srclen && bounce_addr == 0)
++				bounce_addr = sg_kmap_atomic(bounce_sgl, j);
+ 
+-			} else if (srclen == 0 && i == orig_sgl_count - 1) {
+-				/* unmap the last bounce that is < PAGE_SIZE */
+-				sg_kunmap_atomic(bounce_addr);
+-			}
+ 		}
+ 
+ 		sg_kunmap_atomic(src_addr - orig_sgl[i].offset);
+ 	}
+ 
++	if (bounce_addr)
++		sg_kunmap_atomic(bounce_addr);
++
+ 	local_irq_restore(flags);
+ 
+ 	return total_copied;
+-- 
+2.3.6
+
+
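The invariant the fix restores: every mapping taken inside the copy loop is released exactly once, including whatever is still mapped when the loop exits. A sketch in which map()/unmap() are illustrative stand-ins for sg_kmap_atomic()/sg_kunmap_atomic():

#include <stdio.h>

static int live;    /* currently held mappings */

static long map(int j)    { live++; return j + 1; } /* nonzero handle */
static void unmap(long a) { (void)a; live--; }

int main(void)
{
    long bounce_addr = 0;
    int i;

    for (i = 0; i < 3; i++) {
        int full = (i < 2);     /* pretend the first two pages fill up */

        if (!bounce_addr)
            bounce_addr = map(i);
        /* ... copy into the mapping ... */
        if (full) {
            unmap(bounce_addr);
            bounce_addr = 0;    /* forget it, so the next pass re-maps */
        }
    }
    if (bounce_addr)    /* the fix: release whatever is still mapped */
        unmap(bounce_addr);

    printf("mappings still held: %d\n", live);  /* 0 = balanced */
    return 0;
}
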
+From 7f61df07930dae7b1a94f088365362a191d2f4ec Mon Sep 17 00:00:00 2001
+From: Nicholas Bellinger <nab@linux-iscsi.org>
+Date: Thu, 26 Feb 2015 22:19:15 -0800
+Subject: [PATCH 131/219] iscsi-target: Convert iscsi_thread_set usage to
+ kthread.h
+Cc: mpagano@gentoo.org
+
+commit 88dcd2dab5c23b1c9cfc396246d8f476c872f0ca upstream.
+
+This patch converts iscsi-target code to use modern kthread.h API
+callers for creating RX/TX threads for each new iscsi_conn descriptor,
+and releasing associated RX/TX threads during connection shutdown.
+
+This is done using iscsit_start_kthreads() -> kthread_run() to start
+new kthreads from within iscsi_post_login_handler(), and invoking
+kthread_stop() from existing iscsit_close_connection() code.
+
+Also, convert iscsit_logout_post_handler_closesession() code to use
+cmpxchg when determining when iscsit_cause_connection_reinstatement()
+needs to sleep waiting for completion.
+
+Reported-by: Sagi Grimberg <sagig@mellanox.com>
+Tested-by: Sagi Grimberg <sagig@mellanox.com>
+Cc: Slava Shwartsman <valyushash@gmail.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/target/iscsi/iscsi_target.c       | 104 +++++++++++++-----------------
+ drivers/target/iscsi/iscsi_target_erl0.c  |  13 ++--
+ drivers/target/iscsi/iscsi_target_login.c |  59 +++++++++++++++--
+ include/target/iscsi/iscsi_target_core.h  |   7 ++
+ 4 files changed, 114 insertions(+), 69 deletions(-)
+
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 77d6425..5e35612 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -537,7 +537,7 @@ static struct iscsit_transport iscsi_target_transport = {
+ 
+ static int __init iscsi_target_init_module(void)
+ {
+-	int ret = 0;
++	int ret = 0, size;
+ 
+ 	pr_debug("iSCSI-Target "ISCSIT_VERSION"\n");
+ 
+@@ -546,6 +546,7 @@ static int __init iscsi_target_init_module(void)
+ 		pr_err("Unable to allocate memory for iscsit_global\n");
+ 		return -1;
+ 	}
++	spin_lock_init(&iscsit_global->ts_bitmap_lock);
+ 	mutex_init(&auth_id_lock);
+ 	spin_lock_init(&sess_idr_lock);
+ 	idr_init(&tiqn_idr);
+@@ -555,15 +556,11 @@ static int __init iscsi_target_init_module(void)
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	ret = iscsi_thread_set_init();
+-	if (ret < 0)
++	size = BITS_TO_LONGS(ISCSIT_BITMAP_BITS) * sizeof(long);
++	iscsit_global->ts_bitmap = vzalloc(size);
++	if (!iscsit_global->ts_bitmap) {
++		pr_err("Unable to allocate iscsit_global->ts_bitmap\n");
+ 		goto configfs_out;
+-
+-	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
+-			TARGET_THREAD_SET_COUNT) {
+-		pr_err("iscsi_allocate_thread_sets() returned"
+-			" unexpected value!\n");
+-		goto ts_out1;
+ 	}
+ 
+ 	lio_qr_cache = kmem_cache_create("lio_qr_cache",
+@@ -572,7 +569,7 @@ static int __init iscsi_target_init_module(void)
+ 	if (!lio_qr_cache) {
+ 		pr_err("nable to kmem_cache_create() for"
+ 				" lio_qr_cache\n");
+-		goto ts_out2;
++		goto bitmap_out;
+ 	}
+ 
+ 	lio_dr_cache = kmem_cache_create("lio_dr_cache",
+@@ -617,10 +614,8 @@ dr_out:
+ 	kmem_cache_destroy(lio_dr_cache);
+ qr_out:
+ 	kmem_cache_destroy(lio_qr_cache);
+-ts_out2:
+-	iscsi_deallocate_thread_sets();
+-ts_out1:
+-	iscsi_thread_set_free();
++bitmap_out:
++	vfree(iscsit_global->ts_bitmap);
+ configfs_out:
+ 	iscsi_target_deregister_configfs();
+ out:
+@@ -630,8 +625,6 @@ out:
+ 
+ static void __exit iscsi_target_cleanup_module(void)
+ {
+-	iscsi_deallocate_thread_sets();
+-	iscsi_thread_set_free();
+ 	iscsit_release_discovery_tpg();
+ 	iscsit_unregister_transport(&iscsi_target_transport);
+ 	kmem_cache_destroy(lio_qr_cache);
+@@ -641,6 +634,7 @@ static void __exit iscsi_target_cleanup_module(void)
+ 
+ 	iscsi_target_deregister_configfs();
+ 
++	vfree(iscsit_global->ts_bitmap);
+ 	kfree(iscsit_global);
+ }
+ 
+@@ -3715,17 +3709,16 @@ static int iscsit_send_reject(
+ 
+ void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
+ {
+-	struct iscsi_thread_set *ts = conn->thread_set;
+ 	int ord, cpu;
+ 	/*
+-	 * thread_id is assigned from iscsit_global->ts_bitmap from
+-	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
++	 * bitmap_id is assigned from iscsit_global->ts_bitmap from
++	 * within iscsit_start_kthreads()
+ 	 *
+-	 * Here we use thread_id to determine which CPU that this
+-	 * iSCSI connection's iscsi_thread_set will be scheduled to
++	 * Here we use bitmap_id to determine which CPU that this
++	 * iSCSI connection's RX/TX threads will be scheduled to
+ 	 * execute upon.
+ 	 */
+-	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
++	ord = conn->bitmap_id % cpumask_weight(cpu_online_mask);
+ 	for_each_online_cpu(cpu) {
+ 		if (ord-- == 0) {
+ 			cpumask_set_cpu(cpu, conn->conn_cpumask);
+@@ -3914,7 +3907,7 @@ check_rsp_state:
+ 	switch (state) {
+ 	case ISTATE_SEND_LOGOUTRSP:
+ 		if (!iscsit_logout_post_handler(cmd, conn))
+-			goto restart;
++			return -ECONNRESET;
+ 		/* fall through */
+ 	case ISTATE_SEND_STATUS:
+ 	case ISTATE_SEND_ASYNCMSG:
+@@ -3942,8 +3935,6 @@ check_rsp_state:
+ 
+ err:
+ 	return -1;
+-restart:
+-	return -EAGAIN;
+ }
+ 
+ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+@@ -3970,21 +3961,13 @@ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+ int iscsi_target_tx_thread(void *arg)
+ {
+ 	int ret = 0;
+-	struct iscsi_conn *conn;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+ 	 * connection recovery / failure event can be triggered externally.
+ 	 */
+ 	allow_signal(SIGINT);
+ 
+-restart:
+-	conn = iscsi_tx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+-	ret = 0;
+-
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3993,11 +3976,9 @@ restart:
+ 		iscsit_thread_check_cpumask(conn, current, 1);
+ 
+ 		wait_event_interruptible(conn->queues_wq,
+-					 !iscsit_conn_all_queues_empty(conn) ||
+-					 ts->status == ISCSI_THREAD_SET_RESET);
++					 !iscsit_conn_all_queues_empty(conn));
+ 
+-		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
+-		     signal_pending(current))
++		if (signal_pending(current))
+ 			goto transport_err;
+ 
+ get_immediate:
+@@ -4008,15 +3989,14 @@ get_immediate:
+ 		ret = iscsit_handle_response_queue(conn);
+ 		if (ret == 1)
+ 			goto get_immediate;
+-		else if (ret == -EAGAIN)
+-			goto restart;
++		else if (ret == -ECONNRESET)
++			goto out;
+ 		else if (ret < 0)
+ 			goto transport_err;
+ 	}
+ 
+ transport_err:
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+ out:
+ 	return 0;
+ }
+@@ -4111,8 +4091,7 @@ int iscsi_target_rx_thread(void *arg)
+ 	int ret;
+ 	u8 buffer[ISCSI_HDR_LEN], opcode;
+ 	u32 checksum = 0, digest = 0;
+-	struct iscsi_conn *conn = NULL;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	struct kvec iov;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+@@ -4120,11 +4099,6 @@ int iscsi_target_rx_thread(void *arg)
+ 	 */
+ 	allow_signal(SIGINT);
+ 
+-restart:
+-	conn = iscsi_rx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+ 	if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
+ 		struct completion comp;
+ 		int rc;
+@@ -4134,7 +4108,7 @@ restart:
+ 		if (rc < 0)
+ 			goto transport_err;
+ 
+-		goto out;
++		goto transport_err;
+ 	}
+ 
+ 	while (!kthread_should_stop()) {
+@@ -4210,8 +4184,6 @@ transport_err:
+ 	if (!signal_pending(current))
+ 		atomic_set(&conn->transport_failed, 1);
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+-out:
+ 	return 0;
+ }
+ 
+@@ -4273,7 +4245,24 @@ int iscsit_close_connection(
+ 	if (conn->conn_transport->transport_type == ISCSI_TCP)
+ 		complete(&conn->conn_logout_comp);
+ 
+-	iscsi_release_thread_set(conn);
++	if (!strcmp(current->comm, ISCSI_RX_THREAD_NAME)) {
++		if (conn->tx_thread &&
++		    cmpxchg(&conn->tx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->tx_thread, 1);
++			kthread_stop(conn->tx_thread);
++		}
++	} else if (!strcmp(current->comm, ISCSI_TX_THREAD_NAME)) {
++		if (conn->rx_thread &&
++		    cmpxchg(&conn->rx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->rx_thread, 1);
++			kthread_stop(conn->rx_thread);
++		}
++	}
++
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
+ 
+ 	iscsit_stop_timers_for_cmds(conn);
+ 	iscsit_stop_nopin_response_timer(conn);
+@@ -4551,15 +4540,13 @@ static void iscsit_logout_post_handler_closesession(
+ 	struct iscsi_conn *conn)
+ {
+ 	struct iscsi_session *sess = conn->sess;
+-
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+ 
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
+ 
+ 	iscsit_dec_conn_usage_count(conn);
+-	iscsit_stop_session(sess, 1, 1);
++	iscsit_stop_session(sess, sleep, sleep);
+ 	iscsit_dec_session_usage_count(sess);
+ 	target_put_session(sess->se_sess);
+ }
+@@ -4567,13 +4554,12 @@ static void iscsit_logout_post_handler_closesession(
+ static void iscsit_logout_post_handler_samecid(
+ 	struct iscsi_conn *conn)
+ {
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+ 
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
+ 
+-	iscsit_cause_connection_reinstatement(conn, 1);
++	iscsit_cause_connection_reinstatement(conn, sleep);
+ 	iscsit_dec_conn_usage_count(conn);
+ }
+ 
+diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
+index bdd8731..e008ed2 100644
+--- a/drivers/target/iscsi/iscsi_target_erl0.c
++++ b/drivers/target/iscsi/iscsi_target_erl0.c
+@@ -860,7 +860,10 @@ void iscsit_connection_reinstatement_rcfr(struct iscsi_conn *conn)
+ 	}
+ 	spin_unlock_bh(&conn->state_lock);
+ 
+-	iscsi_thread_set_force_reinstatement(conn);
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
+ 
+ sleep:
+ 	wait_for_completion(&conn->conn_wait_rcfr_comp);
+@@ -885,10 +888,10 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
+ 		return;
+ 	}
+ 
+-	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
+-		spin_unlock_bh(&conn->state_lock);
+-		return;
+-	}
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
+ 
+ 	atomic_set(&conn->connection_reinstatement, 1);
+ 	if (!sleep) {
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 153fb66..345f073 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -699,6 +699,51 @@ static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
+ 		iscsit_start_nopin_timer(conn);
+ }
+ 
++int iscsit_start_kthreads(struct iscsi_conn *conn)
++{
++	int ret = 0;
++
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	conn->bitmap_id = bitmap_find_free_region(iscsit_global->ts_bitmap,
++					ISCSIT_BITMAP_BITS, get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
++
++	if (conn->bitmap_id < 0) {
++		pr_err("bitmap_find_free_region() failed for"
++		       " iscsit_start_kthreads()\n");
++		return -ENOMEM;
++	}
++
++	conn->tx_thread = kthread_run(iscsi_target_tx_thread, conn,
++				      "%s", ISCSI_TX_THREAD_NAME);
++	if (IS_ERR(conn->tx_thread)) {
++		pr_err("Unable to start iscsi_target_tx_thread\n");
++		ret = PTR_ERR(conn->tx_thread);
++		goto out_bitmap;
++	}
++	conn->tx_thread_active = true;
++
++	conn->rx_thread = kthread_run(iscsi_target_rx_thread, conn,
++				      "%s", ISCSI_RX_THREAD_NAME);
++	if (IS_ERR(conn->rx_thread)) {
++		pr_err("Unable to start iscsi_target_rx_thread\n");
++		ret = PTR_ERR(conn->rx_thread);
++		goto out_tx;
++	}
++	conn->rx_thread_active = true;
++
++	return 0;
++out_tx:
++	kthread_stop(conn->tx_thread);
++	conn->tx_thread_active = false;
++out_bitmap:
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
++	return ret;
++}
++
+ int iscsi_post_login_handler(
+ 	struct iscsi_np *np,
+ 	struct iscsi_conn *conn,
+@@ -709,7 +754,7 @@ int iscsi_post_login_handler(
+ 	struct se_session *se_sess = sess->se_sess;
+ 	struct iscsi_portal_group *tpg = sess->tpg;
+ 	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+-	struct iscsi_thread_set *ts;
++	int rc;
+ 
+ 	iscsit_inc_conn_usage_count(conn);
+ 
+@@ -724,7 +769,6 @@ int iscsi_post_login_handler(
+ 	/*
+ 	 * SCSI Initiator -> SCSI Target Port Mapping
+ 	 */
+-	ts = iscsi_get_thread_set();
+ 	if (!zero_tsih) {
+ 		iscsi_set_session_parameters(sess->sess_ops,
+ 				conn->param_list, 0);
+@@ -751,9 +795,11 @@ int iscsi_post_login_handler(
+ 			sess->sess_ops->InitiatorName);
+ 		spin_unlock_bh(&sess->conn_lock);
+ 
+-		iscsi_post_login_start_timers(conn);
++		rc = iscsit_start_kthreads(conn);
++		if (rc)
++			return rc;
+ 
+-		iscsi_activate_thread_set(conn, ts);
++		iscsi_post_login_start_timers(conn);
+ 		/*
+ 		 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 		 * are scheduled on the same CPU.
+@@ -810,8 +856,11 @@ int iscsi_post_login_handler(
+ 		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
+ 	spin_unlock_bh(&se_tpg->session_lock);
+ 
++	rc = iscsit_start_kthreads(conn);
++	if (rc)
++		return rc;
++
+ 	iscsi_post_login_start_timers(conn);
+-	iscsi_activate_thread_set(conn, ts);
+ 	/*
+ 	 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 	 * are scheduled on the same CPU.
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index d3583d3..dd0f3ab 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -602,6 +602,11 @@ struct iscsi_conn {
+ 	struct iscsi_session	*sess;
+ 	/* Pointer to thread_set in use for this conn's threads */
+ 	struct iscsi_thread_set	*thread_set;
++	int			bitmap_id;
++	int			rx_thread_active;
++	struct task_struct	*rx_thread;
++	int			tx_thread_active;
++	struct task_struct	*tx_thread;
+ 	/* list_head for session connection list */
+ 	struct list_head	conn_list;
+ } ____cacheline_aligned;
+@@ -871,10 +876,12 @@ struct iscsit_global {
+ 	/* Unique identifier used for the authentication daemon */
+ 	u32			auth_id;
+ 	u32			inactive_ts;
++#define ISCSIT_BITMAP_BITS	262144
+ 	/* Thread Set bitmap count */
+ 	int			ts_bitmap_count;
+ 	/* Thread Set bitmap pointer */
+ 	unsigned long		*ts_bitmap;
++	spinlock_t		ts_bitmap_lock;
+ 	/* Used for iSCSI discovery session authentication */
+ 	struct iscsi_node_acl	discovery_acl;
+ 	struct iscsi_portal_group	*discovery_tpg;
+-- 
+2.3.6
+
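+The iscsit_start_kthreads() helper introduced above follows the usual
+kthread_run()/kthread_stop() lifecycle, unwinding the bitmap slot and the
+TX thread if a later step fails. A minimal sketch of that pattern in
+isolation (worker_fn and start_worker are illustrative names, not code
+from the driver):
+
+	#include <linux/err.h>
+	#include <linux/kthread.h>
+	#include <linux/sched.h>
+
+	static int worker_fn(void *data)
+	{
+		/* Idle until kthread_stop(), like the iSCSI RX/TX loops. */
+		while (!kthread_should_stop())
+			schedule_timeout_interruptible(HZ);
+		return 0;
+	}
+
+	static struct task_struct *start_worker(void *data)
+	{
+		struct task_struct *t = kthread_run(worker_fn, data, "worker");
+
+		/* kthread_run() returns ERR_PTR() on failure, never NULL. */
+		if (IS_ERR(t))
+			return NULL;	/* caller unwinds, e.g. frees its bitmap slot */
+		return t;
+	}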
+
+From ca7767a3f859d6e5487ddcf7a23515e19188b922 Mon Sep 17 00:00:00 2001
+From: Nicholas Bellinger <nab@linux-iscsi.org>
+Date: Tue, 7 Apr 2015 21:53:27 +0000
+Subject: [PATCH 132/219] target: Fix COMPARE_AND_WRITE with SG_TO_MEM_NOALLOC
+ handling
+Cc: mpagano@gentoo.org
+
+commit c8e639852ad720499912acedfd6b072325fd2807 upstream.
+
+This patch fixes a bug in COMPARE_AND_WRITE handling for
+fabrics using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
+
+It adds the missing allocation for cmd->t_bidi_data_sg within
+transport_generic_new_cmd() that is used by COMPARE_AND_WRITE
+for the initial READ payload, even if the fabric is already
+providing a pre-allocated buffer for cmd->t_data_sg.
+
+Also, fix zero-length COMPARE_AND_WRITE handling within the
+compare_and_write_callback() and target_complete_ok_work()
+to queue the response, skipping the initial READ.
+
+This fixes COMPARE_AND_WRITE emulation with loopback, vhost,
+and xen-backend fabric drivers using SG_TO_MEM_NOALLOC.
+
+Reported-by: Christoph Hellwig <hch@lst.de>
+Cc: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/target/target_core_sbc.c       | 15 +++++++++-----
+ drivers/target/target_core_transport.c | 37 ++++++++++++++++++++++++++++++----
+ include/target/target_core_base.h      |  2 +-
+ 3 files changed, 44 insertions(+), 10 deletions(-)
+
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index 3e72974..755bd9b3 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -312,7 +312,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	return 0;
+ }
+ 
+-static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd)
++static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success)
+ {
+ 	unsigned char *buf, *addr;
+ 	struct scatterlist *sg;
+@@ -376,7 +376,7 @@ sbc_execute_rw(struct se_cmd *cmd)
+ 			       cmd->data_direction);
+ }
+ 
+-static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 
+@@ -399,7 +399,7 @@ static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
+ 	return TCM_NO_SENSE;
+ }
+ 
+-static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 	struct scatterlist *write_sg = NULL, *sg;
+@@ -414,11 +414,16 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
+ 
+ 	/*
+ 	 * Handle early failure in transport_generic_request_failure(),
+-	 * which will not have taken ->caw_mutex yet..
++	 * which will not have taken ->caw_sem yet..
+ 	 */
+-	if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
++	if (!success && (!cmd->t_data_sg || !cmd->t_bidi_data_sg))
+ 		return TCM_NO_SENSE;
+ 	/*
++	 * Handle special case for zero-length COMPARE_AND_WRITE
++	 */
++	if (!cmd->data_length)
++		goto out;
++	/*
+ 	 * Immediately exit + release dev->caw_sem if command has already
+ 	 * been failed with a non-zero SCSI status.
+ 	 */
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ac3cbab..f786de0 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -1615,11 +1615,11 @@ void transport_generic_request_failure(struct se_cmd *cmd,
+ 	transport_complete_task_attr(cmd);
+ 	/*
+ 	 * Handle special case for COMPARE_AND_WRITE failure, where the
+-	 * callback is expected to drop the per device ->caw_mutex.
++	 * callback is expected to drop the per device ->caw_sem.
+ 	 */
+ 	if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
+ 	     cmd->transport_complete_callback)
+-		cmd->transport_complete_callback(cmd);
++		cmd->transport_complete_callback(cmd, false);
+ 
+ 	switch (sense_reason) {
+ 	case TCM_NON_EXISTENT_LUN:
+@@ -1975,8 +1975,12 @@ static void target_complete_ok_work(struct work_struct *work)
+ 	if (cmd->transport_complete_callback) {
+ 		sense_reason_t rc;
+ 
+-		rc = cmd->transport_complete_callback(cmd);
++		rc = cmd->transport_complete_callback(cmd, true);
+ 		if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) {
++			if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++			    !cmd->data_length)
++				goto queue_rsp;
++
+ 			return;
+ 		} else if (rc) {
+ 			ret = transport_send_check_condition_and_sense(cmd,
+@@ -1990,6 +1994,7 @@ static void target_complete_ok_work(struct work_struct *work)
+ 		}
+ 	}
+ 
++queue_rsp:
+ 	switch (cmd->data_direction) {
+ 	case DMA_FROM_DEVICE:
+ 		spin_lock(&cmd->se_lun->lun_sep_lock);
+@@ -2094,6 +2099,16 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
+ static inline void transport_free_pages(struct se_cmd *cmd)
+ {
+ 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
++		/*
++		 * Release special case READ buffer payload required for
++		 * SG_TO_MEM_NOALLOC to function with COMPARE_AND_WRITE
++		 */
++		if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) {
++			transport_free_sgl(cmd->t_bidi_data_sg,
++					   cmd->t_bidi_data_nents);
++			cmd->t_bidi_data_sg = NULL;
++			cmd->t_bidi_data_nents = 0;
++		}
+ 		transport_reset_sgl_orig(cmd);
+ 		return;
+ 	}
+@@ -2246,6 +2261,7 @@ sense_reason_t
+ transport_generic_new_cmd(struct se_cmd *cmd)
+ {
+ 	int ret = 0;
++	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+ 
+ 	/*
+ 	 * Determine is the TCM fabric module has already allocated physical
+@@ -2254,7 +2270,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 	 */
+ 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) &&
+ 	    cmd->data_length) {
+-		bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+ 
+ 		if ((cmd->se_cmd_flags & SCF_BIDI) ||
+ 		    (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)) {
+@@ -2285,6 +2300,20 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 				       cmd->data_length, zero_flag);
+ 		if (ret < 0)
+ 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	} else if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++		    cmd->data_length) {
++		/*
++		 * Special case for COMPARE_AND_WRITE with fabrics
++		 * using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
++		 */
++		u32 caw_length = cmd->t_task_nolb *
++				 cmd->se_dev->dev_attrib.block_size;
++
++		ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
++				       &cmd->t_bidi_data_nents,
++				       caw_length, zero_flag);
++		if (ret < 0)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
+ 	/*
+ 	 * If this command is not a write we can execute it right here,
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 672150b..985ca4c 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -524,7 +524,7 @@ struct se_cmd {
+ 	sense_reason_t		(*execute_cmd)(struct se_cmd *);
+ 	sense_reason_t		(*execute_rw)(struct se_cmd *, struct scatterlist *,
+ 					      u32, enum dma_data_direction);
+-	sense_reason_t (*transport_complete_callback)(struct se_cmd *);
++	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool);
+ 
+ 	unsigned char		*t_task_cdb;
+ 	unsigned char		__t_task_cdb[TCM_MAX_COMMAND_SIZE];
+-- 
+2.3.6
+
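+The widened callback signature above exists so a completion callback can
+tell the normal path apart from the failure path in
+transport_generic_request_failure(). A sketch of the contract, with
+my_complete_cb as a hypothetical callback rather than one of the in-tree
+implementations:
+
+	#include <target/target_core_base.h>
+
+	static sense_reason_t my_complete_cb(struct se_cmd *cmd, bool success)
+	{
+		/* success == false: called from
+		 * transport_generic_request_failure(); only release held
+		 * state (e.g. ->caw_sem) and skip the follow-up work. */
+		if (!success)
+			return TCM_NO_SENSE;
+
+		/* success == true: called from target_complete_ok_work();
+		 * perform the real completion processing here. */
+		return TCM_NO_SENSE;
+	}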
+
+From 54afccf4a4f42da1ef3eca9b56ed8dd25a8d7f1c Mon Sep 17 00:00:00 2001
+From: Akinobu Mita <akinobu.mita@gmail.com>
+Date: Mon, 13 Apr 2015 23:21:56 +0900
+Subject: [PATCH 133/219] target/file: Fix BUG() when CONFIG_DEBUG_SG=y and DIF
+ protection enabled
+Cc: mpagano@gentoo.org
+
+commit 38da0f49e8aa1649af397d53f88e163d0e60c058 upstream.
+
+When CONFIG_DEBUG_SG=y and DIF protection support enabled, kernel
+BUG()s are triggered due to the following two issues:
+
+1) prot_sg is not initialized by sg_init_table().
+
+When CONFIG_DEBUG_SG=y, scatterlist helpers check that each sg entry
+has a correct magic value.
+
+2) vmalloc'ed buffer is passed to sg_set_buf().
+
+sg_set_buf() uses virt_to_page() to convert a virtual address to a
+struct page, but that doesn't work with vmalloc addresses;
+vmalloc_to_page() should be used instead.  As prot_buf usually isn't
+very large, fix this by allocating prot_buf with kmalloc instead of
+vmalloc.
+
+Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
+Cc: Sagi Grimberg <sagig@mellanox.com>
+Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/target/target_core_file.c | 15 ++++++++-------
+ 1 file changed, 8 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 44620fb..8ca1883 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -274,7 +274,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 		     se_dev->prot_length;
+ 
+ 	if (!is_write) {
+-		fd_prot->prot_buf = vzalloc(prot_size);
++		fd_prot->prot_buf = kzalloc(prot_size, GFP_KERNEL);
+ 		if (!fd_prot->prot_buf) {
+ 			pr_err("Unable to allocate fd_prot->prot_buf\n");
+ 			return -ENOMEM;
+@@ -286,9 +286,10 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 					   fd_prot->prot_sg_nents, GFP_KERNEL);
+ 		if (!fd_prot->prot_sg) {
+ 			pr_err("Unable to allocate fd_prot->prot_sg\n");
+-			vfree(fd_prot->prot_buf);
++			kfree(fd_prot->prot_buf);
+ 			return -ENOMEM;
+ 		}
++		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
+ 		size = prot_size;
+ 
+ 		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
+@@ -318,7 +319,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 
+ 	if (is_write || ret < 0) {
+ 		kfree(fd_prot->prot_sg);
+-		vfree(fd_prot->prot_buf);
++		kfree(fd_prot->prot_buf);
+ 	}
+ 
+ 	return ret;
+@@ -658,11 +659,11 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						 0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 			kfree(fd_prot.prot_sg);
+-			vfree(fd_prot.prot_buf);
++			kfree(fd_prot.prot_buf);
+ 		}
+ 	} else {
+ 		memset(&fd_prot, 0, sizeof(struct fd_prot));
+@@ -678,7 +679,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						  0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 		}
+@@ -714,7 +715,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 
+ 	if (ret < 0) {
+ 		kfree(fd_prot.prot_sg);
+-		vfree(fd_prot.prot_buf);
++		kfree(fd_prot.prot_buf);
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
+ 
+-- 
+2.3.6
+
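+Both halves of the fix reduce to two scatterlist rules: initialize the
+table so the CONFIG_DEBUG_SG magic check passes, and hand sg_set_buf()
+only memory that virt_to_page() can translate, which excludes vmalloc
+addresses (those would need vmalloc_to_page() per page). A minimal
+sketch under those assumptions, with alloc_prot_sg as a hypothetical
+helper:
+
+	#include <linux/scatterlist.h>
+	#include <linux/slab.h>
+
+	static struct scatterlist *alloc_prot_sg(size_t size, void **bufp)
+	{
+		/* kzalloc memory is physically contiguous and page-backed,
+		 * so virt_to_page() works on it. */
+		void *buf = kzalloc(size, GFP_KERNEL);
+		struct scatterlist *sg;
+
+		if (!buf)
+			return NULL;
+
+		sg = kmalloc(sizeof(*sg), GFP_KERNEL);
+		if (!sg) {
+			kfree(buf);
+			return NULL;
+		}
+
+		sg_init_table(sg, 1);	/* sets end marker and debug magic */
+		sg_set_buf(sg, buf, size);
+		*bufp = buf;
+		return sg;
+	}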
+
+From 1d6b56f309d72a9ce2be3129f41c4a1138693091 Mon Sep 17 00:00:00 2001
+From: Akinobu Mita <akinobu.mita@gmail.com>
+Date: Mon, 13 Apr 2015 23:21:58 +0900
+Subject: [PATCH 134/219] target/file: Fix UNMAP with DIF protection support
+Cc: mpagano@gentoo.org
+
+commit 64d240b721b21e266ffde645ec965c3b6d1c551f upstream.
+
+When an UNMAP command is issued with DIF protection support enabled,
+the protection info for the unmapped region remains unchanged, so a
+READ of that region causes a data integrity failure.
+
+This fixes it by invalidating the protection info for the unmapped
+region, filling it with the 0xff pattern.  This change also adds the
+helper function fd_do_prot_fill() to reduce code duplication with the
+existing fd_format_prot().
+
+Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
+Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
+Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/target/target_core_file.c | 86 +++++++++++++++++++++++++++------------
+ 1 file changed, 61 insertions(+), 25 deletions(-)
+
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 8ca1883..7e12909 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -550,6 +550,56 @@ fd_execute_write_same(struct se_cmd *cmd)
+ 	return 0;
+ }
+ 
++static int
++fd_do_prot_fill(struct se_device *se_dev, sector_t lba, sector_t nolb,
++		void *buf, size_t bufsize)
++{
++	struct fd_dev *fd_dev = FD_DEV(se_dev);
++	struct file *prot_fd = fd_dev->fd_prot_file;
++	sector_t prot_length, prot;
++	loff_t pos = lba * se_dev->prot_length;
++
++	if (!prot_fd) {
++		pr_err("Unable to locate fd_dev->fd_prot_file\n");
++		return -ENODEV;
++	}
++
++	prot_length = nolb * se_dev->prot_length;
++
++	for (prot = 0; prot < prot_length;) {
++		sector_t len = min_t(sector_t, bufsize, prot_length - prot);
++		ssize_t ret = kernel_write(prot_fd, buf, len, pos + prot);
++
++		if (ret != len) {
++			pr_err("vfs_write to prot file failed: %zd\n", ret);
++			return ret < 0 ? ret : -ENODEV;
++		}
++		prot += ret;
++	}
++
++	return 0;
++}
++
++static int
++fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
++{
++	void *buf;
++	int rc;
++
++	buf = (void *)__get_free_page(GFP_KERNEL);
++	if (!buf) {
++		pr_err("Unable to allocate FILEIO prot buf\n");
++		return -ENOMEM;
++	}
++	memset(buf, 0xff, PAGE_SIZE);
++
++	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
++
++	free_page((unsigned long)buf);
++
++	return rc;
++}
++
+ static sense_reason_t
+ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ {
+@@ -557,6 +607,12 @@ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ 	struct inode *inode = file->f_mapping->host;
+ 	int ret;
+ 
++	if (cmd->se_dev->dev_attrib.pi_prot_type) {
++		ret = fd_do_prot_unmap(cmd, lba, nolb);
++		if (ret)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	}
++
+ 	if (S_ISBLK(inode->i_mode)) {
+ 		/* The backend is block device, use discard */
+ 		struct block_device *bdev = inode->i_bdev;
+@@ -879,48 +935,28 @@ static int fd_init_prot(struct se_device *dev)
+ 
+ static int fd_format_prot(struct se_device *dev)
+ {
+-	struct fd_dev *fd_dev = FD_DEV(dev);
+-	struct file *prot_fd = fd_dev->fd_prot_file;
+-	sector_t prot_length, prot;
+ 	unsigned char *buf;
+-	loff_t pos = 0;
+ 	int unit_size = FDBD_FORMAT_UNIT_SIZE * dev->dev_attrib.block_size;
+-	int rc, ret = 0, size, len;
++	int ret;
+ 
+ 	if (!dev->dev_attrib.pi_prot_type) {
+ 		pr_err("Unable to format_prot while pi_prot_type == 0\n");
+ 		return -ENODEV;
+ 	}
+-	if (!prot_fd) {
+-		pr_err("Unable to locate fd_dev->fd_prot_file\n");
+-		return -ENODEV;
+-	}
+ 
+ 	buf = vzalloc(unit_size);
+ 	if (!buf) {
+ 		pr_err("Unable to allocate FILEIO prot buf\n");
+ 		return -ENOMEM;
+ 	}
+-	prot_length = (dev->transport->get_blocks(dev) + 1) * dev->prot_length;
+-	size = prot_length;
+ 
+ 	pr_debug("Using FILEIO prot_length: %llu\n",
+-		 (unsigned long long)prot_length);
++		 (unsigned long long)(dev->transport->get_blocks(dev) + 1) *
++					dev->prot_length);
+ 
+ 	memset(buf, 0xff, unit_size);
+-	for (prot = 0; prot < prot_length; prot += unit_size) {
+-		len = min(unit_size, size);
+-		rc = kernel_write(prot_fd, buf, len, pos);
+-		if (rc != len) {
+-			pr_err("vfs_write to prot file failed: %d\n", rc);
+-			ret = -ENODEV;
+-			goto out;
+-		}
+-		pos += len;
+-		size -= len;
+-	}
+-
+-out:
++	ret = fd_do_prot_fill(dev, 0, dev->transport->get_blocks(dev) + 1,
++			      buf, unit_size);
+ 	vfree(buf);
+ 	return ret;
+ }
+-- 
+2.3.6
+
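+The new fd_do_prot_fill() is a chunked-write loop: one page of 0xff
+bytes is written repeatedly until the protection region for the
+unmapped blocks is covered. A simplified sketch of the same loop
+(fill_ff is a hypothetical name; kernel_write() takes the offset by
+value here, as it did in kernels of this era):
+
+	#include <linux/fs.h>
+	#include <linux/mm.h>
+
+	static int fill_ff(struct file *f, loff_t pos, size_t len,
+			   void *page_buf)
+	{
+		size_t done = 0;
+
+		while (done < len) {
+			size_t n = min_t(size_t, PAGE_SIZE, len - done);
+			ssize_t ret = kernel_write(f, page_buf, n, pos + done);
+
+			if (ret < 0)
+				return ret;
+			if ((size_t)ret != n)
+				return -ENODEV;	/* short write, treated as fatal */
+			done += ret;
+		}
+		return 0;
+	}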
+
+From 53e5aa168e3ba918741417ac2177db04a84f77c1 Mon Sep 17 00:00:00 2001
+From: Akinobu Mita <akinobu.mita@gmail.com>
+Date: Mon, 13 Apr 2015 23:21:57 +0900
+Subject: [PATCH 135/219] target/file: Fix SG table for prot_buf initialization
+Cc: mpagano@gentoo.org
+
+commit c836777830428372074d5129ac513e1472c99791 upstream.
+
+In fd_do_prot_rw(), it allocates prot_buf which is used to copy from
+se_cmd->t_prot_sg by sbc_dif_copy_prot().  The SG table for prot_buf
+is also initialized by allocating 'se_cmd->t_prot_nents' entries of
+scatterlist and setting the data length of each entry to PAGE_SIZE
+at most.
+
+However if se_cmd->t_prot_sg contains a clustered entry (i.e.
+sg->length > PAGE_SIZE), the SG table for prot_buf can't be
+initialized correctly and sbc_dif_copy_prot() can't copy to prot_buf.
+(This actually happened with the TCM loopback fabric module.)
+
+As prot_buf is allocated by kzalloc() and it's physically contiguous,
+we only need a single scatterlist entry.
+
+Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
+Cc: Sagi Grimberg <sagig@mellanox.com>
+Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/target/target_core_file.c | 21 ++++++---------------
+ 1 file changed, 6 insertions(+), 15 deletions(-)
+
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 7e12909..cbb0cc2 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -264,11 +264,10 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 	struct se_device *se_dev = cmd->se_dev;
+ 	struct fd_dev *dev = FD_DEV(se_dev);
+ 	struct file *prot_fd = dev->fd_prot_file;
+-	struct scatterlist *sg;
+ 	loff_t pos = (cmd->t_task_lba * se_dev->prot_length);
+ 	unsigned char *buf;
+-	u32 prot_size, len, size;
+-	int rc, ret = 1, i;
++	u32 prot_size;
++	int rc, ret = 1;
+ 
+ 	prot_size = (cmd->data_length / se_dev->dev_attrib.block_size) *
+ 		     se_dev->prot_length;
+@@ -281,24 +280,16 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 		}
+ 		buf = fd_prot->prot_buf;
+ 
+-		fd_prot->prot_sg_nents = cmd->t_prot_nents;
+-		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist) *
+-					   fd_prot->prot_sg_nents, GFP_KERNEL);
++		fd_prot->prot_sg_nents = 1;
++		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist),
++					   GFP_KERNEL);
+ 		if (!fd_prot->prot_sg) {
+ 			pr_err("Unable to allocate fd_prot->prot_sg\n");
+ 			kfree(fd_prot->prot_buf);
+ 			return -ENOMEM;
+ 		}
+ 		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
+-		size = prot_size;
+-
+-		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
+-
+-			len = min_t(u32, PAGE_SIZE, size);
+-			sg_set_buf(sg, buf, len);
+-			size -= len;
+-			buf += len;
+-		}
++		sg_set_buf(fd_prot->prot_sg, buf, prot_size);
+ 	}
+ 
+ 	if (is_write) {
+-- 
+2.3.6
+
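+The key observation above is that prot_buf comes from kzalloc() and is
+therefore physically contiguous, so one scatterlist entry can describe
+the whole buffer; the old per-PAGE_SIZE splitting wasted entries and,
+as the log explains, broke when a source entry was clustered
+(sg->length > PAGE_SIZE). The setup from the patch thus reduces to:
+
+	sg_init_table(fd_prot->prot_sg, 1);
+	sg_set_buf(fd_prot->prot_sg, fd_prot->prot_buf, prot_size);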
+
+From 6c617001eadca79dc3c26a6e2d2844ad48c1a178 Mon Sep 17 00:00:00 2001
+From: Sagi Grimberg <sagig@mellanox.com>
+Date: Sun, 29 Mar 2015 15:52:03 +0300
+Subject: [PATCH 136/219] iser-target: Fix session hang in case of an rdma read
+ DIF error
+Cc: mpagano@gentoo.org
+
+commit 364189f0ada5478e4faf8a552d6071a650d757cd upstream.
+
+This hang was the result of a missing command put when
+a DIF error occurred during an RDMA read (and we sent
+a CHECK_CONDITION error without passing the command to
+the backend).
+
+Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/ulp/isert/ib_isert.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 075b19c..4b8d518 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -1861,11 +1861,13 @@ isert_completion_rdma_read(struct iser_tx_desc *tx_desc,
+ 	cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+ 	spin_unlock_bh(&cmd->istate_lock);
+ 
+-	if (ret)
++	if (ret) {
++		target_put_sess_cmd(se_cmd->se_sess, se_cmd);
+ 		transport_send_check_condition_and_sense(se_cmd,
+ 							 se_cmd->pi_err, 0);
+-	else
++	} else {
+ 		target_execute_cmd(se_cmd);
++	}
+ }
+ 
+ static void
+-- 
+2.3.6
+
+
+From c1398bc9478760e098fd1a36c9d67eeaf1bc5813 Mon Sep 17 00:00:00 2001
+From: Sagi Grimberg <sagig@mellanox.com>
+Date: Sun, 29 Mar 2015 15:52:04 +0300
+Subject: [PATCH 137/219] iser-target: Fix possible deadlock in RDMA_CM
+ connection error
+Cc: mpagano@gentoo.org
+
+commit 4a579da2586bd3b79b025947ea24ede2bbfede62 upstream.
+
+Before the connection reaches the established state we may
+get an error event. In this case the core won't tear down this
+connection (it was never established), so we take care of
+freeing it ourselves.
+
+Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
+Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/ulp/isert/ib_isert.c | 14 +++++++++-----
+ 1 file changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 4b8d518..147029a 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -222,7 +222,7 @@ fail:
+ static void
+ isert_free_rx_descriptors(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
++	struct ib_device *ib_dev = isert_conn->conn_device->ib_device;
+ 	struct iser_rx_desc *rx_desc;
+ 	int i;
+ 
+@@ -719,8 +719,8 @@ out:
+ static void
+ isert_connect_release(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
+ 	struct isert_device *device = isert_conn->conn_device;
++	struct ib_device *ib_dev = device->ib_device;
+ 
+ 	isert_dbg("conn %p\n", isert_conn);
+ 
+@@ -728,7 +728,8 @@ isert_connect_release(struct isert_conn *isert_conn)
+ 		isert_conn_free_fastreg_pool(isert_conn);
+ 
+ 	isert_free_rx_descriptors(isert_conn);
+-	rdma_destroy_id(isert_conn->conn_cm_id);
++	if (isert_conn->conn_cm_id)
++		rdma_destroy_id(isert_conn->conn_cm_id);
+ 
+ 	if (isert_conn->conn_qp) {
+ 		struct isert_comp *comp = isert_conn->conn_qp->recv_cq->cq_context;
+@@ -878,12 +879,15 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ 	return 0;
+ }
+ 
+-static void
++static int
+ isert_connect_error(struct rdma_cm_id *cma_id)
+ {
+ 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
+ 
++	isert_conn->conn_cm_id = NULL;
+ 	isert_put_conn(isert_conn);
++
++	return -1;
+ }
+ 
+ static int
+@@ -912,7 +916,7 @@ isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	case RDMA_CM_EVENT_REJECTED:       /* FALLTHRU */
+ 	case RDMA_CM_EVENT_UNREACHABLE:    /* FALLTHRU */
+ 	case RDMA_CM_EVENT_CONNECT_ERROR:
+-		isert_connect_error(cma_id);
++		ret = isert_connect_error(cma_id);
+ 		break;
+ 	default:
+ 		isert_err("Unhandled RDMA CMA event: %d\n", event->event);
+-- 
+2.3.6
+
+
+From 1ed449ae56cbf5db4f3ea0560a5bfbe95e30e89a Mon Sep 17 00:00:00 2001
+From: Alexander Ploumistos <alex.ploumistos@gmail.com>
+Date: Fri, 13 Feb 2015 21:05:11 +0200
+Subject: [PATCH 138/219] Bluetooth: ath3k: Add support for Atheros AR5B195
+ combo Mini PCIe card
+Cc: mpagano@gentoo.org
+
+commit 2eeff0b4317a02f0e281df891d990194f0737aae upstream.
+
+Add 04f2:aff1 to ath3k.c supported devices list and btusb.c blacklist, so
+that the device can load the ath3k firmware and re-enumerate itself as an
+AR3011 device.
+
+T:  Bus=05 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  2 Spd=12   MxCh= 0
+D:  Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs=  1
+P:  Vendor=04f2 ProdID=aff1 Rev= 0.01
+C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
+I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=81(I) Atr=03(Int.) MxPS=  16 Ivl=1ms
+E:  Ad=82(I) Atr=02(Bulk) MxPS=  64 Ivl=0ms
+E:  Ad=02(O) Atr=02(Bulk) MxPS=  64 Ivl=0ms
+I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=   0 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=   0 Ivl=1ms
+I:  If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=   9 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=   9 Ivl=1ms
+I:  If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=  17 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=  17 Ivl=1ms
+I:  If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=  25 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=  25 Ivl=1ms
+I:  If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=  33 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=  33 Ivl=1ms
+I:  If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
+E:  Ad=83(I) Atr=01(Isoc) MxPS=  49 Ivl=1ms
+E:  Ad=03(O) Atr=01(Isoc) MxPS=  49 Ivl=1ms
+
+Signed-off-by: Alexander Ploumistos <alexpl@fedoraproject.org>
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/bluetooth/ath3k.c | 1 +
+ drivers/bluetooth/btusb.c | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index de4c849..288547a 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -65,6 +65,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	/* Atheros AR3011 with sflash firmware*/
+ 	{ USB_DEVICE(0x0489, 0xE027) },
+ 	{ USB_DEVICE(0x0489, 0xE03D) },
++	{ USB_DEVICE(0x04F2, 0xAFF1) },
+ 	{ USB_DEVICE(0x0930, 0x0215) },
+ 	{ USB_DEVICE(0x0CF3, 0x3002) },
+ 	{ USB_DEVICE(0x0CF3, 0xE019) },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 8bfc4c2..2c527da 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -159,6 +159,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Atheros 3011 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE },
++	{ USB_DEVICE(0x04f2, 0xaff1), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0xe019), .driver_info = BTUSB_IGNORE },
+-- 
+2.3.6
+
+
+From 929315920e42097f53f97bfc88c6da4a41e19f66 Mon Sep 17 00:00:00 2001
+From: Bo Yan <byan@nvidia.com>
+Date: Tue, 31 Mar 2015 21:30:48 +0100
+Subject: [PATCH 139/219] arm64: fix midr range for Cortex-A57 erratum 832075
+Cc: mpagano@gentoo.org
+
+commit 6d1966dfd6e0ad2f8aa4b664ae1a62e33abe1998 upstream.
+
+Register MIDR_EL1 is masked to get variant and revision fields, then
+compared against midr_range_min and midr_range_max when checking
+whether CPU is affected by any particular erratum. However, variant
+and revision fields in MIDR_EL1 are separated by 16 bits, so the min
+and max of midr range should be constructed accordingly, otherwise
+the patch will not be applied when variant field is non-0.
+
+Acked-by: Andre Przywara <andre.przywara@arm.com>
+Reviewed-by: Paul Walmsley <paul@pwsan.com>
+Signed-off-by: Bo Yan <byan@nvidia.com>
+[will: use MIDR_VARIANT_SHIFT to construct upper bound]
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm64/kernel/cpu_errata.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index fa62637..7c48494 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -88,7 +88,8 @@ struct arm64_cpu_capabilities arm64_errata[] = {
+ 	/* Cortex-A57 r0p0 - r1p2 */
+ 		.desc = "ARM erratum 832075",
+ 		.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
+-		MIDR_RANGE(MIDR_CORTEX_A57, 0x00, 0x12),
++		MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
++			   (1 << MIDR_VARIANT_SHIFT) | 2),
+ 	},
+ #endif
+ 	{
+-- 
+2.3.6
+
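+A worked example of the encoding, assuming the MIDR_EL1 layout used by
+arm64 in this era (revision in bits [3:0], variant in bits [23:20], so
+MIDR_VARIANT_SHIFT == 20):
+
+	/* "r1p2" means variant 1, revision 2.  The fields sit 16 bits
+	 * apart, so the bound must be built per field:
+	 *
+	 *   old: 0x12 = variant 0, revision 0x12 (never matches a
+	 *        part with a non-zero variant such as r1p0..r1p2)
+	 *   new: (1 << MIDR_VARIANT_SHIFT) | 2 = 0x100002 = r1p2
+	 */
+	u32 r1p2_max = (1 << MIDR_VARIANT_SHIFT) | 2;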
+
+From 28a75aebb66869d9b48970bc9ad2c50d06ca2368 Mon Sep 17 00:00:00 2001
+From: Mark Rutland <mark.rutland@arm.com>
+Date: Tue, 24 Mar 2015 13:50:27 +0000
+Subject: [PATCH 140/219] arm64: head.S: ensure visibility of page tables
+Cc: mpagano@gentoo.org
+
+commit 91d57155dc5ab4b311624b7ee570339b6af19ad5 upstream.
+
+After writing the page tables, we use __inval_cache_range to invalidate
+any stale cache entries. Strongly Ordered memory accesses are not
+ordered w.r.t. cache maintenance instructions, and hence explicit memory
+barriers are required to provide this ordering. However,
+__inval_cache_range was written to be used on Normal Cacheable memory
+once the MMU and caches are on, and does not have any barriers prior to
+the DC instructions.
+
+This patch adds a DMB between the page tables being written and the
+corresponding cachelines being invalidated, ensuring that the
+invalidation makes the new data visible to subsequent cacheable
+accesses. A barrier is not required before the prior invalidate as we do
+not access the page table memory area prior to this, and earlier
+barriers in preserve_boot_args and set_cpu_boot_mode_flag ensures
+ordering w.r.t. any stores performed prior to entering Linux.
+
+Signed-off-by: Mark Rutland <mark.rutland@arm.com>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Fixes: c218bca74eeafa2f ("arm64: Relax the kernel cache requirements for boot")
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm64/kernel/head.S | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 07f9305..c237ffb 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -426,6 +426,7 @@ __create_page_tables:
+ 	 */
+ 	mov	x0, x25
+ 	add	x1, x26, #SWAPPER_DIR_SIZE
++	dmb	sy
+ 	bl	__inval_cache_range
+ 
+ 	mov	lr, x27
+-- 
+2.3.6
+
+
+From 3b4f68e9d08a42860dd7491e973a1ba2abcf4ea7 Mon Sep 17 00:00:00 2001
+From: Steve Capper <steve.capper@linaro.org>
+Date: Mon, 16 Mar 2015 09:30:39 +0000
+Subject: [PATCH 141/219] arm64: Adjust EFI libstub object include logic
+Cc: mpagano@gentoo.org
+
+commit ad08fd494bf00c03ae372e0bbd9cefa37bf608d6 upstream.
+
+Commit f4f75ad5 ("efi: efistub: Convert into static library")
+introduced a static library for EFI stub, libstub.
+
+The EFI libstub directory is referenced by the kernel build system via
+a obj subdirectory rule in:
+drivers/firmware/efi/Makefile
+
+Unfortunately, arm64 also references the EFI libstub via:
+libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
+
+If we're unlucky, the kernel build system can enter libstub via two
+simultaneous threads resulting in build failures such as:
+
+fixdep: error opening depfile: drivers/firmware/efi/libstub/.efi-stub-helper.o.d: No such file or directory
+scripts/Makefile.build:257: recipe for target 'drivers/firmware/efi/libstub/efi-stub-helper.o' failed
+make[1]: *** [drivers/firmware/efi/libstub/efi-stub-helper.o] Error 2
+Makefile:939: recipe for target 'drivers/firmware/efi/libstub' failed
+make: *** [drivers/firmware/efi/libstub] Error 2
+make: *** Waiting for unfinished jobs....
+
+This patch adjusts the arm64 Makefile to reference the compiled library
+explicitly (as is currently done in x86), rather than the directory.
+
+Fixes: f4f75ad5 efi: efistub: Convert into static library
+Signed-off-by: Steve Capper <steve.capper@linaro.org>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm64/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 69ceedc..4d2a925 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -48,7 +48,7 @@ core-$(CONFIG_KVM) += arch/arm64/kvm/
+ core-$(CONFIG_XEN) += arch/arm64/xen/
+ core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
+ libs-y		:= arch/arm64/lib/ $(libs-y)
+-libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
++core-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+ 
+ # Default target when executing plain make
+ KBUILD_IMAGE	:= Image.gz
+-- 
+2.3.6
+
+
+From f5fc6d70222ede94eb601c8f2697df1a9bcd9535 Mon Sep 17 00:00:00 2001
+From: Mark Rutland <mark.rutland@arm.com>
+Date: Fri, 13 Mar 2015 16:14:34 +0000
+Subject: [PATCH 142/219] arm64: apply alternatives for !SMP kernels
+Cc: mpagano@gentoo.org
+
+commit 137650aad96c9594683445e41afa8ac5a2097520 upstream.
+
+Currently we only perform alternative patching for kernels built with
+CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only
+built for CONFIG_SMP. Thus !SMP kernels may not have necessary
+alternatives patched in.
+
+This patch ensures that we call apply_alternatives_all() once all CPUs
+are booted, even for !SMP kernels, by having the smp_init_cpus() stub
+call this for !SMP kernels via up_late_init. A new wrapper,
+do_post_cpus_up_work, is added so we can hook other calls here later
+(e.g. boot mode logging).
+
+Cc: Andre Przywara <andre.przywara@arm.com>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Fixes: e039ee4ee3fcf174 ("arm64: add alternative runtime patching")
+Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
+Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
+Signed-off-by: Mark Rutland <mark.rutland@arm.com>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm64/Kconfig                |  4 ++++
+ arch/arm64/include/asm/smp_plat.h |  2 ++
+ arch/arm64/kernel/setup.c         | 12 ++++++++++++
+ arch/arm64/kernel/smp.c           |  2 +-
+ 4 files changed, 19 insertions(+), 1 deletion(-)
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1b8e973..0d46deb 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -470,6 +470,10 @@ config HOTPLUG_CPU
+ 
+ source kernel/Kconfig.preempt
+ 
++config UP_LATE_INIT
++       def_bool y
++       depends on !SMP
++
+ config HZ
+ 	int
+ 	default 100
+diff --git a/arch/arm64/include/asm/smp_plat.h b/arch/arm64/include/asm/smp_plat.h
+index 59e2823..8dcd61e 100644
+--- a/arch/arm64/include/asm/smp_plat.h
++++ b/arch/arm64/include/asm/smp_plat.h
+@@ -40,4 +40,6 @@ static inline u32 mpidr_hash_size(void)
+ extern u64 __cpu_logical_map[NR_CPUS];
+ #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]
+ 
++void __init do_post_cpus_up_work(void);
++
+ #endif /* __ASM_SMP_PLAT_H */
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index e8420f6..781f469 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -207,6 +207,18 @@ static void __init smp_build_mpidr_hash(void)
+ }
+ #endif
+ 
++void __init do_post_cpus_up_work(void)
++{
++	apply_alternatives_all();
++}
++
++#ifdef CONFIG_UP_LATE_INIT
++void __init up_late_init(void)
++{
++	do_post_cpus_up_work();
++}
++#endif /* CONFIG_UP_LATE_INIT */
++
+ static void __init setup_processor(void)
+ {
+ 	struct cpu_info *cpu_info;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 328b8ce..4257369 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -309,7 +309,7 @@ void cpu_die(void)
+ void __init smp_cpus_done(unsigned int max_cpus)
+ {
+ 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
+-	apply_alternatives_all();
++	do_post_cpus_up_work();
+ }
+ 
+ void __init smp_prepare_boot_cpu(void)
+-- 
+2.3.6
+
+
+From d56f1962494430ce86e221537a2116a8ff0dca7e Mon Sep 17 00:00:00 2001
+From: Will Deacon <will.deacon@arm.com>
+Date: Mon, 23 Mar 2015 19:07:02 +0000
+Subject: [PATCH 143/219] arm64: errata: add workaround for cortex-a53 erratum
+ #845719
+Cc: mpagano@gentoo.org
+
+commit 905e8c5dcaa147163672b06fe9dcb5abaacbc711 upstream.
+
+When running a compat (AArch32) userspace on Cortex-A53, a load at EL0
+from a virtual address that matches the bottom 32 bits of the virtual
+address used by a recent load at (AArch64) EL1 might return incorrect
+data.
+
+This patch works around the issue by writing to the contextidr_el1
+register on the exception return path when returning to a 32-bit task.
+This workaround is patched in at runtime based on the MIDR value of the
+processor.
+
+Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
+Tested-by: Mark Rutland <mark.rutland@arm.com>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Kevin Hilman <khilman@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/arm64/Kconfig                  | 21 +++++++++++++++++++++
+ arch/arm64/include/asm/cpufeature.h |  3 ++-
+ arch/arm64/kernel/cpu_errata.c      |  8 ++++++++
+ arch/arm64/kernel/entry.S           | 20 ++++++++++++++++++++
+ 4 files changed, 51 insertions(+), 1 deletion(-)
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 0d46deb..a6186c2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -361,6 +361,27 @@ config ARM64_ERRATUM_832075
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_845719
++	bool "Cortex-A53: 845719: a load might read incorrect data"
++	depends on COMPAT
++	default y
++	help
++	  This option adds an alternative code sequence to work around ARM
++	  erratum 845719 on Cortex-A53 parts up to r0p4.
++
++	  When running a compat (AArch32) userspace on an affected Cortex-A53
++	  part, a load at EL0 from a virtual address that matches the bottom 32
++	  bits of the virtual address used by a recent load at (AArch64) EL1
++	  might return incorrect data.
++
++	  The workaround is to write the contextidr_el1 register on exception
++	  return to a 32-bit task.
++	  Please note that this does not necessarily enable the workaround,
++	  as it depends on the alternative framework, which will only patch
++	  the kernel if an affected CPU is detected.
++
++	  If unsure, say Y.
++
+ endmenu
+ 
+ 
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index b6c16d5..3f0c53c 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -23,8 +23,9 @@
+ 
+ #define ARM64_WORKAROUND_CLEAN_CACHE		0
+ #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
++#define ARM64_WORKAROUND_845719			2
+ 
+-#define ARM64_NCAPS				2
++#define ARM64_NCAPS				3
+ 
+ #ifndef __ASSEMBLY__
+ 
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 7c48494..ad6d523 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -92,6 +92,14 @@ struct arm64_cpu_capabilities arm64_errata[] = {
+ 			   (1 << MIDR_VARIANT_SHIFT) | 2),
+ 	},
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	{
++	/* Cortex-A53 r0p[01234] */
++		.desc = "ARM erratum 845719",
++		.capability = ARM64_WORKAROUND_845719,
++		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
++	},
++#endif
+ 	{
+ 	}
+ };
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index cf21bb3..959fe87 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -21,8 +21,10 @@
+ #include <linux/init.h>
+ #include <linux/linkage.h>
+ 
++#include <asm/alternative-asm.h>
+ #include <asm/assembler.h>
+ #include <asm/asm-offsets.h>
++#include <asm/cpufeature.h>
+ #include <asm/errno.h>
+ #include <asm/esr.h>
+ #include <asm/thread_info.h>
+@@ -120,6 +122,24 @@
+ 	ct_user_enter
+ 	ldr	x23, [sp, #S_SP]		// load return stack pointer
+ 	msr	sp_el0, x23
++
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	alternative_insn						\
++	"nop",								\
++	"tbz x22, #4, 1f",						\
++	ARM64_WORKAROUND_845719
++#ifdef CONFIG_PID_IN_CONTEXTIDR
++	alternative_insn						\
++	"nop; nop",							\
++	"mrs x29, contextidr_el1; msr contextidr_el1, x29; 1:",		\
++	ARM64_WORKAROUND_845719
++#else
++	alternative_insn						\
++	"nop",								\
++	"msr contextidr_el1, xzr; 1:",					\
++	ARM64_WORKAROUND_845719
++#endif
++#endif
+ 	.endif
+ 	msr	elr_el1, x21			// set up the return data
+ 	msr	spsr_el1, x22
+-- 
+2.3.6
+
+
+From aa54f8fb00ef9c739f564672048ec0fcc08a61dc Mon Sep 17 00:00:00 2001
+From: Gavin Shan <gwshan@linux.vnet.ibm.com>
+Date: Fri, 27 Mar 2015 11:29:00 +1100
+Subject: [PATCH 144/219] powerpc/powernv: Don't map M64 segments using M32DT
+Cc: mpagano@gentoo.org
+
+commit 027fa02f84e851e21daffdf8900d6117071890f8 upstream.
+
+If M64 is supported, the prefetchable 64-bit memory resources
+shouldn't be mapped to the corresponding PE# via M32DT. Unfortunately,
+we're doing exactly that in pnv_ioda_setup_pe_seg(). The issue was
+introduced by commit 262af55 ("powerpc/powernv: Enable M64 aperatus
+for PHB3"). The patch fixes the issue by simply skipping M64 resources
+when updating to M32DT.
+
+Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
+Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/platforms/powernv/pci-ioda.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 6c9ff2b..1d9369e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -1777,7 +1777,8 @@ static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
+ 				region.start += phb->ioda.io_segsize;
+ 				index++;
+ 			}
+-		} else if (res->flags & IORESOURCE_MEM) {
++		} else if ((res->flags & IORESOURCE_MEM) &&
++			   !pnv_pci_is_mem_pref_64(res->flags)) {
+ 			region.start = res->start -
+ 				       hose->mem_offset[0] -
+ 				       phb->ioda.m32_pci_base;
+-- 
+2.3.6
+
+
+From 7ef1951eca49005fdbb4768574b7076cae1eeb4c Mon Sep 17 00:00:00 2001
+From: Dave Olson <olson@cumulusnetworks.com>
+Date: Thu, 2 Apr 2015 21:28:45 -0700
+Subject: [PATCH 145/219] powerpc: Fix missing L2 cache size in
+ /sys/devices/system/cpu
+Cc: mpagano@gentoo.org
+
+commit f7e9e358362557c3aa2c1ec47490f29fe880a09e upstream.
+
+This problem appears to have been introduced in 2.6.29 by commit
+93197a36a9c1 "Rewrite sysfs processor cache info code".
+
+This caused lscpu to error out on at least e500v2 devices, eg:
+
+  error: cannot open /sys/devices/system/cpu/cpu0/cache/index2/size: No such file or directory
+
+Some embedded powerpc systems use cache-size in DTS for the unified L2
+cache size, not d-cache-size, so we need to allow for both DTS names.
+Add a new CACHE_TYPE_UNIFIED_D cache_type_info structure to handle
+this.
+
+Fixes: 93197a36a9c1 ("powerpc: Rewrite sysfs processor cache info code")
+Signed-off-by: Dave Olson <olson@cumulusnetworks.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/kernel/cacheinfo.c | 44 +++++++++++++++++++++++++++++++----------
+ 1 file changed, 34 insertions(+), 10 deletions(-)
+
+diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
+index ae77b7e..c641983 100644
+--- a/arch/powerpc/kernel/cacheinfo.c
++++ b/arch/powerpc/kernel/cacheinfo.c
+@@ -61,12 +61,22 @@ struct cache_type_info {
+ };
+ 
+ /* These are used to index the cache_type_info array. */
+-#define CACHE_TYPE_UNIFIED     0
+-#define CACHE_TYPE_INSTRUCTION 1
+-#define CACHE_TYPE_DATA        2
++#define CACHE_TYPE_UNIFIED     0 /* cache-size, cache-block-size, etc. */
++#define CACHE_TYPE_UNIFIED_D   1 /* d-cache-size, d-cache-block-size, etc */
++#define CACHE_TYPE_INSTRUCTION 2
++#define CACHE_TYPE_DATA        3
+ 
+ static const struct cache_type_info cache_type_info[] = {
+ 	{
++		/* Embedded systems that use cache-size, cache-block-size,
++		 * etc. for the Unified (typically L2) cache. */
++		.name            = "Unified",
++		.size_prop       = "cache-size",
++		.line_size_props = { "cache-line-size",
++				     "cache-block-size", },
++		.nr_sets_prop    = "cache-sets",
++	},
++	{
+ 		/* PowerPC Processor binding says the [di]-cache-*
+ 		 * must be equal on unified caches, so just use
+ 		 * d-cache properties. */
+@@ -293,7 +303,8 @@ static struct cache *cache_find_first_sibling(struct cache *cache)
+ {
+ 	struct cache *iter;
+ 
+-	if (cache->type == CACHE_TYPE_UNIFIED)
++	if (cache->type == CACHE_TYPE_UNIFIED ||
++	    cache->type == CACHE_TYPE_UNIFIED_D)
+ 		return cache;
+ 
+ 	list_for_each_entry(iter, &cache_list, list)
+@@ -324,16 +335,29 @@ static bool cache_node_is_unified(const struct device_node *np)
+ 	return of_get_property(np, "cache-unified", NULL);
+ }
+ 
+-static struct cache *cache_do_one_devnode_unified(struct device_node *node,
+-						  int level)
++/*
++ * Unified caches can have two different sets of tags.  Most embedded
++ * use cache-size, etc. for the unified cache size, but open firmware systems
++ * use d-cache-size, etc.   Check on initialization for which type we have, and
++ * return the appropriate structure type.  Assume it's embedded if it isn't
++ * open firmware.  If it's yet a 3rd type, then there will be missing entries
++ * in /sys/devices/system/cpu/cpu0/cache/index2/, and this code will need
++ * to be extended further.
++ */
++static int cache_is_unified_d(const struct device_node *np)
+ {
+-	struct cache *cache;
++	return of_get_property(np,
++		cache_type_info[CACHE_TYPE_UNIFIED_D].size_prop, NULL) ?
++		CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
++}
+ 
++/*
++ */
++static struct cache *cache_do_one_devnode_unified(struct device_node *node, int level)
++{
+ 	pr_debug("creating L%d ucache for %s\n", level, node->full_name);
+ 
+-	cache = new_cache(CACHE_TYPE_UNIFIED, level, node);
+-
+-	return cache;
++	return new_cache(cache_is_unified_d(node), level, node);
+ }
+ 
+ static struct cache *cache_do_one_devnode_split(struct device_node *node,
+-- 
+2.3.6
+
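+The runtime choice added above amounts to probing which device-tree
+property set is present. Condensed, the decision in cache_is_unified_d()
+is equivalent to:
+
+	/* Open Firmware systems describe a unified cache with the
+	 * d-cache-* names; embedded device trees use plain cache-*. */
+	type = of_get_property(np, "d-cache-size", NULL) ?
+			CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;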
+
+From 9fb1018337f9767398e0d62e5dce8499fd0f2bf0 Mon Sep 17 00:00:00 2001
+From: Michael Ellerman <mpe@ellerman.id.au>
+Date: Fri, 3 Apr 2015 14:11:53 +1100
+Subject: [PATCH 146/219] powerpc/cell: Fix crash in iic_setup_cpu() after
+ per_cpu changes
+Cc: mpagano@gentoo.org
+
+commit b0dd00addc5035f87ec9c5820dacc1ebc7fcb3e6 upstream.
+
+The conversion from __get_cpu_var() to this_cpu_ptr() in iic_setup_cpu()
+is wrong. It causes an oops at boot.
+
+We need the per-cpu address of struct cpu_iic, not cpu_iic.regs->prio.
+
+Sparse noticed this, because we pass a non-iomem pointer to out_be64(),
+but we obviously don't check the sparse results often enough.
+
+Fixes: 69111bac42f5 ("powerpc: Replace __get_cpu_var uses")
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/platforms/cell/interrupt.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
+index 4c11421..3af8324 100644
+--- a/arch/powerpc/platforms/cell/interrupt.c
++++ b/arch/powerpc/platforms/cell/interrupt.c
+@@ -163,7 +163,7 @@ static unsigned int iic_get_irq(void)
+ 
+ void iic_setup_cpu(void)
+ {
+-	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
++	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
+ }
+ 
+ u8 iic_get_target_id(int cpu)
+-- 
+2.3.6
+
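+The subtle point in the one-line fix above is where the per-cpu offset
+is applied. A sketch with simplified types (these declarations are
+illustrative, not the Cell driver's actual ones):
+
+	#include <linux/percpu.h>
+	#include <linux/types.h>
+
+	struct iic_regs { u64 prio; };
+	struct iic { struct iic_regs __iomem *regs; };
+	static DEFINE_PER_CPU(struct iic, cpu_iic);
+
+	/* Wrong: follows ->regs first, then shifts the resulting MMIO
+	 * address by this CPU's per-cpu offset:
+	 *	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
+	 *
+	 * Right: resolve this CPU's copy of the struct first, then
+	 * follow the regs pointer into MMIO:
+	 *	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
+	 */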
+
+From 94a5f3b014e7d81936ae02cc095cdf895f94fb19 Mon Sep 17 00:00:00 2001
+From: Michael Ellerman <mpe@ellerman.id.au>
+Date: Fri, 3 Apr 2015 14:11:54 +1100
+Subject: [PATCH 147/219] powerpc/cell: Fix cell iommu after it_page_shift
+ changes
+Cc: mpagano@gentoo.org
+
+commit 7261b956b276aa97fbf60d00f1d7717d2ea6ee78 upstream.
+
+The patch to add it_page_shift incorrectly changed the increment of
+uaddr to use it_page_shift, rather than (1 << it_page_shift).
+
+This broke booting on at least some Cell blades, as the iommu was
+basically non-functional.
+
+Fixes: 3a553170d35d ("powerpc/iommu: Add it_page_shift field to determine iommu page size")
+Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/powerpc/platforms/cell/iommu.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
+index c7c8720..63db1b0 100644
+--- a/arch/powerpc/platforms/cell/iommu.c
++++ b/arch/powerpc/platforms/cell/iommu.c
+@@ -197,7 +197,7 @@ static int tce_build_cell(struct iommu_table *tbl, long index, long npages,
+ 
+ 	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);
+ 
+-	for (i = 0; i < npages; i++, uaddr += tbl->it_page_shift)
++	for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift))
+ 		io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask);
+ 
+ 	mb();
+-- 
+2.3.6
+
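+For clarity, the arithmetic the fix restores (it_page_shift is the
+log2 of the IOMMU page size, e.g. 12 for 4K pages):
+
+	/* Each IOPTE must advance uaddr by a full IOMMU page:
+	 * 1 << 12 = 4096 bytes for 4K pages.  The broken code advanced
+	 * it by only 12 bytes, so every PTE after the first pointed at
+	 * (nearly) the same page. */
+	unsigned long step = 1UL << tbl->it_page_shift;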
+
+From 755b29de0d793e3915b35f35c716705d9910109f Mon Sep 17 00:00:00 2001
+From: Pascal Huerst <pascal.huerst@gmail.com>
+Date: Thu, 2 Apr 2015 10:17:40 +0200
+Subject: [PATCH 148/219] ASoC: cs4271: Increase delay time after reset
+Cc: mpagano@gentoo.org
+
+commit 74ff960222d90999508b4ba0d3449f796695b6d5 upstream.
+
+The delay time after a reset in the codec probe callback was too short,
+and did not work on certain hardware because the codec needs more time
+to power on. This increases the delay time from 1us to 1ms.
+
+Signed-off-by: Pascal Huerst <pascal.huerst@gmail.com>
+Acked-by: Brian Austin <brian.austin@cirrus.com>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/soc/codecs/cs4271.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sound/soc/codecs/cs4271.c b/sound/soc/codecs/cs4271.c
+index 7d3a6ac..e770ee6 100644
+--- a/sound/soc/codecs/cs4271.c
++++ b/sound/soc/codecs/cs4271.c
+@@ -561,10 +561,10 @@ static int cs4271_codec_probe(struct snd_soc_codec *codec)
+ 	if (gpio_is_valid(cs4271->gpio_nreset)) {
+ 		/* Reset codec */
+ 		gpio_direction_output(cs4271->gpio_nreset, 0);
+-		udelay(1);
++		mdelay(1);
+ 		gpio_set_value(cs4271->gpio_nreset, 1);
+ 		/* Give the codec time to wake up */
+-		udelay(1);
++		mdelay(1);
+ 	}
+ 
+ 	ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2,
+-- 
+2.3.6
+
+
+From d9493a0723e5a23b0250f43ea5e6d8ed66e1a343 Mon Sep 17 00:00:00 2001
+From: Sergej Sawazki <ce3a@gmx.de>
+Date: Tue, 24 Mar 2015 21:13:22 +0100
+Subject: [PATCH 149/219] ASoC: wm8741: Fix rates constraints values
+Cc: mpagano@gentoo.org
+
+commit 8787041d9bb832b9449b1eb878cedcebce42c61a upstream.
+
+The WM8741 DAC supports the following typical audio sampling rates:
+  44.1kHz, 88.2kHz, 176.4kHz (eg: with a master clock of 22.5792MHz)
+  32kHz, 48kHz, 96kHz, 192kHz (eg: with a master clock of 24.576MHz)
+
+For the rates lists, we should use 88200 instead of 88235, 176400
+instead of 1764000 and 192000 instead of 19200 (seems to be a typo).
+
+Signed-off-by: Sergej Sawazki <ce3a@gmx.de>
+Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/soc/codecs/wm8741.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/sound/soc/codecs/wm8741.c b/sound/soc/codecs/wm8741.c
+index 31bb480..9e71c76 100644
+--- a/sound/soc/codecs/wm8741.c
++++ b/sound/soc/codecs/wm8741.c
+@@ -123,7 +123,7 @@ static struct {
+ };
+ 
+ static const unsigned int rates_11289[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_11289 = {
+@@ -150,7 +150,7 @@ static const struct snd_pcm_hw_constraint_list constraints_16384 = {
+ };
+ 
+ static const unsigned int rates_16934[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_16934 = {
+@@ -168,7 +168,7 @@ static const struct snd_pcm_hw_constraint_list constraints_18432 = {
+ };
+ 
+ static const unsigned int rates_22579[] = {
+-	44100, 88235, 1764000
++	44100, 88200, 176400
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_22579 = {
+@@ -186,7 +186,7 @@ static const struct snd_pcm_hw_constraint_list constraints_24576 = {
+ };
+ 
+ static const unsigned int rates_36864[] = {
+-	48000, 96000, 19200
++	48000, 96000, 192000
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_36864 = {
+-- 
+2.3.6
+
+
+From f7a469cdb54b146db35083f167e9f844ffc31f0c Mon Sep 17 00:00:00 2001
+From: Manish Badarkhe <manishvb@ti.com>
+Date: Thu, 26 Mar 2015 15:38:25 +0200
+Subject: [PATCH 150/219] ASoC: davinci-evm: drop un-necessary remove function
+Cc: mpagano@gentoo.org
+
+commit a57069e33fbc6625f39e1b09c88ea44629a35206 upstream.
+
+As the davinci card gets registered using the 'devm_' API,
+there is no need to unregister the card in the 'remove'
+function. Hence drop the 'remove' function.
+
+Fixes: ee2f615d6e59c (ASoC: davinci-evm: Add device tree binding)
+Signed-off-by: Manish Badarkhe <manishvb@ti.com>
+Signed-off-by: Jyri Sarha <jsarha@ti.com>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/soc/davinci/davinci-evm.c | 10 ----------
+ 1 file changed, 10 deletions(-)
+
+diff --git a/sound/soc/davinci/davinci-evm.c b/sound/soc/davinci/davinci-evm.c
+index b6bb594..8c2b9be 100644
+--- a/sound/soc/davinci/davinci-evm.c
++++ b/sound/soc/davinci/davinci-evm.c
+@@ -425,18 +425,8 @@ static int davinci_evm_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int davinci_evm_remove(struct platform_device *pdev)
+-{
+-	struct snd_soc_card *card = platform_get_drvdata(pdev);
+-
+-	snd_soc_unregister_card(card);
+-
+-	return 0;
+-}
+-
+ static struct platform_driver davinci_evm_driver = {
+ 	.probe		= davinci_evm_probe,
+-	.remove		= davinci_evm_remove,
+ 	.driver		= {
+ 		.name	= "davinci_evm",
+ 		.pm	= &snd_soc_pm_ops,
+-- 
+2.3.6
+
+
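+The reasoning relies on the devm_ pattern: resources registered through
+devm_ helpers are released automatically when the device is unbound. A
+rough sketch of the resulting probe shape (the card and names here are
+hypothetical, not from the driver):
+
+  static int example_probe(struct platform_device *pdev)
+  {
+          struct snd_soc_card *card = &example_card;  /* hypothetical */
+
+          card->dev = &pdev->dev;
+          /* unregistered automatically on unbind; no .remove needed */
+          return devm_snd_soc_register_card(&pdev->dev, card);
+  }
+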
+From f646e040a619bcea31a6cab378ccaccb6f4cb659 Mon Sep 17 00:00:00 2001
+From: Howard Mitchell <hm@hmbedded.co.uk>
+Date: Thu, 19 Mar 2015 12:08:30 +0000
+Subject: [PATCH 151/219] ASoC: pcm512x: Add 'Analogue' prefix to analogue
+ volume controls
+Cc: mpagano@gentoo.org
+
+commit 4d9b13c7cc803fbde59d7e998f7de2b9a2101c7e upstream.
+
+This is to ensure that 'alsactl restore' does not apply default
+initialisation, as the chip reset defaults are preferred.
+
+Signed-off-by: Howard Mitchell <hm@hmbedded.co.uk>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/soc/codecs/pcm512x.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 474cae8..b48624c 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -304,9 +304,9 @@ static const struct soc_enum pcm512x_veds =
+ static const struct snd_kcontrol_new pcm512x_controls[] = {
+ SOC_DOUBLE_R_TLV("Digital Playback Volume", PCM512x_DIGITAL_VOLUME_2,
+ 		 PCM512x_DIGITAL_VOLUME_3, 0, 255, 1, digital_tlv),
+-SOC_DOUBLE_TLV("Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
++SOC_DOUBLE_TLV("Analogue Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
+ 	       PCM512x_LAGN_SHIFT, PCM512x_RAGN_SHIFT, 1, 1, analog_tlv),
+-SOC_DOUBLE_TLV("Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
++SOC_DOUBLE_TLV("Analogue Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
+ 	       PCM512x_AGBL_SHIFT, PCM512x_AGBR_SHIFT, 1, 0, boost_tlv),
+ SOC_DOUBLE("Digital Playback Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
+ 	   PCM512x_RQMR_SHIFT, 1, 1),
+-- 
+2.3.6
+
+
+From 43ebd1a85ee86416c2d45a3834e7425c396890e9 Mon Sep 17 00:00:00 2001
+From: Howard Mitchell <hm@hmbedded.co.uk>
+Date: Fri, 20 Mar 2015 21:13:45 +0000
+Subject: [PATCH 152/219] ASoC: pcm512x: Fix divide by zero issue
+Cc: mpagano@gentoo.org
+
+commit f073faa73626f41db7050a69edd5074c53ce6d6c upstream.
+
+If den=1 and pllin_rate>20MHz, then den and num are adjusted to 0,
+causing a divide-by-zero error a few lines further on. Therefore
+this patch correctly scales num and den such that
+pllin_rate/den < 20MHz, as required in the device data sheet.
+
+Signed-off-by: Howard Mitchell <hm@hmbedded.co.uk>
+Signed-off-by: Mark Brown <broonie@sirena.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ sound/soc/codecs/pcm512x.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index b48624c..8c09e3f 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -576,8 +576,8 @@ static int pcm512x_find_pll_coeff(struct snd_soc_dai *dai,
+ 
+ 	/* pllin_rate / P (or here, den) cannot be greater than 20 MHz */
+ 	if (pllin_rate / den > 20000000 && num < 8) {
+-		num *= 20000000 / (pllin_rate / den);
+-		den *= 20000000 / (pllin_rate / den);
++		num *= DIV_ROUND_UP(pllin_rate / den, 20000000);
++		den *= DIV_ROUND_UP(pllin_rate / den, 20000000);
+ 	}
+ 	dev_dbg(dev, "num / den = %lu / %lu\n", num, den);
+ 
+-- 
+2.3.6
+
+
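+The failure mode is plain integer truncation: for pllin_rate/den above
+20MHz, 20000000 / (pllin_rate / den) evaluates to 0, wiping out num and
+den. A standalone arithmetic check (DIV_ROUND_UP reproduced from
+include/linux/kernel.h; the 25MHz input is an illustrative value):
+
+  #include <stdio.h>
+  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+
+  int main(void)
+  {
+          unsigned long pllin_rate = 25000000, num = 1, den = 1, f;
+
+          printf("old factor: %lu\n", 20000000 / (pllin_rate / den)); /* 0 */
+          f = DIV_ROUND_UP(pllin_rate / den, 20000000);               /* 2 */
+          num *= f;
+          den *= f;
+          printf("pllin_rate/den is now %lu\n", pllin_rate / den); /* 12.5MHz */
+          return 0;
+  }
+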
+From 650a628d5725e7eb8ed5f979fee058795cb06355 Mon Sep 17 00:00:00 2001
+From: Lv Zheng <lv.zheng@intel.com>
+Date: Mon, 13 Apr 2015 11:48:58 +0800
+Subject: [PATCH 153/219] ACPICA: Utilities: split IO address types from data
+ type models.
+Cc: mpagano@gentoo.org
+
+commit 2b8760100e1de69b6ff004c986328a82947db4ad upstream.
+
+ACPICA commit aacf863cfffd46338e268b7415f7435cae93b451
+
+It is reported that on a physically 64-bit addressed machine, a 32-bit
+kernel can trigger crashes when accessing memory regions that are beyond
+the 32-bit boundary. The region field's start address should still be
+32-bit compliant, but after a calculation (adding some offsets) it may
+exceed the 32-bit boundary. This case is rare and buggy, but real BIOSes
+have shipped with such issues (see References below).
+
+This patch fixes this gap by always defining IO addresses as 64-bit, and
+allows OSPMs to optimize it for a real 32-bit machine to reduce the size of
+the internal objects.
+
+Internal acpi_physical_address usages in the structures that can be fixed
+by this change include:
+ 1. struct acpi_object_region:
+    acpi_physical_address		address;
+ 2. struct acpi_address_range:
+    acpi_physical_address		start_address;
+    acpi_physical_address		end_address;
+ 3. struct acpi_mem_space_context;
+    acpi_physical_address		address;
+ 4. struct acpi_table_desc
+    acpi_physical_address		address;
+See known issues 1 for other usages.
+
+Note that acpi_io_address which is used for ACPI_PROCESSOR may also suffer
+from same problem, so this patch changes it accordingly.
+
+For iasl, it will enforce acpi_physical_address as 32-bit to generate
+32-bit OSPM compatible tables on 32-bit platforms, so we need to define
+ACPI_32BIT_PHYSICAL_ADDRESS for it in acenv.h.
+
+Known issues:
+ 1. Cleanup of mapped virtual address
+   In struct acpi_mem_space_context, acpi_physical_address is used as a virtual
+   address:
+    acpi_physical_address                   mapped_physical_address;
+   It is better to introduce acpi_virtual_address or use acpi_size instead.
+   This patch doesn't make such a change, because this should be done
+   along with a change to acpi_os_map_memory()/acpi_os_unmap_memory().
+   There should be no functional problem to leave this unchanged except
+   that only this structure is enlarged unexpectedly.
+
+Link: https://github.com/acpica/acpica/commit/aacf863c
+Reference: https://bugzilla.kernel.org/show_bug.cgi?id=87971
+Reference: https://bugzilla.kernel.org/show_bug.cgi?id=79501
+Reported-and-tested-by: Paul Menzel <paulepanter@users.sourceforge.net>
+Reported-and-tested-by: Sial Nije <sialnije@gmail.com>
+Signed-off-by: Lv Zheng <lv.zheng@intel.com>
+Signed-off-by: Bob Moore <robert.moore@intel.com>
+Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ include/acpi/actypes.h        | 20 ++++++++++++++++++++
+ include/acpi/platform/acenv.h |  1 +
+ 2 files changed, 21 insertions(+)
+
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index b034f10..658c42e 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -199,9 +199,29 @@ typedef int s32;
+ typedef s32 acpi_native_int;
+ 
+ typedef u32 acpi_size;
++
++#ifdef ACPI_32BIT_PHYSICAL_ADDRESS
++
++/*
++ * OSPMs can define this to shrink the size of the structures for 32-bit
++ * none PAE environment. ASL compiler may always define this to generate
++ * 32-bit OSPM compliant tables.
++ */
+ typedef u32 acpi_io_address;
+ typedef u32 acpi_physical_address;
+ 
++#else				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
++/*
++ * It is reported that, after some calculations, the physical addresses can
++ * wrap over the 32-bit boundary on 32-bit PAE environment.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=87971
++ */
++typedef u64 acpi_io_address;
++typedef u64 acpi_physical_address;
++
++#endif				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
+ #define ACPI_MAX_PTR                    ACPI_UINT32_MAX
+ #define ACPI_SIZE_MAX                   ACPI_UINT32_MAX
+ 
+diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h
+index ad74dc5..ecdf940 100644
+--- a/include/acpi/platform/acenv.h
++++ b/include/acpi/platform/acenv.h
+@@ -76,6 +76,7 @@
+ #define ACPI_LARGE_NAMESPACE_NODE
+ #define ACPI_DATA_TABLE_DISASSEMBLY
+ #define ACPI_SINGLE_THREADED
++#define ACPI_32BIT_PHYSICAL_ADDRESS
+ #endif
+ 
+ /* acpi_exec configuration. Multithreaded with full AML debugger */
+-- 
+2.3.6
+
+
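+The wrap-around the patch guards against is ordinary unsigned overflow:
+a valid 32-bit region base plus an offset can land past 4GiB, which a
+32-bit acpi_physical_address silently truncates. A standalone sketch
+(the base and offset values are illustrative, not from a real BIOS):
+
+  #include <stdio.h>
+  #include <stdint.h>
+
+  int main(void)
+  {
+          uint32_t base32 = 0xfffff000u;  /* near the top of 32-bit space */
+          uint64_t base64 = base32;
+          uint32_t offset = 0x2000;
+
+          printf("u32: 0x%x\n", base32 + offset);  /* wraps to 0x1000 */
+          printf("u64: 0x%llx\n", (unsigned long long)(base64 + offset));
+          return 0;                                /* 0x100001000 */
+  }
+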
+From 5980bf8bc5dbb8e5338a3db6e311539eeb6242da Mon Sep 17 00:00:00 2001
+From: Octavian Purdila <octavian.purdila@intel.com>
+Date: Mon, 13 Apr 2015 11:49:05 +0800
+Subject: [PATCH 154/219] ACPICA: Tables: Don't release ACPI_MTX_TABLES in
+ acpi_tb_install_standard_table().
+Cc: mpagano@gentoo.org
+
+commit 77ddc2fe08329e375505bc36a3df3233fe57317b upstream.
+
+ACPICA commit c70434d4da13e65b6163c79a5aa16b40193631c7
+
+ACPI_MTX_TABLES is acquired and released by the callers of
+acpi_tb_install_standard_table() so releasing it in the function itself is
+causing the following error in Linux kernel if the table is reloaded:
+
+ACPI Error: Mutex [0x2] is not acquired, cannot release (20141107/utmutex-321)
+Call Trace:
+  [<ffffffff81b0bd48>] dump_stack+0x4f/0x7b
+  [<ffffffff81546bf5>] acpi_ut_release_mutex+0x47/0x67
+  [<ffffffff81544357>] acpi_load_table+0x73/0xcb
+
+Link: https://github.com/acpica/acpica/commit/c70434d4
+Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
+Signed-off-by: Lv Zheng <lv.zheng@intel.com>
+Signed-off-by: Bob Moore <robert.moore@intel.com>
+Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/acpi/acpica/tbinstal.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
+index 9bad45e..7fbc2b9 100644
+--- a/drivers/acpi/acpica/tbinstal.c
++++ b/drivers/acpi/acpica/tbinstal.c
+@@ -346,7 +346,6 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 				 */
+ 				acpi_tb_uninstall_table(&new_table_desc);
+ 				*table_index = i;
+-				(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
+ 				return_ACPI_STATUS(AE_OK);
+ 			}
+ 		}
+-- 
+2.3.6
+
+
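+The invariant being restored is simple lock ownership: the caller takes
+and drops ACPI_MTX_TABLES, so the callee must leave it alone on every
+return path. A minimal userspace analogue of that discipline (all names
+hypothetical):
+
+  #include <pthread.h>
+
+  static pthread_mutex_t tables_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+  /* Runs with tables_mutex held; must never unlock it itself. */
+  static int install_table(int idx)
+  {
+          if (idx < 0)
+                  return -1;  /* early return: lock stays held by caller */
+          return 0;
+  }
+
+  static int load_table(int idx)
+  {
+          int ret;
+
+          pthread_mutex_lock(&tables_mutex);
+          ret = install_table(idx);
+          pthread_mutex_unlock(&tables_mutex);  /* the only release point */
+          return ret;
+  }
+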
+From afaed716d9f945416e6f0967384714ee3b066020 Mon Sep 17 00:00:00 2001
+From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
+Date: Wed, 15 Apr 2015 04:00:27 +0200
+Subject: [PATCH 155/219] ACPICA: Store GPE register enable masks upfront
+Cc: mpagano@gentoo.org
+
+commit 0ee0d34985ceffe4036319e1e46df8bff591b9e3 upstream.
+
+It is reported that ACPI interrupts do not work any more on
+Dell Latitude D600 after commit c50f13c672df (ACPICA: Save
+current masks of enabled GPEs after enable register writes).
+The problem turns out to be related to the fact that the
+enable_mask and enable_for_run GPE bit masks are not in
+sync (in the absence of any system suspend/resume events)
+for at least one GPE register on that machine.
+
+Address this problem by writing the enable_for_run mask into
+enable_mask as soon as enable_for_run is updated instead of
+doing that only after the subsequent register write has
+succeeded.  For consistency, update acpi_hw_gpe_enable_write()
+to store the bit mask to be written into the GPE register
+in enable_mask unconditionally before the write.
+
+Since the ACPI_GPE_SAVE_MASK flag is not necessary any more after
+that, drop it along with the symbols depending on it.
+
+Reported-and-tested-by: Jim Bos <jim876@xs4all.nl>
+Fixes: c50f13c672df (ACPICA: Save current masks of enabled GPEs after enable register writes)
+Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/acpi/acpica/evgpe.c |  5 +++--
+ drivers/acpi/acpica/hwgpe.c | 11 ++++-------
+ include/acpi/actypes.h      |  4 ----
+ 3 files changed, 7 insertions(+), 13 deletions(-)
+
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index 5ed064e..ccf7932 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -92,6 +92,7 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
+ 		ACPI_SET_BIT(gpe_register_info->enable_for_run,
+ 			     (u8)register_bit);
+ 	}
++	gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
+ 
+ 	return_ACPI_STATUS(AE_OK);
+ }
+@@ -123,7 +124,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
+ 
+ 	/* Enable the requested GPE */
+ 
+-	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE_SAVE);
++	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
+ }
+ 
+@@ -202,7 +203,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
+ 		if (ACPI_SUCCESS(status)) {
+ 			status =
+ 			    acpi_hw_low_set_gpe(gpe_event_info,
+-						ACPI_GPE_DISABLE_SAVE);
++						ACPI_GPE_DISABLE);
+ 		}
+ 
+ 		if (ACPI_FAILURE(status)) {
+diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
+index 84bc550..af6514e 100644
+--- a/drivers/acpi/acpica/hwgpe.c
++++ b/drivers/acpi/acpica/hwgpe.c
+@@ -89,6 +89,8 @@ u32 acpi_hw_get_gpe_register_bit(struct acpi_gpe_event_info *gpe_event_info)
+  * RETURN:	Status
+  *
+  * DESCRIPTION: Enable or disable a single GPE in the parent enable register.
++ *              The enable_mask field of the involved GPE register must be
++ *              updated by the caller if necessary.
+  *
+  ******************************************************************************/
+ 
+@@ -119,7 +121,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Set or clear just the bit that corresponds to this GPE */
+ 
+ 	register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
+-	switch (action & ~ACPI_GPE_SAVE_MASK) {
++	switch (action) {
+ 	case ACPI_GPE_CONDITIONAL_ENABLE:
+ 
+ 		/* Only enable if the corresponding enable_mask bit is set */
+@@ -149,9 +151,6 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Write the updated enable mask */
+ 
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
+-		gpe_register_info->enable_mask = (u8)enable_mask;
+-	}
+ 	return (status);
+ }
+ 
+@@ -286,10 +285,8 @@ acpi_hw_gpe_enable_write(u8 enable_mask,
+ {
+ 	acpi_status status;
+ 
++	gpe_register_info->enable_mask = enable_mask;
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status)) {
+-		gpe_register_info->enable_mask = enable_mask;
+-	}
+ 	return (status);
+ }
+ 
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index 658c42e..0d58525 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -756,10 +756,6 @@ typedef u32 acpi_event_status;
+ #define ACPI_GPE_ENABLE                 0
+ #define ACPI_GPE_DISABLE                1
+ #define ACPI_GPE_CONDITIONAL_ENABLE     2
+-#define ACPI_GPE_SAVE_MASK              4
+-
+-#define ACPI_GPE_ENABLE_SAVE            (ACPI_GPE_ENABLE | ACPI_GPE_SAVE_MASK)
+-#define ACPI_GPE_DISABLE_SAVE           (ACPI_GPE_DISABLE | ACPI_GPE_SAVE_MASK)
+ 
+ /*
+  * GPE info flags - Per GPE
+-- 
+2.3.6
+
+
+From 7b2f4da529f27b81d06a9c5d49803dc4b1d5eea3 Mon Sep 17 00:00:00 2001
+From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
+Date: Sat, 18 Apr 2015 01:25:46 +0200
+Subject: [PATCH 156/219] ACPI / scan: Annotate physical_node_lock in
+ acpi_scan_is_offline()
+Cc: mpagano@gentoo.org
+
+commit 4c533c801d1c9b5c38458a0e7516e0cf50643782 upstream.
+
+acpi_scan_is_offline() may be called under the physical_node_lock
+lock of the given device object's parent, so prevent lockdep from
+complaining about that by annotating that instance with
+SINGLE_DEPTH_NESTING.
+
+Fixes: caa73ea158de (ACPI / hotplug / driver core: Handle containers in a special way)
+Reported-and-tested-by: Xie XiuQi <xiexiuqi@huawei.com>
+Reviewed-by: Toshi Kani <toshi.kani@hp.com>
+Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/acpi/scan.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index bbca783..349f4fd 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -298,7 +298,11 @@ bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent)
+ 	struct acpi_device_physical_node *pn;
+ 	bool offline = true;
+ 
+-	mutex_lock(&adev->physical_node_lock);
++	/*
++	 * acpi_container_offline() calls this for all of the container's
++	 * children under the container's physical_node_lock lock.
++	 */
++	mutex_lock_nested(&adev->physical_node_lock, SINGLE_DEPTH_NESTING);
+ 
+ 	list_for_each_entry(pn, &adev->physical_node_list, node)
+ 		if (device_supports_offline(pn->dev) && !pn->dev->offline) {
+-- 
+2.3.6
+
+
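+mutex_lock_nested() does not change locking behaviour; it only gives
+lockdep a subclass so that taking a child's lock of the same lock class
+while the parent's is held is not flagged as recursion. A rough sketch
+of the pattern (the struct and check_offline() helper are hypothetical):
+
+  #include <linux/mutex.h>
+
+  struct node {
+          struct mutex lock;
+          /* ... */
+  };
+
+  /* Called with the parent's lock (same lock class) already held. */
+  static bool node_is_offline(struct node *child)
+  {
+          bool ret;
+
+          mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);
+          ret = check_offline(child);  /* hypothetical helper */
+          mutex_unlock(&child->lock);
+          return ret;
+  }
+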
+From 042741ecc3287d365daab83a5fd287aee607ea32 Mon Sep 17 00:00:00 2001
+From: Max Filippov <jcmvbkbc@gmail.com>
+Date: Fri, 27 Feb 2015 06:28:00 +0300
+Subject: [PATCH 157/219] xtensa: xtfpga: fix hardware lockup caused by LCD
+ driver
+Cc: mpagano@gentoo.org
+
+commit 4949009eb8d40a441dcddcd96e101e77d31cf1b2 upstream.
+
+The LCD driver is always built for the XTFPGA platform, but its base
+address is not configurable, and is wrong for ML605/KC705. Its
+initialization locks up the KC705 board hardware.
+
+Make the whole driver optional, and its base address and bus width
+configurable. Implement a 4-bit bus access method.
+
+Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/xtensa/Kconfig                                | 30 ++++++++++++
+ arch/xtensa/platforms/xtfpga/Makefile              |  3 +-
+ .../platforms/xtfpga/include/platform/hardware.h   |  3 --
+ .../xtensa/platforms/xtfpga/include/platform/lcd.h | 15 ++++++
+ arch/xtensa/platforms/xtfpga/lcd.c                 | 55 +++++++++++++---------
+ 5 files changed, 81 insertions(+), 25 deletions(-)
+
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index e31d494..87be10e 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -428,6 +428,36 @@ config DEFAULT_MEM_SIZE
+ 
+ 	  If unsure, leave the default value here.
+ 
++config XTFPGA_LCD
++	bool "Enable XTFPGA LCD driver"
++	depends on XTENSA_PLATFORM_XTFPGA
++	default n
++	help
++	  There's a 2x16 LCD on most of XTFPGA boards, kernel may output
++	  progress messages there during bootup/shutdown. It may be useful
++	  during board bringup.
++
++	  If unsure, say N.
++
++config XTFPGA_LCD_BASE_ADDR
++	hex "XTFPGA LCD base address"
++	depends on XTFPGA_LCD
++	default "0x0d0c0000"
++	help
++	  Base address of the LCD controller inside KIO region.
++	  Different boards from XTFPGA family have LCD controller at different
++	  addresses. Please consult prototyping user guide for your board for
++	  the correct address. Wrong address here may lead to hardware lockup.
++
++config XTFPGA_LCD_8BIT_ACCESS
++	bool "Use 8-bit access to XTFPGA LCD"
++	depends on XTFPGA_LCD
++	default n
++	help
++	  LCD may be connected with 4- or 8-bit interface, 8-bit access may
++	  only be used with 8-bit interface. Please consult prototyping user
++	  guide for your board for the correct interface width.
++
+ endmenu
+ 
+ menu "Executable file formats"
+diff --git a/arch/xtensa/platforms/xtfpga/Makefile b/arch/xtensa/platforms/xtfpga/Makefile
+index b9ae206..7839d38 100644
+--- a/arch/xtensa/platforms/xtfpga/Makefile
++++ b/arch/xtensa/platforms/xtfpga/Makefile
+@@ -6,4 +6,5 @@
+ #
+ # Note 2! The CFLAGS definitions are in the main makefile...
+ 
+-obj-y			= setup.o lcd.o
++obj-y			+= setup.o
++obj-$(CONFIG_XTFPGA_LCD) += lcd.o
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+index 6edd20b..4e0af26 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+@@ -40,9 +40,6 @@
+ 
+ /* UART */
+ #define DUART16552_PADDR	(XCHAL_KIO_PADDR + 0x0D050020)
+-/* LCD instruction and data addresses. */
+-#define LCD_INSTR_ADDR		((char *)IOADDR(0x0D040000))
+-#define LCD_DATA_ADDR		((char *)IOADDR(0x0D040004))
+ 
+ /* Misc. */
+ #define XTFPGA_FPGAREGS_VADDR	IOADDR(0x0D020000)
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+index 0e43564..4c8541e 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+@@ -11,10 +11,25 @@
+ #ifndef __XTENSA_XTAVNET_LCD_H
+ #define __XTENSA_XTAVNET_LCD_H
+ 
++#ifdef CONFIG_XTFPGA_LCD
+ /* Display string STR at position POS on the LCD. */
+ void lcd_disp_at_pos(char *str, unsigned char pos);
+ 
+ /* Shift the contents of the LCD display left or right. */
+ void lcd_shiftleft(void);
+ void lcd_shiftright(void);
++#else
++static inline void lcd_disp_at_pos(char *str, unsigned char pos)
++{
++}
++
++static inline void lcd_shiftleft(void)
++{
++}
++
++static inline void lcd_shiftright(void)
++{
++}
++#endif
++
+ #endif
+diff --git a/arch/xtensa/platforms/xtfpga/lcd.c b/arch/xtensa/platforms/xtfpga/lcd.c
+index 2872301..4dc0c1b 100644
+--- a/arch/xtensa/platforms/xtfpga/lcd.c
++++ b/arch/xtensa/platforms/xtfpga/lcd.c
+@@ -1,50 +1,63 @@
+ /*
+- * Driver for the LCD display on the Tensilica LX60 Board.
++ * Driver for the LCD display on the Tensilica XTFPGA board family.
++ * http://www.mytechcorp.com/cfdata/productFile/File1/MOC-16216B-B-A0A04.pdf
+  *
+  * This file is subject to the terms and conditions of the GNU General Public
+  * License.  See the file "COPYING" in the main directory of this archive
+  * for more details.
+  *
+  * Copyright (C) 2001, 2006 Tensilica Inc.
++ * Copyright (C) 2015 Cadence Design Systems Inc.
+  */
+ 
+-/*
+- *
+- * FIXME: this code is from the examples from the LX60 user guide.
+- *
+- * The lcd_pause function does busy waiting, which is probably not
+- * great. Maybe the code could be changed to use kernel timers, or
+- * change the hardware to not need to wait.
+- */
+-
++#include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ 
+ #include <platform/hardware.h>
+ #include <platform/lcd.h>
+-#include <linux/delay.h>
+ 
+-#define LCD_PAUSE_ITERATIONS	4000
++/* LCD instruction and data addresses. */
++#define LCD_INSTR_ADDR		((char *)IOADDR(CONFIG_XTFPGA_LCD_BASE_ADDR))
++#define LCD_DATA_ADDR		(LCD_INSTR_ADDR + 4)
++
+ #define LCD_CLEAR		0x1
+ #define LCD_DISPLAY_ON		0xc
+ 
+ /* 8bit and 2 lines display */
+ #define LCD_DISPLAY_MODE8BIT	0x38
++#define LCD_DISPLAY_MODE4BIT	0x28
+ #define LCD_DISPLAY_POS		0x80
+ #define LCD_SHIFT_LEFT		0x18
+ #define LCD_SHIFT_RIGHT		0x1c
+ 
++static void lcd_put_byte(u8 *addr, u8 data)
++{
++#ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*addr) = data;
++#else
++	ACCESS_ONCE(*addr) = data & 0xf0;
++	ACCESS_ONCE(*addr) = (data << 4) & 0xf0;
++#endif
++}
++
+ static int __init lcd_init(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	mdelay(5);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	udelay(200);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
++	udelay(50);
++#ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE4BIT;
++	udelay(50);
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_ON;
++#endif
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_ON);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_CLEAR;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_CLEAR);
+ 	mdelay(10);
+ 	lcd_disp_at_pos("XTENSA LINUX", 0);
+ 	return 0;
+@@ -52,10 +65,10 @@ static int __init lcd_init(void)
+ 
+ void lcd_disp_at_pos(char *str, unsigned char pos)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_POS | pos;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_POS | pos);
+ 	udelay(100);
+ 	while (*str != 0) {
+-		*LCD_DATA_ADDR = *str;
++		lcd_put_byte(LCD_DATA_ADDR, *str);
+ 		udelay(200);
+ 		str++;
+ 	}
+@@ -63,13 +76,13 @@ void lcd_disp_at_pos(char *str, unsigned char pos)
+ 
+ void lcd_shiftleft(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_LEFT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_LEFT);
+ 	udelay(50);
+ }
+ 
+ void lcd_shiftright(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_RIGHT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_RIGHT);
+ 	udelay(50);
+ }
+ 
+-- 
+2.3.6
+
+
+From 3d421b4703e664742e5f8b80c8f61d64d6435fa2 Mon Sep 17 00:00:00 2001
+From: Max Filippov <jcmvbkbc@gmail.com>
+Date: Fri, 27 Feb 2015 11:02:38 +0300
+Subject: [PATCH 158/219] xtensa: provide __NR_sync_file_range2 instead of
+ __NR_sync_file_range
+Cc: mpagano@gentoo.org
+
+commit 01e84c70fe40c8111f960987bcf7f931842e6d07 upstream.
+
+xtensa actually uses the sync_file_range2 implementation, so it should
+define __NR_sync_file_range2 like other architectures that use that
+function. That fixes the userspace interface (which apparently never
+worked) and avoids special-casing xtensa in libc implementations.
+See the thread ending at
+http://lists.busybox.net/pipermail/uclibc/2015-February/048833.html
+for more details.
+
+Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/xtensa/include/uapi/asm/unistd.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/xtensa/include/uapi/asm/unistd.h b/arch/xtensa/include/uapi/asm/unistd.h
+index db5bb72..62d8465 100644
+--- a/arch/xtensa/include/uapi/asm/unistd.h
++++ b/arch/xtensa/include/uapi/asm/unistd.h
+@@ -715,7 +715,7 @@ __SYSCALL(323, sys_process_vm_writev, 6)
+ __SYSCALL(324, sys_name_to_handle_at, 5)
+ #define __NR_open_by_handle_at			325
+ __SYSCALL(325, sys_open_by_handle_at, 3)
+-#define __NR_sync_file_range			326
++#define __NR_sync_file_range2			326
+ __SYSCALL(326, sys_sync_file_range2, 6)
+ #define __NR_perf_event_open			327
+ __SYSCALL(327, sys_perf_event_open, 5)
+-- 
+2.3.6
+
+
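+The "2" variant exists because some ABIs pass 64-bit arguments in
+register pairs, so the flags are moved ahead of the offset/nbytes pair
+to keep those pairs aligned. A rough userspace sketch of calling it
+directly where libc has no wrapper (only valid on architectures that
+define __NR_sync_file_range2; the wrapper name is made up):
+
+  #define _GNU_SOURCE
+  #include <fcntl.h>
+  #include <unistd.h>
+  #include <sys/types.h>
+  #include <sys/syscall.h>
+
+  /* Note the argument order: the flags come second. */
+  static long my_sync_file_range(int fd, off64_t off, off64_t nbytes,
+                                 unsigned int flags)
+  {
+          return syscall(__NR_sync_file_range2, fd, flags, off, nbytes);
+  }
+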
+From 63c94a9787fee217938e65b3e11bed2b7179481f Mon Sep 17 00:00:00 2001
+From: Max Filippov <jcmvbkbc@gmail.com>
+Date: Fri, 3 Apr 2015 09:56:21 +0300
+Subject: [PATCH 159/219] xtensa: ISS: fix locking in TAP network adapter
+Cc: mpagano@gentoo.org
+
+commit 24e94454c8cb6a13634f5a2f5a01da53a546a58d upstream.
+
+- don't lock lp->lock in the iss_net_timer for the call of iss_net_poll,
+  it will lock it itself;
+- invert order of lp->lock and opened_lock acquisition in the
+  iss_net_open to make it consistent with iss_net_poll;
+- replace spin_lock with spin_lock_bh when acquiring locks used in
+  iss_net_timer from non-atomic context;
+- replace spin_lock_irqsave with spin_lock_bh in the iss_net_start_xmit
+  as the driver doesn't use lp->lock in the hard IRQ context;
+- replace __SPIN_LOCK_UNLOCKED(lp.lock) with spin_lock_init, otherwise
+  lockdep is unhappy about using a non-static key.
+
+Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/xtensa/platforms/iss/network.c | 29 +++++++++++++++--------------
+ 1 file changed, 15 insertions(+), 14 deletions(-)
+
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index d05f8fe..17b1ef3 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -349,8 +349,8 @@ static void iss_net_timer(unsigned long priv)
+ {
+ 	struct iss_net_private *lp = (struct iss_net_private *)priv;
+ 
+-	spin_lock(&lp->lock);
+ 	iss_net_poll();
++	spin_lock(&lp->lock);
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 	spin_unlock(&lp->lock);
+ }
+@@ -361,7 +361,7 @@ static int iss_net_open(struct net_device *dev)
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	int err;
+ 
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	err = lp->tp.open(lp);
+ 	if (err < 0)
+@@ -376,9 +376,11 @@ static int iss_net_open(struct net_device *dev)
+ 	while ((err = iss_net_rx(dev)) > 0)
+ 		;
+ 
+-	spin_lock(&opened_lock);
++	spin_unlock_bh(&lp->lock);
++	spin_lock_bh(&opened_lock);
+ 	list_add(&lp->opened_list, &opened);
+-	spin_unlock(&opened_lock);
++	spin_unlock_bh(&opened_lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	init_timer(&lp->timer);
+ 	lp->timer_val = ISS_NET_TIMER_VALUE;
+@@ -387,7 +389,7 @@ static int iss_net_open(struct net_device *dev)
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 
+ out:
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return err;
+ }
+ 
+@@ -395,7 +397,7 @@ static int iss_net_close(struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	netif_stop_queue(dev);
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	spin_lock(&opened_lock);
+ 	list_del(&opened);
+@@ -405,18 +407,17 @@ static int iss_net_close(struct net_device *dev)
+ 
+ 	lp->tp.close(lp);
+ 
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return 0;
+ }
+ 
+ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+-	unsigned long flags;
+ 	int len;
+ 
+ 	netif_stop_queue(dev);
+-	spin_lock_irqsave(&lp->lock, flags);
++	spin_lock_bh(&lp->lock);
+ 
+ 	len = lp->tp.write(lp, &skb);
+ 
+@@ -438,7 +439,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		pr_err("%s: %s failed(%d)\n", dev->name, __func__, len);
+ 	}
+ 
+-	spin_unlock_irqrestore(&lp->lock, flags);
++	spin_unlock_bh(&lp->lock);
+ 
+ 	dev_kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+@@ -466,9 +467,9 @@ static int iss_net_set_mac(struct net_device *dev, void *addr)
+ 
+ 	if (!is_valid_ether_addr(hwaddr->sa_data))
+ 		return -EADDRNOTAVAIL;
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 	memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return 0;
+ }
+ 
+@@ -520,11 +521,11 @@ static int iss_net_configure(int index, char *init)
+ 	*lp = (struct iss_net_private) {
+ 		.device_list		= LIST_HEAD_INIT(lp->device_list),
+ 		.opened_list		= LIST_HEAD_INIT(lp->opened_list),
+-		.lock			= __SPIN_LOCK_UNLOCKED(lp.lock),
+ 		.dev			= dev,
+ 		.index			= index,
+-		};
++	};
+ 
++	spin_lock_init(&lp->lock);
+ 	/*
+ 	 * If this name ends up conflicting with an existing registered
+ 	 * netdevice, that is OK, register_netdev{,ice}() will notice this
+-- 
+2.3.6
+
+
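+Several of the bullets above follow from one rule: data also touched by
+a timer callback is accessed in softirq context, so process-context
+users must take the lock with the _bh variants or the softirq can
+deadlock against them on the same CPU. A rough sketch (structure name
+as in the driver, bodies elided):
+
+  /* at configure time: dynamically allocated locks need
+   * spin_lock_init() so lockdep gets a static key from the call site */
+  spin_lock_init(&lp->lock);
+
+  /* process context (e.g. .ndo_open): the timer takes lp->lock in
+   * softirq context, so block bottom halves while holding it here */
+  spin_lock_bh(&lp->lock);
+  /* ... state shared with iss_net_timer() ... */
+  spin_unlock_bh(&lp->lock);
+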
+From 6d4724e609d9640755996c9dc8f3f4ee79790957 Mon Sep 17 00:00:00 2001
+From: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Date: Thu, 2 Apr 2015 17:11:11 +0200
+Subject: [PATCH 160/219] gpio: mvebu: Fix mask/unmask management per irq chip
+ type
+Cc: mpagano@gentoo.org
+
+commit 61819549f572edd7fce53f228c0d8420cdc85f71 upstream.
+
+Level IRQ handlers and edge IRQ handlers are managed by two different
+sets of registers. But currently the driver uses the same mask for
+both registers. This leads to issues with the following scenario:
+
+First, an IRQ is requested on a GPIO to be triggered on an edge. After
+this, another IRQ is requested for a GPIO of the same bank but
+triggered on level. Then the first one will also be set up to be
+triggered on level. This leads to an interrupt storm.
+
+The different kinds of handlers are already associated with two
+different irq chip types. With this patch the driver uses a private
+mask for each one, which solves this issue.
+
+It has been tested on an Armada XP based board and on an Armada 375
+board. On both boards, with this patch applied, there is no such
+interrupt storm when running the previous scenario.
+
+This bug was already fixed, in a different way, in the legacy
+version of this driver by Evgeniy Dushistov:
+9ece8839b1277fb9128ff6833411614ab6c88d68 "ARM: orion: Fix for certain
+sequence of request_irq can cause irq storm". The fact that the new
+version of the gpio driver could be affected had been discussed there:
+http://thread.gmane.org/gmane.linux.ports.arm.kernel/344670/focus=364012
+
+Reported-by: Evgeniy A. Dushistov <dushistov@mail.ru>
+Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpio/gpio-mvebu.c | 24 ++++++++++++++++--------
+ 1 file changed, 16 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index d0bc123..1a54205 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -320,11 +320,13 @@ static void mvebu_gpio_edge_irq_mask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -332,11 +334,13 @@ static void mvebu_gpio_edge_irq_unmask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -344,11 +348,13 @@ static void mvebu_gpio_level_irq_mask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -356,11 +362,13 @@ static void mvebu_gpio_level_irq_unmask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+-- 
+2.3.6
+
+
+From fb8e85723598714f519a827184910324690e2896 Mon Sep 17 00:00:00 2001
+From: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Date: Fri, 27 Mar 2015 17:27:10 +0100
+Subject: [PATCH 161/219] clk: samsung: exynos4: Disable ARMCLK down feature on
+ Exynos4210 SoC
+Cc: mpagano@gentoo.org
+
+commit 3a9e9cb65be84d6c64fbe9c69a73c15d59f29454 upstream.
+
+Commit 42773b28e71d ("clk: samsung: exynos4: Enable ARMCLK
+down feature") enabled the ARMCLK down feature on all Exynos4
+SoCs.  Unfortunately, on the Exynos4210 SoC the ARMCLK down
+feature causes a lockup when the ondemand cpufreq governor is
+used.  Fix it by limiting the ARMCLK down feature to Exynos4x12 SoCs.
+
+This patch was tested on:
+- Exynos4210 SoC based Trats board
+- Exynos4210 SoC based Origen board
+- Exynos4412 SoC based Trats2 board
+- Exynos4412 SoC based Odroid-U3 board
+
+Cc: Daniel Drake <drake@endlessm.com>
+Cc: Tomasz Figa <t.figa@samsung.com>
+Cc: Kukjin Kim <kgene@kernel.org>
+Fixes: 42773b28e71d ("clk: samsung: exynos4: Enable ARMCLK down feature")
+Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Signed-off-by: Michael Turquette <mturquette@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/samsung/clk-exynos4.c | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
+index 51462e8..714d6ba 100644
+--- a/drivers/clk/samsung/clk-exynos4.c
++++ b/drivers/clk/samsung/clk-exynos4.c
+@@ -1354,7 +1354,7 @@ static struct samsung_pll_clock exynos4x12_plls[nr_plls] __initdata = {
+ 			VPLL_LOCK, VPLL_CON0, NULL),
+ };
+ 
+-static void __init exynos4_core_down_clock(enum exynos4_soc soc)
++static void __init exynos4x12_core_down_clock(void)
+ {
+ 	unsigned int tmp;
+ 
+@@ -1373,11 +1373,9 @@ static void __init exynos4_core_down_clock(enum exynos4_soc soc)
+ 	__raw_writel(tmp, reg_base + PWR_CTRL1);
+ 
+ 	/*
+-	 * Disable the clock up feature on Exynos4x12, in case it was
+-	 * enabled by bootloader.
++	 * Disable the clock up feature in case it was enabled by bootloader.
+ 	 */
+-	if (exynos4_soc == EXYNOS4X12)
+-		__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
++	__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
+ }
+ 
+ /* register exynos4 clocks */
+@@ -1474,7 +1472,8 @@ static void __init exynos4_clk_init(struct device_node *np,
+ 	samsung_clk_register_alias(ctx, exynos4_aliases,
+ 			ARRAY_SIZE(exynos4_aliases));
+ 
+-	exynos4_core_down_clock(soc);
++	if (soc == EXYNOS4X12)
++		exynos4x12_core_down_clock();
+ 	exynos4_clk_sleep_init();
+ 
+ 	samsung_clk_of_add_provider(np, ctx);
+-- 
+2.3.6
+
+
+From 41761ed1e3b457699c416c4e5eea1c86aa2d307c Mon Sep 17 00:00:00 2001
+From: Thierry Reding <treding@nvidia.com>
+Date: Mon, 23 Mar 2015 10:57:46 +0100
+Subject: [PATCH 162/219] clk: tegra: Register the proper number of resets
+Cc: mpagano@gentoo.org
+
+commit 5e43e259171e1eee8bc074d9c44be434e685087b upstream.
+
+The number of reset controls is 32 times the number of peripheral
+register banks rather than 32 times the number of clocks. This reduces
+(drastically) the number of reset controls registered from 10080 (315
+clocks * 32) to 224 (6 peripheral register banks * 32).
+
+This also fixes a potential crash because trying to use any of the
+excess reset controls (224-10079) would have caused accesses beyond
+the array bounds of the peripheral register banks definition array.
+
+Cc: Peter De Schrijver <pdeschrijver@nvidia.com>
+Cc: Prashant Gaikwad <pgaikwad@nvidia.com>
+Fixes: 6d5b988e7dc5 ("clk: tegra: implement a reset driver")
+Signed-off-by: Thierry Reding <treding@nvidia.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/tegra/clk.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
+index 9ddb754..7a1df61 100644
+--- a/drivers/clk/tegra/clk.c
++++ b/drivers/clk/tegra/clk.c
+@@ -272,7 +272,7 @@ void __init tegra_add_of_provider(struct device_node *np)
+ 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+ 
+ 	rst_ctlr.of_node = np;
+-	rst_ctlr.nr_resets = clk_num * 32;
++	rst_ctlr.nr_resets = periph_banks * 32;
+ 	reset_controller_register(&rst_ctlr);
+ }
+ 
+-- 
+2.3.6
+
+
+From 7c646709786798cd41b4e2feb7f9136214169c92 Mon Sep 17 00:00:00 2001
+From: Thierry Reding <treding@nvidia.com>
+Date: Thu, 26 Mar 2015 17:53:01 +0100
+Subject: [PATCH 163/219] clk: tegra: Use the proper parent for plld_dsi
+Cc: mpagano@gentoo.org
+
+commit c1d676cec572544616273d5853cb7cc38fbaa62b upstream.
+
+The current parent, plld_out0, does not exist. The proper name is
+pll_d_out0. While at it, rename the plld_dsi clock to pll_d_dsi_out to
+be more consistent with other clock names.
+
+Fixes: b270491eb9a0 ("clk: tegra: Define PLLD_DSI and remove dsia(b)_mux")
+Signed-off-by: Thierry Reding <treding@nvidia.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/tegra/clk-tegra124.c                | 14 ++++++++------
+ include/dt-bindings/clock/tegra124-car-common.h |  2 +-
+ 2 files changed, 9 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
+index 9a893f2..23ce0af 100644
+--- a/drivers/clk/tegra/clk-tegra124.c
++++ b/drivers/clk/tegra/clk-tegra124.c
+@@ -1110,16 +1110,18 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base,
+ 					1, 2);
+ 	clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
+ 
+-	clk = clk_register_gate(NULL, "plld_dsi", "plld_out0", 0,
++	clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
+ 				clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
+-	clks[TEGRA124_CLK_PLLD_DSI] = clk;
++	clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
+ 
+-	clk = tegra_clk_register_periph_gate("dsia", "plld_dsi", 0, clk_base,
+-					     0, 48, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsia", "pll_d_dsi_out", 0,
++					     clk_base, 0, 48,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIA] = clk;
+ 
+-	clk = tegra_clk_register_periph_gate("dsib", "plld_dsi", 0, clk_base,
+-					     0, 82, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsib", "pll_d_dsi_out", 0,
++					     clk_base, 0, 82,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIB] = clk;
+ 
+ 	/* emc mux */
+diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h
+index ae2eb17..a215609 100644
+--- a/include/dt-bindings/clock/tegra124-car-common.h
++++ b/include/dt-bindings/clock/tegra124-car-common.h
+@@ -297,7 +297,7 @@
+ #define TEGRA124_CLK_PLL_C4 270
+ #define TEGRA124_CLK_PLL_DP 271
+ #define TEGRA124_CLK_PLL_E_MUX 272
+-#define TEGRA124_CLK_PLLD_DSI 273
++#define TEGRA124_CLK_PLL_D_DSI_OUT 273
+ /* 274 */
+ /* 275 */
+ /* 276 */
+-- 
+2.3.6
+
+
+From 1d77b1031e7230917ed6c8fd1ac82f18a9c33c9d Mon Sep 17 00:00:00 2001
+From: Stephen Boyd <sboyd@codeaurora.org>
+Date: Mon, 23 Feb 2015 13:30:28 -0800
+Subject: [PATCH 164/219] clk: qcom: Fix i2c frequency table
+Cc: mpagano@gentoo.org
+
+commit 0bf0ff82c34da02ee5795101b328225a2d519594 upstream.
+
+PXO is 25MHz, not 27MHz. Fix the table.
+
+Fixes: 24d8fba44af3 "clk: qcom: Add support for IPQ8064's global
+clock controller (GCC)"
+
+Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
+Reviewed-by: Andy Gross <agross@codeaurora.org>
+Tested-by: Andy Gross <agross@codeaurora.org>
+Signed-off-by: Michael Turquette <mturquette@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/qcom/gcc-ipq806x.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
+index cbdc31d..a015bb0 100644
+--- a/drivers/clk/qcom/gcc-ipq806x.c
++++ b/drivers/clk/qcom/gcc-ipq806x.c
+@@ -525,8 +525,8 @@ static struct freq_tbl clk_tbl_gsbi_qup[] = {
+ 	{ 10800000, P_PXO,  1, 2,  5 },
+ 	{ 15060000, P_PLL8, 1, 2, 51 },
+ 	{ 24000000, P_PLL8, 4, 1,  4 },
++	{ 25000000, P_PXO,  1, 0,  0 },
+ 	{ 25600000, P_PLL8, 1, 1, 15 },
+-	{ 27000000, P_PXO,  1, 0,  0 },
+ 	{ 48000000, P_PLL8, 4, 1,  2 },
+ 	{ 51200000, P_PLL8, 1, 2, 15 },
+ 	{ }
+-- 
+2.3.6
+
+
+From 6761ec536ade4be25c5b846e71f96c8ecdc08347 Mon Sep 17 00:00:00 2001
+From: Stephen Boyd <sboyd@codeaurora.org>
+Date: Fri, 6 Mar 2015 15:41:53 -0800
+Subject: [PATCH 165/219] clk: qcom: Properly change rates for ahbix clock
+Cc: mpagano@gentoo.org
+
+commit 9d3745d44a7faa7d24db7facb1949a1378162f3e upstream.
+
+The ahbix clock can never be turned off in practice. To change the
+rates we need to switch the mux off the M/N counter to an always-on
+source (XO), reprogram the M/N counter to get the rate we want, and
+finally switch back to the M/N counter. Add a new ops structure
+for this type of clock so that we can set the rate properly.
+
+Fixes: c99e515a92e9 "clk: qcom: Add IPQ806X LPASS clock controller (LCC) driver"
+Tested-by: Kenneth Westfield <kwestfie@codeaurora.org>
+Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/qcom/clk-rcg.c     | 62 ++++++++++++++++++++++++++++++++++++++++++
+ drivers/clk/qcom/clk-rcg.h     |  1 +
+ drivers/clk/qcom/lcc-ipq806x.c |  5 ++--
+ 3 files changed, 65 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
+index 0039bd7..466f30c 100644
+--- a/drivers/clk/qcom/clk-rcg.c
++++ b/drivers/clk/qcom/clk-rcg.c
+@@ -495,6 +495,57 @@ static int clk_rcg_bypass_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	return __clk_rcg_set_rate(rcg, rcg->freq_tbl);
+ }
+ 
++/*
++ * This type of clock has a glitch-free mux that switches between the output of
++ * the M/N counter and an always on clock source (XO). When clk_set_rate() is
++ * called we need to make sure that we don't switch to the M/N counter if it
++ * isn't clocking because the mux will get stuck and the clock will stop
++ * outputting a clock. This can happen if the framework isn't aware that this
++ * clock is on and so clk_set_rate() doesn't turn on the new parent. To fix
++ * this we switch the mux in the enable/disable ops and reprogram the M/N
++ * counter in the set_rate op. We also make sure to switch away from the M/N
++ * counter in set_rate if software thinks the clock is off.
++ */
++static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,
++				unsigned long parent_rate)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	const struct freq_tbl *f;
++	int ret;
++	u32 gfm = BIT(10);
++
++	f = qcom_find_freq(rcg->freq_tbl, rate);
++	if (!f)
++		return -EINVAL;
++
++	/* Switch to XO to avoid glitches */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++	ret = __clk_rcg_set_rate(rcg, f);
++	/* Switch back to M/N if it's clocking */
++	if (__clk_is_enabled(hw->clk))
++		regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++
++	return ret;
++}
++
++static int clk_rcg_lcc_enable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use M/N */
++	return regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++}
++
++static void clk_rcg_lcc_disable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use XO */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++}
++
+ static int __clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate)
+ {
+ 	struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
+@@ -543,6 +594,17 @@ const struct clk_ops clk_rcg_bypass_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);
+ 
++const struct clk_ops clk_rcg_lcc_ops = {
++	.enable = clk_rcg_lcc_enable,
++	.disable = clk_rcg_lcc_disable,
++	.get_parent = clk_rcg_get_parent,
++	.set_parent = clk_rcg_set_parent,
++	.recalc_rate = clk_rcg_recalc_rate,
++	.determine_rate = clk_rcg_determine_rate,
++	.set_rate = clk_rcg_lcc_set_rate,
++};
++EXPORT_SYMBOL_GPL(clk_rcg_lcc_ops);
++
+ const struct clk_ops clk_dyn_rcg_ops = {
+ 	.enable = clk_enable_regmap,
+ 	.is_enabled = clk_is_enabled_regmap,
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index 687e41f..d09d06b 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -96,6 +96,7 @@ struct clk_rcg {
+ 
+ extern const struct clk_ops clk_rcg_ops;
+ extern const struct clk_ops clk_rcg_bypass_ops;
++extern const struct clk_ops clk_rcg_lcc_ops;
+ 
+ #define to_clk_rcg(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg, clkr)
+ 
+diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
+index c9ff27b..19378b0 100644
+--- a/drivers/clk/qcom/lcc-ipq806x.c
++++ b/drivers/clk/qcom/lcc-ipq806x.c
+@@ -386,13 +386,12 @@ static struct clk_rcg ahbix_clk = {
+ 	.freq_tbl = clk_tbl_ahbix,
+ 	.clkr = {
+ 		.enable_reg = 0x38,
+-		.enable_mask = BIT(10), /* toggle the gfmux to select mn/pxo */
++		.enable_mask = BIT(11),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "ahbix",
+ 			.parent_names = lcc_pxo_pll4,
+ 			.num_parents = 2,
+-			.ops = &clk_rcg_ops,
+-			.flags = CLK_SET_RATE_GATE,
++			.ops = &clk_rcg_lcc_ops,
+ 		},
+ 	},
+ };
+-- 
+2.3.6
+
+
+From 0602addf5fe488d8ced792e6a8f7da073516d33b Mon Sep 17 00:00:00 2001
+From: Archit Taneja <architt@codeaurora.org>
+Date: Wed, 4 Mar 2015 15:19:35 +0530
+Subject: [PATCH 166/219] clk: qcom: fix RCG M/N counter configuration
+Cc: mpagano@gentoo.org
+
+commit 0b21503dbbfa669dbd847b33578d4041513cddb2 upstream.
+
+Currently, an RCG's M/N counter (used for fractional division) is
+set to either 'bypass' (counter disabled) or 'dual edge' (counter
+enabled) based on whether the corresponding rcg struct has a mnd
+field specified and a non-zero N.
+
+In the case where M and N are the same value, the M/N counter is
+still enabled by code even though no division takes place.
+Leaving the RCG in such a state can result in improper behavior.
+This was observed with the DSI pixel clock RCG when M and N were
+both set to 1.
+
+Add an additional check (M != N) to enable the M/N counter only
+when it's needed for fraction division.
+
+Signed-off-by: Archit Taneja <architt@codeaurora.org>
+Fixes: bcd61c0f535a (clk: qcom: Add support for root clock
+generators (RCGs))
+Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/qcom/clk-rcg2.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 742acfa..381f274 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -243,7 +243,7 @@ static int clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ 	mask |= CFG_SRC_SEL_MASK | CFG_MODE_MASK;
+ 	cfg = f->pre_div << CFG_SRC_DIV_SHIFT;
+ 	cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT;
+-	if (rcg->mnd_width && f->n)
++	if (rcg->mnd_width && f->n && (f->m != f->n))
+ 		cfg |= CFG_MODE_DUAL_EDGE;
+ 	ret = regmap_update_bits(rcg->clkr.regmap,
+ 			rcg->cmd_rcgr + CFG_REG, mask, cfg);
+-- 
+2.3.6
+
+
+From ea8ae530984cacf55cebc6a12bc43061f1dd41ed Mon Sep 17 00:00:00 2001
+From: Stephen Boyd <sboyd@codeaurora.org>
+Date: Thu, 26 Feb 2015 19:34:35 -0800
+Subject: [PATCH 167/219] clk: qcom: Fix ipq806x LCC frequency tables
+Cc: mpagano@gentoo.org
+
+commit b3261d768bcdd4b368179ed85becf38c95461848 upstream.
+
+These frequency tables list the wrong rates. Either they don't
+have the correct frequency at all, or they're specified in kHz
+instead of Hz. Fix it.
+
+Fixes: c99e515a92e9 "clk: qcom: Add IPQ806X LPASS clock controller (LCC) driver"
+Tested-by: Kenneth Westfield <kwestfie@codeaurora.org>
+Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/clk/qcom/lcc-ipq806x.c | 18 +++++++++---------
+ 1 file changed, 9 insertions(+), 9 deletions(-)
+
+diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
+index 19378b0..a6d3a67 100644
+--- a/drivers/clk/qcom/lcc-ipq806x.c
++++ b/drivers/clk/qcom/lcc-ipq806x.c
+@@ -294,14 +294,14 @@ static struct clk_regmap_mux pcm_clk = {
+ };
+ 
+ static struct freq_tbl clk_tbl_aif_osr[] = {
+-	{  22050, P_PLL4, 1, 147, 20480 },
+-	{  32000, P_PLL4, 1,   1,    96 },
+-	{  44100, P_PLL4, 1, 147, 10240 },
+-	{  48000, P_PLL4, 1,   1,    64 },
+-	{  88200, P_PLL4, 1, 147,  5120 },
+-	{  96000, P_PLL4, 1,   1,    32 },
+-	{ 176400, P_PLL4, 1, 147,  2560 },
+-	{ 192000, P_PLL4, 1,   1,    16 },
++	{  2822400, P_PLL4, 1, 147, 20480 },
++	{  4096000, P_PLL4, 1,   1,    96 },
++	{  5644800, P_PLL4, 1, 147, 10240 },
++	{  6144000, P_PLL4, 1,   1,    64 },
++	{ 11289600, P_PLL4, 1, 147,  5120 },
++	{ 12288000, P_PLL4, 1,   1,    32 },
++	{ 22579200, P_PLL4, 1, 147,  2560 },
++	{ 24576000, P_PLL4, 1,   1,    16 },
+ 	{ },
+ };
+ 
+@@ -360,7 +360,7 @@ static struct clk_branch spdif_clk = {
+ };
+ 
+ static struct freq_tbl clk_tbl_ahbix[] = {
+-	{ 131072, P_PLL4, 1, 1, 3 },
++	{ 131072000, P_PLL4, 1, 1, 3 },
+ 	{ },
+ };
+ 
+-- 
+2.3.6
+
+
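+Worth noting: each corrected clk_tbl_aif_osr entry is exactly 128 times
+the corresponding audio sample rate, i.e. the oversampling master clock
+in Hz rather than the sample rate itself (the 128x relationship is
+inferred from the new values, not stated by the patch). A standalone
+check of that arithmetic:
+
+  #include <stdio.h>
+
+  int main(void)
+  {
+          unsigned int fs[] = { 22050, 32000, 44100, 48000,
+                                88200, 96000, 176400, 192000 };
+          for (int i = 0; i < 8; i++)
+                  printf("%6u Hz -> %8u Hz\n", fs[i], 128 * fs[i]);
+          return 0;  /* prints 2822400, 4096000, ..., 24576000 */
+  }
+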
+From b1c9b99dda6dfe49023214a772ff59debfaa6824 Mon Sep 17 00:00:00 2001
+From: Ben Collins <ben.c@servergy.com>
+Date: Fri, 3 Apr 2015 16:09:46 +0000
+Subject: [PATCH 168/219] dm crypt: fix deadlock when async crypto algorithm
+ returns -EBUSY
+Cc: mpagano@gentoo.org
+
+commit 0618764cb25f6fa9fb31152995de42a8a0496475 upstream.
+
+I suspect this doesn't show up for most people because software
+algorithms typically don't have a sense of being too busy.  However,
+when working with the Freescale CAAM driver it will return -EBUSY on
+occasion under heavy load -- which resulted in a dm-crypt deadlock.
+
+After checking the logic in some other drivers, the scheme for
+crypt_convert() and its callback, kcryptd_async_done(), was not
+correctly laid out to properly handle -EBUSY or -EINPROGRESS.
+
+Fix this by using the completion for both -EBUSY and -EINPROGRESS.  Now
+crypt_convert()'s use of completion is comparable to
+af_alg_wait_for_completion().  Similarly, kcryptd_async_done() follows
+the pattern used in af_alg_complete().
+
+Before this fix dm-crypt would lock up within 1-2 minutes running with
+the CAAM driver.  The fix was regression tested against software
+algorithms on PPC32 and x86_64, and things seem perfectly happy there as well.
+
+Signed-off-by: Ben Collins <ben.c@servergy.com>
+Signed-off-by: Mike Snitzer <snitzer@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/md/dm-crypt.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 713a962..41473929 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -925,11 +925,10 @@ static int crypt_convert(struct crypt_config *cc,
+ 
+ 		switch (r) {
+ 		/* async */
++		case -EINPROGRESS:
+ 		case -EBUSY:
+ 			wait_for_completion(&ctx->restart);
+ 			reinit_completion(&ctx->restart);
+-			/* fall through*/
+-		case -EINPROGRESS:
+ 			ctx->req = NULL;
+ 			ctx->cc_sector++;
+ 			continue;
+@@ -1346,10 +1345,8 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+ 	struct crypt_config *cc = io->cc;
+ 
+-	if (error == -EINPROGRESS) {
+-		complete(&ctx->restart);
++	if (error == -EINPROGRESS)
+ 		return;
+-	}
+ 
+ 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
+ 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
+@@ -1360,12 +1357,15 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
+ 
+ 	if (!atomic_dec_and_test(&ctx->cc_pending))
+-		return;
++		goto done;
+ 
+ 	if (bio_data_dir(io->base_bio) == READ)
+ 		kcryptd_crypt_read_done(io);
+ 	else
+ 		kcryptd_crypt_write_io_submit(io, 1);
++done:
++	if (!completion_done(&ctx->restart))
++		complete(&ctx->restart);
+ }
+ 
+ static void kcryptd_crypt(struct work_struct *work)
+-- 
+2.3.6
+
+
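+The pattern adopted here mirrors the crypto API's own users (compare
+af_alg_wait_for_completion()): both -EINPROGRESS and -EBUSY mean the
+request was queued and the completion callback will fire, so the
+submitter waits on either. A rough fragment of that shape (the
+submission call and error handling are abbreviated):
+
+  switch (crypto_ablkcipher_encrypt(req)) {
+  case 0:
+          break;  /* completed synchronously */
+  case -EINPROGRESS:
+  case -EBUSY:
+          /* the callback will complete(&ctx->restart) when done */
+          wait_for_completion(&ctx->restart);
+          reinit_completion(&ctx->restart);
+          break;
+  default:
+          /* hard error: no callback is coming */
+          break;
+  }
+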
+From 39b991a4765e2f7bd2faa383c66df5237117a8bb Mon Sep 17 00:00:00 2001
+From: Ken Xue <Ken.Xue@amd.com>
+Date: Mon, 9 Mar 2015 17:10:13 +0800
+Subject: [PATCH 169/219] serial: 8250_dw: add support for AMD SOC Carrizo
+Cc: mpagano@gentoo.org
+
+commit 5ef86b74209db33c133b5f18738dd8f3189b63a1 upstream.
+
+Add ACPI identifier for UART on AMD SOC Carrizo.
+
+Signed-off-by: Ken Xue <Ken.Xue@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/tty/serial/8250/8250_dw.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 6ae5b85..7a80250 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -629,6 +629,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "80860F0A", 0 },
+ 	{ "8086228A", 0 },
+ 	{ "APMC0D08", 0},
++	{ "AMD0020", 0 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
+-- 
+2.3.6
+
+
+From 8067aec1b07ce3f80c8209eb3589abdf38753ac1 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig?= <u.kleine-koenig@pengutronix.de>
+Date: Tue, 24 Feb 2015 11:17:05 +0100
+Subject: [PATCH 170/219] serial: imx: Fix clearing of receiver overrun flag
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 91555ce9012557b2d621d7b0b6ec694218a2a9bc upstream.
+
+The writeable bits in the USR2 register are all "write 1 to
+clear" so only write the bits that actually should be cleared.
+
+Fixes: f1f836e4209e ("serial: imx: Add Rx Fifo overrun error message")
+Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/tty/serial/imx.c | 8 +++-----
+ 1 file changed, 3 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 0eb29b1..2306191 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -818,7 +818,7 @@ static irqreturn_t imx_int(int irq, void *dev_id)
+ 	if (sts2 & USR2_ORE) {
+ 		dev_err(sport->port.dev, "Rx FIFO overrun\n");
+ 		sport->port.icount.overrun++;
+-		writel(sts2 | USR2_ORE, sport->port.membase + USR2);
++		writel(USR2_ORE, sport->port.membase + USR2);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -1181,10 +1181,12 @@ static int imx_startup(struct uart_port *port)
+ 		imx_uart_dma_init(sport);
+ 
+ 	spin_lock_irqsave(&sport->port.lock, flags);
++
+ 	/*
+ 	 * Finally, clear and enable interrupts
+ 	 */
+ 	writel(USR1_RTSD, sport->port.membase + USR1);
++	writel(USR2_ORE, sport->port.membase + USR2);
+ 
+ 	if (sport->dma_is_inited && !sport->dma_is_enabled)
+ 		imx_enable_dma(sport);
+@@ -1199,10 +1201,6 @@ static int imx_startup(struct uart_port *port)
+ 
+ 	writel(temp, sport->port.membase + UCR1);
+ 
+-	/* Clear any pending ORE flag before enabling interrupt */
+-	temp = readl(sport->port.membase + USR2);
+-	writel(temp | USR2_ORE, sport->port.membase + USR2);
+-
+ 	temp = readl(sport->port.membase + UCR4);
+ 	temp |= UCR4_OREN;
+ 	writel(temp, sport->port.membase + UCR4);
+-- 
+2.3.6
+
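+The write-1-to-clear semantics are easy to model in plain C; the toy
+register below (values made up) shows why writing back sts2 | USR2_ORE
+clears every pending bit, not just the overrun flag:
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	static uint32_t usr2 = 0x5;	/* two events pending: bits 0 and 2 */
+
+	/* w1c register: each 1 written clears the corresponding bit */
+	static void w1c_write(uint32_t val) { usr2 &= ~val; }
+
+	int main(void)
+	{
+		uint32_t ore = 1u << 2;		/* stand-in for USR2_ORE */
+
+		w1c_write(usr2 | ore);		/* buggy: clears bit 0 too */
+		printf("buggy clear leaves %#x\n", usr2);  /* 0 - event lost */
+
+		usr2 = 0x5;
+		w1c_write(ore);			/* fixed: clears only ORE */
+		printf("fixed clear leaves %#x\n", usr2);  /* 0x1 - bit 0 kept */
+		return 0;
+	}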
+
+From cc1064fc8f1d71f9c3429e6bdd8129629fc39784 Mon Sep 17 00:00:00 2001
+From: Peter Hurley <peter@hurleysoftware.com>
+Date: Mon, 9 Mar 2015 14:05:01 -0400
+Subject: [PATCH 171/219] serial: 8250: Check UART_SCR is writable
+Cc: mpagano@gentoo.org
+
+commit f01a0bd8921b9d6668d41fae3198970e6318f532 upstream.
+
+Au1x00/RT2800+ doesn't implement the 8250 scratch register (and
+this may be true of other h/w currently supported by the 8250 driver);
+read back the canary value written to the scratch register, and only
+enable the console h/w restart after resume from system suspend if
+the write stuck.
+
+Fixes: 4516d50aabedb ("serial: 8250: Use canary to restart console ...")
+Reported-by: Mason <slash.tmp@free.fr>
+Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/tty/serial/8250/8250_core.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index deae122..d465ace 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -3444,7 +3444,8 @@ void serial8250_suspend_port(int line)
+ 	    port->type != PORT_8250) {
+ 		unsigned char canary = 0xa5;
+ 		serial_out(up, UART_SCR, canary);
+-		up->canary = canary;
++		if (serial_in(up, UART_SCR) == canary)
++			up->canary = canary;
+ 	}
+ 
+ 	uart_suspend_port(&serial8250_reg, port);
+-- 
+2.3.6
+
+
+From 5cd06dd45f7cc5c15517266a61f8051ec16912ff Mon Sep 17 00:00:00 2001
+From: "Martin K. Petersen" <martin.petersen@oracle.com>
+Date: Tue, 14 Apr 2015 16:56:23 -0400
+Subject: [PATCH 172/219] sd: Unregister integrity profile
+Cc: mpagano@gentoo.org
+
+commit e727c42bd55794765c460b7ac2b6cc969f2a9698 upstream.
+
+The new integrity code did not correctly unregister the profile for SD
+disks. Call blk_integrity_unregister() when we release a disk.
+
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Reported-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
+Tested-by: Sagi Grimberg <sagig@mellanox.com>
+Signed-off-by: James Bottomley <JBottomley@Odin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/scsi/sd.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 6b78476..3290a3e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3100,6 +3100,7 @@ static void scsi_disk_release(struct device *dev)
+ 	ida_remove(&sd_index_ida, sdkp->index);
+ 	spin_unlock(&sd_index_lock);
+ 
++	blk_integrity_unregister(disk);
+ 	disk->private_data = NULL;
+ 	put_disk(disk);
+ 	put_device(&sdkp->device->sdev_gendev);
+-- 
+2.3.6
+
+
+From 5c87838eadeb1a63546e36f76917241d8fa6ea52 Mon Sep 17 00:00:00 2001
+From: "Martin K. Petersen" <martin.petersen@oracle.com>
+Date: Tue, 14 Apr 2015 17:11:03 -0400
+Subject: [PATCH 173/219] sd: Fix missing ATO tag check
+Cc: mpagano@gentoo.org
+
+commit e557990e358934fb168d30371c9c0f63e314c6b8 upstream.
+
+3aec2f41a8bae introduced a merge error where we would end up checking
+for sdkp instead of sdkp->ATO. Fix this so we register the app tag
+capability correctly.
+
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
+Signed-off-by: James Bottomley <JBottomley@Odin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/scsi/sd_dif.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/scsi/sd_dif.c b/drivers/scsi/sd_dif.c
+index 14c7d42..5c06d29 100644
+--- a/drivers/scsi/sd_dif.c
++++ b/drivers/scsi/sd_dif.c
+@@ -77,7 +77,7 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
+ 
+ 		disk->integrity->flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ 
+-		if (!sdkp)
++		if (!sdkp->ATO)
+ 			return;
+ 
+ 		if (type == SD_DIF_TYPE3_PROTECTION)
+-- 
+2.3.6
+
+
+From b9b4320c38bf2fadfd9299c36165c46f131200e0 Mon Sep 17 00:00:00 2001
+From: "K. Y. Srinivasan" <kys@microsoft.com>
+Date: Fri, 27 Feb 2015 11:26:04 -0800
+Subject: [PATCH 174/219] Drivers: hv: vmbus: Fix a bug in the error path in
+ vmbus_open()
+Cc: mpagano@gentoo.org
+
+commit 40384e4bbeb9f2651fe9bffc0062d9f31ef625bf upstream.
+
+Correctly roll back state if the failure occurs after we have handed
+ownership of the buffer over to the host.
+
+Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hv/channel.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index 2978f5e..00bc30e 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -135,7 +135,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ 			   GFP_KERNEL);
+ 	if (!open_info) {
+ 		err = -ENOMEM;
+-		goto error0;
++		goto error_gpadl;
+ 	}
+ 
+ 	init_completion(&open_info->waitevent);
+@@ -151,7 +151,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ 
+ 	if (userdatalen > MAX_USER_DEFINED_BYTES) {
+ 		err = -EINVAL;
+-		goto error0;
++		goto error_gpadl;
+ 	}
+ 
+ 	if (userdatalen)
+@@ -195,6 +195,9 @@ error1:
+ 	list_del(&open_info->msglistentry);
+ 	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+ 
++error_gpadl:
++	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
++
+ error0:
+ 	free_pages((unsigned long)out,
+ 		get_order(send_ringbuffer_size + recv_ringbuffer_size));
+-- 
+2.3.6
+
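+The fix follows the kernel's layered-unwind idiom; a self-contained
+model (the step functions are invented) shows why a failure after the
+hand-over needs its own label that also reclaims the buffer from the
+host before freeing it:
+
+	#include <stdio.h>
+
+	static int alloc_ring(void)      { puts("alloc ring");   return 0; }
+	static int hand_to_host(void)    { puts("hand to host"); return 0; }
+	static int open_channel(void)    { puts("open channel"); return -1; }
+	static void teardown_gpadl(void) { puts("reclaim buffer from host"); }
+	static void free_ring(void)      { puts("free ring"); }
+
+	static int vmbus_open_sketch(void)
+	{
+		int err;
+
+		err = alloc_ring();
+		if (err)
+			return err;
+		err = hand_to_host();
+		if (err)
+			goto error_free;
+		err = open_channel();
+		if (err)
+			goto error_gpadl;	/* the path the fix adds */
+		return 0;
+
+	error_gpadl:
+		teardown_gpadl();
+	error_free:
+		free_ring();
+		return err;
+	}
+
+	int main(void)
+	{
+		printf("result: %d\n", vmbus_open_sketch());
+		return 0;
+	}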
+
+From 1f77a24829ac6dbe9a942752ee15054d403653d9 Mon Sep 17 00:00:00 2001
+From: James Bottomley <JBottomley@Odin.com>
+Date: Wed, 15 Apr 2015 22:16:01 -0700
+Subject: [PATCH 175/219] mvsas: fix panic on expander attached SATA devices
+Cc: mpagano@gentoo.org
+
+commit 56cbd0ccc1b508de19561211d7ab9e1c77e6b384 upstream.
+
+mvsas gives a general protection fault when it encounters an
+expander-attached ATA device.  Analysis of mvs_task_prep_ata() shows that the
+driver assumes all ATA devices are locally attached and obtains the phy mask
+by indexing the local phy table (in the HBA structure) with the phy id.  Since
+expanders have many more phys than the HBA, the index into the HBA phy table
+overflows and returns rubbish as the pointer.
+
+mvs_task_prep_ssp() instead derives the phy mask from the port properties.
+Mirror this in mvs_task_prep_ata() to fix the panic.
+
+Reported-by: Adam Talbot <ajtalbot1@gmail.com>
+Tested-by: Adam Talbot <ajtalbot1@gmail.com>
+Signed-off-by: James Bottomley <JBottomley@Odin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/scsi/mvsas/mv_sas.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 2d5ab6d..454536c 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -441,14 +441,11 @@ static u32 mvs_get_ncq_tag(struct sas_task *task, u32 *tag)
+ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 			     struct mvs_task_exec_info *tei)
+ {
+-	struct sas_ha_struct *sha = mvi->sas;
+ 	struct sas_task *task = tei->task;
+ 	struct domain_device *dev = task->dev;
+ 	struct mvs_device *mvi_dev = dev->lldd_dev;
+ 	struct mvs_cmd_hdr *hdr = tei->hdr;
+ 	struct asd_sas_port *sas_port = dev->port;
+-	struct sas_phy *sphy = dev->phy;
+-	struct asd_sas_phy *sas_phy = sha->sas_phy[sphy->number];
+ 	struct mvs_slot_info *slot;
+ 	void *buf_prd;
+ 	u32 tag = tei->tag, hdr_tag;
+@@ -468,7 +465,7 @@ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 	slot->tx = mvi->tx_prod;
+ 	del_q = TXQ_MODE_I | tag |
+ 		(TXQ_CMD_STP << TXQ_CMD_SHIFT) |
+-		(MVS_PHY_ID << TXQ_PHY_SHIFT) |
++		((sas_port->phy_mask & TXQ_PHY_MASK) << TXQ_PHY_SHIFT) |
+ 		(mvi_dev->taskfileset << TXQ_SRS_SHIFT);
+ 	mvi->tx[mvi->tx_prod] = cpu_to_le32(del_q);
+ 
+-- 
+2.3.6
+
+
+From 287189f739322ef2f2b7698e613c85e7be8c9b9c Mon Sep 17 00:00:00 2001
+From: Sifan Naeem <sifan.naeem@imgtec.com>
+Date: Tue, 10 Feb 2015 07:41:56 -0300
+Subject: [PATCH 176/219] rc: img-ir: fix error in parameters passed to
+ free_irq()
+Cc: mpagano@gentoo.org
+
+commit 80ccf4ad06dc9d2f06a8347b2d309cdc959f72b3 upstream.
+
+img_ir_remove() passes a pointer to the ISR function as the 2nd
+parameter to free_irq() instead of a pointer to the device data
+structure.
+This issue causes unloading the img-ir module to fail with the
+warning below after building and loading img-ir as a module.
+
+WARNING: CPU: 2 PID: 155 at ../kernel/irq/manage.c:1278
+__free_irq+0xb4/0x214() Trying to free already-free IRQ 58
+Modules linked in: img_ir(-)
+CPU: 2 PID: 155 Comm: rmmod Not tainted 3.14.0 #55 ...
+Call Trace:
+...
+[<8048d420>] __free_irq+0xb4/0x214
+[<8048d6b4>] free_irq+0xac/0xf4
+[<c009b130>] img_ir_remove+0x54/0xd4 [img_ir] [<8073ded0>]
+platform_drv_remove+0x30/0x54 ...
+
+Fixes: 160a8f8aec4d ("[media] rc: img-ir: add base driver")
+
+Signed-off-by: Sifan Naeem <sifan.naeem@imgtec.com>
+Acked-by: James Hogan <james.hogan@imgtec.com>
+Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/media/rc/img-ir/img-ir-core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/media/rc/img-ir/img-ir-core.c b/drivers/media/rc/img-ir/img-ir-core.c
+index 77c78de..7020659 100644
+--- a/drivers/media/rc/img-ir/img-ir-core.c
++++ b/drivers/media/rc/img-ir/img-ir-core.c
+@@ -146,7 +146,7 @@ static int img_ir_remove(struct platform_device *pdev)
+ {
+ 	struct img_ir_priv *priv = platform_get_drvdata(pdev);
+ 
+-	free_irq(priv->irq, img_ir_isr);
++	free_irq(priv->irq, priv);
+ 	img_ir_remove_hw(priv);
+ 	img_ir_remove_raw(priv);
+ 
+-- 
+2.3.6
+
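+For reference, the request_irq()/free_irq() contract the fix restores,
+sketched with the driver's names (flags and error handling are
+illustrative): the dev_id cookie identifies which registration to drop,
+it is never the handler pointer.
+
+	/* probe: priv is the dev_id cookie tied to this registration */
+	ret = request_irq(priv->irq, img_ir_isr, IRQF_SHARED, "img-ir", priv);
+
+	/* remove: free_irq() matches on (irq, dev_id), so pass the same
+	 * cookie back -- not the ISR function pointer */
+	free_irq(priv->irq, priv);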
+
+From ecfdbe6a56ddd74036337f651bb2bd933341faa7 Mon Sep 17 00:00:00 2001
+From: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+Date: Tue, 10 Mar 2015 11:37:14 -0300
+Subject: [PATCH 177/219] stk1160: Make sure current buffer is released
+Cc: mpagano@gentoo.org
+
+commit aeff09276748b66072f2db2e668cec955cf41959 upstream.
+
+The available (i.e. not yet used) buffers are returned by
+stk1160_clear_queue() on the stop_streaming() path. However, this is
+insufficient: the current buffer must be released as well. Fix it.
+
+Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
+Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/media/usb/stk1160/stk1160-v4l.c | 17 +++++++++++++++--
+ 1 file changed, 15 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 65a326c..749ad56 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -240,6 +240,11 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+ 	if (mutex_lock_interruptible(&dev->v4l_lock))
+ 		return -ERESTARTSYS;
+ 
++	/*
++	 * Once URBs are cancelled, the URB complete handler
++	 * won't be running. This is required to safely release the
++	 * current buffer (dev->isoc_ctl.buf).
++	 */
+ 	stk1160_cancel_isoc(dev);
+ 
+ 	/*
+@@ -620,8 +625,16 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		stk1160_info("buffer [%p/%d] aborted\n",
+ 				buf, buf->vb.v4l2_buf.index);
+ 	}
+-	/* It's important to clear current buffer */
+-	dev->isoc_ctl.buf = NULL;
++
++	/* It's important to release the current buffer */
++	if (dev->isoc_ctl.buf) {
++		buf = dev->isoc_ctl.buf;
++		dev->isoc_ctl.buf = NULL;
++
++		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
++		stk1160_info("buffer [%p/%d] aborted\n",
++				buf, buf->vb.v4l2_buf.index);
++	}
+ 	spin_unlock_irqrestore(&dev->buf_lock, flags);
+ }
+ 
+-- 
+2.3.6
+
+
+From d9bc10f7ccda1d662f3cd98f0949a03fe27b69e4 Mon Sep 17 00:00:00 2001
+From: Yann Droneaud <ydroneaud@opteya.com>
+Date: Mon, 13 Apr 2015 14:56:22 +0200
+Subject: [PATCH 178/219] IB/core: disallow registering 0-sized memory region
+Cc: mpagano@gentoo.org
+
+commit 8abaae62f3fdead8f4ce0ab46b4ab93dee39bab2 upstream.
+
+If ib_umem_get() is called with a size equal to 0 and a
+non-page-aligned address, one page will be pinned and a
+0-sized umem will be returned to the caller.
+
+This should not be allowed: it's not expected for a memory
+region to have a size equal to 0.
+
+This patch adds a check to explicitly refuse to register
+a 0-sized region.
+
+Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
+Cc: Shachar Raindel <raindel@mellanox.com>
+Cc: Jack Morgenstein <jackm@mellanox.com>
+Cc: Or Gerlitz <ogerlitz@mellanox.com>
+Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
+Signed-off-by: Doug Ledford <dledford@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/core/umem.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 8c014b5..9ac4068 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -99,6 +99,9 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ 	if (dmasync)
+ 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
+ 
++	if (!size)
++		return ERR_PTR(-EINVAL);
++
+ 	/*
+ 	 * If the combination of the addr and size requested for this memory
+ 	 * region causes an integer overflow, return error.
+-- 
+2.3.6
+
+
+From d0ddb13fc24a64a940e8050ea076e59bb04597f4 Mon Sep 17 00:00:00 2001
+From: Yann Droneaud <ydroneaud@opteya.com>
+Date: Mon, 13 Apr 2015 14:56:23 +0200
+Subject: [PATCH 179/219] IB/core: don't disallow registering region starting
+ at 0x0
+Cc: mpagano@gentoo.org
+
+commit 66578b0b2f69659f00b6169e6fe7377c4b100d18 upstream.
+
+In a call to ib_umem_get(), if the address is 0x0 and the size is
+already page aligned, the check added in commit 8494057ab5e4
+("IB/uverbs: Prevent integer overflow in ib_umem_get address
+arithmetic") will refuse to register a memory region that
+could otherwise be valid (provided vm.mmap_min_addr sysctl
+and mmap_low_allowed SELinux knobs allow userspace to map
+something at address 0x0).
+
+This patch allows such registrations again: ib_umem_get()
+should probably not care about the base address, provided the
+memory can be pinned with get_user_pages().
+
+There are two possible overflows, in (addr + size) and in
+PAGE_ALIGN(addr + size); this patch keeps ensuring that neither
+happens while allowing memory at address 0x0 to be pinned.
+Note that the case of size equal to 0 is no longer (partially)
+handled here, as 0-length memory regions are disallowed by an
+earlier check.
+
+Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
+Cc: Shachar Raindel <raindel@mellanox.com>
+Cc: Jack Morgenstein <jackm@mellanox.com>
+Cc: Or Gerlitz <ogerlitz@mellanox.com>
+Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
+Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
+Reviewed-by: Haggai Eran <haggaie@mellanox.com>
+Signed-off-by: Doug Ledford <dledford@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/core/umem.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 9ac4068..38acb3c 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -106,8 +106,8 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ 	 * If the combination of the addr and size requested for this memory
+ 	 * region causes an integer overflow, return error.
+ 	 */
+-	if ((PAGE_ALIGN(addr + size) <= size) ||
+-	    (PAGE_ALIGN(addr + size) <= addr))
++	if (((addr + size) < addr) ||
++	    PAGE_ALIGN(addr + size) < (addr + size))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (!can_do_mlock())
+-- 
+2.3.6
+
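+Both overflow checks compose as below; this standalone model (PAGE_SIZE
+pinned to 4096 for the demo) accepts a region starting at 0x0, rejects
+zero-length regions per the previous patch, and still catches a wrap in
+either the sum or its page rounding:
+
+	#include <stdio.h>
+	#include <stddef.h>
+
+	#define PAGE_SIZE	4096UL
+	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
+
+	static int umem_range_ok(unsigned long addr, size_t size)
+	{
+		if (!size)
+			return 0;	/* 0-sized regions refused earlier */
+		if (addr + size < addr)
+			return 0;	/* addr + size wrapped */
+		if (PAGE_ALIGN(addr + size) < addr + size)
+			return 0;	/* page rounding wrapped */
+		return 1;
+	}
+
+	int main(void)
+	{
+		printf("addr 0, 2 pages: %d\n", umem_range_ok(0x0, 8192));
+		printf("zero length:     %d\n", umem_range_ok(0x1000, 0));
+		printf("sum wraps:       %d\n", umem_range_ok(~0UL - 100, 8192));
+		printf("rounding wraps:  %d\n", umem_range_ok(~0UL - 4000, 4000));
+		return 0;
+	}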
+
+From 7fc80a4ea6d5b307470a6bb165b293e334b22c20 Mon Sep 17 00:00:00 2001
+From: Erez Shitrit <erezsh@mellanox.com>
+Date: Thu, 2 Apr 2015 13:39:05 +0300
+Subject: [PATCH 180/219] IB/mlx4: Fix WQE LSO segment calculation
+Cc: mpagano@gentoo.org
+
+commit ca9b590caa17bcbbea119594992666e96cde9c2f upstream.
+
+The current code subtracts the size of the packet headers from the
+mss (which is the gso_size from the kernel skb).
+
+It shouldn't do that, because the mss that comes from the stack
+(e.g. IPoIB) includes only the TCP payload, without the headers.
+
+The result is an indication to the HW that each packet it sends is
+smaller than it could be, so too many packets will be sent for big
+messages.
+
+An easy way to demonstrate one more aspect of the problem is to
+configure the ipoib mtu to be less than 2*hlen (2*56) and then run
+an app sending big TCP messages. This tells the HW to send packets
+with a giant length (a negative value, which under unsigned
+arithmetic becomes a huge positive one), and the QP moves to the
+SQE state.
+
+Fixes: b832be1e4007 ('IB/mlx4: Add IPoIB LSO support')
+Reported-by: Matthew Finlay <matt@mellanox.com>
+Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
+Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
+Signed-off-by: Doug Ledford <dledford@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/hw/mlx4/qp.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index ed2bd67..fbde33a 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -2605,8 +2605,7 @@ static int build_lso_seg(struct mlx4_wqe_lso_seg *wqe, struct ib_send_wr *wr,
+ 
+ 	memcpy(wqe->header, wr->wr.ud.header, wr->wr.ud.hlen);
+ 
+-	*lso_hdr_sz  = cpu_to_be32((wr->wr.ud.mss - wr->wr.ud.hlen) << 16 |
+-				   wr->wr.ud.hlen);
++	*lso_hdr_sz  = cpu_to_be32(wr->wr.ud.mss << 16 | wr->wr.ud.hlen);
+ 	*lso_seg_len = halign;
+ 	return 0;
+ }
+-- 
+2.3.6
+
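+The failure mode is plain unsigned underflow; the values below are
+illustrative of an ipoib mtu below 2*hlen, where the stack hands down
+an mss smaller than the combined 2*56-byte headers:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned int mss = 90, hlen = 112;	/* made-up sizes */
+
+		/* the "negative" length the HW was told to use */
+		printf("mss - hlen = %u\n", mss - hlen);	/* 4294967274 */
+		return 0;
+	}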
+
+From 6fb5785d6c07d834567ccf3f3ba2df9c3803b28b Mon Sep 17 00:00:00 2001
+From: Sagi Grimberg <sagig@mellanox.com>
+Date: Tue, 14 Apr 2015 18:08:13 +0300
+Subject: [PATCH 181/219] IB/iser: Fix wrong calculation of protection buffer
+ length
+Cc: mpagano@gentoo.org
+
+commit a065fe6aa25ba6ba93c02dc13486131bb3c64d5f upstream.
+
+This length miscalculation may cause silent data corruption in the
+DIX case and cause the device to reference an unmapped area.
+
+Fixes: d77e65350f2d ('libiscsi, iser: Adjust data_length to include protection information')
+Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
+Signed-off-by: Doug Ledford <dledford@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/infiniband/ulp/iser/iser_initiator.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index 20e859a..76eb57b 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -409,8 +409,8 @@ int iser_send_command(struct iscsi_conn *conn,
+ 	if (scsi_prot_sg_count(sc)) {
+ 		prot_buf->buf  = scsi_prot_sglist(sc);
+ 		prot_buf->size = scsi_prot_sg_count(sc);
+-		prot_buf->data_len = data_buf->data_len >>
+-				     ilog2(sc->device->sector_size) * 8;
++		prot_buf->data_len = (data_buf->data_len >>
++				     ilog2(sc->device->sector_size)) * 8;
+ 	}
+ 
+ 	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
+-- 
+2.3.6
+
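+The bug is pure operator precedence: '*' binds tighter than '>>', so
+the unparenthesized expression shifts by sector_shift * 8. A standalone
+model with made-up sizes:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned long long data_len = 1ULL << 20;  /* 1 MiB payload */
+		unsigned int sector_shift = 6;  /* stand-in for ilog2(sector_size) */
+
+		/* parsed as data_len >> (sector_shift * 8): shifts by 48 */
+		printf("buggy: %llu\n", data_len >> sector_shift * 8);
+
+		/* intended: 8 bytes of protection data per sector */
+		printf("fixed: %llu\n", (data_len >> sector_shift) * 8);
+		return 0;
+	}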
+
+From c62b024af945d20e01c3e8c416b9e00d137e6f02 Mon Sep 17 00:00:00 2001
+From: Rabin Vincent <rabin@rab.in>
+Date: Mon, 13 Apr 2015 22:30:12 +0200
+Subject: [PATCH 182/219] tracing: Handle ftrace_dump() atomic context in
+ graph_trace_open()
+Cc: mpagano@gentoo.org
+
+commit ef99b88b16bee753fa51207abdc58ae660453ec6 upstream.
+
+graph_trace_open() can be called in atomic context from ftrace_dump().
+Use GFP_ATOMIC for the memory allocations when that's the case, in order
+to avoid the following splat.
+
+ BUG: sleeping function called from invalid context at mm/slab.c:2849
+ in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0
+ Backtrace:
+ ..
+ [<8004dc94>] (__might_sleep) from [<801371f4>] (kmem_cache_alloc_trace+0x160/0x238)
+  r7:87800040 r6:000080d0 r5:810d16e8 r4:000080d0
+ [<80137094>] (kmem_cache_alloc_trace) from [<800cbd60>] (graph_trace_open+0x30/0xd0)
+  r10:00000100 r9:809171a8 r8:00008e28 r7:810d16f0 r6:00000001 r5:810d16e8
+  r4:810d16f0
+ [<800cbd30>] (graph_trace_open) from [<800c79c4>] (trace_init_global_iter+0x50/0x9c)
+  r8:00008e28 r7:808c853c r6:00000001 r5:810d16e8 r4:810d16f0 r3:800cbd30
+ [<800c7974>] (trace_init_global_iter) from [<800c7aa0>] (ftrace_dump+0x90/0x2ec)
+  r4:810d2580 r3:00000000
+ [<800c7a10>] (ftrace_dump) from [<80414b2c>] (sysrq_ftrace_dump+0x1c/0x20)
+  r10:00000100 r9:809171a8 r8:808f6e7c r7:00000001 r6:00000007 r5:0000007a
+  r4:808d5394
+ [<80414b10>] (sysrq_ftrace_dump) from [<800169b8>] (return_to_handler+0x0/0x18)
+ [<80415498>] (__handle_sysrq) from [<800169b8>] (return_to_handler+0x0/0x18)
+  r8:808c8100 r7:808c8444 r6:00000101 r5:00000010 r4:84eb3210
+ [<80415668>] (handle_sysrq) from [<800169b8>] (return_to_handler+0x0/0x18)
+ [<8042a760>] (pl011_int) from [<800169b8>] (return_to_handler+0x0/0x18)
+  r10:809171bc r9:809171a8 r8:00000001 r7:00000026 r6:808c6000 r5:84f01e60
+  r4:8454fe00
+ [<8007782c>] (handle_irq_event_percpu) from [<80077b44>] (handle_irq_event+0x4c/0x6c)
+  r10:808c7ef0 r9:87283e00 r8:00000001 r7:00000000 r6:8454fe00 r5:84f01e60
+  r4:84f01e00
+ [<80077af8>] (handle_irq_event) from [<8007aa28>] (handle_fasteoi_irq+0xf0/0x1ac)
+  r6:808f52a4 r5:84f01e60 r4:84f01e00 r3:00000000
+ [<8007a938>] (handle_fasteoi_irq) from [<80076dc0>] (generic_handle_irq+0x3c/0x4c)
+  r6:00000026 r5:00000000 r4:00000026 r3:8007a938
+ [<80076d84>] (generic_handle_irq) from [<80077128>] (__handle_domain_irq+0x8c/0xfc)
+  r4:808c1e38 r3:0000002e
+ [<8007709c>] (__handle_domain_irq) from [<800087b8>] (gic_handle_irq+0x34/0x6c)
+  r10:80917748 r9:00000001 r8:88802100 r7:808c7ef0 r6:808c8fb0 r5:00000015
+  r4:8880210c r3:808c7ef0
+ [<80008784>] (gic_handle_irq) from [<80014044>] (__irq_svc+0x44/0x7c)
+
+Link: http://lkml.kernel.org/r/1428953721-31349-1-git-send-email-rabin@rab.in
+Link: http://lkml.kernel.org/r/1428957012-2319-1-git-send-email-rabin@rab.in
+
+Signed-off-by: Rabin Vincent <rabin@rab.in>
+Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ kernel/trace/trace_functions_graph.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 2d25ad1..b6fce36 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1309,15 +1309,19 @@ void graph_trace_open(struct trace_iterator *iter)
+ {
+ 	/* pid and depth on the last trace processed */
+ 	struct fgraph_data *data;
++	gfp_t gfpflags;
+ 	int cpu;
+ 
+ 	iter->private = NULL;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	/* We can be called in atomic context via ftrace_dump() */
++	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
++
++	data = kzalloc(sizeof(*data), gfpflags);
+ 	if (!data)
+ 		goto out_err;
+ 
+-	data->cpu_data = alloc_percpu(struct fgraph_cpu_data);
++	data->cpu_data = alloc_percpu_gfp(struct fgraph_cpu_data, gfpflags);
+ 	if (!data->cpu_data)
+ 		goto out_err_free;
+ 
+-- 
+2.3.6
+
+
+From aaeb6f4d936e550fef1f068d2e883a23f757d5f5 Mon Sep 17 00:00:00 2001
+From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Date: Thu, 16 Apr 2015 13:44:44 +0900
+Subject: [PATCH 183/219] tracing: Fix incorrect enabling of trace events by
+ boot cmdline
+Cc: mpagano@gentoo.org
+
+commit 84fce9db4d7eaebd6cb2ee30c15da6d4e4daf846 upstream.
+
+There is a problem where trace events are not properly enabled from
+the boot cmdline. If we pass "trace_event=kmem:mm_page_alloc" on the
+boot cmdline, it enables all kmem trace events, not just the
+page_alloc event.
+
+This is caused by the parsing mechanism. When we parse the cmdline,
+the buffer contents are modified by tokenization, and if we use the
+buffer again, we get the wrong result.
+
+Unfortunately, this buffer is accessed three times to set trace events
+properly at boot time, so we need to handle this situation.
+
+There is already code handling ",", but we need another check for ":".
+This patch adds it.
+
+Link: http://lkml.kernel.org/r/1429159484-22977-1-git-send-email-iamjoonsoo.kim@lge.com
+
+Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+[ added missing return ret; ]
+Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ kernel/trace/trace_events.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index db54dda..a9c10a3 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -565,6 +565,7 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
+ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ {
+ 	char *event = NULL, *sub = NULL, *match;
++	int ret;
+ 
+ 	/*
+ 	 * The buf format can be <subsystem>:<event-name>
+@@ -590,7 +591,13 @@ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ 			event = NULL;
+ 	}
+ 
+-	return __ftrace_set_clr_event(tr, match, sub, event, set);
++	ret = __ftrace_set_clr_event(tr, match, sub, event, set);
++
++	/* Put back the colon to allow this to be called again */
++	if (buf)
++		*(buf - 1) = ':';
++
++	return ret;
+ }
+ 
+ /**
+-- 
+2.3.6
+
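+The cmdline buffer is tokenized in place, so the ':' has to be restored
+before the buffer is parsed again; a runnable model of the round trip:
+
+	#include <stdio.h>
+	#include <string.h>
+
+	int main(void)
+	{
+		char buf[] = "kmem:mm_page_alloc";
+		char *event = strchr(buf, ':');
+
+		if (event)
+			*event++ = '\0';	/* buf now reads just "kmem" */
+		printf("pass 1: system=%s event=%s\n", buf, event);
+
+		/* put the colon back, as the fix does with *(buf - 1) = ':',
+		 * so the next pass sees "kmem:mm_page_alloc" again */
+		if (event)
+			event[-1] = ':';
+		printf("pass 2 sees: %s\n", buf);
+		return 0;
+	}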
+
+From c5bc4117a935b13fdc40db4753b9d32307d2e304 Mon Sep 17 00:00:00 2001
+From: Wolfram Sang <wsa+renesas@sang-engineering.com>
+Date: Thu, 23 Apr 2015 10:29:09 +0200
+Subject: [PATCH 184/219] i2c: mux: use proper dev when removing "channel-X"
+ symlinks
+Cc: mpagano@gentoo.org
+
+commit 133778482ec6c8fde69406be380333963627c17a upstream.
+
+Those symlinks are created for the mux_dev, so we need to remove them
+from there. Currently, removal breaks for muxes where the mux_dev is
+not the device of the parent adapter, like this:
+
+[   78.234644] WARNING: CPU: 0 PID: 365 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x5c/0x78()
+[   78.242438] sysfs: cannot create duplicate filename '/devices/platform/i2cbus@8/channel-0'
+
+Remove confusing comments while we are here.
+
+Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Fixes: c9449affad2ae0
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/i2c/i2c-mux.c | 8 +++++---
+ 1 file changed, 5 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 593f7ca..06cc1ff 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -32,8 +32,9 @@ struct i2c_mux_priv {
+ 	struct i2c_algorithm algo;
+ 
+ 	struct i2c_adapter *parent;
+-	void *mux_priv;	/* the mux chip/device */
+-	u32  chan_id;	/* the channel id */
++	struct device *mux_dev;
++	void *mux_priv;
++	u32 chan_id;
+ 
+ 	int (*select)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+ 	int (*deselect)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+@@ -119,6 +120,7 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
+ 
+ 	/* Set up private adapter data */
+ 	priv->parent = parent;
++	priv->mux_dev = mux_dev;
+ 	priv->mux_priv = mux_priv;
+ 	priv->chan_id = chan_id;
+ 	priv->select = select;
+@@ -203,7 +205,7 @@ void i2c_del_mux_adapter(struct i2c_adapter *adap)
+ 	char symlink_name[20];
+ 
+ 	snprintf(symlink_name, sizeof(symlink_name), "channel-%u", priv->chan_id);
+-	sysfs_remove_link(&adap->dev.parent->kobj, symlink_name);
++	sysfs_remove_link(&priv->mux_dev->kobj, symlink_name);
+ 
+ 	sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
+ 	i2c_del_adapter(adap);
+-- 
+2.3.6
+
+
+From 7a86d818f4f71fdd0e1d16c07026e2b9a52be2d6 Mon Sep 17 00:00:00 2001
+From: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Date: Mon, 20 Apr 2015 15:14:47 -0700
+Subject: [PATCH 185/219] i2c: rk3x: report number of messages transmitted
+Cc: mpagano@gentoo.org
+
+commit c6cbfb91b878224e78408a2e15901c79de77115a upstream.
+
+The master_xfer() method should return the number of i2c messages
+transferred, but on Rockchip we were usually returning just 1, which
+caused trouble for users that actually check the number of transferred
+messages rather than just checking for negative error codes.
+
+Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/i2c/busses/i2c-rk3x.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 5f96b1b..019d542 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -833,7 +833,7 @@ static int rk3x_i2c_xfer(struct i2c_adapter *adap,
+ 	clk_disable(i2c->clk);
+ 	spin_unlock_irqrestore(&i2c->lock, flags);
+ 
+-	return ret;
++	return ret < 0 ? ret : num;
+ }
+ 
+ static u32 rk3x_i2c_func(struct i2c_adapter *adap)
+-- 
+2.3.6
+
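+The master_xfer() contract this enforces, as a hedged sketch (the foo_*
+names are invented): return a negative errno on failure, otherwise the
+number of messages completed, which i2c_transfer() hands back to
+callers that compare it against num.
+
+	static int foo_xfer(struct i2c_adapter *adap,
+			    struct i2c_msg *msgs, int num)
+	{
+		int ret = foo_do_transfer(adap, msgs, num);
+
+		/* 0 from the hardware layer means "all done": report how
+		 * many messages that was, not the bare 0 (or a lone 1) */
+		return ret < 0 ? ret : num;
+	}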
+
+From 184848b540e3c7df18a22b983319fa4f64acec15 Mon Sep 17 00:00:00 2001
+From: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
+Date: Thu, 16 Apr 2015 13:05:19 +0100
+Subject: [PATCH 186/219] i2c: Mark adapter devices with
+ pm_runtime_no_callbacks
+Cc: mpagano@gentoo.org
+
+commit 6ada5c1e1b077ab98fc144d7ac132b4dcc0148ec upstream.
+
+Commit 523c5b89640e ("i2c: Remove support for legacy PM") removed the PM
+ops from the bus type, which causes the pm operations on the s3c2410
+adapter device to fail (-ENOSUPP in rpm_callback). The adapter device
+doesn't get bound to a driver and as such can't have its own pm_runtime
+callbacks. Previously this was fine as the bus callbacks would have been
+used, but now this can cause devices which use PM runtime and are
+attached over I2C to fail to resume.
+
+This commit fixes this issue by marking all adapter devices with
+pm_runtime_no_callbacks, since they can't have any.
+
+Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
+Acked-by: Beata Michalska <b.michalska@samsung.com>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Fixes: 523c5b89640e
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/i2c/i2c-core.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index edf274c..526c5a5 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -1410,6 +1410,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ 
+ 	dev_dbg(&adap->dev, "adapter [%s] registered\n", adap->name);
+ 
++	pm_runtime_no_callbacks(&adap->dev);
++
+ #ifdef CONFIG_I2C_COMPAT
+ 	res = class_compat_create_link(i2c_adapter_compat_class, &adap->dev,
+ 				       adap->dev.parent);
+-- 
+2.3.6
+
+
+From 00b2c92fe1b560e1a984edf0671f0feb7886a7ed Mon Sep 17 00:00:00 2001
+From: Mark Brown <broonie@kernel.org>
+Date: Wed, 15 Apr 2015 19:18:39 +0100
+Subject: [PATCH 187/219] i2c: core: Export bus recovery functions
+Cc: mpagano@gentoo.org
+
+commit c1c21f4e60ed4523292f1a89ff45a208bddd3849 upstream.
+
+Current -next fails to link an ARM allmodconfig because drivers that use
+the core recovery functions can be built as modules but those functions
+are not exported:
+
+ERROR: "i2c_generic_gpio_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
+ERROR: "i2c_generic_scl_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
+ERROR: "i2c_recover_bus" [drivers/i2c/busses/i2c-davinci.ko] undefined!
+
+Add exports to fix this.
+
+Fixes: 5f9296ba21b3c (i2c: Add bus recovery infrastructure)
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/i2c/i2c-core.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index 526c5a5..8143162 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -596,6 +596,7 @@ int i2c_generic_scl_recovery(struct i2c_adapter *adap)
+ 	adap->bus_recovery_info->set_scl(adap, 1);
+ 	return i2c_generic_recovery(adap);
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
+ 
+ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ {
+@@ -610,6 +611,7 @@ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
+ 
+ int i2c_recover_bus(struct i2c_adapter *adap)
+ {
+@@ -619,6 +621,7 @@ int i2c_recover_bus(struct i2c_adapter *adap)
+ 	dev_dbg(&adap->dev, "Trying i2c bus recovery\n");
+ 	return adap->bus_recovery_info->recover_bus(adap);
+ }
++EXPORT_SYMBOL_GPL(i2c_recover_bus);
+ 
+ static int i2c_device_probe(struct device *dev)
+ {
+-- 
+2.3.6
+
+
+From 87479d71ffe1c2b63f7621fefbdc1cedd95dd49d Mon Sep 17 00:00:00 2001
+From: Alex Deucher <alexander.deucher@amd.com>
+Date: Tue, 24 Feb 2015 11:29:21 -0500
+Subject: [PATCH 188/219] drm/radeon: fix doublescan modes (v2)
+Cc: mpagano@gentoo.org
+
+commit fd99a0943ffaa0320ea4f69d09ed188f950c0432 upstream.
+
+Use the correct flags for atom.
+
+v2: handle DRM_MODE_FLAG_DBLCLK
+
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/radeon/atombios_crtc.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 86807ee..9bd5611 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -330,8 +330,10 @@ atombios_set_crtc_dtd_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+@@ -374,8 +376,10 @@ static void atombios_crtc_set_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+-- 
+2.3.6
+
+
+From 7b645d942ed7101136f35bad5f6cb225c6e2adaa Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Date: Tue, 7 Apr 2015 22:28:50 +0900
+Subject: [PATCH 189/219] drm/exynos: Enable DP clock to fix display on
+ Exynos5250 and other
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 1c363c7cccf64128087002b0779986ad16aff6dc upstream.
+
+After adding display power domain for Exynos5250 in commit
+2d2c9a8d0a4f ("ARM: dts: add display power domain for exynos5250") the
+display on Chromebook Snow and others stopped working after boot.
+
+Andrzej Hajda suggested the reason for this: the DP clock was disabled.
+This clock is required by the Display Port and is enabled by the
+bootloader. However, when FIMD driver probing was deferred, the display
+power domain was turned off. This effectively reset the value of the DP
+clock enable register.
+
+When exynos-dp is later probed, the clock is not enabled and the
+display is not properly configured:
+
+exynos-dp 145b0000.dp-controller: Timeout of video streamclk ok
+exynos-dp 145b0000.dp-controller: unable to config video
+
+Fixes: 2d2c9a8d0a4f ("ARM: dts: add display power domain for exynos5250")
+
+Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Reported-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
+Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
+Tested-by: Andreas Färber <afaerber@suse.de>
+Signed-off-by: Inki Dae <inki.dae@samsung.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/exynos/exynos_dp_core.c  | 10 ++++++++++
+ drivers/gpu/drm/exynos/exynos_drm_fimd.c | 19 +++++++++++++++++++
+ drivers/gpu/drm/exynos/exynos_drm_fimd.h | 15 +++++++++++++++
+ include/video/samsung_fimd.h             |  6 ++++++
+ 4 files changed, 50 insertions(+)
+ create mode 100644 drivers/gpu/drm/exynos/exynos_drm_fimd.h
+
+diff --git a/drivers/gpu/drm/exynos/exynos_dp_core.c b/drivers/gpu/drm/exynos/exynos_dp_core.c
+index bf17a60..1dbfba5 100644
+--- a/drivers/gpu/drm/exynos/exynos_dp_core.c
++++ b/drivers/gpu/drm/exynos/exynos_dp_core.c
+@@ -32,10 +32,16 @@
+ #include <drm/bridge/ptn3460.h>
+ 
+ #include "exynos_dp_core.h"
++#include "exynos_drm_fimd.h"
+ 
+ #define ctx_from_connector(c)	container_of(c, struct exynos_dp_device, \
+ 					connector)
+ 
++static inline struct exynos_drm_crtc *dp_to_crtc(struct exynos_dp_device *dp)
++{
++	return to_exynos_crtc(dp->encoder->crtc);
++}
++
+ static inline struct exynos_dp_device *
+ display_to_dp(struct exynos_drm_display *d)
+ {
+@@ -1070,6 +1076,8 @@ static void exynos_dp_poweron(struct exynos_dp_device *dp)
+ 		}
+ 	}
+ 
++	fimd_dp_clock_enable(dp_to_crtc(dp), true);
++
+ 	clk_prepare_enable(dp->clock);
+ 	exynos_dp_phy_init(dp);
+ 	exynos_dp_init_dp(dp);
+@@ -1094,6 +1102,8 @@ static void exynos_dp_poweroff(struct exynos_dp_device *dp)
+ 	exynos_dp_phy_exit(dp);
+ 	clk_disable_unprepare(dp->clock);
+ 
++	fimd_dp_clock_enable(dp_to_crtc(dp), false);
++
+ 	if (dp->panel) {
+ 		if (drm_panel_unprepare(dp->panel))
+ 			DRM_ERROR("failed to turnoff the panel\n");
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index 33a10ce..5d58f6c 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -32,6 +32,7 @@
+ #include "exynos_drm_fbdev.h"
+ #include "exynos_drm_crtc.h"
+ #include "exynos_drm_iommu.h"
++#include "exynos_drm_fimd.h"
+ 
+ /*
+  * FIMD stands for Fully Interactive Mobile Display and
+@@ -1233,6 +1234,24 @@ static int fimd_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
++{
++	struct fimd_context *ctx = crtc->ctx;
++	u32 val;
++
++	/*
++	 * Only Exynos 5250, 5260, 5410 and 542x require enabling DP/MIE
++	 * clock. On these SoCs the bootloader may enable it but any
++	 * power domain off/on will reset it to disable state.
++	 */
++	if (ctx->driver_data != &exynos5_fimd_driver_data)
++		return;
++
++	val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++	writel(val, ctx->regs + DP_MIE_CLKCON);
++}
++EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
++
+ struct platform_driver fimd_driver = {
+ 	.probe		= fimd_probe,
+ 	.remove		= fimd_remove,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.h b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+new file mode 100644
+index 0000000..b4fcaa5
+--- /dev/null
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+@@ -0,0 +1,15 @@
++/*
++ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
++ *
++ * This program is free software; you can redistribute  it and/or modify it
++ * under  the terms of  the GNU General  Public License as published by the
++ * Free Software Foundation;  either version 2 of the  License, or (at your
++ * option) any later version.
++ */
++
++#ifndef _EXYNOS_DRM_FIMD_H_
++#define _EXYNOS_DRM_FIMD_H_
++
++extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
++
++#endif /* _EXYNOS_DRM_FIMD_H_ */
+diff --git a/include/video/samsung_fimd.h b/include/video/samsung_fimd.h
+index a20e4a3..847a0a2 100644
+--- a/include/video/samsung_fimd.h
++++ b/include/video/samsung_fimd.h
+@@ -436,6 +436,12 @@
+ #define BLENDCON_NEW_8BIT_ALPHA_VALUE		(1 << 0)
+ #define BLENDCON_NEW_4BIT_ALPHA_VALUE		(0 << 0)
+ 
++/* Display port clock control */
++#define DP_MIE_CLKCON				0x27c
++#define DP_MIE_CLK_DISABLE			0x0
++#define DP_MIE_CLK_DP_ENABLE			0x2
++#define DP_MIE_CLK_MIE_ENABLE			0x3
++
+ /* Notes on per-window bpp settings
+  *
+  * Value	Win0	 Win1	  Win2	   Win3	    Win 4
+-- 
+2.3.6
+
+
+From 9dc473bad145b361c179c4f115ea781b8b73448d Mon Sep 17 00:00:00 2001
+From: Daniel Vetter <daniel.vetter@ffwll.ch>
+Date: Wed, 1 Apr 2015 13:43:46 +0200
+Subject: [PATCH 190/219] drm/i915: Don't enable CS_PARSER_ERROR interrupts at
+ all
+Cc: mpagano@gentoo.org
+
+commit 37ef01ab5d24d1d520dc79f6a98099d451c2a901 upstream.
+
+We stopped handling them in
+
+commit aaecdf611a05cac26a94713bad25297e60225c29
+Author: Daniel Vetter <daniel.vetter@ffwll.ch>
+Date:   Tue Nov 4 15:52:22 2014 +0100
+
+    drm/i915: Stop gathering error states for CS error interrupts
+
+but just clearing is apparently not enough: A sufficiently dead gpu
+left behind by firmware (*cough* coreboot *cough*) can keep the gpu in
+an endless loop of such interrupts, eventually leading to the nmi
+firing. And definitely to what looks like a machine hang.
+
+Since we don't even enable these interrupts on gen5+ let's do the same
+on earlier platforms.
+
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=93171
+Tested-by: Mono <mono-for-kernel-org@donderklumpen.de>
+Tested-by: info@gluglug.org.uk
+Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
+Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/i915/i915_irq.c | 8 ++------
+ 1 file changed, 2 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index ede5bbb..07320cb 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3718,14 +3718,12 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
+ 		~(I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+ 	I915_WRITE16(IMR, dev_priv->irq_mask);
+ 
+ 	I915_WRITE16(IER,
+ 		     I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		     I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		     I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		     I915_USER_INTERRUPT);
+ 	POSTING_READ16(IER);
+ 
+@@ -3887,14 +3885,12 @@ static int i915_irq_postinstall(struct drm_device *dev)
+ 		  I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+ 
+ 	enable_mask =
+ 		I915_ASLE_INTERRUPT |
+ 		I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		I915_USER_INTERRUPT;
+ 
+ 	if (I915_HAS_HOTPLUG(dev)) {
+-- 
+2.3.6
+
+
+From 244f81177e5bc0ecb2f5507ef4371dc4752fea94 Mon Sep 17 00:00:00 2001
+From: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
+Date: Wed, 18 Feb 2015 15:19:33 +0200
+Subject: [PATCH 191/219] drm: adv7511: Fix DDC error interrupt handling
+Cc: mpagano@gentoo.org
+
+commit 2e96206c4f952295e11c311fbb2a7aa2105024af upstream.
+
+The DDC error interrupt bit is located in REG_INT1, not REG_INT0. Update
+both the interrupt wait code and the interrupt sources reset code
+accordingly.
+
+Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/i2c/adv7511.c | 14 ++++++++++----
+ 1 file changed, 10 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
+index fa140e0..5109c21 100644
+--- a/drivers/gpu/drm/i2c/adv7511.c
++++ b/drivers/gpu/drm/i2c/adv7511.c
+@@ -467,14 +467,16 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 				     block);
+ 			ret = adv7511_wait_for_interrupt(adv7511,
+ 					ADV7511_INT0_EDID_READY |
+-					ADV7511_INT1_DDC_ERROR, 200);
++					(ADV7511_INT1_DDC_ERROR << 8), 200);
+ 
+ 			if (!(ret & ADV7511_INT0_EDID_READY))
+ 				return -EIO;
+ 		}
+ 
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 
+ 		/* Break this apart, hopefully more I2C controllers will
+ 		 * support 64 byte transfers than 256 byte transfers
+@@ -528,7 +530,9 @@ static int adv7511_get_modes(struct drm_encoder *encoder,
+ 	/* Reading the EDID only works if the device is powered */
+ 	if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		adv7511->current_edid_segment = -1;
+@@ -563,7 +567,9 @@ static void adv7511_encoder_dpms(struct drm_encoder *encoder, int mode)
+ 		adv7511->current_edid_segment = -1;
+ 
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		/*
+-- 
+2.3.6
+
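+A toy model of the wait code's bit packing (register values made up):
+the two 8-bit status registers are folded into one 16-bit pending word,
+so INT1 bits such as DDC_ERROR must be shifted by 8 before being tested:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned int irq0 = 0x04;	/* say, EDID_READY in INT0 */
+		unsigned int irq1 = 0x80;	/* say, DDC_ERROR in INT1 */
+		unsigned int pending = (irq1 << 8) | irq0;
+		unsigned int ddc_error = 0x80;	/* an INT1 bit */
+
+		/* the unshifted INT1 bit tests the wrong register's byte */
+		printf("unshifted: %s\n",
+		       (pending & ddc_error) ? "seen" : "missed");
+		printf("shifted:   %s\n",
+		       (pending & (ddc_error << 8)) ? "seen" : "missed");
+		return 0;
+	}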
+
+From 74ed38596ea50609c61bd10f048f97d6161e73b4 Mon Sep 17 00:00:00 2001
+From: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
+Date: Wed, 18 Feb 2015 15:19:33 +0200
+Subject: [PATCH 192/219] drm: adv7511: Fix nested sleep when reading EDID
+Cc: mpagano@gentoo.org
+
+commit a5241289c4139f0521b89e34a70f5f998463ae15 upstream.
+
+The EDID read code waits for the read completion interrupt to occur
+using wait_event_interruptible(). The condition passed to the macro
+reads I2C registers. This results in sleeping with the task state set
+to TASK_INTERRUPTIBLE, triggering a WARN_ON() introduced in commit
+8eb23b9f35aae ("sched: Debug nested sleeps").
+
+Fix this by reworking the EDID read code. Instead of checking whether
+the read is complete through I2C reads, handle the interrupt registers
+in the interrupt handler and update a new edid_read flag accordingly. As
+a side effect both the IRQ and polling code paths now process the
+interrupt sources through the same code path, simplifying the code.
+
+Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/i2c/adv7511.c | 96 +++++++++++++++++++++----------------------
+ 1 file changed, 46 insertions(+), 50 deletions(-)
+
+diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
+index 5109c21..60ab1f7 100644
+--- a/drivers/gpu/drm/i2c/adv7511.c
++++ b/drivers/gpu/drm/i2c/adv7511.c
+@@ -33,6 +33,7 @@ struct adv7511 {
+ 
+ 	unsigned int current_edid_segment;
+ 	uint8_t edid_buf[256];
++	bool edid_read;
+ 
+ 	wait_queue_head_t wq;
+ 	struct drm_encoder *encoder;
+@@ -379,69 +380,71 @@ static bool adv7511_hpd(struct adv7511 *adv7511)
+ 	return false;
+ }
+ 
+-static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+-{
+-	struct adv7511 *adv7511 = devid;
+-
+-	if (adv7511_hpd(adv7511))
+-		drm_helper_hpd_irq_event(adv7511->encoder->dev);
+-
+-	wake_up_all(&adv7511->wq);
+-
+-	return IRQ_HANDLED;
+-}
+-
+-static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
+-						 unsigned int irq)
++static int adv7511_irq_process(struct adv7511 *adv7511)
+ {
+ 	unsigned int irq0, irq1;
+-	unsigned int pending;
+ 	int ret;
+ 
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
++
++	if (irq0 & ADV7511_INT0_HDP)
++		drm_helper_hpd_irq_event(adv7511->encoder->dev);
++
++	if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
++		adv7511->edid_read = true;
+ 
+-	pending = (irq1 << 8) | irq0;
++		if (adv7511->i2c_main->irq)
++			wake_up_all(&adv7511->wq);
++	}
+ 
+-	return pending & irq;
++	return 0;
+ }
+ 
+-static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
+-				      int timeout)
++static irqreturn_t adv7511_irq_handler(int irq, void *devid)
++{
++	struct adv7511 *adv7511 = devid;
++	int ret;
++
++	ret = adv7511_irq_process(adv7511);
++	return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
++}
++
++/* -----------------------------------------------------------------------------
++ * EDID retrieval
++ */
++
++static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
+ {
+-	unsigned int pending;
+ 	int ret;
+ 
+ 	if (adv7511->i2c_main->irq) {
+ 		ret = wait_event_interruptible_timeout(adv7511->wq,
+-				adv7511_is_interrupt_pending(adv7511, irq),
+-				msecs_to_jiffies(timeout));
+-		if (ret <= 0)
+-			return 0;
+-		pending = adv7511_is_interrupt_pending(adv7511, irq);
++				adv7511->edid_read, msecs_to_jiffies(timeout));
+ 	} else {
+-		if (timeout < 25)
+-			timeout = 25;
+-		do {
+-			pending = adv7511_is_interrupt_pending(adv7511, irq);
+-			if (pending)
++		for (; timeout > 0; timeout -= 25) {
++			ret = adv7511_irq_process(adv7511);
++			if (ret < 0)
++				break;
++
++			if (adv7511->edid_read)
+ 				break;
++
+ 			msleep(25);
+-			timeout -= 25;
+-		} while (timeout >= 25);
++		}
+ 	}
+ 
+-	return pending;
++	return adv7511->edid_read ? 0 : -EIO;
+ }
+ 
+-/* -----------------------------------------------------------------------------
+- * EDID retrieval
+- */
+-
+ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 				  size_t len)
+ {
+@@ -463,21 +466,14 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 			return ret;
+ 
+ 		if (status != 2) {
++			adv7511->edid_read = false;
+ 			regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
+ 				     block);
+-			ret = adv7511_wait_for_interrupt(adv7511,
+-					ADV7511_INT0_EDID_READY |
+-					(ADV7511_INT1_DDC_ERROR << 8), 200);
+-
+-			if (!(ret & ADV7511_INT0_EDID_READY))
+-				return -EIO;
++			ret = adv7511_wait_for_edid(adv7511, 200);
++			if (ret < 0)
++				return ret;
+ 		}
+ 
+-		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY);
+-		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
+-			     ADV7511_INT1_DDC_ERROR);
+-
+ 		/* Break this apart, hopefully more I2C controllers will
+ 		 * support 64 byte transfers than 256 byte transfers
+ 		 */
+-- 
+2.3.6
+
+
+From 959905cf28ee80f8830b717f4e1ac28a61732974 Mon Sep 17 00:00:00 2001
+From: Imre Deak <imre.deak@intel.com>
+Date: Wed, 15 Apr 2015 16:52:30 -0700
+Subject: [PATCH 193/219] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT
+ reg
+Cc: mpagano@gentoo.org
+
+commit b5f1c97f944482e98e6e39208af356630389d1ea upstream.
+
+Due to this typo we don't save/restore the GFX_MAX_REQ_COUNT register
+across suspend/resume, so fix this.
+
+This was introduced in
+
+commit ddeea5b0c36f3665446518c609be91f9336ef674
+Author: Imre Deak <imre.deak@intel.com>
+Date:   Mon May 5 15:19:56 2014 +0300
+
+    drm/i915: vlv: add runtime PM support
+
+I noticed this only by reading the code. To my knowledge it shouldn't
+cause any real problems at the moment, since the power well backing this
+register remains on across a runtime s/r. This may change once
+system-wide s0ix functionality is enabled in the kernel.
+
+v2:
+- resend after a missing git add -u :/
+
+Signed-off-by: Imre Deak <imre.deak@intel.com>
+Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
+Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
+Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/i915/i915_drv.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 5c66b56..ec4d932 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1042,7 +1042,7 @@ static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		s->lra_limits[i] = I915_READ(GEN7_LRA_LIMITS_BASE + i * 4);
+ 
+ 	s->media_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
+-	s->gfx_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
++	s->gfx_max_req_count	= I915_READ(GEN7_GFX_MAX_REQ_COUNT);
+ 
+ 	s->render_hwsp		= I915_READ(RENDER_HWS_PGA_GEN7);
+ 	s->ecochk		= I915_READ(GAM_ECOCHK);
+@@ -1124,7 +1124,7 @@ static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		I915_WRITE(GEN7_LRA_LIMITS_BASE + i * 4, s->lra_limits[i]);
+ 
+ 	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->media_max_req_count);
+-	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->gfx_max_req_count);
++	I915_WRITE(GEN7_GFX_MAX_REQ_COUNT, s->gfx_max_req_count);
+ 
+ 	I915_WRITE(RENDER_HWS_PGA_GEN7,	s->render_hwsp);
+ 	I915_WRITE(GAM_ECOCHK,		s->ecochk);
+-- 
+2.3.6
+
+
+From 0f14e0aa4e606b77387e807b89a0ee8faf10accb Mon Sep 17 00:00:00 2001
+From: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Date: Tue, 21 Apr 2015 09:49:11 -0700
+Subject: [PATCH 194/219] drm/i915: cope with large i2c transfers
+Cc: mpagano@gentoo.org
+
+commit 9535c4757b881e06fae72a857485ad57c422b8d2 upstream.
+
+The hardware, according to the specs, is limited to 256 byte transfers,
+and the current driver has no protection in case users attempt larger
+transfers. The code will just stomp over the status register and mayhem
+ensues.
+
+Let's split larger transfers into digestible chunks. Doing this allows
+the Atmel MXT driver on the Pixel 1 to function properly (it hasn't
+since commit 9d8dc3e529a19e427fd379118acd132520935c5d "Input:
+atmel_mxt_ts - implement T44 message handling", which tries to consume
+multiple touchscreen/touchpad reports in a single transaction).
+
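+The read and write paths below both follow the same pattern. A distilled
+sketch (gmbus_xfer_chunk() here is a stand-in name for the per-direction
+chunk helpers added by this patch):
+
+  unsigned int remaining = msg->len, len;
+  u8 *buf = msg->buf;
+  int ret;
+
+  do {
+    len = min(remaining, GMBUS_BYTE_COUNT_MAX); /* at most 256 bytes */
+    ret = gmbus_xfer_chunk(dev_priv, msg->addr, buf, len);
+    if (ret)
+      return ret;
+    buf += len;
+    remaining -= len;
+  } while (remaining != 0);
+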
+Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
+Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/gpu/drm/i915/i915_reg.h  |  1 +
+ drivers/gpu/drm/i915/intel_i2c.c | 66 ++++++++++++++++++++++++++++++++++------
+ 2 files changed, 57 insertions(+), 10 deletions(-)
+
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 33b3d0a2..f536ff2 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -1740,6 +1740,7 @@ enum punit_power_well {
+ #define   GMBUS_CYCLE_INDEX	(2<<25)
+ #define   GMBUS_CYCLE_STOP	(4<<25)
+ #define   GMBUS_BYTE_COUNT_SHIFT 16
++#define   GMBUS_BYTE_COUNT_MAX   256U
+ #define   GMBUS_SLAVE_INDEX_SHIFT 8
+ #define   GMBUS_SLAVE_ADDR_SHIFT 1
+ #define   GMBUS_SLAVE_READ	(1<<0)
+diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
+index b31088a..56e437e 100644
+--- a/drivers/gpu/drm/i915/intel_i2c.c
++++ b/drivers/gpu/drm/i915/intel_i2c.c
+@@ -270,18 +270,17 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv)
+ }
+ 
+ static int
+-gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
+-		u32 gmbus1_index)
++gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
++		      unsigned short addr, u8 *buf, unsigned int len,
++		      u32 gmbus1_index)
+ {
+ 	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+-	u8 *buf = msg->buf;
+ 
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   gmbus1_index |
+ 		   GMBUS_CYCLE_WAIT |
+ 		   (len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_READ | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -303,11 +302,35 @@ gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
+ }
+ 
+ static int
+-gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
++gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
++		u32 gmbus1_index)
+ {
+-	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+ 	u8 *buf = msg->buf;
++	unsigned int rx_size = msg->len;
++	unsigned int len;
++	int ret;
++
++	do {
++		len = min(rx_size, GMBUS_BYTE_COUNT_MAX);
++
++		ret = gmbus_xfer_read_chunk(dev_priv, msg->addr,
++					    buf, len, gmbus1_index);
++		if (ret)
++			return ret;
++
++		rx_size -= len;
++		buf += len;
++	} while (rx_size != 0);
++
++	return 0;
++}
++
++static int
++gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
++		       unsigned short addr, u8 *buf, unsigned int len)
++{
++	int reg_offset = dev_priv->gpio_mmio_base;
++	unsigned int chunk_size = len;
+ 	u32 val, loop;
+ 
+ 	val = loop = 0;
+@@ -319,8 +342,8 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 	I915_WRITE(GMBUS3 + reg_offset, val);
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   GMBUS_CYCLE_WAIT |
+-		   (msg->len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (chunk_size << GMBUS_BYTE_COUNT_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -337,6 +360,29 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 		if (ret)
+ 			return ret;
+ 	}
++
++	return 0;
++}
++
++static int
++gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
++{
++	u8 *buf = msg->buf;
++	unsigned int tx_size = msg->len;
++	unsigned int len;
++	int ret;
++
++	do {
++		len = min(tx_size, GMBUS_BYTE_COUNT_MAX);
++
++		ret = gmbus_xfer_write_chunk(dev_priv, msg->addr, buf, len);
++		if (ret)
++			return ret;
++
++		buf += len;
++		tx_size -= len;
++	} while (tx_size != 0);
++
+ 	return 0;
+ }
+ 
+-- 
+2.3.6
+
+
+From f5e360ea796b5833aa7ddf281ed49d72f9eba1e3 Mon Sep 17 00:00:00 2001
+From: Al Viro <viro@zeniv.linux.org.uk>
+Date: Fri, 24 Apr 2015 15:47:07 -0400
+Subject: [PATCH 195/219] RCU pathwalk breakage when running into a symlink
+ overmounting something
+Cc: mpagano@gentoo.org
+
+commit 3cab989afd8d8d1bc3d99fef0e7ed87c31e7b647 upstream.
+
+Calling unlazy_walk() in walk_component() and do_last() when we find
+a symlink that needs to be followed doesn't acquire a reference to the vfsmount.
+That's fine when the symlink is on the same vfsmount as the parent directory
+(which is almost always the case), but it's not always true - one _can_
+manage to bind a symlink on top of something.  And in such cases we end up
+with excessive mntput().
+
+Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/namei.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/fs/namei.c b/fs/namei.c
+index c83145a..caa38a2 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1591,7 +1591,8 @@ static inline int walk_component(struct nameidata *nd, struct path *path,
+ 
+ 	if (should_follow_link(path->dentry, follow)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				err = -ECHILD;
+ 				goto out_err;
+ 			}
+@@ -3047,7 +3048,8 @@ finish_lookup:
+ 
+ 	if (should_follow_link(path->dentry, !symlink_ok)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				error = -ECHILD;
+ 				goto out;
+ 			}
+-- 
+2.3.6
+
+
+From 04dcce2b2b45c99fdaebd0baa19640674ea388f4 Mon Sep 17 00:00:00 2001
+From: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
+Date: Thu, 16 Apr 2015 18:48:39 +0800
+Subject: [PATCH 196/219] Revert "nfs: replace nfs_add_stats with nfs_inc_stats
+ when add one"
+Cc: mpagano@gentoo.org
+
+commit 3708f842e107b9b79d54a75d152e666b693649e8 upstream.
+
+This reverts commit 5a254d08b086d80cbead2ebcee6d2a4b3a15587a.
+
+Since commit 5a254d08b086 ("nfs: replace nfs_add_stats with
+nfs_inc_stats when add one"), nfs_readpage and nfs_do_writepage use
+nfs_inc_stats to increment NFSIOS_READPAGES and NFSIOS_WRITEPAGES
+instead of nfs_add_stats.
+
+However nfs_inc_stats does not do the same thing as nfs_add_stats with
+value 1 because these functions work on distinct stats:
+nfs_inc_stats increments stats from "enum nfs_stat_eventcounters" (in
+server->io_stats->events) and nfs_add_stats those from "enum
+nfs_stat_bytecounters" (in server->io_stats->bytes).
+
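+As a quick illustration, the two calls in the diff below therefore land
+in different counters (the comments are a sketch of the effect, with the
+array names taken from the description above):
+
+  nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);  /* io_stats->events[...]++  */
+  nfs_add_stats(inode, NFSIOS_READPAGES, 1); /* io_stats->bytes[...] += 1 */
+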
+Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
+Fixes: 5a254d08b086 ("nfs: replace nfs_add_stats with nfs_inc_stats...")
+Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfs/read.c  | 2 +-
+ fs/nfs/write.c | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 568ecf0..848d8b1 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -284,7 +284,7 @@ int nfs_readpage(struct file *file, struct page *page)
+ 	dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
+ 		page, PAGE_CACHE_SIZE, page_file_index(page));
+ 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
+-	nfs_inc_stats(inode, NFSIOS_READPAGES);
++	nfs_add_stats(inode, NFSIOS_READPAGES, 1);
+ 
+ 	/*
+ 	 * Try to flush any pending writes to the file..
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 849ed78..41b3f1096 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -580,7 +580,7 @@ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc, st
+ 	int ret;
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
+-	nfs_inc_stats(inode, NFSIOS_WRITEPAGES);
++	nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1);
+ 
+ 	nfs_pageio_cond_complete(pgio, page_file_index(page));
+ 	ret = nfs_page_async_flush(pgio, page, wbc->sync_mode == WB_SYNC_NONE);
+-- 
+2.3.6
+
+
+From 2556cb4a63a559a09112aba49d0112bd7dc4d2d6 Mon Sep 17 00:00:00 2001
+From: "J. Bruce Fields" <bfields@redhat.com>
+Date: Fri, 3 Apr 2015 16:24:27 -0400
+Subject: [PATCH 197/219] nfsd4: disallow ALLOCATE with special stateids
+Cc: mpagano@gentoo.org
+
+commit 5ba4a25ab7b13be528b23f85182f4d09cf7f71ad upstream.
+
+vfs_fallocate will hit a NULL dereference if the client tries an
+ALLOCATE or DEALLOCATE with a special stateid.  Fix that.  (We also
+depend on the open to have broken any conflicting leases or delegations
+for us.)
+
+(If it turns out we need to allow special stateids, then we could do a
+temporary open here in the special-stateid case, as we do for read and
+write.  For now I'm assuming it's not necessary.)
+
+Fixes: 95d871f03cae "nfsd: Add ALLOCATE support"
+Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: J. Bruce Fields <bfields@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfsd/nfs4proc.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 92b9d97..5912967 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1030,6 +1030,8 @@ nfsd4_fallocate(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_fallocate: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
+ 
+ 	status = nfsd4_vfs_fallocate(rqstp, &cstate->current_fh, file,
+ 				     fallocate->falloc_offset,
+-- 
+2.3.6
+
+
+From e2efc21fbad9a8d055586716fad4d4baaf210b56 Mon Sep 17 00:00:00 2001
+From: "J. Bruce Fields" <bfields@redhat.com>
+Date: Fri, 3 Apr 2015 17:19:41 -0400
+Subject: [PATCH 198/219] nfsd4: fix READ permission checking
+Cc: mpagano@gentoo.org
+
+commit 6e4891dc289cd191d46ab7ba1dcb29646644f9ca upstream.
+
+In the case we already have a struct file (derived from a stateid), we
+still need to do permission-checking; otherwise an unauthorized user
+could gain access to a file by sniffing or guessing somebody else's
+stateid.
+
+Fixes: dc97618ddda9 "nfsd4: separate splice and readv cases"
+Signed-off-by: J. Bruce Fields <bfields@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfsd/nfs4xdr.c | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 5fb7e78..5b33ce1 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3422,6 +3422,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	unsigned long maxcount;
+ 	struct xdr_stream *xdr = &resp->xdr;
+ 	struct file *file = read->rd_filp;
++	struct svc_fh *fhp = read->rd_fhp;
+ 	int starting_len = xdr->buf->len;
+ 	struct raparms *ra;
+ 	__be32 *p;
+@@ -3445,12 +3446,15 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	maxcount = min_t(unsigned long, maxcount, (xdr->buf->buflen - xdr->buf->len));
+ 	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 
+-	if (!read->rd_filp) {
++	if (read->rd_filp)
++		err = nfsd_permission(resp->rqstp, fhp->fh_export,
++				fhp->fh_dentry,
++				NFSD_MAY_READ|NFSD_MAY_OWNER_OVERRIDE);
++	else
+ 		err = nfsd_get_tmp_read_open(resp->rqstp, read->rd_fhp,
+ 						&file, &ra);
+-		if (err)
+-			goto err_truncate;
+-	}
++	if (err)
++		goto err_truncate;
+ 
+ 	if (file->f_op->splice_read && test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
+ 		err = nfsd4_encode_splice_read(resp, read, file, maxcount);
+-- 
+2.3.6
+
+
+From 6fd154a83b18bc81aa3f1071e74c36d9076ff4b9 Mon Sep 17 00:00:00 2001
+From: "J. Bruce Fields" <bfields@redhat.com>
+Date: Tue, 21 Apr 2015 15:25:39 -0400
+Subject: [PATCH 199/219] nfsd4: disallow SEEK with special stateids
+Cc: mpagano@gentoo.org
+
+commit 980608fb50aea34993ba956b71cd4602aa42b14b upstream.
+
+If the client uses a special stateid then we'll pass a NULL file to
+vfs_llseek.
+
+Fixes: 24bab491220f "NFSD: Implement SEEK"
+Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Reported-by: Christoph Hellwig <hch@infradead.org>
+Signed-off-by: J. Bruce Fields <bfields@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfsd/nfs4proc.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5912967..5416968 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1071,6 +1071,8 @@ nfsd4_seek(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_seek: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
+ 
+ 	switch (seek->seek_whence) {
+ 	case NFS4_CONTENT_DATA:
+-- 
+2.3.6
+
+
+From 1f8303c597803d7d7c6943708dff333dbbc009a1 Mon Sep 17 00:00:00 2001
+From: Mark Salter <msalter@redhat.com>
+Date: Mon, 6 Apr 2015 09:46:00 -0400
+Subject: [PATCH 200/219] nfsd: eliminate NFSD_DEBUG
+Cc: mpagano@gentoo.org
+
+commit 135dd002c23054aaa056ea3162c1e0356905c195 upstream.
+
+Commit f895b252d4edf ("sunrpc: eliminate RPC_DEBUG") introduced
+use of IS_ENABLED() in a uapi header which leads to a build
+failure for userspace apps trying to use <linux/nfsd/debug.h>:
+
+   linux/nfsd/debug.h:18:15: error: missing binary operator before token "("
+  #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+                ^
+
+Since this was only used to define NFSD_DEBUG if CONFIG_SUNRPC_DEBUG
+is enabled, replace instances of NFSD_DEBUG with CONFIG_SUNRPC_DEBUG.
+
+Fixes: f895b252d4edf "sunrpc: eliminate RPC_DEBUG"
+Signed-off-by: Mark Salter <msalter@redhat.com>
+Reviewed-by: Jeff Layton <jlayton@primarydata.com>
+Signed-off-by: J. Bruce Fields <bfields@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/lockd/svcsubs.c              | 2 +-
+ fs/nfsd/nfs4state.c             | 2 +-
+ fs/nfsd/nfsd.h                  | 2 +-
+ include/uapi/linux/nfsd/debug.h | 8 --------
+ 4 files changed, 3 insertions(+), 11 deletions(-)
+
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 665ef5a..a563ddb 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -31,7 +31,7 @@
+ static struct hlist_head	nlm_files[FILE_NRHASH];
+ static DEFINE_MUTEX(nlm_file_mutex);
+ 
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ {
+ 	u32 *fhp = (u32*)f->data;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 8ba1d88..ee1cccd 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1139,7 +1139,7 @@ hash_sessionid(struct nfs4_sessionid *sessionid)
+ 	return sid->sequence % SESSION_HASH_SIZE;
+ }
+ 
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void
+ dump_sessionid(const char *fn, struct nfs4_sessionid *sessionid)
+ {
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 565c4da..cf98052 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -24,7 +24,7 @@
+ #include "export.h"
+ 
+ #undef ifdebug
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ # define ifdebug(flag)		if (nfsd_debug & NFSDDBG_##flag)
+ #else
+ # define ifdebug(flag)		if (0)
+diff --git a/include/uapi/linux/nfsd/debug.h b/include/uapi/linux/nfsd/debug.h
+index 0bf130a..28ec6c9 100644
+--- a/include/uapi/linux/nfsd/debug.h
++++ b/include/uapi/linux/nfsd/debug.h
+@@ -12,14 +12,6 @@
+ #include <linux/sunrpc/debug.h>
+ 
+ /*
+- * Enable debugging for nfsd.
+- * Requires RPC_DEBUG.
+- */
+-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+-# define NFSD_DEBUG		1
+-#endif
+-
+-/*
+  * knfsd debug flags
+  */
+ #define NFSDDBG_SOCK		0x0001
+-- 
+2.3.6
+
+
+From d5d30089c2a59d079a074eb37c8c223b81664ceb Mon Sep 17 00:00:00 2001
+From: Giuseppe Cantavenera <giuseppe.cantavenera.ext@nokia.com>
+Date: Mon, 20 Apr 2015 18:00:08 +0200
+Subject: [PATCH 201/219] nfsd: fix nfsd startup race triggering BUG_ON
+Cc: mpagano@gentoo.org
+
+commit bb7ffbf29e76b89a86ca4c3ee0d4690641f2f772 upstream.
+
+nfsd triggered a BUG_ON in net_generic(...) when rpc_pipefs_event(...)
+in fs/nfsd/nfs4recover.c was called before assigning nfsd_net_id.
+The following was observed on a MIPS 32-core processor:
+kernel: Call Trace:
+kernel: [<ffffffffc00bc5e4>] rpc_pipefs_event+0x7c/0x158 [nfsd]
+kernel: [<ffffffff8017a2a0>] notifier_call_chain+0x70/0xb8
+kernel: [<ffffffff8017a4e4>] __blocking_notifier_call_chain+0x4c/0x70
+kernel: [<ffffffff8053aff8>] rpc_fill_super+0xf8/0x1a0
+kernel: [<ffffffff8022204c>] mount_ns+0xb4/0xf0
+kernel: [<ffffffff80222b48>] mount_fs+0x50/0x1f8
+kernel: [<ffffffff8023dc00>] vfs_kern_mount+0x58/0xf0
+kernel: [<ffffffff802404ac>] do_mount+0x27c/0xa28
+kernel: [<ffffffff80240cf0>] SyS_mount+0x98/0xe8
+kernel: [<ffffffff80135d24>] handle_sys64+0x44/0x68
+kernel:
+kernel:
+        Code: 0040f809  00000000  2e020001 <00020336> 3c12c00d
+                3c02801a  de100000 6442eb98  0040f809
+kernel: ---[ end trace 7471374335809536 ]---
+
+Fix this behaviour by calling register_pernet_subsys(&nfsd_net_ops) before
+registering rpc_pipefs_event(...) with the notifier chain.
+
+Signed-off-by: Giuseppe Cantavenera <giuseppe.cantavenera.ext@nokia.com>
+Signed-off-by: Lorenzo Restelli <lorenzo.restelli.ext@nokia.com>
+Reviewed-by: Kinlong Mee <kinglongmee@gmail.com>
+Signed-off-by: J. Bruce Fields <bfields@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfsd/nfsctl.c | 16 ++++++++--------
+ 1 file changed, 8 insertions(+), 8 deletions(-)
+
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index aa47d75..9690cb4 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1250,15 +1250,15 @@ static int __init init_nfsd(void)
+ 	int retval;
+ 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+-	retval = register_cld_notifier();
+-	if (retval)
+-		return retval;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
+-		goto out_unregister_notifier;
+-	retval = nfsd4_init_slabs();
++		return retval;
++	retval = register_cld_notifier();
+ 	if (retval)
+ 		goto out_unregister_pernet;
++	retval = nfsd4_init_slabs();
++	if (retval)
++		goto out_unregister_notifier;
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+@@ -1290,10 +1290,10 @@ out_exit_pnfs:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+ 	nfsd4_free_slabs();
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ out_unregister_notifier:
+ 	unregister_cld_notifier();
++out_unregister_pernet:
++	unregister_pernet_subsys(&nfsd_net_ops);
+ 	return retval;
+ }
+ 
+@@ -1308,8 +1308,8 @@ static void __exit exit_nfsd(void)
+ 	nfsd4_exit_pnfs();
+ 	nfsd_fault_inject_cleanup();
+ 	unregister_filesystem(&nfsd_fs_type);
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ 	unregister_cld_notifier();
++	unregister_pernet_subsys(&nfsd_net_ops);
+ }
+ 
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+-- 
+2.3.6
+
+
+From c59908b7a9d4b76f72367f055559663e1da274fc Mon Sep 17 00:00:00 2001
+From: Jeff Layton <jlayton@poochiereds.net>
+Date: Fri, 20 Mar 2015 15:15:14 -0400
+Subject: [PATCH 202/219] nfs: fix high load average due to callback thread
+ sleeping
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+Cc: mpagano@gentoo.org
+
+commit 5d05e54af3cdbb13cf19c557ff2184781b91a22c upstream.
+
+Chuck pointed out a problem that crept in with commit 6ffa30d3f734 (nfs:
+don't call blocking operations while !TASK_RUNNING). Linux counts tasks
+in uninterruptible sleep against the load average, so this caused the
+system's load average to be pinned at at least 1 when there was a
+NFSv4.1+ mount active.
+
+Not a huge problem, but it's probably worth fixing before we get too
+many complaints about it. This patch converts the code back to use
+TASK_INTERRUPTIBLE sleep, and simply has it flush any signals on each loop
+iteration. In practice no one should really be signalling this thread at
+all, so I think this is reasonably safe.
+
+With this change, there's also no need to game the hung task watchdog so
+we can also convert the schedule_timeout call back to a normal schedule.
+
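+The resulting wait loop follows the canonical interruptible-wait pattern.
+A minimal sketch (work_pending() is a stand-in for the sv_cb_list check):
+
+  prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
+  if (!work_pending()) {
+    schedule();             /* interruptible: not counted in loadavg */
+    finish_wait(&serv->sv_cb_waitq, &wq);
+  }
+  flush_signals(current);   /* discard any stray signals */
+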
+Reported-by: Chuck Lever <chuck.lever@oracle.com>
+Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
+Tested-by: Chuck Lever <chuck.lever@oracle.com>
+Fixes: 6ffa30d3f734 ("nfs: don't call blocking operations while !TASK_RUNNING")
+Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfs/callback.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 351be920..8d129bb 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -128,7 +128,7 @@ nfs41_callback_svc(void *vrqstp)
+ 		if (try_to_freeze())
+ 			continue;
+ 
+-		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_UNINTERRUPTIBLE);
++		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
+ 		spin_lock_bh(&serv->sv_cb_lock);
+ 		if (!list_empty(&serv->sv_cb_list)) {
+ 			req = list_first_entry(&serv->sv_cb_list,
+@@ -142,10 +142,10 @@ nfs41_callback_svc(void *vrqstp)
+ 				error);
+ 		} else {
+ 			spin_unlock_bh(&serv->sv_cb_lock);
+-			/* schedule_timeout to game the hung task watchdog */
+-			schedule_timeout(60 * HZ);
++			schedule();
+ 			finish_wait(&serv->sv_cb_waitq, &wq);
+ 		}
++		flush_signals(current);
+ 	}
+ 	return 0;
+ }
+-- 
+2.3.6
+
+
+From dcd8d0c80e86b8821c5a453b5bf782328d8580e1 Mon Sep 17 00:00:00 2001
+From: Peng Tao <tao.peng@primarydata.com>
+Date: Thu, 9 Apr 2015 23:02:16 +0800
+Subject: [PATCH 203/219] nfs: fix DIO good bytes calculation
+Cc: mpagano@gentoo.org
+
+commit 1ccbad9f9f9bd36db26a10f0b17fbaf12b3ae93a upstream.
+
+For a direct read whose IO size is larger than rsize, we'll split
+it into several READ requests, and nfs_direct_good_bytes() would
+count completed bytes incorrectly by eating the last zero-count reply.
+
+Fix it by handling the mirror and non-mirror cases separately, so that
+only mirrored writes are counted differently.
+
+This fixes 5fadeb47 ("nfs: count DIO good bytes correctly with mirroring").
+
+Reported-by: Jean Spector <jean@primarydata.com>
+Signed-off-by: Peng Tao <tao.peng@primarydata.com>
+Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfs/direct.c | 29 +++++++++++++++++------------
+ 1 file changed, 17 insertions(+), 12 deletions(-)
+
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index e907c8c..5e451a7 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -131,20 +131,25 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
+ 
+ 	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
+ 
+-	count = dreq->mirrors[hdr->pgio_mirror_idx].count;
+-	if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
+-		count = hdr->io_start + hdr->good_bytes - dreq->io_start;
+-		dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+-	}
+-
+-	/* update the dreq->count by finding the minimum agreed count from all
+-	 * mirrors */
+-	count = dreq->mirrors[0].count;
++	if (dreq->mirror_count == 1) {
++		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
++		dreq->count += hdr->good_bytes;
++	} else {
++		/* mirrored writes */
++		count = dreq->mirrors[hdr->pgio_mirror_idx].count;
++		if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
++			count = hdr->io_start + hdr->good_bytes - dreq->io_start;
++			dreq->mirrors[hdr->pgio_mirror_idx].count = count;
++		}
++		/* update the dreq->count by finding the minimum agreed count from all
++		 * mirrors */
++		count = dreq->mirrors[0].count;
+ 
+-	for (i = 1; i < dreq->mirror_count; i++)
+-		count = min(count, dreq->mirrors[i].count);
++		for (i = 1; i < dreq->mirror_count; i++)
++			count = min(count, dreq->mirrors[i].count);
+ 
+-	dreq->count = count;
++		dreq->count = count;
++	}
+ }
+ 
+ /*
+-- 
+2.3.6
+
+
+From 5efdfc74ab7d8ccfce9f8517012e3962939c91fc Mon Sep 17 00:00:00 2001
+From: Peng Tao <tao.peng@primarydata.com>
+Date: Thu, 9 Apr 2015 23:02:17 +0800
+Subject: [PATCH 204/219] nfs: remove WARN_ON_ONCE from nfs_direct_good_bytes
+Cc: mpagano@gentoo.org
+
+commit 05f54903d9d370a4cd302a85681304d3ec59e5c1 upstream.
+
+For the flexfiles driver, we might choose to read from a mirror index
+other than 0, while mirror_count is always 1 for reads.
+
+Reported-by: Jean Spector <jean@primarydata.com>
+Cc: Weston Andros Adamson <dros@primarydata.com>
+Signed-off-by: Peng Tao <tao.peng@primarydata.com>
+Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfs/direct.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 5e451a7..ab21ef1 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -129,8 +129,6 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
+ 	int i;
+ 	ssize_t count;
+ 
+-	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
+-
+ 	if (dreq->mirror_count == 1) {
+ 		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
+ 		dreq->count += hdr->good_bytes;
+-- 
+2.3.6
+
+
+From ecb403f5eaf05dd7a9160fae030d55e23a5a4445 Mon Sep 17 00:00:00 2001
+From: Anna Schumaker <Anna.Schumaker@netapp.com>
+Date: Tue, 14 Apr 2015 10:34:20 -0400
+Subject: [PATCH 205/219] NFS: Add a stub for GETDEVICELIST
+Cc: mpagano@gentoo.org
+
+commit 7c61f0d3897eeeff6f3294adb9f910ddefa8035a upstream.
+
+d4b18c3e (pnfs: remove GETDEVICELIST implementation) removed the
+GETDEVICELIST operation from the NFS client, but left a "hole" in the
+nfs4_procedures array.  This caused /proc/self/mountstats to report an
+operation named "51" where GETDEVICELIST used to be.  This patch adds a
+stub to fix mountstats.
+
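+The "hole" is the classic designated-initializer gap. A sketch with
+hypothetical indices and names (not the real opcode table):
+
+  struct rpc_procinfo procs[] = {
+    [50] = { .p_name = "OP_BEFORE" },
+    /* [51] left unset: .p_name stays NULL, so mountstats ends up
+     * printing the bare index instead of an operation name */
+    [52] = { .p_name = "OP_AFTER" },
+  };
+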
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Fixes: d4b18c3e (pnfs: remove GETDEVICELIST implementation)
+Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ fs/nfs/nfs4xdr.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 5c399ec..d494ea2 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -7365,6 +7365,11 @@ nfs4_stat_to_errno(int stat)
+ 	.p_name   = #proc,					\
+ }
+ 
++#define STUB(proc)		\
++[NFSPROC4_CLNT_##proc] = {	\
++	.p_name = #proc,	\
++}
++
+ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(READ,		enc_read,		dec_read),
+ 	PROC(WRITE,		enc_write,		dec_write),
+@@ -7417,6 +7422,7 @@ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(SECINFO_NO_NAME,	enc_secinfo_no_name,	dec_secinfo_no_name),
+ 	PROC(TEST_STATEID,	enc_test_stateid,	dec_test_stateid),
+ 	PROC(FREE_STATEID,	enc_free_stateid,	dec_free_stateid),
++	STUB(GETDEVICELIST),
+ 	PROC(BIND_CONN_TO_SESSION,
+ 			enc_bind_conn_to_session, dec_bind_conn_to_session),
+ 	PROC(DESTROY_CLIENTID,	enc_destroy_clientid,	dec_destroy_clientid),
+-- 
+2.3.6
+
+
+From a0e97e698901d058b984bcf1c13693f7a33375b3 Mon Sep 17 00:00:00 2001
+From: Juri Lelli <juri.lelli@arm.com>
+Date: Tue, 31 Mar 2015 09:53:36 +0100
+Subject: [PATCH 206/219] sched/deadline: Always enqueue on previous rq when
+ dl_task_timer() fires
+Cc: mpagano@gentoo.org
+
+commit 4cd57f97135840f637431c92380c8da3edbe44ed upstream.
+
+dl_task_timer() may fire on a different rq from where a task was removed
+after throttling. Since the call path is:
+
+  dl_task_timer() ->
+    enqueue_task_dl() ->
+      enqueue_dl_entity() ->
+        replenish_dl_entity()
+
+and replenish_dl_entity() uses dl_se's rq, we can't use current's rq
+in dl_task_timer(), but we need to lock the task's previous one.
+
+Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
+Signed-off-by: Juri Lelli <juri.lelli@arm.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Acked-by: Kirill Tkhai <ktkhai@parallels.com>
+Cc: Juri Lelli <juri.lelli@gmail.com>
+Fixes: 3960c8c0c789 ("sched: Make dl_task_time() use task_rq_lock()")
+Link: http://lkml.kernel.org/r/1427792017-7356-1-git-send-email-juri.lelli@arm.com
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ kernel/sched/deadline.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 3fa8fa6..f670cbb 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
+-	rq = task_rq_lock(current, &flags);
++	rq = task_rq_lock(p, &flags);
+ 
+ 	/*
+ 	 * We need to take care of several possible races here:
+@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 		push_dl_task(rq);
+ #endif
+ unlock:
+-	task_rq_unlock(rq, current, &flags);
++	task_rq_unlock(rq, p, &flags);
+ 
+ 	return HRTIMER_NORESTART;
+ }
+-- 
+2.3.6
+
+
+From 9279e1f98b13d5e5b40805114896ec33313ad019 Mon Sep 17 00:00:00 2001
+From: Sabrina Dubroca <sd@queasysnail.net>
+Date: Thu, 26 Feb 2015 05:35:41 +0000
+Subject: [PATCH 207/219] e1000: add dummy allocator to fix race condition
+ between mtu change and netpoll
+Cc: mpagano@gentoo.org
+
+commit 08e8331654d1d7b2c58045e549005bc356aa7810 upstream.
+
+There is a race condition between e1000_change_mtu's cleanups and
+netpoll, when we change the MTU across the jumbo threshold:
+
+Changing MTU frees all the rx buffers:
+    e1000_change_mtu -> e1000_down -> e1000_clean_all_rx_rings ->
+        e1000_clean_rx_ring
+
+Then, close to the end of e1000_change_mtu:
+    pr_info -> ... -> netpoll_poll_dev -> e1000_clean ->
+        e1000_clean_rx_irq -> e1000_alloc_rx_buffers -> e1000_alloc_frag
+
+And when we come back to do the rest of the MTU change:
+    e1000_up -> e1000_configure -> e1000_configure_rx ->
+        e1000_alloc_jumbo_rx_buffers
+
+alloc_jumbo finds the buffers already != NULL, since data (shared with
+page in e1000_rx_buffer->rxbuf) has been re-alloc'd, but it's garbage,
+or at least not what is expected when in jumbo state.
+
+This results in an unusable adapter (packets don't get through), and a
+NULL pointer dereference on the next call to e1000_clean_rx_ring
+(other mtu change, link down, shutdown):
+
+BUG: unable to handle kernel NULL pointer dereference at           (null)
+IP: [<ffffffff81194d6e>] put_compound_page+0x7e/0x330
+
+    [...]
+
+Call Trace:
+ [<ffffffff81195445>] put_page+0x55/0x60
+ [<ffffffff815d9f44>] e1000_clean_rx_ring+0x134/0x200
+ [<ffffffff815da055>] e1000_clean_all_rx_rings+0x45/0x60
+ [<ffffffff815df5e0>] e1000_down+0x1c0/0x1d0
+ [<ffffffff811e2260>] ? deactivate_slab+0x7f0/0x840
+ [<ffffffff815e21bc>] e1000_change_mtu+0xdc/0x170
+ [<ffffffff81647050>] dev_set_mtu+0xa0/0x140
+ [<ffffffff81664218>] do_setlink+0x218/0xac0
+ [<ffffffff814459e9>] ? nla_parse+0xb9/0x120
+ [<ffffffff816652d0>] rtnl_newlink+0x6d0/0x890
+ [<ffffffff8104f000>] ? kvm_clock_read+0x20/0x40
+ [<ffffffff810a2068>] ? sched_clock_cpu+0xa8/0x100
+ [<ffffffff81663802>] rtnetlink_rcv_msg+0x92/0x260
+
+By setting the allocator to a dummy version, netpoll can't mess up our
+rx buffers.  The allocator is set back to a sane value in
+e1000_configure_rx.
+
+Fixes: edbbb3ca1077 ("e1000: implement jumbo receive with partial descriptors")
+Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
+Tested-by: Aaron Brown <aaron.f.brown@intel.com>
+Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/ethernet/intel/e1000/e1000_main.c | 10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 7f997d3..a71c446 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -144,6 +144,11 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
+ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
+ 				     struct e1000_rx_ring *rx_ring,
+ 				     int *work_done, int work_to_do);
++static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
++					 struct e1000_rx_ring *rx_ring,
++					 int cleaned_count)
++{
++}
+ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
+ 				   struct e1000_rx_ring *rx_ring,
+ 				   int cleaned_count);
+@@ -3552,8 +3557,11 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ 		msleep(1);
+ 	/* e1000_down has a dependency on max_frame_size */
+ 	hw->max_frame_size = max_frame;
+-	if (netif_running(netdev))
++	if (netif_running(netdev)) {
++		/* prevent buffers from being reallocated */
++		adapter->alloc_rx_buf = e1000_alloc_dummy_rx_buffers;
+ 		e1000_down(adapter);
++	}
+ 
+ 	/* NOTE: netdev_alloc_skb reserves 16 bytes, and typically NET_IP_ALIGN
+ 	 * means we reserve 2 more, this pushes us to allocate from the next
+-- 
+2.3.6
+
+
+From dada7797e4595606cf730600d8c9a03955a8264b Mon Sep 17 00:00:00 2001
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Sat, 21 Mar 2015 07:41:04 +0100
+Subject: [PATCH 208/219] mac80211: send AP probe as unicast again
+Cc: mpagano@gentoo.org
+
+commit a73f8e21f3f93159bc19e154e8f50891c22c11db upstream.
+
+Louis reported that a static checker was complaining that
+the 'dst' variable was set (multiple times) but not used.
+This is due to a previous commit having removed the usage
+(apparently erroneously), so add it back.
+
+Fixes: a344d6778a98 ("mac80211: allow drivers to support NL80211_SCAN_FLAG_RANDOM_ADDR")
+Reported-by: Louis Langholtz <lou_langholtz@me.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/mac80211/mlme.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 142f66a..0ca013d 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2260,7 +2260,7 @@ static void ieee80211_mgd_probe_ap_send(struct ieee80211_sub_if_data *sdata)
+ 		else
+ 			ssid_len = ssid[1];
+ 
+-		ieee80211_send_probe_req(sdata, sdata->vif.addr, NULL,
++		ieee80211_send_probe_req(sdata, sdata->vif.addr, dst,
+ 					 ssid + 2, ssid_len, NULL,
+ 					 0, (u32) -1, true, 0,
+ 					 ifmgd->associated->channel, false);
+-- 
+2.3.6
+
+
+From e86ecd8a7bbc590987b4046c523d8caaef8f8b5f Mon Sep 17 00:00:00 2001
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Thu, 12 Mar 2015 17:21:42 +0100
+Subject: [PATCH 209/219] ebpf: verifier: check that call reg with ARG_ANYTHING
+ is initialized
+Cc: mpagano@gentoo.org
+
+commit 80f1d68ccba70b1060c9c7360ca83da430f66bed upstream.
+
+I noticed that a helper function with argument type ARG_ANYTHING does
+not need to have an initialized value (register).
+
+This can, in the worst case, lead to unintended stack memory leakage in future
+helper functions if they are not carefully designed, or unintended
+application behaviour in case the application developer was not careful
+enough to match a correct helper function signature in the API.
+
+The underlying issue is that ARG_ANYTHING should actually be split
+into two different semantics:
+
+  1) ARG_DONTCARE for function arguments that the helper function
+     does not care about (in other words: the default for unused
+     function arguments), and
+
+  2) ARG_ANYTHING that is an argument actually being used by a
+     helper function and *guaranteed* to be an initialized register.
+
+The current risk is low: ARG_ANYTHING is only used for the 'flags'
+argument (r4) in bpf_map_update_elem() that internally does strict
+checking.
+
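+For reference, that helper declares its arguments roughly like this (a
+sketch of the bpf_func_proto entry; field values assumed from
+kernel/bpf/helpers.c of this era):
+
+  static const struct bpf_func_proto bpf_map_update_elem_proto = {
+    .func      = bpf_map_update_elem,
+    .ret_type  = RET_INTEGER,
+    .arg1_type = ARG_CONST_MAP_PTR,
+    .arg2_type = ARG_PTR_TO_MAP_KEY,
+    .arg3_type = ARG_PTR_TO_MAP_VALUE,
+    .arg4_type = ARG_ANYTHING, /* r4 is used, so it must be initialized */
+  };
+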
+Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Alexei Starovoitov <ast@plumgrid.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ include/linux/bpf.h   | 4 +++-
+ kernel/bpf/verifier.c | 5 ++++-
+ 2 files changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bbfceb7..33b52fb 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -48,7 +48,7 @@ struct bpf_map *bpf_map_get(struct fd f);
+ 
+ /* function argument constraints */
+ enum bpf_arg_type {
+-	ARG_ANYTHING = 0,	/* any argument is ok */
++	ARG_DONTCARE = 0,	/* unused argument in helper function */
+ 
+ 	/* the following constraints used to prototype
+ 	 * bpf_map_lookup/update/delete_elem() functions
+@@ -62,6 +62,8 @@ enum bpf_arg_type {
+ 	 */
+ 	ARG_PTR_TO_STACK,	/* any pointer to eBPF program stack */
+ 	ARG_CONST_STACK_SIZE,	/* number of bytes accessed from stack */
++
++	ARG_ANYTHING,		/* any (initialized) argument is ok */
+ };
+ 
+ /* type of values returned from helper functions */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 36508e6..5d8ea3d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -755,7 +755,7 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 	enum bpf_reg_type expected_type;
+ 	int err = 0;
+ 
+-	if (arg_type == ARG_ANYTHING)
++	if (arg_type == ARG_DONTCARE)
+ 		return 0;
+ 
+ 	if (reg->type == NOT_INIT) {
+@@ -763,6 +763,9 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 		return -EACCES;
+ 	}
+ 
++	if (arg_type == ARG_ANYTHING)
++		return 0;
++
+ 	if (arg_type == ARG_PTR_TO_STACK || arg_type == ARG_PTR_TO_MAP_KEY ||
+ 	    arg_type == ARG_PTR_TO_MAP_VALUE) {
+ 		expected_type = PTR_TO_STACK;
+-- 
+2.3.6
+
+
+From 0b97a15f6fedf422d276245866319990c2c771c5 Mon Sep 17 00:00:00 2001
+From: David Rientjes <rientjes@google.com>
+Date: Tue, 14 Apr 2015 15:46:58 -0700
+Subject: [PATCH 210/219] mm, thp: really limit transparent hugepage allocation
+ to local node
+Cc: mpagano@gentoo.org
+
+commit 5265047ac30191ea24b16503165000c225f54feb upstream.
+
+Commit 077fcf116c8c ("mm/thp: allocate transparent hugepages on local
+node") restructured alloc_hugepage_vma() with the intent of only
+allocating transparent hugepages locally when there was not an effective
+interleave mempolicy.
+
+alloc_pages_exact_node() does not limit the allocation to the single node,
+however, but rather prefers it.  This is because __GFP_THISNODE is not set
+which would cause the node-local nodemask to be passed.  Without it, only
+a nodemask that prefers the local node is passed.
+
+Fix this by passing __GFP_THISNODE and falling back to small pages when
+the allocation fails.
+
+Commit 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target
+node") suffers from a similar problem for khugepaged, which is also fixed.
+
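+The distinction in one line (sketch):
+
+  /* prefers nid, but may fall back to other nodes: */
+  page = alloc_pages_exact_node(nid, gfp, order);
+  /* fails rather than going off-node: */
+  page = alloc_pages_exact_node(nid, gfp | __GFP_THISNODE, order);
+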
+Fixes: 077fcf116c8c ("mm/thp: allocate transparent hugepages on local node")
+Fixes: 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target node")
+Signed-off-by: David Rientjes <rientjes@google.com>
+Acked-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: Christoph Lameter <cl@linux.com>
+Cc: Pekka Enberg <penberg@kernel.org>
+Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Mel Gorman <mgorman@suse.de>
+Cc: Pravin Shelar <pshelar@nicira.com>
+Cc: Jarno Rajahalme <jrajahalme@nicira.com>
+Cc: Li Zefan <lizefan@huawei.com>
+Cc: Greg Thelen <gthelen@google.com>
+Cc: Tejun Heo <tj@kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ mm/huge_memory.c | 9 +++++++--
+ mm/mempolicy.c   | 3 ++-
+ 2 files changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6817b03..956d4db 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2316,8 +2316,14 @@ static struct page
+ 		       struct vm_area_struct *vma, unsigned long address,
+ 		       int node)
+ {
++	gfp_t flags;
++
+ 	VM_BUG_ON_PAGE(*hpage, *hpage);
+ 
++	/* Only allocate from the target node */
++	flags = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
++	        __GFP_THISNODE;
++
+ 	/*
+ 	 * Before allocating the hugepage, release the mmap_sem read lock.
+ 	 * The allocation can take potentially a long time if it involves
+@@ -2326,8 +2332,7 @@ static struct page
+ 	 */
+ 	up_read(&mm->mmap_sem);
+ 
+-	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
+-		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
++	*hpage = alloc_pages_exact_node(node, flags, HPAGE_PMD_ORDER);
+ 	if (unlikely(!*hpage)) {
+ 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+ 		*hpage = ERR_PTR(-ENOMEM);
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 4721046..de5dc5e 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1985,7 +1985,8 @@ retry_cpuset:
+ 		nmask = policy_nodemask(gfp, pol);
+ 		if (!nmask || node_isset(node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = alloc_pages_exact_node(node, gfp, order);
++			page = alloc_pages_exact_node(node,
++						gfp | __GFP_THISNODE, order);
+ 			goto out;
+ 		}
+ 	}
+-- 
+2.3.6
+
+
+From 2649caa31cc3143b2ad3039ac581dacd7529a631 Mon Sep 17 00:00:00 2001
+From: mancha security <mancha1@zoho.com>
+Date: Wed, 18 Mar 2015 18:47:25 +0100
+Subject: [PATCH 211/219] lib: memzero_explicit: use barrier instead of
+ OPTIMIZER_HIDE_VAR
+Cc: mpagano@gentoo.org
+
+commit 0b053c9518292705736329a8fe20ef4686ffc8e9 upstream.
+
+OPTIMIZER_HIDE_VAR(), as defined when using gcc, is insufficient to
+ensure protection from dead store optimization.
+
+For the random driver and crypto drivers, calls are emitted ...
+
+  $ gdb vmlinux
+  (gdb) disassemble memzero_explicit
+  Dump of assembler code for function memzero_explicit:
+    0xffffffff813a18b0 <+0>:	push   %rbp
+    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
+    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
+    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
+    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
+    0xffffffff813a18be <+14>:	pop    %rbp
+    0xffffffff813a18bf <+15>:	retq
+  End of assembler dump.
+
+  (gdb) disassemble extract_entropy
+  [...]
+    0xffffffff814a5009 <+313>:	mov    %r12,%rdi
+    0xffffffff814a500c <+316>:	mov    $0xa,%esi
+    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0 <memzero_explicit>
+    0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
+  [...]
+
+... but in case we use facilities such as LTO in the future,
+OPTIMIZER_HIDE_VAR() is not sufficient to prevent gcc from possibly
+evicting the memset(). We have to use a compiler barrier instead.
+
+Minimal test example when we assume memzero_explicit() would *not* be
+a call, but would have been *inlined* instead:
+
+  static inline void memzero_explicit(void *s, size_t count)
+  {
+    memset(s, 0, count);
+    <foo>
+  }
+
+  int main(void)
+  {
+    char buff[20];
+
+    snprintf(buff, sizeof(buff) - 1, "test");
+    printf("%s", buff);
+
+    memzero_explicit(buff, sizeof(buff));
+    return 0;
+  }
+
+With <foo> := OPTIMIZER_HIDE_VAR():
+
+  (gdb) disassemble main
+  Dump of assembler code for function main:
+  [...]
+   0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
+   0x0000000000400469 <+41>:	xor    %eax,%eax
+   0x000000000040046b <+43>:	add    $0x28,%rsp
+   0x000000000040046f <+47>:	retq
+  End of assembler dump.
+
+With <foo> := barrier():
+
+  (gdb) disassemble main
+  Dump of assembler code for function main:
+  [...]
+   0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
+   0x0000000000400469 <+41>:	movq   $0x0,(%rsp)
+   0x0000000000400471 <+49>:	movq   $0x0,0x8(%rsp)
+   0x000000000040047a <+58>:	movl   $0x0,0x10(%rsp)
+   0x0000000000400482 <+66>:	xor    %eax,%eax
+   0x0000000000400484 <+68>:	add    $0x28,%rsp
+   0x0000000000400488 <+72>:	retq
+  End of assembler dump.
+
+As can be seen, movq, movq, movl are being emitted inlined
+via memset().
+
+Reference: http://thread.gmane.org/gmane.linux.kernel.cryptoapi/13764/
+Fixes: d4c5efdb9777 ("random: add and use memzero_explicit() for clearing data")
+Cc: Theodore Ts'o <tytso@mit.edu>
+Signed-off-by: mancha security <mancha1@zoho.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
+Acked-by: Stephan Mueller <smueller@chronox.de>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ lib/string.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/string.c b/lib/string.c
+index ce81aae..a579201 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -607,7 +607,7 @@ EXPORT_SYMBOL(memset);
+ void memzero_explicit(void *s, size_t count)
+ {
+ 	memset(s, 0, count);
+-	OPTIMIZER_HIDE_VAR(s);
++	barrier();
+ }
+ EXPORT_SYMBOL(memzero_explicit);
+ 
+-- 
+2.3.6
+
+
+From 1cd176dfd9e5e4d0cae0545fa8c56ecd582b2e9a Mon Sep 17 00:00:00 2001
+From: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
+Date: Fri, 13 Mar 2015 15:17:14 +0800
+Subject: [PATCH 212/219] wl18xx: show rx_frames_per_rates as an array as it
+ really is
+Cc: mpagano@gentoo.org
+
+commit a3fa71c40f1853d0c27e8f5bc01a722a705d9682 upstream.
+
+In struct wl18xx_acx_rx_rate_stat, the rx_frames_per_rates field is an
+array, not a number.  This means WL18XX_DEBUGFS_FWSTATS_FILE can't be
+used to display this field in debugfs (it would display a pointer, not
+the actual data).  Use WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY instead.
+
+This bug has been found by adding a __printf attribute to
+wl1271_format_buffer.  gcc complained about "format '%u' expects
+argument of type 'unsigned int', but argument 5 has type 'u32 *'".
+
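+The mismatch is easy to see in reduced form (a sketch, not the actual
+macro expansion):
+
+  u32 rx_frames_per_rates[50];
+  /* "%u" expects an unsigned int, but the array decays to u32 *,
+   * so only the pointer value would ever be printed: */
+  wl1271_format_buffer(userbuf, count, ppos, "%u", rx_frames_per_rates);
+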
+Fixes: c5d94169e818 ("wl18xx: use new fw stats structures")
+Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
+Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/net/wireless/ti/wl18xx/debugfs.c | 2 +-
+ drivers/net/wireless/ti/wlcore/debugfs.h | 4 ++--
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.c b/drivers/net/wireless/ti/wl18xx/debugfs.c
+index c93fae9..5fbd223 100644
+--- a/drivers/net/wireless/ti/wl18xx/debugfs.c
++++ b/drivers/net/wireless/ti/wl18xx/debugfs.c
+@@ -139,7 +139,7 @@ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, protection_filter, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, accum_arp_pend_requests, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, max_arp_queue_dep, "%u");
+ 
+-WL18XX_DEBUGFS_FWSTATS_FILE(rx_rate, rx_frames_per_rates, "%u");
++WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(rx_rate, rx_frames_per_rates, 50);
+ 
+ WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, tx_agg_vs_rate,
+ 				  AGGR_STATS_TX_AGG*AGGR_STATS_TX_RATE);
+diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
+index 0f2cfb0..bf14676 100644
+--- a/drivers/net/wireless/ti/wlcore/debugfs.h
++++ b/drivers/net/wireless/ti/wlcore/debugfs.h
+@@ -26,8 +26,8 @@
+ 
+ #include "wlcore.h"
+ 
+-int wl1271_format_buffer(char __user *userbuf, size_t count,
+-			 loff_t *ppos, char *fmt, ...);
++__printf(4, 5) int wl1271_format_buffer(char __user *userbuf, size_t count,
++					loff_t *ppos, char *fmt, ...);
+ 
+ int wl1271_debugfs_init(struct wl1271 *wl);
+ void wl1271_debugfs_exit(struct wl1271 *wl);
+-- 
+2.3.6
+
+
+From 8a7e1640e89ee191d677e2d994476ce68e2160ea Mon Sep 17 00:00:00 2001
+From: "Vutla, Lokesh" <lokeshvutla@ti.com>
+Date: Tue, 31 Mar 2015 09:52:25 +0530
+Subject: [PATCH 213/219] crypto: omap-aes - Fix support for unequal lengths
+Cc: mpagano@gentoo.org
+
+commit 6d7e7e02a044025237b6f62a20521170b794537f upstream.
+
+For cases where the total length of the input SG list is not the same
+as the length of the input data for encryption, the omap-aes driver
+crashes. This happens when IPsec tries to use the omap-aes driver.
+
+To avoid this, we copy all the pages from the input SG list
+into a contiguous buffer and prepare a single-element SG list
+for this buffer, with its length set to the total bytes to crypt,
+the same thing that is already done for unaligned lengths.
+
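+Building that single-element SG list is the standard scatterlist pattern.
+A sketch (buf_in and total are assumed names for the bounce buffer and
+the total byte count):
+
+  sg_init_table(&dd->in_sgl, 1);
+  sg_set_buf(&dd->in_sgl, buf_in, total);
+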
+Fixes: 6242332ff2f3 ("crypto: omap-aes - Add support for cases of unaligned lengths")
+Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/crypto/omap-aes.c | 14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 42f95a4..9a28b7e 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -554,15 +554,23 @@ static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
+ 	return err;
+ }
+ 
+-static int omap_aes_check_aligned(struct scatterlist *sg)
++static int omap_aes_check_aligned(struct scatterlist *sg, int total)
+ {
++	int len = 0;
++
+ 	while (sg) {
+ 		if (!IS_ALIGNED(sg->offset, 4))
+ 			return -1;
+ 		if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
+ 			return -1;
++
++		len += sg->length;
+ 		sg = sg_next(sg);
+ 	}
++
++	if (len != total)
++		return -1;
++
+ 	return 0;
+ }
+ 
+@@ -633,8 +641,8 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
+ 	dd->in_sg = req->src;
+ 	dd->out_sg = req->dst;
+ 
+-	if (omap_aes_check_aligned(dd->in_sg) ||
+-	    omap_aes_check_aligned(dd->out_sg)) {
++	if (omap_aes_check_aligned(dd->in_sg, dd->total) ||
++	    omap_aes_check_aligned(dd->out_sg, dd->total)) {
+ 		if (omap_aes_copy_sgs(dd))
+ 			pr_err("Failed to copy SGs for unaligned cases\n");
+ 		dd->sgs_copied = 1;
+-- 
+2.3.6
+
+
+From 78775b31ea25fc6d25f2444c634b2eec0ed90bca Mon Sep 17 00:00:00 2001
+From: Nishanth Menon <nm@ti.com>
+Date: Sat, 7 Mar 2015 03:39:05 -0600
+Subject: [PATCH 214/219] C6x: time: Ensure consistency in __init
+Cc: mpagano@gentoo.org
+
+commit f4831605f2dacd12730fe73961c77253cc2ea425 upstream.
+
+time_init invokes timer64_init (which carries an __init annotation).
+Since all of these are invoked at init time, let's maintain
+consistency by ensuring time_init is marked appropriately
+as well.
+
+This fixes the following warning with CONFIG_DEBUG_SECTION_MISMATCH=y
+
+WARNING: vmlinux.o(.text+0x3bfc): Section mismatch in reference from the function time_init() to the function .init.text:timer64_init()
+The function time_init() references
+the function __init timer64_init().
+This is often because time_init lacks a __init
+annotation or the annotation of timer64_init is wrong.
+
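+A minimal sketch of the mismatch: the callee's text lives in a section
+that is discarded after boot, so the caller must be __init too:
+
+  void __init timer64_init(void);  /* lives in .init.text */
+
+  void __init time_init(void)      /* now also .init.text */
+  {
+    timer64_init();                /* no cross-section call */
+  }
+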
+Fixes: 546a39546c64 ("C6X: time management")
+Signed-off-by: Nishanth Menon <nm@ti.com>
+Signed-off-by: Mark Salter <msalter@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ arch/c6x/kernel/time.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/c6x/kernel/time.c b/arch/c6x/kernel/time.c
+index 356ee84..04845aa 100644
+--- a/arch/c6x/kernel/time.c
++++ b/arch/c6x/kernel/time.c
+@@ -49,7 +49,7 @@ u64 sched_clock(void)
+ 	return (tsc * sched_clock_multiplier) >> SCHED_CLOCK_SHIFT;
+ }
+ 
+-void time_init(void)
++void __init time_init(void)
+ {
+ 	u64 tmp = (u64)NSEC_PER_SEC << SCHED_CLOCK_SHIFT;
+ 
+-- 
+2.3.6
+
+
+From df0bffebd40ba332f01193e2b6694042a0a2f56c Mon Sep 17 00:00:00 2001
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Thu, 16 Apr 2015 12:48:35 -0700
+Subject: [PATCH 215/219] memstick: mspro_block: add missing curly braces
+Cc: mpagano@gentoo.org
+
+commit 13f6b191aaa11c7fd718d35a0c565f3c16bc1d99 upstream.
+
+Judging by the indentation, we can see the curly braces were obviously intended.
+This is a static checker fix, but my guess is that we don't read enough
+bytes, because we don't calculate "t_len" correctly.
+
+Fixes: f1d82698029b ('memstick: use fully asynchronous request processing')
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Cc: Alex Dubov <oakad@yahoo.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/memstick/core/mspro_block.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index fc145d2..922a750 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -758,7 +758,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 
+ 		if (error || (card->current_mrq.tpc == MSPRO_CMD_STOP)) {
+ 			if (msb->data_dir == READ) {
+-				for (cnt = 0; cnt < msb->current_seg; cnt++)
++				for (cnt = 0; cnt < msb->current_seg; cnt++) {
+ 					t_len += msb->req_sg[cnt].length
+ 						 / msb->page_size;
+ 
+@@ -766,6 +766,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 						t_len += msb->current_page - 1;
+ 
+ 					t_len *= msb->page_size;
++				}
+ 			}
+ 		} else
+ 			t_len = blk_rq_bytes(msb->block_req);
+-- 
+2.3.6
+
+
+From 6361409a1274060993b246c688c24a7c863c7eeb Mon Sep 17 00:00:00 2001
+From: Linus Walleij <linus.walleij@linaro.org>
+Date: Wed, 18 Feb 2015 17:12:18 +0100
+Subject: [PATCH 216/219] drivers: platform: parse IRQ flags from resources
+Cc: mpagano@gentoo.org
+
+commit 7085a7401ba54e92bbb5aa24d6f428071e18e509 upstream.
+
+This fixes a regression from the net subsystem:
+After commit d52fdbb735c36a209f36a628d40ca9185b349ba7
+"smc91x: retrieve IRQ and trigger flags in a modern way"
+a regression would appear on some legacy platforms such
+as the ARM PXA Zylonite that specify IRQ resources like
+this:
+
+static struct resource r = {
+       .start  = X,
+       .end    = X,
+       .flags  = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
+};
+
+The previous code would retrieve the resource and parse
+the high edge setting in the SMC91x driver, a use pattern
+that means every driver specifying an IRQ flag from a
+static resource needs to parse resource flags and apply
+them at runtime.
+
+As we switched the code to use IRQ descriptors to retrieve
+the the trigger type like this:
+
+  irqd_get_trigger_type(irq_get_irq_data(...));
+
+the code would work for new platforms using e.g. device
+tree, as the backing irq descriptor would have its flags
+properly set, whereas these old-style static resources
+never assign the trigger flags to the corresponding
+IRQ descriptor.
+
+To make the behaviour identical on modern device tree
+and legacy static platform data platforms, modify
+platform_get_irq() to assign the trigger flags to the
+irq descriptor when a client looks up an IRQ from static
+resources.
+
+Fixes: d52fdbb735c3 ("smc91x: retrieve IRQ and trigger flags in a modern way")
+Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/base/platform.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index 9421fed..e68ab79 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -101,6 +101,15 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ 	}
+ 
+ 	r = platform_get_resource(dev, IORESOURCE_IRQ, num);
++	/*
++	 * The resources may pass trigger flags to the irqs that need
++	 * to be set up. It so happens that the trigger flags for
++	 * IORESOURCE_BITS correspond 1-to-1 to the IRQF_TRIGGER*
++	 * settings.
++	 */
++	if (r && r->flags & IORESOURCE_BITS)
++		irqd_set_trigger_type(irq_get_irq_data(r->start),
++				      r->flags & IORESOURCE_BITS);
+ 
+ 	return r ? r->start : -ENXIO;
+ #endif
+-- 
+2.3.6
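
To make the before/after contract concrete, here is a hedged sketch of a
legacy board file and its driver once this change is in place. All names
below are invented for illustration; only platform_get_irq(), request_irq()
and the IORESOURCE_* constants are real kernel interfaces:

  /* Hypothetical legacy platform data: the trigger type lives only in
   * the static resource flags, never in an irq descriptor.
   */
  static struct resource example_irq_resource = {
          .start = 42,            /* made-up IRQ number */
          .end   = 42,
          .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
  };

  /* Hypothetical probe(): platform_get_irq() now copies the
   * IORESOURCE_IRQ_* bits into the irq descriptor, so requesting the
   * interrupt with 0 as the trigger flags inherits the rising-edge
   * setting instead of silently dropping it.
   */
  static int example_probe(struct platform_device *pdev)
  {
          int irq = platform_get_irq(pdev, 0);

          if (irq < 0)
                  return irq;
          return request_irq(irq, example_handler, 0, "example", NULL);
  }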
+
+
+From 4c0a56b2ee7b3a3741339e943acd2692c146fcb1 Mon Sep 17 00:00:00 2001
+From: Junjie Mao <junjie_mao@yeah.net>
+Date: Wed, 28 Jan 2015 10:02:44 +0800
+Subject: [PATCH 217/219] driver core: bus: Goto appropriate labels on failure
+ in bus_add_device
+Cc: mpagano@gentoo.org
+
+commit 1c34203a1496d1849ba978021b878b3447d433c8 upstream.
+
+It is not necessary to call device_remove_groups() when device_add_groups()
+fails.
+
+The group added by device_add_groups() should be removed if sysfs_create_link()
+fails.
+
+Fixes: fa6fdb33b486 ("driver core: bus_type: add dev_groups")
+Signed-off-by: Junjie Mao <junjie_mao@yeah.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/base/bus.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 876bae5..79bc203 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -515,11 +515,11 @@ int bus_add_device(struct device *dev)
+ 			goto out_put;
+ 		error = device_add_groups(dev, bus->dev_groups);
+ 		if (error)
+-			goto out_groups;
++			goto out_id;
+ 		error = sysfs_create_link(&bus->p->devices_kset->kobj,
+ 						&dev->kobj, dev_name(dev));
+ 		if (error)
+-			goto out_id;
++			goto out_groups;
+ 		error = sysfs_create_link(&dev->kobj,
+ 				&dev->bus->p->subsys.kobj, "subsystem");
+ 		if (error)
+-- 
+2.3.6
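
The label swap above restores the usual kernel error-unwind idiom: teardown
labels mirror the setup order, so each failing step jumps to the label that
releases exactly what has already been acquired. A generic sketch, with
hypothetical acquire_*/release_* helpers rather than driver-core calls:

  int acquire_a(void), acquire_b(void), acquire_c(void);
  void release_a(void), release_b(void);

  int example_setup(void)
  {
          int error;

          error = acquire_a();
          if (error)
                  goto out;
          error = acquire_b();
          if (error)
                  goto undo_a;    /* b failed: only a is held */
          error = acquire_c();
          if (error)
                  goto undo_b;    /* c failed: release b, then a */
          return 0;

  undo_b:
          release_b();
  undo_a:
          release_a();
  out:
          return error;
  }

Swapping two of the labels, as had happened in bus_add_device(), either
leaks a resource or releases one that was never acquired.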
+
+
+From cf1cab07a20abcfa17f0cf431d103471ebd7b33c Mon Sep 17 00:00:00 2001
+From: Florian Westphal <fw@strlen.de>
+Date: Wed, 1 Apr 2015 22:36:27 +0200
+Subject: [PATCH 218/219] netfilter: bridge: really save frag_max_size between
+ PRE and POST_ROUTING
+Cc: mpagano@gentoo.org
+
+commit 0b67c43ce36a9964f1d5e3f973ee19eefd3f9f8f upstream.
+
+We also need to save/restore it in the forward path, else the
+br_parse_ip_options call will zero frag_max_size as well.
+
+Fixes: 93fdd47e5 ('bridge: Save frag_max_size between PRE_ROUTING and POST_ROUTING')
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ net/bridge/br_netfilter.c | 17 +++++++++++++++--
+ 1 file changed, 15 insertions(+), 2 deletions(-)
+
+diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
+index 0ee453f..f371cbf 100644
+--- a/net/bridge/br_netfilter.c
++++ b/net/bridge/br_netfilter.c
+@@ -651,6 +651,13 @@ static int br_nf_forward_finish(struct sk_buff *skb)
+ 	struct net_device *in;
+ 
+ 	if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) {
++		int frag_max_size;
++
++		if (skb->protocol == htons(ETH_P_IP)) {
++			frag_max_size = IPCB(skb)->frag_max_size;
++			BR_INPUT_SKB_CB(skb)->frag_max_size = frag_max_size;
++		}
++
+ 		in = nf_bridge->physindev;
+ 		if (nf_bridge->mask & BRNF_PKT_TYPE) {
+ 			skb->pkt_type = PACKET_OTHERHOST;
+@@ -710,8 +717,14 @@ static unsigned int br_nf_forward_ip(const struct nf_hook_ops *ops,
+ 		nf_bridge->mask |= BRNF_PKT_TYPE;
+ 	}
+ 
+-	if (pf == NFPROTO_IPV4 && br_parse_ip_options(skb))
+-		return NF_DROP;
++	if (pf == NFPROTO_IPV4) {
++		int frag_max = BR_INPUT_SKB_CB(skb)->frag_max_size;
++
++		if (br_parse_ip_options(skb))
++			return NF_DROP;
++
++		IPCB(skb)->frag_max_size = frag_max;
++	}
+ 
+ 	/* The physdev module checks on this */
+ 	nf_bridge->mask |= BRNF_BRIDGED;
+-- 
+2.3.6
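
Pieced together from the two hunks above, the round trip looks roughly like
this (a sketch, not a verbatim copy of the resulting br_netfilter.c): the
IPv4 value is stashed in the bridge's per-skb control block before
br_parse_ip_options() re-initialises IPCB(skb), then written back.

  /* br_nf_forward_finish(): save the value for the POST_ROUTING leg */
  if (skb->protocol == htons(ETH_P_IP))
          BR_INPUT_SKB_CB(skb)->frag_max_size = IPCB(skb)->frag_max_size;

  /* br_nf_forward_ip(): restore it after the CB-zeroing call */
  if (pf == NFPROTO_IPV4) {
          int frag_max = BR_INPUT_SKB_CB(skb)->frag_max_size;

          if (br_parse_ip_options(skb))
                  return NF_DROP;
          IPCB(skb)->frag_max_size = frag_max;
  }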
+
+
+From 072cab659c9368586d6417cfd6ec2d2c68469c67 Mon Sep 17 00:00:00 2001
+From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Date: Wed, 6 May 2015 22:04:23 +0200
+Subject: [PATCH 219/219] Linux 4.0.2
+Cc: mpagano@gentoo.org
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile b/Makefile
+index f499cd2..0649a60 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+-- 
+2.3.6
+


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-05-07 19:37 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-05-07 19:37 UTC (permalink / raw
  To: gentoo-commits

commit:     6896bc5b6d9e445f5cdb7f401ba0391bb32ad436
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May  7 19:37:11 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May  7 19:37:11 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6896bc5b

Fix linux patch 4.0.2

 1001_linux-4.0.2.patch | 23654 +++++++++++++++--------------------------------
 1 file changed, 7692 insertions(+), 15962 deletions(-)

diff --git a/1001_linux-4.0.2.patch b/1001_linux-4.0.2.patch
index 5650c4e..38a75b2 100644
--- a/1001_linux-4.0.2.patch
+++ b/1001_linux-4.0.2.patch
@@ -1,2977 +1,1474 @@
-From 7bebf970047f59c16ddd5660b54562c8bcd40074 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Sebastian=20P=C3=B6hn?= <sebastian.poehn@gmail.com>
-Date: Mon, 20 Apr 2015 09:19:20 +0200
-Subject: [PATCH 001/219] ip_forward: Drop frames with attached skb->sk
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 2ab957492d13bb819400ac29ae55911d50a82a13 ]
-
-Initial discussion was:
-[FYI] xfrm: Don't lookup sk_policy for timewait sockets
-
-Forwarded frames should not have a socket attached. In particular,
-tw sockets will lead to panics later on in the stack.
-
-This was observed with TPROXY assigning a tw socket and broken
-(misconfigured) policy routing. As a result, the frame enters the
-forwarding path instead of the input path. We cannot solve this in
-TPROXY, as it cannot know that policy routing is broken.
-
-v2:
-Remove useless comment
-
-Signed-off-by: Sebastian Poehn <sebastian.poehn@gmail.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/ipv4/ip_forward.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
-index d9bc28a..53bd53f 100644
---- a/net/ipv4/ip_forward.c
-+++ b/net/ipv4/ip_forward.c
-@@ -82,6 +82,9 @@ int ip_forward(struct sk_buff *skb)
- 	if (skb->pkt_type != PACKET_HOST)
- 		goto drop;
+diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
+index 99ca40e..5c204df 100644
+--- a/Documentation/networking/scaling.txt
++++ b/Documentation/networking/scaling.txt
+@@ -282,7 +282,7 @@ following is true:
  
-+	if (unlikely(skb->sk))
-+		goto drop;
-+
- 	if (skb_warn_if_lro(skb))
- 		goto drop;
+ - The current CPU's queue head counter >= the recorded tail counter
+   value in rps_dev_flow[i]
+-- The current CPU is unset (equal to RPS_NO_CPU)
++- The current CPU is unset (>= nr_cpu_ids)
+ - The current CPU is offline
  
--- 
-2.3.6
-
-
-From 8a6846e3226bb475db9686590da85bcc609c75a9 Mon Sep 17 00:00:00 2001
-From: Tom Herbert <tom@herbertland.com>
-Date: Mon, 20 Apr 2015 14:10:04 -0700
-Subject: [PATCH 002/219] net: add skb_checksum_complete_unset
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 4e18b9adf2f910ec4d30b811a74a5b626e6c6125 ]
-
-This function changes ip_summed to CHECKSUM_NONE if CHECKSUM_COMPLETE
-is set. This is called to discard checksum-complete when packet
-is being modified and checksum is not pulled for headers in a layer.
-
-Signed-off-by: Tom Herbert <tom@herbertland.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- include/linux/skbuff.h | 12 ++++++++++++
- 1 file changed, 12 insertions(+)
-
-diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
-index f54d665..b5c204c 100644
---- a/include/linux/skbuff.h
-+++ b/include/linux/skbuff.h
-@@ -3013,6 +3013,18 @@ static inline bool __skb_checksum_validate_needed(struct sk_buff *skb,
-  */
- #define CHECKSUM_BREAK 76
+ After this check, the packet is sent to the (possibly updated) current
+diff --git a/Documentation/virtual/kvm/devices/s390_flic.txt b/Documentation/virtual/kvm/devices/s390_flic.txt
+index 4ceef53..d1ad9d5 100644
+--- a/Documentation/virtual/kvm/devices/s390_flic.txt
++++ b/Documentation/virtual/kvm/devices/s390_flic.txt
+@@ -27,6 +27,9 @@ Groups:
+     Copies all floating interrupts into a buffer provided by userspace.
+     When the buffer is too small it returns -ENOMEM, which is the indication
+     for userspace to try again with a bigger buffer.
++    -ENOBUFS is returned when the allocation of a kernelspace buffer has
++    failed.
++    -EFAULT is returned when copying data to userspace failed.
+     All interrupts remain pending, i.e. are not deleted from the list of
+     currently pending interrupts.
+     attr->addr contains the userspace address of the buffer into which all
+diff --git a/Makefile b/Makefile
+index f499cd2..0649a60 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
  
-+/* Unset checksum-complete
-+ *
-+ * Unset checksum complete can be done when packet is being modified
-+ * (uncompressed for instance) and checksum-complete value is
-+ * invalidated.
-+ */
-+static inline void skb_checksum_complete_unset(struct sk_buff *skb)
-+{
-+	if (skb->ip_summed == CHECKSUM_COMPLETE)
-+		skb->ip_summed = CHECKSUM_NONE;
-+}
-+
- /* Validate (init) checksum based on checksum complete.
-  *
-  * Return values:
--- 
-2.3.6
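
A hedged usage sketch: any receive path that rewrites payload bytes
(decompression, header expansion, ...) should drop a stale CHECKSUM_COMPLETE
before modifying the skb. Everything here other than
skb_checksum_complete_unset() itself is invented for illustration:

  #include <linux/skbuff.h>

  static void example_rx(struct sk_buff *skb)
  {
          /* The hardware sum was computed over the wire bytes; once we
           * rewrite them it is meaningless, so downgrade to CHECKSUM_NONE
           * and let the stack recompute where needed.
           */
          skb_checksum_complete_unset(skb);
          example_decompress(skb);        /* hypothetical helper */
  }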
-
-
-From 5a248fca60021d0e35a9de9bd0620eff840365ca Mon Sep 17 00:00:00 2001
-From: Tom Herbert <tom@herbertland.com>
-Date: Mon, 20 Apr 2015 14:10:05 -0700
-Subject: [PATCH 003/219] ppp: call skb_checksum_complete_unset in
- ppp_receive_frame
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 3dfb05340ec6676e6fc71a9ae87bbbe66d3c2998 ]
-
-Call skb_checksum_complete_unset() in the PPP receive path to discard the
-checksum-complete value. PPP does not pull the checksum for headers and
-also modifies the packet, as in VJ compression.
-
-Signed-off-by: Tom Herbert <tom@herbertland.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/ppp/ppp_generic.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
-index af034db..9d15566 100644
---- a/drivers/net/ppp/ppp_generic.c
-+++ b/drivers/net/ppp/ppp_generic.c
-@@ -1716,6 +1716,7 @@ ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
- {
- 	/* note: a 0-length skb is used as an error indication */
- 	if (skb->len > 0) {
-+		skb_checksum_complete_unset(skb);
- #ifdef CONFIG_PPP_MULTILINK
- 		/* XXX do channel-level decompression here */
- 		if (PPP_PROTO(skb) == PPP_MP)
--- 
-2.3.6
-
-
-From e1b095eb7de9dc2235c86e15be6b9d0bff56a6ab Mon Sep 17 00:00:00 2001
-From: Eric Dumazet <edumazet@google.com>
-Date: Tue, 21 Apr 2015 18:32:24 -0700
-Subject: [PATCH 004/219] tcp: fix possible deadlock in tcp_send_fin()
-Cc: mpagano@gentoo.org
-
-[ Upstream commit d83769a580f1132ac26439f50068a29b02be535e ]
-
-Using sk_stream_alloc_skb() in tcp_send_fin() is dangerous in
-case a huge process is killed by OOM, and tcp_mem[2] is hit.
-
-To be able to free memory we need to make progress, so this
-patch allows FIN packets to not care about tcp_mem[2], if
-skb allocation succeeded.
-
-In a follow-up patch, we might abort tcp_send_fin()'s infinite loop
-when TIF_MEMDIE is set on this thread, as the memory allocator
-already did its best to get extra memory.
-
-This patch reverts d22e15371811 ("tcp: fix tcp fin memory accounting")
-
-Fixes: d22e15371811 ("tcp: fix tcp fin memory accounting")
-Signed-off-by: Eric Dumazet <edumazet@google.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/ipv4/tcp_output.c | 20 +++++++++++++++++++-
- 1 file changed, 19 insertions(+), 1 deletion(-)
-
-diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
-index d520492..f911dc2 100644
---- a/net/ipv4/tcp_output.c
-+++ b/net/ipv4/tcp_output.c
-@@ -2751,6 +2751,21 @@ begin_fwd:
- 	}
- }
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index fec1fca..6c4bc53 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -167,7 +167,13 @@
  
-+/* We allow to exceed memory limits for FIN packets to expedite
-+ * connection tear down and (memory) recovery.
-+ * Otherwise tcp_send_fin() could loop forever.
-+ */
-+static void sk_forced_wmem_schedule(struct sock *sk, int size)
-+{
-+	int amt, status;
-+
-+	if (size <= sk->sk_forward_alloc)
-+		return;
-+	amt = sk_mem_pages(size);
-+	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
-+	sk_memory_allocated_add(sk, amt, &status);
-+}
+ 			macb1: ethernet@f802c000 {
+ 				phy-mode = "rmii";
++				#address-cells = <1>;
++				#size-cells = <0>;
+ 				status = "okay";
 +
- /* Send a fin.  The caller locks the socket for us.  This cannot be
-  * allowed to fail queueing a FIN frame under any circumstances.
-  */
-@@ -2773,11 +2788,14 @@ void tcp_send_fin(struct sock *sk)
- 	} else {
- 		/* Socket is locked, keep trying until memory is available. */
- 		for (;;) {
--			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
-+			skb = alloc_skb_fclone(MAX_TCP_HEADER,
-+					       sk->sk_allocation);
- 			if (skb)
- 				break;
- 			yield();
- 		}
-+		skb_reserve(skb, MAX_TCP_HEADER);
-+		sk_forced_wmem_schedule(sk, skb->truesize);
- 		/* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */
- 		tcp_init_nondata_skb(skb, tp->write_seq,
- 				     TCPHDR_ACK | TCPHDR_FIN);
--- 
-2.3.6
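
A short reading aid for sk_forced_wmem_schedule() above (the same logic as
in the diff, re-annotated; SK_MEM_QUANTUM is the unit used by socket memory
accounting):

  /* Force-charge enough quanta to cover size bytes even when the
   * tcp_mem[2] limit is hit: the FIN must be able to make progress so
   * the dying socket can eventually release its memory.
   */
  if (size <= sk->sk_forward_alloc)
          return;                          /* already covered */
  amt = sk_mem_pages(size);                /* bytes -> accounting quanta */
  sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
  sk_memory_allocated_add(sk, amt, &status);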
-
-
-From 7e72469760dd73a44e8cfd6105bf695b7572e246 Mon Sep 17 00:00:00 2001
-From: Eric Dumazet <edumazet@google.com>
-Date: Thu, 23 Apr 2015 10:42:39 -0700
-Subject: [PATCH 005/219] tcp: avoid looping in tcp_send_fin()
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 845704a535e9b3c76448f52af1b70e4422ea03fd ]
-
-The presence of an unbounded loop in tcp_send_fin() had always been hard
-to explain when analyzing crash dumps involving gigantic dying processes
-with millions of sockets.
-
-Let's try a different strategy:
-
-In case of memory pressure, try to add the FIN flag to the last packet
-in the write queue, even if that packet was already sent. The TCP stack
-will be able to deliver this FIN after a timeout event. Note that since
-this FIN is delivered by a retransmit, it also carries a Push flag
-given our current implementation.
-
-By checking sk_under_memory_pressure(), we anticipate that cooking
-many FIN packets might deplete tcp memory.
-
-In the case where we could not allocate a packet, even with a __GFP_WAIT
-allocation, not sending a FIN seems quite reasonable if it allows us
-to get rid of this socket, free memory, and not block the process from
-eventually doing other useful work.
-
-Signed-off-by: Eric Dumazet <edumazet@google.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/ipv4/tcp_output.c | 50 +++++++++++++++++++++++++++++---------------------
- 1 file changed, 29 insertions(+), 21 deletions(-)
-
-diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
-index f911dc2..9d48dc4 100644
---- a/net/ipv4/tcp_output.c
-+++ b/net/ipv4/tcp_output.c
-@@ -2753,7 +2753,8 @@ begin_fwd:
++				ethernet-phy@1 {
++					reg = <0x1>;
++				};
+ 			};
  
- /* We allow to exceed memory limits for FIN packets to expedite
-  * connection tear down and (memory) recovery.
-- * Otherwise tcp_send_fin() could loop forever.
-+ * Otherwise tcp_send_fin() could be tempted to either delay FIN
-+ * or even be forced to close flow without any FIN.
-  */
- static void sk_forced_wmem_schedule(struct sock *sk, int size)
- {
-@@ -2766,33 +2767,40 @@ static void sk_forced_wmem_schedule(struct sock *sk, int size)
- 	sk_memory_allocated_add(sk, amt, &status);
- }
+ 			dbgu: serial@ffffee00 {
+diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
+index a5441d5..3cc8b83 100644
+--- a/arch/arm/boot/dts/dove.dtsi
++++ b/arch/arm/boot/dts/dove.dtsi
+@@ -154,7 +154,7 @@
  
--/* Send a fin.  The caller locks the socket for us.  This cannot be
-- * allowed to fail queueing a FIN frame under any circumstances.
-+/* Send a FIN. The caller locks the socket for us.
-+ * We should try to send a FIN packet really hard, but eventually give up.
-  */
- void tcp_send_fin(struct sock *sk)
- {
-+	struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);
- 	struct tcp_sock *tp = tcp_sk(sk);
--	struct sk_buff *skb = tcp_write_queue_tail(sk);
--	int mss_now;
+ 			uart2: serial@12200 {
+ 				compatible = "ns16550a";
+-				reg = <0x12000 0x100>;
++				reg = <0x12200 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <9>;
+ 				clocks = <&core_clk 0>;
+@@ -163,7 +163,7 @@
  
--	/* Optimization, tack on the FIN if we have a queue of
--	 * unsent frames.  But be careful about outgoing SACKS
--	 * and IP options.
-+	/* Optimization, tack on the FIN if we have one skb in write queue and
-+	 * this skb was not yet sent, or we are under memory pressure.
-+	 * Note: in the latter case, FIN packet will be sent after a timeout,
-+	 * as TCP stack thinks it has already been transmitted.
- 	 */
--	mss_now = tcp_current_mss(sk);
+ 			uart3: serial@12300 {
+ 				compatible = "ns16550a";
+-				reg = <0x12100 0x100>;
++				reg = <0x12300 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <10>;
+ 				clocks = <&core_clk 0>;
+diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
+index f027754..c41600e 100644
+--- a/arch/arm/boot/dts/exynos5250-spring.dts
++++ b/arch/arm/boot/dts/exynos5250-spring.dts
+@@ -429,7 +429,6 @@
+ &mmc_0 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -437,11 +436,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_cd &sd0_bus4 &sd0_bus8>;
 -
--	if (tcp_send_head(sk) != NULL) {
--		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN;
--		TCP_SKB_CB(skb)->end_seq++;
-+	if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
-+coalesce:
-+		TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN;
-+		TCP_SKB_CB(tskb)->end_seq++;
- 		tp->write_seq++;
-+		if (!tcp_send_head(sk)) {
-+			/* This means tskb was already sent.
-+			 * Pretend we included the FIN on previous transmit.
-+			 * We need to set tp->snd_nxt to the value it would have
-+			 * if FIN had been sent. This is because retransmit path
-+			 * does not change tp->snd_nxt.
-+			 */
-+			tp->snd_nxt++;
-+			return;
-+		}
- 	} else {
--		/* Socket is locked, keep trying until memory is available. */
--		for (;;) {
--			skb = alloc_skb_fclone(MAX_TCP_HEADER,
--					       sk->sk_allocation);
--			if (skb)
--				break;
--			yield();
-+		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
-+		if (unlikely(!skb)) {
-+			if (tskb)
-+				goto coalesce;
-+			return;
- 		}
- 		skb_reserve(skb, MAX_TCP_HEADER);
- 		sk_forced_wmem_schedule(sk, skb->truesize);
-@@ -2801,7 +2809,7 @@ void tcp_send_fin(struct sock *sk)
- 				     TCPHDR_ACK | TCPHDR_FIN);
- 		tcp_queue_skb(sk, skb);
- 	}
--	__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF);
-+	__tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF);
- }
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <8>;
+-	};
++	bus-width = <8>;
++	cap-mmc-highspeed;
+ };
  
- /* We get here when a process closes a file descriptor (either due to
--- 
-2.3.6
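
Taking the two tcp_send_fin() patches together, the resulting decision
structure is roughly the following (a sketch assembled from the diffs above,
not a verbatim copy of net/ipv4/tcp_output.c):

  struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);

  if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
          /* Coalesce: set TCPHDR_FIN on the tail skb. If that skb was
           * already transmitted, bump tp->snd_nxt so the FIN rides a
           * later retransmit.
           */
  } else {
          skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
          if (unlikely(!skb)) {
                  /* Allocation failed: coalesce onto the tail skb if one
                   * exists, otherwise give up without a FIN so the socket
                   * can still be torn down.
                   */
          } else {
                  /* Charge the skb outside the normal limits via
                   * sk_forced_wmem_schedule() and queue the FIN.
                   */
          }
  }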
-
-
-From e591662c1a5fb0e9ee486bf8edbed14d0507cfb4 Mon Sep 17 00:00:00 2001
-From: Eric Dumazet <edumazet@google.com>
-Date: Wed, 22 Apr 2015 07:33:36 -0700
-Subject: [PATCH 006/219] net: do not deplete pfmemalloc reserve
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 79930f5892e134c6da1254389577fffb8bd72c66 ]
-
-build_skb() should look at the page's pfmemalloc status.
-If set, this means the page allocator allocated this page in the
-expectation that it would help to free other pages. The networking
-stack can honour that only if skb->pfmemalloc is also set.
-
-Also, we must refrain from using high-order pages from the pfmemalloc
-reserve, so __page_frag_refill() must also use __GFP_NOMEMALLOC for
-them. Under memory pressure, using order-0 pages is probably the best
-strategy.
-
-Signed-off-by: Eric Dumazet <edumazet@google.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/core/skbuff.c | 9 +++++++--
- 1 file changed, 7 insertions(+), 2 deletions(-)
-
-diff --git a/net/core/skbuff.c b/net/core/skbuff.c
-index 98d45fe..5ec3742 100644
---- a/net/core/skbuff.c
-+++ b/net/core/skbuff.c
-@@ -311,7 +311,11 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ /*
+@@ -451,7 +447,6 @@
+ &mmc_1 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -459,11 +454,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd1_clk &sd1_cmd &sd1_cd &sd1_bus4>;
+-
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <4>;
+-	};
++	bus-width = <4>;
++	cap-sd-highspeed;
+ };
  
- 	memset(skb, 0, offsetof(struct sk_buff, tail));
- 	skb->truesize = SKB_TRUESIZE(size);
--	skb->head_frag = frag_size != 0;
-+	if (frag_size) {
-+		skb->head_frag = 1;
-+		if (virt_to_head_page(data)->pfmemalloc)
-+			skb->pfmemalloc = 1;
-+	}
- 	atomic_set(&skb->users, 1);
- 	skb->head = data;
- 	skb->data = data;
-@@ -348,7 +352,8 @@ static struct page *__page_frag_refill(struct netdev_alloc_cache *nc,
- 	gfp_t gfp = gfp_mask;
+ &pinctrl_0 {
+diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
+index afb9caf..674d03f 100644
+--- a/arch/arm/include/asm/elf.h
++++ b/arch/arm/include/asm/elf.h
+@@ -115,7 +115,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
+    the loader.  We need to make sure that it is out of the way of the program
+    that it will "exec", and that there is sufficient room for the brk.  */
  
- 	if (order) {
--		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
-+		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-+			    __GFP_NOMEMALLOC;
- 		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
- 		nc->frag.size = PAGE_SIZE << (page ? order : 0);
- 	}
--- 
-2.3.6
-
-
-From f009181dcccd55398f872d090fa2e1780b4ca270 Mon Sep 17 00:00:00 2001
-From: Eric Dumazet <edumazet@google.com>
-Date: Fri, 24 Apr 2015 16:05:01 -0700
-Subject: [PATCH 007/219] net: fix crash in build_skb()
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 2ea2f62c8bda242433809c7f4e9eae1c52c40bbe ]
-
-When I added pfmemalloc support in build_skb(), I forgot netlink
-was using build_skb() with a vmalloc() area.
-
-In this patch I introduce __build_skb() for netlink use,
-and build_skb() is a wrapper handling both skb->head_frag and
-skb->pfmemalloc.
-
-This means netlink no longer has to hack skb->head_frag.
-
-[ 1567.700067] kernel BUG at arch/x86/mm/physaddr.c:26!
-[ 1567.700067] invalid opcode: 0000 [#1] PREEMPT SMP KASAN
-[ 1567.700067] Dumping ftrace buffer:
-[ 1567.700067]    (ftrace buffer empty)
-[ 1567.700067] Modules linked in:
-[ 1567.700067] CPU: 9 PID: 16186 Comm: trinity-c182 Not tainted 4.0.0-next-20150424-sasha-00037-g4796e21 #2167
-[ 1567.700067] task: ffff880127efb000 ti: ffff880246770000 task.ti: ffff880246770000
-[ 1567.700067] RIP: __phys_addr (arch/x86/mm/physaddr.c:26 (discriminator 3))
-[ 1567.700067] RSP: 0018:ffff8802467779d8  EFLAGS: 00010202
-[ 1567.700067] RAX: 000041000ed8e000 RBX: ffffc9008ed8e000 RCX: 000000000000002c
-[ 1567.700067] RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffffb3fd6049
-[ 1567.700067] RBP: ffff8802467779f8 R08: 0000000000000019 R09: ffff8801d0168000
-[ 1567.700067] R10: ffff8801d01680c7 R11: ffffed003a02d019 R12: ffffc9000ed8e000
-[ 1567.700067] R13: 0000000000000f40 R14: 0000000000001180 R15: ffffc9000ed8e000
-[ 1567.700067] FS:  00007f2a7da3f700(0000) GS:ffff8801d1000000(0000) knlGS:0000000000000000
-[ 1567.700067] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
-[ 1567.700067] CR2: 0000000000738308 CR3: 000000022e329000 CR4: 00000000000007e0
-[ 1567.700067] Stack:
-[ 1567.700067]  ffffc9000ed8e000 ffff8801d0168000 ffffc9000ed8e000 ffff8801d0168000
-[ 1567.700067]  ffff880246777a28 ffffffffad7c0a21 0000000000001080 ffff880246777c08
-[ 1567.700067]  ffff88060d302e68 ffff880246777b58 ffff880246777b88 ffffffffad9a6821
-[ 1567.700067] Call Trace:
-[ 1567.700067] build_skb (include/linux/mm.h:508 net/core/skbuff.c:316)
-[ 1567.700067] netlink_sendmsg (net/netlink/af_netlink.c:1633 net/netlink/af_netlink.c:2329)
-[ 1567.774369] ? sched_clock_cpu (kernel/sched/clock.c:311)
-[ 1567.774369] ? netlink_unicast (net/netlink/af_netlink.c:2273)
-[ 1567.774369] ? netlink_unicast (net/netlink/af_netlink.c:2273)
-[ 1567.774369] sock_sendmsg (net/socket.c:614 net/socket.c:623)
-[ 1567.774369] sock_write_iter (net/socket.c:823)
-[ 1567.774369] ? sock_sendmsg (net/socket.c:806)
-[ 1567.774369] __vfs_write (fs/read_write.c:479 fs/read_write.c:491)
-[ 1567.774369] ? get_lock_stats (kernel/locking/lockdep.c:249)
-[ 1567.774369] ? default_llseek (fs/read_write.c:487)
-[ 1567.774369] ? vtime_account_user (kernel/sched/cputime.c:701)
-[ 1567.774369] ? rw_verify_area (fs/read_write.c:406 (discriminator 4))
-[ 1567.774369] vfs_write (fs/read_write.c:539)
-[ 1567.774369] SyS_write (fs/read_write.c:586 fs/read_write.c:577)
-[ 1567.774369] ? SyS_read (fs/read_write.c:577)
-[ 1567.774369] ? __this_cpu_preempt_check (lib/smp_processor_id.c:63)
-[ 1567.774369] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2594 kernel/locking/lockdep.c:2636)
-[ 1567.774369] ? trace_hardirqs_on_thunk (arch/x86/lib/thunk_64.S:42)
-[ 1567.774369] system_call_fastpath (arch/x86/kernel/entry_64.S:261)
-
-Fixes: 79930f5892e ("net: do not deplete pfmemalloc reserve")
-Signed-off-by: Eric Dumazet <edumazet@google.com>
-Reported-by: Sasha Levin <sasha.levin@oracle.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- include/linux/skbuff.h   |  1 +
- net/core/skbuff.c        | 31 ++++++++++++++++++++++---------
- net/netlink/af_netlink.c |  6 ++----
- 3 files changed, 25 insertions(+), 13 deletions(-)
-
-diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
-index b5c204c..bdccc4b 100644
---- a/include/linux/skbuff.h
-+++ b/include/linux/skbuff.h
-@@ -769,6 +769,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
++#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
  
- struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags,
- 			    int node);
-+struct sk_buff *__build_skb(void *data, unsigned int frag_size);
- struct sk_buff *build_skb(void *data, unsigned int frag_size);
- static inline struct sk_buff *alloc_skb(unsigned int size,
- 					gfp_t priority)
-diff --git a/net/core/skbuff.c b/net/core/skbuff.c
-index 5ec3742..e9f9a15 100644
---- a/net/core/skbuff.c
-+++ b/net/core/skbuff.c
-@@ -280,13 +280,14 @@ nodata:
- EXPORT_SYMBOL(__alloc_skb);
+ /* When the program starts, a1 contains a pointer to a function to be 
+    registered with atexit, as per the SVR4 ABI.  A value of 0 means we 
+diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
+index 0db25bc..3a42ac6 100644
+--- a/arch/arm/include/uapi/asm/kvm.h
++++ b/arch/arm/include/uapi/asm/kvm.h
+@@ -195,8 +195,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
  
- /**
-- * build_skb - build a network buffer
-+ * __build_skb - build a network buffer
-  * @data: data buffer provided by caller
-- * @frag_size: size of fragment, or 0 if head was kmalloced
-+ * @frag_size: size of data, or 0 if head was kmalloced
-  *
-  * Allocate a new &sk_buff. Caller provides space holding head and
-  * skb_shared_info. @data must have been allocated by kmalloc() only if
-- * @frag_size is 0, otherwise data should come from the page allocator.
-+ * @frag_size is 0, otherwise data should come from the page allocator
-+ *  or vmalloc()
-  * The return is the new skb buffer.
-  * On a failure the return is %NULL, and @data is not freed.
-  * Notes :
-@@ -297,7 +298,7 @@ EXPORT_SYMBOL(__alloc_skb);
-  *  before giving packet to stack.
-  *  RX rings only contains data buffers, not full skbs.
-  */
--struct sk_buff *build_skb(void *data, unsigned int frag_size)
-+struct sk_buff *__build_skb(void *data, unsigned int frag_size)
- {
- 	struct skb_shared_info *shinfo;
- 	struct sk_buff *skb;
-@@ -311,11 +312,6 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
  
- 	memset(skb, 0, offsetof(struct sk_buff, tail));
- 	skb->truesize = SKB_TRUESIZE(size);
--	if (frag_size) {
--		skb->head_frag = 1;
--		if (virt_to_head_page(data)->pfmemalloc)
--			skb->pfmemalloc = 1;
--	}
- 	atomic_set(&skb->users, 1);
- 	skb->head = data;
- 	skb->data = data;
-@@ -332,6 +328,23 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
+diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c
+index c4cc50e..cfb354f 100644
+--- a/arch/arm/kernel/hibernate.c
++++ b/arch/arm/kernel/hibernate.c
+@@ -22,6 +22,7 @@
+ #include <asm/suspend.h>
+ #include <asm/memory.h>
+ #include <asm/sections.h>
++#include "reboot.h"
  
- 	return skb;
- }
-+
-+/* build_skb() is wrapper over __build_skb(), that specifically
-+ * takes care of skb->head and skb->pfmemalloc
-+ * This means that if @frag_size is not zero, then @data must be backed
-+ * by a page fragment, not kmalloc() or vmalloc()
-+ */
-+struct sk_buff *build_skb(void *data, unsigned int frag_size)
-+{
-+	struct sk_buff *skb = __build_skb(data, frag_size);
-+
-+	if (skb && frag_size) {
-+		skb->head_frag = 1;
-+		if (virt_to_head_page(data)->pfmemalloc)
-+			skb->pfmemalloc = 1;
-+	}
-+	return skb;
-+}
- EXPORT_SYMBOL(build_skb);
+ int pfn_is_nosave(unsigned long pfn)
+ {
+@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused)
  
- struct netdev_alloc_cache {
-diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
-index 05919bf..d1d7a81 100644
---- a/net/netlink/af_netlink.c
-+++ b/net/netlink/af_netlink.c
-@@ -1616,13 +1616,11 @@ static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
- 	if (data == NULL)
- 		return NULL;
+ 	ret = swsusp_save();
+ 	if (ret == 0)
+-		soft_restart(virt_to_phys(cpu_resume));
++		_soft_restart(virt_to_phys(cpu_resume), false);
+ 	return ret;
+ }
  
--	skb = build_skb(data, size);
-+	skb = __build_skb(data, size);
- 	if (skb == NULL)
- 		vfree(data);
--	else {
--		skb->head_frag = 0;
-+	else
- 		skb->destructor = netlink_skb_destructor;
--	}
+@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused)
+ 	for (pbe = restore_pblist; pbe; pbe = pbe->next)
+ 		copy_page(pbe->orig_address, pbe->address);
  
- 	return skb;
+-	soft_restart(virt_to_phys(cpu_resume));
++	_soft_restart(virt_to_phys(cpu_resume), false);
  }
--- 
-2.3.6
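
A short usage sketch of the resulting API split (the call sites below are
illustrative; data, size, frag and truesize are placeholders):

  struct sk_buff *skb;

  /* netlink: the head may come from vmalloc(), so use the core helper
   * and leave head_frag/pfmemalloc untouched.
   */
  skb = __build_skb(data, size);
  if (!skb)
          vfree(data);

  /* driver RX ring: the head is a page fragment, so the wrapper sets
   * head_frag and inherits the page's pfmemalloc mark.
   */
  skb = build_skb(frag, truesize);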
-
-
-From f80e3eb94b7d4b5b9ebf999da1f50cd5b263a23d Mon Sep 17 00:00:00 2001
-From: Alexey Khoroshilov <khoroshilov@ispras.ru>
-Date: Sat, 25 Apr 2015 04:07:03 +0300
-Subject: [PATCH 008/219] pxa168: fix double deallocation of managed resources
-Cc: mpagano@gentoo.org
-
-[ Upstream commit 0e03fd3e335d272bee88fe733d5fd13f5c5b7140 ]
-
-Commit 43d3ddf87a57 ("net: pxa168_eth: add device tree support") starts
-to use managed resources by adding devm_clk_get() and
-devm_ioremap_resource(), but it leaves explicit iounmap() and clk_put()
-in pxa168_eth_remove() and in the failure-handling code of pxa168_eth_probe().
-As a result, a double free can happen.
-
-The patch removes explicit resource deallocation. Also it converts
-clk_disable() to clk_disable_unprepare() to make it symmetrical with
-clk_prepare_enable().
-
-Found by Linux Driver Verification project (linuxtesting.org).
-
-Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/ethernet/marvell/pxa168_eth.c | 16 +++++-----------
- 1 file changed, 5 insertions(+), 11 deletions(-)
-
-diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
-index af829c5..7ace07d 100644
---- a/drivers/net/ethernet/marvell/pxa168_eth.c
-+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
-@@ -1508,7 +1508,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
- 		np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
- 		if (!np) {
- 			dev_err(&pdev->dev, "missing phy-handle\n");
--			return -EINVAL;
-+			err = -EINVAL;
-+			goto err_netdev;
- 		}
- 		of_property_read_u32(np, "reg", &pep->phy_addr);
- 		pep->phy_intf = of_get_phy_mode(pdev->dev.of_node);
-@@ -1526,7 +1527,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
- 	pep->smi_bus = mdiobus_alloc();
- 	if (pep->smi_bus == NULL) {
- 		err = -ENOMEM;
--		goto err_base;
-+		goto err_netdev;
- 	}
- 	pep->smi_bus->priv = pep;
- 	pep->smi_bus->name = "pxa168_eth smi";
-@@ -1551,13 +1552,10 @@ err_mdiobus:
- 	mdiobus_unregister(pep->smi_bus);
- err_free_mdio:
- 	mdiobus_free(pep->smi_bus);
--err_base:
--	iounmap(pep->base);
- err_netdev:
- 	free_netdev(dev);
- err_clk:
--	clk_disable(clk);
--	clk_put(clk);
-+	clk_disable_unprepare(clk);
- 	return err;
+ 
+ static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index fdfa3a7..2bf1a16 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -41,6 +41,7 @@
+ #include <asm/system_misc.h>
+ #include <asm/mach/time.h>
+ #include <asm/tls.h>
++#include "reboot.h"
+ 
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ #include <linux/stackprotector.h>
+@@ -95,7 +96,7 @@ static void __soft_restart(void *addr)
+ 	BUG();
  }
  
-@@ -1574,13 +1572,9 @@ static int pxa168_eth_remove(struct platform_device *pdev)
- 	if (pep->phy)
- 		phy_disconnect(pep->phy);
- 	if (pep->clk) {
--		clk_disable(pep->clk);
--		clk_put(pep->clk);
--		pep->clk = NULL;
-+		clk_disable_unprepare(pep->clk);
- 	}
+-void soft_restart(unsigned long addr)
++void _soft_restart(unsigned long addr, bool disable_l2)
+ {
+ 	u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
  
--	iounmap(pep->base);
--	pep->base = NULL;
- 	mdiobus_unregister(pep->smi_bus);
- 	mdiobus_free(pep->smi_bus);
- 	unregister_netdev(dev);
--- 
-2.3.6
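
The rule behind this fix, as a generic hedged sketch: devm_*-managed
resources are released by the driver core when probe fails or the device is
unbound, so neither the probe error path nor remove() may free them again by
hand. The probe body below is illustrative, not the pxa168 code:

  static int example_probe(struct platform_device *pdev)
  {
          struct resource *res;
          void __iomem *base;
          struct clk *clk;

          clk = devm_clk_get(&pdev->dev, NULL);
          if (IS_ERR(clk))
                  return PTR_ERR(clk);    /* no clk_put() needed */

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          base = devm_ioremap_resource(&pdev->dev, res);
          if (IS_ERR(base))
                  return PTR_ERR(base);   /* no iounmap() needed */

          return 0;
  }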
-
-
-From b32dec8a9f5834b14daaa75bd3e49f3b54272d65 Mon Sep 17 00:00:00 2001
-From: Eric Dumazet <edumazet@google.com>
-Date: Sat, 25 Apr 2015 09:35:24 -0700
-Subject: [PATCH 009/219] net: rfs: fix crash in get_rps_cpus()
-Cc: mpagano@gentoo.org
-
-[ Upstream commit a31196b07f8034eba6a3487a1ad1bb5ec5cd58a5 ]
-
-Commit 567e4b79731c ("net: rfs: add hash collision detection") had one
-mistake :
-
-RPS_NO_CPU is no longer the marker for invalid cpu in set_rps_cpu()
-and get_rps_cpu(), as @next_cpu was the result of an AND with
-rps_cpu_mask
-
-This bug showed up on a host with 72 cpus:
-next_cpu was 0x7f, and the code was trying to access percpu data of a
-non-existent cpu.
-
-In a follow-up patch, we might get rid of compares against nr_cpu_ids
-if we init the tables with 0. It is silly to test for a very unlikely
-condition that exists only shortly after table initialization, as
-we got rid of rps_reset_sock_flow() and similar functions that were
-writing this RPS_NO_CPU magic value at flow dismantle: once the table is
-old enough, it never contains this value anymore.
-
-Fixes: 567e4b79731c ("net: rfs: add hash collision detection")
-Signed-off-by: Eric Dumazet <edumazet@google.com>
-Cc: Tom Herbert <tom@herbertland.com>
-Cc: Ben Hutchings <ben@decadent.org.uk>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- Documentation/networking/scaling.txt |  2 +-
- net/core/dev.c                       | 12 ++++++------
- 2 files changed, 7 insertions(+), 7 deletions(-)
-
-diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
-index 99ca40e..5c204df 100644
---- a/Documentation/networking/scaling.txt
-+++ b/Documentation/networking/scaling.txt
-@@ -282,7 +282,7 @@ following is true:
+@@ -104,7 +105,7 @@ void soft_restart(unsigned long addr)
+ 	local_fiq_disable();
  
- - The current CPU's queue head counter >= the recorded tail counter
-   value in rps_dev_flow[i]
--- The current CPU is unset (equal to RPS_NO_CPU)
-+- The current CPU is unset (>= nr_cpu_ids)
- - The current CPU is offline
+ 	/* Disable the L2 if we're the last man standing. */
+-	if (num_online_cpus() == 1)
++	if (disable_l2)
+ 		outer_disable();
  
- After this check, the packet is sent to the (possibly updated) current
-diff --git a/net/core/dev.c b/net/core/dev.c
-index 45109b7..22a53ac 100644
---- a/net/core/dev.c
-+++ b/net/core/dev.c
-@@ -3041,7 +3041,7 @@ static struct rps_dev_flow *
- set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
- 	    struct rps_dev_flow *rflow, u16 next_cpu)
- {
--	if (next_cpu != RPS_NO_CPU) {
-+	if (next_cpu < nr_cpu_ids) {
- #ifdef CONFIG_RFS_ACCEL
- 		struct netdev_rx_queue *rxqueue;
- 		struct rps_dev_flow_table *flow_table;
-@@ -3146,7 +3146,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
- 		 * If the desired CPU (where last recvmsg was done) is
- 		 * different from current CPU (one in the rx-queue flow
- 		 * table entry), switch if one of the following holds:
--		 *   - Current CPU is unset (equal to RPS_NO_CPU).
-+		 *   - Current CPU is unset (>= nr_cpu_ids).
- 		 *   - Current CPU is offline.
- 		 *   - The current CPU's queue tail has advanced beyond the
- 		 *     last packet that was enqueued using this table entry.
-@@ -3154,14 +3154,14 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
- 		 *     have been dequeued, thus preserving in order delivery.
- 		 */
- 		if (unlikely(tcpu != next_cpu) &&
--		    (tcpu == RPS_NO_CPU || !cpu_online(tcpu) ||
-+		    (tcpu >= nr_cpu_ids || !cpu_online(tcpu) ||
- 		     ((int)(per_cpu(softnet_data, tcpu).input_queue_head -
- 		      rflow->last_qtail)) >= 0)) {
- 			tcpu = next_cpu;
- 			rflow = set_rps_cpu(dev, skb, rflow, next_cpu);
- 		}
- 
--		if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) {
-+		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
- 			*rflowp = rflow;
- 			cpu = tcpu;
- 			goto done;
-@@ -3202,14 +3202,14 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
- 	struct rps_dev_flow_table *flow_table;
- 	struct rps_dev_flow *rflow;
- 	bool expire = true;
--	int cpu;
-+	unsigned int cpu;
+ 	/* Change to the new stack and continue with the reset. */
+@@ -114,6 +115,11 @@ void soft_restart(unsigned long addr)
+ 	BUG();
+ }
  
- 	rcu_read_lock();
- 	flow_table = rcu_dereference(rxqueue->rps_flow_table);
- 	if (flow_table && flow_id <= flow_table->mask) {
- 		rflow = &flow_table->flows[flow_id];
- 		cpu = ACCESS_ONCE(rflow->cpu);
--		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
-+		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
- 		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
- 			   rflow->last_qtail) <
- 		     (int)(10 * flow_table->mask)))
--- 
-2.3.6
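
A worked example of the stale marker, assuming the 4.0-era constants
(RPS_NO_CPU is 0xffff, and rps_cpu_mask rounds nr_cpu_ids up to a power of
two, minus one):

  /* With 72 CPUs: rps_cpu_mask = roundup_pow_of_two(72) - 1 = 0x7f.
   * Everything stored in the table is ANDed with the mask, so the old
   * "unset" marker RPS_NO_CPU (0xffff) can never appear there; an unset
   * entry reads back as 0x7f, which is >= nr_cpu_ids (72).
   */
  u16 stored = 0xffff & rps_cpu_mask;     /* == 0x7f, not RPS_NO_CPU */

  if (stored < nr_cpu_ids && cpu_online(stored))
          ;       /* only then is it a usable CPU */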
-
-
-From 36fb8ea94764c1435bc5357057373c73f1055be9 Mon Sep 17 00:00:00 2001
-From: Amir Vadai <amirv@mellanox.com>
-Date: Mon, 27 Apr 2015 13:40:56 +0300
-Subject: [PATCH 010/219] net/mlx4_en: Prevent setting invalid RSS hash
- function
-Cc: mpagano@gentoo.org
-
-[ Upstream commit b37069090b7c5615610a8aa6b36533d67b364d38 ]
-
-mlx4_en_check_rxfh_func() was checking for hardware support before
-setting a known RSS hash function, but didn't do any check before
-setting an unknown RSS hash function. It needs to fail on such values.
-On this occasion, the actual setting of the new value was moved from the
-check function into mlx4_en_set_rxfh().
-
-Fixes: 947cbb0 ("net/mlx4_en: Support for configurable RSS hash function")
-Signed-off-by: Amir Vadai <amirv@mellanox.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 29 ++++++++++++++-----------
- 1 file changed, 16 insertions(+), 13 deletions(-)
-
-diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
-index a7b58ba..3dccf01 100644
---- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
-+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
-@@ -981,20 +981,21 @@ static int mlx4_en_check_rxfh_func(struct net_device *dev, u8 hfunc)
- 	struct mlx4_en_priv *priv = netdev_priv(dev);
++void soft_restart(unsigned long addr)
++{
++	_soft_restart(addr, num_online_cpus() == 1);
++}
++
+ /*
+  * Function pointers to optional machine specific functions
+  */
+diff --git a/arch/arm/kernel/reboot.h b/arch/arm/kernel/reboot.h
+new file mode 100644
+index 0000000..c87f058
+--- /dev/null
++++ b/arch/arm/kernel/reboot.h
+@@ -0,0 +1,6 @@
++#ifndef REBOOT_H
++#define REBOOT_H
++
++extern void _soft_restart(unsigned long addr, bool disable_l2);
++
++#endif
+diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
+index 5560f74..b652af5 100644
+--- a/arch/arm/kvm/arm.c
++++ b/arch/arm/kvm/arm.c
+@@ -651,8 +651,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
+ 		if (!irqchip_in_kernel(kvm))
+ 			return -ENXIO;
  
- 	/* check if requested function is supported by the device */
--	if ((hfunc == ETH_RSS_HASH_TOP &&
--	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) ||
--	    (hfunc == ETH_RSS_HASH_XOR &&
--	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)))
--		return -EINVAL;
-+	if (hfunc == ETH_RSS_HASH_TOP) {
-+		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP))
-+			return -EINVAL;
-+		if (!(dev->features & NETIF_F_RXHASH))
-+			en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
-+		return 0;
-+	} else if (hfunc == ETH_RSS_HASH_XOR) {
-+		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))
-+			return -EINVAL;
-+		if (dev->features & NETIF_F_RXHASH)
-+			en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
-+		return 0;
-+	}
+-		if (irq_num < VGIC_NR_PRIVATE_IRQS ||
+-		    irq_num > KVM_ARM_IRQ_GIC_MAX)
++		if (irq_num < VGIC_NR_PRIVATE_IRQS)
+ 			return -EINVAL;
  
--	priv->rss_hash_fn = hfunc;
--	if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH))
--		en_warn(priv,
--			"Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
--	if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH))
--		en_warn(priv,
--			"Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
--	return 0;
-+	return -EINVAL;
- }
+ 		return kvm_vgic_inject_irq(kvm, 0, irq_num, level);
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 8b9f5e2..4f4e222 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -415,6 +415,9 @@ static __init int armada_38x_cpuidle_init(void)
+ 	void __iomem *mpsoc_base;
+ 	u32 reg;
  
- static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key,
-@@ -1068,6 +1069,8 @@ static int mlx4_en_set_rxfh(struct net_device *dev, const u32 *ring_index,
- 		priv->prof->rss_rings = rss_rings;
- 	if (key)
- 		memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE);
-+	if (hfunc !=  ETH_RSS_HASH_NO_CHANGE)
-+		priv->rss_hash_fn = hfunc;
++	pr_warn("CPU idle is currently broken on Armada 38x: disabling");
++	return 0;
++
+ 	np = of_find_compatible_node(NULL, NULL,
+ 				     "marvell,armada-380-coherency-fabric");
+ 	if (!np)
+@@ -476,6 +479,16 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return 0;
+ 	of_node_put(np);
  
- 	if (port_up) {
- 		err = mlx4_en_start_port(dev);
--- 
-2.3.6
-
-
-From 8336ee9076303fbdb38e89f18e921ec238d9c48c Mon Sep 17 00:00:00 2001
-From: Gu Zheng <guz.fnst@cn.fujitsu.com>
-Date: Fri, 3 Apr 2015 08:44:47 +0800
-Subject: [PATCH 011/219] md: fix md io stats accounting broken
-Cc: mpagano@gentoo.org
-
-commit 74672d069b298b03e9f657fd70915e055739882e upstream.
-
-Simon reported the md io stats accounting issue:
-"
-I'm seeing "iostat -x -k 1" print this after a RAID1 rebuild on 4.0-rc5.
-It's not abnormal other than it's 3-disk, with one being SSD (sdc) and
-the other two being write-mostly:
-
-Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
-sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
-sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
-sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
-md0               0.00     0.00    0.00    0.00     0.00     0.00     0.00   345.00    0.00    0.00    0.00   0.00 100.00
-md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00 58779.00    0.00    0.00    0.00   0.00 100.00
-md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00    12.00    0.00    0.00    0.00   0.00 100.00
-"
-The cause is that commit "18c0b223cf9901727ef3b02da6711ac930b4e5d4" uses
-generic_start_io_acct() to account the disk stats rather than the open code,
-but it also introduced the increase to .in_flight[rw], which md does not
-need. So we re-use the open code here to fix it.
-
-Reported-by: Simon Kirby <sim@hostway.ca>
-Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
-Signed-off-by: NeilBrown <neilb@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/md/md.c | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/md/md.c b/drivers/md/md.c
-index 717daad..e617878 100644
---- a/drivers/md/md.c
-+++ b/drivers/md/md.c
-@@ -249,6 +249,7 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
- 	const int rw = bio_data_dir(bio);
- 	struct mddev *mddev = q->queuedata;
- 	unsigned int sectors;
-+	int cpu;
++	/*
++	 * Currently the CPU idle support for Armada 38x is broken, as
++	 * the CPU hotplug uses some of the CPU idle functions it is
++	 * broken too, so let's disable it
++	 */
++	if (of_machine_is_compatible("marvell,armada380")) {
++		cpu_hotplug_disable();
++		pr_warn("CPU hotplug support is currently broken on Armada 38x: disabling");
++	}
++
+ 	if (of_machine_is_compatible("marvell,armadaxp"))
+ 		ret = armada_xp_cpuidle_init();
+ 	else if (of_machine_is_compatible("marvell,armada370"))
+@@ -489,7 +502,8 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return ret;
  
- 	if (mddev == NULL || mddev->pers == NULL
- 	    || !mddev->ready) {
-@@ -284,7 +285,10 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
- 	sectors = bio_sectors(bio);
- 	mddev->pers->make_request(mddev, bio);
+ 	mvebu_v7_pmsu_enable_l2_powerdown_onidle();
+-	platform_device_register(&mvebu_v7_cpuidle_device);
++	if (mvebu_v7_cpuidle_device.name)
++		platform_device_register(&mvebu_v7_cpuidle_device);
+ 	cpu_pm_register_notifier(&mvebu_v7_cpu_pm_notifier);
  
--	generic_start_io_acct(rw, sectors, &mddev->gendisk->part0);
-+	cpu = part_stat_lock();
-+	part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
-+	part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
-+	part_stat_unlock();
+ 	return 0;
+diff --git a/arch/arm/mach-s3c64xx/crag6410.h b/arch/arm/mach-s3c64xx/crag6410.h
+index 7bc6668..dcbe17f 100644
+--- a/arch/arm/mach-s3c64xx/crag6410.h
++++ b/arch/arm/mach-s3c64xx/crag6410.h
+@@ -14,6 +14,7 @@
+ #include <mach/gpio-samsung.h>
  
- 	if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
- 		wake_up(&mddev->sb_wait);
--- 
-2.3.6
-
-
-From bbe33d7992b2dd4a79499aeb384a4597b73451eb Mon Sep 17 00:00:00 2001
-From: Andy Lutomirski <luto@amacapital.net>
-Date: Tue, 27 Jan 2015 16:06:02 -0800
-Subject: [PATCH 012/219] x86/asm/decoder: Fix and enforce max instruction size
- in the insn decoder
-Cc: mpagano@gentoo.org
-
-commit 91e5ed49fca09c2b83b262b9757d1376ee2b46c3 upstream.
-
-x86 instructions cannot exceed 15 bytes, and the instruction
-decoder should enforce that.  Prior to 6ba48ff46f76, the
-instruction length limit was implicitly set to 16, which was an
-approximation of 15, but there is currently no limit at all.
-
-Fix MAX_INSN_SIZE (it should be 15, not 16), and fix the decoder
-to reject instructions that exceed MAX_INSN_SIZE.
-
-Other than potentially confusing some of the decoder sanity
-checks, I'm not aware of any actual problems that omitting this
-check would cause, nor am I aware of any practical problems
-caused by the MAX_INSN_SIZE error.
-
-Signed-off-by: Andy Lutomirski <luto@amacapital.net>
-Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
-Cc: Dave Hansen <dave.hansen@linux.intel.com>
-Fixes: 6ba48ff46f76 ("x86: Remove arbitrary instruction size limit ...
-Link: http://lkml.kernel.org/r/f8f0bc9b8c58cfd6830f7d88400bf1396cbdcd0f.1422403511.git.luto@amacapital.net
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/include/asm/insn.h | 2 +-
- arch/x86/lib/insn.c         | 7 +++++++
- 2 files changed, 8 insertions(+), 1 deletion(-)
-
-diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
-index 47f29b1..e7814b7 100644
---- a/arch/x86/include/asm/insn.h
-+++ b/arch/x86/include/asm/insn.h
-@@ -69,7 +69,7 @@ struct insn {
- 	const insn_byte_t *next_byte;
- };
+ #define GLENFARCLAS_PMIC_IRQ_BASE	IRQ_BOARD_START
++#define BANFF_PMIC_IRQ_BASE		(IRQ_BOARD_START + 64)
  
--#define MAX_INSN_SIZE	16
-+#define MAX_INSN_SIZE	15
+ #define PCA935X_GPIO_BASE		GPIO_BOARD_START
+ #define CODEC_GPIO_BASE			(GPIO_BOARD_START + 8)
+diff --git a/arch/arm/mach-s3c64xx/mach-crag6410.c b/arch/arm/mach-s3c64xx/mach-crag6410.c
+index 10b913b..65c426b 100644
+--- a/arch/arm/mach-s3c64xx/mach-crag6410.c
++++ b/arch/arm/mach-s3c64xx/mach-crag6410.c
+@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = {
  
- #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
- #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
-diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
-index 1313ae6..85994f5 100644
---- a/arch/x86/lib/insn.c
-+++ b/arch/x86/lib/insn.c
-@@ -52,6 +52,13 @@
-  */
- void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
- {
-+	/*
-+	 * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid
-+	 * even if the input buffer is long enough to hold them.
-+	 */
-+	if (buf_len > MAX_INSN_SIZE)
-+		buf_len = MAX_INSN_SIZE;
-+
- 	memset(insn, 0, sizeof(*insn));
- 	insn->kaddr = kaddr;
- 	insn->end_kaddr = kaddr + buf_len;
--- 
-2.3.6
-
-
-From 3fbb83fdcd2be33c3091f2c1094c37b5054da9f8 Mon Sep 17 00:00:00 2001
-From: Marcelo Tosatti <mtosatti@redhat.com>
-Date: Mon, 23 Mar 2015 20:21:51 -0300
-Subject: [PATCH 013/219] x86: kvm: Revert "remove sched notifier for cross-cpu
- migrations"
-Cc: mpagano@gentoo.org
-
-commit 0a4e6be9ca17c54817cf814b4b5aa60478c6df27 upstream.
-
-The following point:
-
-    2. per-CPU pvclock time info is updated if the
-       underlying CPU changes.
-
-Is not true anymore since "KVM: x86: update pvclock area conditionally,
-on cpu migration".
-
-Add task migration notification back.
-
-Problem noticed by Andy Lutomirski.
-
-Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/include/asm/pvclock.h |  1 +
- arch/x86/kernel/pvclock.c      | 44 ++++++++++++++++++++++++++++++++++++++++++
- arch/x86/vdso/vclock_gettime.c | 16 +++++++--------
- include/linux/sched.h          |  8 ++++++++
- kernel/sched/core.c            | 15 ++++++++++++++
- 5 files changed, 76 insertions(+), 8 deletions(-)
-
-diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
-index d6b078e..25b1cc0 100644
---- a/arch/x86/include/asm/pvclock.h
-+++ b/arch/x86/include/asm/pvclock.h
-@@ -95,6 +95,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
+ static struct wm831x_pdata crag_pmic_pdata = {
+ 	.wm831x_num = 1,
++	.irq_base = BANFF_PMIC_IRQ_BASE,
+ 	.gpio_base = BANFF_PMIC_GPIO_BASE,
+ 	.soft_shutdown = true,
  
- struct pvclock_vsyscall_time_info {
- 	struct pvclock_vcpu_time_info pvti;
-+	u32 migrate_count;
- } __attribute__((__aligned__(SMP_CACHE_BYTES)));
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1b8e973..a6186c2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -361,6 +361,27 @@ config ARM64_ERRATUM_832075
  
- #define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)
-diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
-index 2f355d2..e5ecd20 100644
---- a/arch/x86/kernel/pvclock.c
-+++ b/arch/x86/kernel/pvclock.c
-@@ -141,7 +141,46 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
- 	set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
- }
+ 	  If unsure, say Y.
  
-+static struct pvclock_vsyscall_time_info *pvclock_vdso_info;
++config ARM64_ERRATUM_845719
++	bool "Cortex-A53: 845719: a load might read incorrect data"
++	depends on COMPAT
++	default y
++	help
++	  This option adds an alternative code sequence to work around ARM
++	  erratum 845719 on Cortex-A53 parts up to r0p4.
 +
-+static struct pvclock_vsyscall_time_info *
-+pvclock_get_vsyscall_user_time_info(int cpu)
-+{
-+	if (!pvclock_vdso_info) {
-+		BUG();
-+		return NULL;
-+	}
-+
-+	return &pvclock_vdso_info[cpu];
-+}
-+
-+struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu)
-+{
-+	return &pvclock_get_vsyscall_user_time_info(cpu)->pvti;
-+}
-+
- #ifdef CONFIG_X86_64
-+static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l,
-+			        void *v)
-+{
-+	struct task_migration_notifier *mn = v;
-+	struct pvclock_vsyscall_time_info *pvti;
-+
-+	pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu);
-+
-+	/* this is NULL when pvclock vsyscall is not initialized */
-+	if (unlikely(pvti == NULL))
-+		return NOTIFY_DONE;
-+
-+	pvti->migrate_count++;
++	  When running a compat (AArch32) userspace on an affected Cortex-A53
++	  part, a load at EL0 from a virtual address that matches the bottom 32
++	  bits of the virtual address used by a recent load at (AArch64) EL1
++	  might return incorrect data.
 +
-+	return NOTIFY_DONE;
-+}
++	  The workaround is to write the contextidr_el1 register on exception
++	  return to a 32-bit task.
++	  Please note that this does not necessarily enable the workaround,
++	  as it depends on the alternative framework, which will only patch
++	  the kernel if an affected CPU is detected.
 +
-+static struct notifier_block pvclock_migrate = {
-+	.notifier_call = pvclock_task_migrate,
-+};
++	  If unsure, say Y.
 +
- /*
-  * Initialize the generic pvclock vsyscall state.  This will allocate
-  * a/some page(s) for the per-vcpu pvclock information, set up a
-@@ -155,12 +194,17 @@ int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
+ endmenu
  
- 	WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE);
  
-+	pvclock_vdso_info = i;
-+
- 	for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) {
- 		__set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
- 			     __pa(i) + (idx*PAGE_SIZE),
- 			     PAGE_KERNEL_VVAR);
- 	}
+@@ -470,6 +491,10 @@ config HOTPLUG_CPU
  
+ source kernel/Kconfig.preempt
+ 
++config UP_LATE_INIT
++       def_bool y
++       depends on !SMP
 +
-+	register_task_migration_notifier(&pvclock_migrate);
-+
- 	return 0;
- }
- #endif
-diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
-index 9793322..3093376 100644
---- a/arch/x86/vdso/vclock_gettime.c
-+++ b/arch/x86/vdso/vclock_gettime.c
-@@ -82,18 +82,15 @@ static notrace cycle_t vread_pvclock(int *mode)
- 	cycle_t ret;
- 	u64 last;
- 	u32 version;
-+	u32 migrate_count;
- 	u8 flags;
- 	unsigned cpu, cpu1;
+ config HZ
+ 	int
+ 	default 100
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 69ceedc..4d2a925 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -48,7 +48,7 @@ core-$(CONFIG_KVM) += arch/arm64/kvm/
+ core-$(CONFIG_XEN) += arch/arm64/xen/
+ core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
+ libs-y		:= arch/arm64/lib/ $(libs-y)
+-libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
++core-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+ 
+ # Default target when executing plain make
+ KBUILD_IMAGE	:= Image.gz
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index b6c16d5..3f0c53c 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -23,8 +23,9 @@
  
+ #define ARM64_WORKAROUND_CLEAN_CACHE		0
+ #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
++#define ARM64_WORKAROUND_845719			2
  
- 	/*
--	 * Note: hypervisor must guarantee that:
--	 * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
--	 * 2. that per-CPU pvclock time info is updated if the
--	 *    underlying CPU changes.
--	 * 3. that version is increased whenever underlying CPU
--	 *    changes.
--	 *
-+	 * When looping to get a consistent (time-info, tsc) pair, we
-+	 * also need to deal with the possibility we can switch vcpus,
-+	 * so make sure we always re-fetch time-info for the current vcpu.
- 	 */
- 	do {
- 		cpu = __getcpu() & VGETCPU_CPU_MASK;
-@@ -104,6 +101,8 @@ static notrace cycle_t vread_pvclock(int *mode)
+-#define ARM64_NCAPS				2
++#define ARM64_NCAPS				3
+ 
+ #ifndef __ASSEMBLY__
  
- 		pvti = get_pvti(cpu);
+diff --git a/arch/arm64/include/asm/smp_plat.h b/arch/arm64/include/asm/smp_plat.h
+index 59e2823..8dcd61e 100644
+--- a/arch/arm64/include/asm/smp_plat.h
++++ b/arch/arm64/include/asm/smp_plat.h
+@@ -40,4 +40,6 @@ static inline u32 mpidr_hash_size(void)
+ extern u64 __cpu_logical_map[NR_CPUS];
+ #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]
  
-+		migrate_count = pvti->migrate_count;
++void __init do_post_cpus_up_work(void);
 +
- 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
+ #endif /* __ASM_SMP_PLAT_H */
+diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
+index 3ef77a4..bc49a18 100644
+--- a/arch/arm64/include/uapi/asm/kvm.h
++++ b/arch/arm64/include/uapi/asm/kvm.h
+@@ -188,8 +188,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
  
- 		/*
-@@ -115,7 +114,8 @@ static notrace cycle_t vread_pvclock(int *mode)
- 		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
- 	} while (unlikely(cpu != cpu1 ||
- 			  (pvti->pvti.version & 1) ||
--			  pvti->pvti.version != version));
-+			  pvti->pvti.version != version ||
-+			  pvti->migrate_count != migrate_count));
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
  
- 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
- 		*mode = VCLOCK_NONE;
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index a419b65..51348f7 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -176,6 +176,14 @@ extern void get_iowait_load(unsigned long *nr_waiters, unsigned long *load);
- extern void calc_global_load(unsigned long ticks);
- extern void update_cpu_load_nohz(void);
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index fa62637..ad6d523 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -88,7 +88,16 @@ struct arm64_cpu_capabilities arm64_errata[] = {
+ 	/* Cortex-A57 r0p0 - r1p2 */
+ 		.desc = "ARM erratum 832075",
+ 		.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
+-		MIDR_RANGE(MIDR_CORTEX_A57, 0x00, 0x12),
++		MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
++			   (1 << MIDR_VARIANT_SHIFT) | 2),
++	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	{
++	/* Cortex-A53 r0p[01234] */
++		.desc = "ARM erratum 845719",
++		.capability = ARM64_WORKAROUND_845719,
++		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
+ 	},
+ #endif
+ 	{
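Both MIDR_RANGE() bounds in this table use MIDR_EL1's own layout: the variant
field (the X of rXpY) sits in bits [23:20] and the revision (the Y) in bits
[3:0]. So r1p2 packs as (1 << 20) | 2, and the old literal 0x12 could never
match an r1pY part, which is what the Cortex-A57 hunk corrects. A stand-alone
sketch of the encoding (MIDR_VARIANT_SHIFT = 20 is taken from arm64's
<asm/cputype.h>, not from this diff):

    #include <stdio.h>

    #define MIDR_VARIANT_SHIFT 20

    static unsigned int midr_rXpY(unsigned int x, unsigned int y)
    {
            return (x << MIDR_VARIANT_SHIFT) | y;
    }

    int main(void)
    {
            /* Cortex-A57 erratum 832075: affected up to r1p2 */
            printf("A57 max = %#x\n", midr_rXpY(1, 2)); /* 0x100002 */
            /* Cortex-A53 erratum 845719: affected up to r0p4 */
            printf("A53 max = %#x\n", midr_rXpY(0, 4)); /* 0x4 */
            return 0;
    }

is_affected_midr_range() masks the running CPU's MIDR down to these two fields
before comparing, so the table's bounds must be packed the same way.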
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index cf21bb3..959fe87 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -21,8 +21,10 @@
+ #include <linux/init.h>
+ #include <linux/linkage.h>
  
-+/* Notifier for when a task gets migrated to a new CPU */
-+struct task_migration_notifier {
-+	struct task_struct *task;
-+	int from_cpu;
-+	int to_cpu;
-+};
-+extern void register_task_migration_notifier(struct notifier_block *n);
++#include <asm/alternative-asm.h>
+ #include <asm/assembler.h>
+ #include <asm/asm-offsets.h>
++#include <asm/cpufeature.h>
+ #include <asm/errno.h>
+ #include <asm/esr.h>
+ #include <asm/thread_info.h>
+@@ -120,6 +122,24 @@
+ 	ct_user_enter
+ 	ldr	x23, [sp, #S_SP]		// load return stack pointer
+ 	msr	sp_el0, x23
 +
- extern unsigned long get_parent_ip(unsigned long addr);
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	alternative_insn						\
++	"nop",								\
++	"tbz x22, #4, 1f",						\
++	ARM64_WORKAROUND_845719
++#ifdef CONFIG_PID_IN_CONTEXTIDR
++	alternative_insn						\
++	"nop; nop",							\
++	"mrs x29, contextidr_el1; msr contextidr_el1, x29; 1:",		\
++	ARM64_WORKAROUND_845719
++#else
++	alternative_insn						\
++	"nop",								\
++	"msr contextidr_el1, xzr; 1:",					\
++	ARM64_WORKAROUND_845719
++#endif
++#endif
+ 	.endif
+ 	msr	elr_el1, x21			// set up the return data
+ 	msr	spsr_el1, x22
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 07f9305..c237ffb 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -426,6 +426,7 @@ __create_page_tables:
+ 	 */
+ 	mov	x0, x25
+ 	add	x1, x26, #SWAPPER_DIR_SIZE
++	dmb	sy
+ 	bl	__inval_cache_range
  
- extern void dump_cpu_task(int cpu);
-diff --git a/kernel/sched/core.c b/kernel/sched/core.c
-index 62671f5..3d5f6f6 100644
---- a/kernel/sched/core.c
-+++ b/kernel/sched/core.c
-@@ -996,6 +996,13 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
- 		rq_clock_skip_update(rq, true);
+ 	mov	lr, x27
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index e8420f6..781f469 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -207,6 +207,18 @@ static void __init smp_build_mpidr_hash(void)
  }
+ #endif
  
-+static ATOMIC_NOTIFIER_HEAD(task_migration_notifier);
-+
-+void register_task_migration_notifier(struct notifier_block *n)
++void __init do_post_cpus_up_work(void)
 +{
-+	atomic_notifier_chain_register(&task_migration_notifier, n);
++	apply_alternatives_all();
 +}
 +
- #ifdef CONFIG_SMP
- void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++#ifdef CONFIG_UP_LATE_INIT
++void __init up_late_init(void)
++{
++	do_post_cpus_up_work();
++}
++#endif /* CONFIG_UP_LATE_INIT */
++
+ static void __init setup_processor(void)
  {
-@@ -1026,10 +1033,18 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
- 	trace_sched_migrate_task(p, new_cpu);
+ 	struct cpu_info *cpu_info;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 328b8ce..4257369 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -309,7 +309,7 @@ void cpu_die(void)
+ void __init smp_cpus_done(unsigned int max_cpus)
+ {
+ 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
+-	apply_alternatives_all();
++	do_post_cpus_up_work();
+ }
  
- 	if (task_cpu(p) != new_cpu) {
-+		struct task_migration_notifier tmn;
+ void __init smp_prepare_boot_cpu(void)
+diff --git a/arch/c6x/kernel/time.c b/arch/c6x/kernel/time.c
+index 356ee84..04845aa 100644
+--- a/arch/c6x/kernel/time.c
++++ b/arch/c6x/kernel/time.c
+@@ -49,7 +49,7 @@ u64 sched_clock(void)
+ 	return (tsc * sched_clock_multiplier) >> SCHED_CLOCK_SHIFT;
+ }
+ 
+-void time_init(void)
++void __init time_init(void)
+ {
+ 	u64 tmp = (u64)NSEC_PER_SEC << SCHED_CLOCK_SHIFT;
+ 
+diff --git a/arch/mips/include/asm/asm-eva.h b/arch/mips/include/asm/asm-eva.h
+index e41c56e..1e38f0e 100644
+--- a/arch/mips/include/asm/asm-eva.h
++++ b/arch/mips/include/asm/asm-eva.h
+@@ -11,6 +11,36 @@
+ #define __ASM_ASM_EVA_H
+ 
+ #ifndef __ASSEMBLY__
 +
- 		if (p->sched_class->migrate_task_rq)
- 			p->sched_class->migrate_task_rq(p, new_cpu);
- 		p->se.nr_migrations++;
- 		perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
++/* Kernel variants */
 +
-+		tmn.task = p;
-+		tmn.from_cpu = task_cpu(p);
-+		tmn.to_cpu = new_cpu;
++#define kernel_cache(op, base)		"cache " op ", " base "\n"
++#define kernel_ll(reg, addr)		"ll " reg ", " addr "\n"
++#define kernel_sc(reg, addr)		"sc " reg ", " addr "\n"
++#define kernel_lw(reg, addr)		"lw " reg ", " addr "\n"
++#define kernel_lwl(reg, addr)		"lwl " reg ", " addr "\n"
++#define kernel_lwr(reg, addr)		"lwr " reg ", " addr "\n"
++#define kernel_lh(reg, addr)		"lh " reg ", " addr "\n"
++#define kernel_lb(reg, addr)		"lb " reg ", " addr "\n"
++#define kernel_lbu(reg, addr)		"lbu " reg ", " addr "\n"
++#define kernel_sw(reg, addr)		"sw " reg ", " addr "\n"
++#define kernel_swl(reg, addr)		"swl " reg ", " addr "\n"
++#define kernel_swr(reg, addr)		"swr " reg ", " addr "\n"
++#define kernel_sh(reg, addr)		"sh " reg ", " addr "\n"
++#define kernel_sb(reg, addr)		"sb " reg ", " addr "\n"
 +
-+		atomic_notifier_call_chain(&task_migration_notifier, 0, &tmn);
- 	}
++#ifdef CONFIG_32BIT
++/*
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
++ */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		"sd " reg", " addr "\n"
++#define kernel_ld(reg, addr)		"ld " reg", " addr "\n"
++#endif /* CONFIG_32BIT */
++
+ #ifdef CONFIG_EVA
  
- 	__set_task_cpu(p, new_cpu);
--- 
-2.3.6
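What the hunks above add is, at its core, a plain observer list: the scheduler
publishes every cross-CPU move through an atomic notifier chain, and the
pvclock side reacts by bumping a per-vCPU counter. A minimal userspace toy of
that notifier-chain shape (assumed names throughout; this is not the kernel's
atomic_notifier implementation, and all locking is omitted):

    #include <stdio.h>

    struct notifier_block {
            int (*notifier_call)(struct notifier_block *nb,
                                 unsigned long val, void *data);
            struct notifier_block *next;
    };

    static struct notifier_block *chain;

    static void chain_register(struct notifier_block *nb)
    {
            nb->next = chain;        /* the kernel does this atomically */
            chain = nb;
    }

    static void chain_call(unsigned long val, void *data)
    {
            for (struct notifier_block *nb = chain; nb; nb = nb->next)
                    nb->notifier_call(nb, val, data);
    }

    struct task_migration_notifier { int from_cpu, to_cpu; };

    static int pvclock_cb(struct notifier_block *nb, unsigned long val,
                          void *data)
    {
            struct task_migration_notifier *mn = data;
            /* stands in for pvti->migrate_count++ on the source vCPU */
            printf("bump migrate_count on CPU %d (task moved to %d)\n",
                   mn->from_cpu, mn->to_cpu);
            return 0;
    }

    int main(void)
    {
            struct notifier_block nb = { .notifier_call = pvclock_cb };
            struct task_migration_notifier mn = { .from_cpu = 0, .to_cpu = 3 };

            chain_register(&nb);
            chain_call(0, &mn);      /* what set_task_cpu() triggers */
            return 0;
    }

The real chain is registered once at pvclock vsyscall init time and invoked
from set_task_cpu() in contexts that cannot sleep, which is why the atomic
notifier variant is used rather than a blocking one.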
-
-
-From 82a7e6737ca5b18841f7130821dbec007d736b0b Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= <rkrcmar@redhat.com>
-Date: Thu, 2 Apr 2015 20:44:23 +0200
-Subject: [PATCH 014/219] x86: vdso: fix pvclock races with task migration
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 80f7fdb1c7f0f9266421f823964fd1962681f6ce upstream.
-
-If we were migrated right after __getcpu, but before reading the
-migration_count, we wouldn't notice that we read TSC of a different
-VCPU, nor that KVM's bug made pvti invalid, as only migration_count
-on source VCPU is increased.
-
-Change vdso instead of updating migration_count on destination.
-
-Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-Fixes: 0a4e6be9ca17 ("x86: kvm: Revert "remove sched notifier for cross-cpu migrations"")
-Message-Id: <1428000263-11892-1-git-send-email-rkrcmar@redhat.com>
-Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/vdso/vclock_gettime.c | 20 ++++++++++++--------
- 1 file changed, 12 insertions(+), 8 deletions(-)
-
-diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
-index 3093376..40d2473 100644
---- a/arch/x86/vdso/vclock_gettime.c
-+++ b/arch/x86/vdso/vclock_gettime.c
-@@ -99,21 +99,25 @@ static notrace cycle_t vread_pvclock(int *mode)
- 		 * __getcpu() calls (Gleb).
- 		 */
+ #define __BUILD_EVA_INSN(insn, reg, addr)				\
+@@ -41,37 +71,60 @@
  
--		pvti = get_pvti(cpu);
-+		/* Make sure migrate_count will change if we leave the VCPU. */
-+		do {
-+			pvti = get_pvti(cpu);
-+			migrate_count = pvti->migrate_count;
+ #else
  
--		migrate_count = pvti->migrate_count;
-+			cpu1 = cpu;
-+			cpu = __getcpu() & VGETCPU_CPU_MASK;
-+		} while (unlikely(cpu != cpu1));
+-#define user_cache(op, base)		"cache " op ", " base "\n"
+-#define user_ll(reg, addr)		"ll " reg ", " addr "\n"
+-#define user_sc(reg, addr)		"sc " reg ", " addr "\n"
+-#define user_lw(reg, addr)		"lw " reg ", " addr "\n"
+-#define user_lwl(reg, addr)		"lwl " reg ", " addr "\n"
+-#define user_lwr(reg, addr)		"lwr " reg ", " addr "\n"
+-#define user_lh(reg, addr)		"lh " reg ", " addr "\n"
+-#define user_lb(reg, addr)		"lb " reg ", " addr "\n"
+-#define user_lbu(reg, addr)		"lbu " reg ", " addr "\n"
+-#define user_sw(reg, addr)		"sw " reg ", " addr "\n"
+-#define user_swl(reg, addr)		"swl " reg ", " addr "\n"
+-#define user_swr(reg, addr)		"swr " reg ", " addr "\n"
+-#define user_sh(reg, addr)		"sh " reg ", " addr "\n"
+-#define user_sb(reg, addr)		"sb " reg ", " addr "\n"
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
  
- 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		"sd " reg", " addr "\n"
+-#define user_ld(reg, addr)		"ld " reg", " addr "\n"
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_ld(reg, addr)
+ #endif /* CONFIG_32BIT */
  
- 		/*
- 		 * Test we're still on the cpu as well as the version.
--		 * We could have been migrated just after the first
--		 * vgetcpu but before fetching the version, so we
--		 * wouldn't notice a version change.
-+		 * - We must read TSC of pvti's VCPU.
-+		 * - KVM doesn't follow the versioning protocol, so data could
-+		 *   change before version if we left the VCPU.
- 		 */
--		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
--	} while (unlikely(cpu != cpu1 ||
--			  (pvti->pvti.version & 1) ||
-+		smp_rmb();
-+	} while (unlikely((pvti->pvti.version & 1) ||
- 			  pvti->pvti.version != version ||
- 			  pvti->migrate_count != migrate_count));
- 
--- 
-2.3.6
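The race and its fix come down to a read protocol: sample every counter that
could invalidate the data before reading it, re-check all of them afterwards,
and retry on any change. A compilable single-reader sketch of that shape, with
the per-CPU pvti indexing abstracted away (field names here are assumptions,
not the kernel structures; the kernel orders these reads with smp_rmb() where
this sketch leans on <stdatomic.h>):

    #include <stdatomic.h>
    #include <stdio.h>

    struct pvti {
            atomic_uint version;       /* odd while the host is mid-update */
            atomic_uint migrate_count; /* bumped when we leave this vCPU   */
            unsigned long long tsc;    /* the payload being published      */
    };

    static unsigned long long read_stable(struct pvti *p)
    {
            unsigned v, m;
            unsigned long long val;

            do {
                    v = atomic_load(&p->version);
                    m = atomic_load(&p->migrate_count);
                    val = p->tsc;
            } while ((v & 1) ||                            /* update running */
                     v != atomic_load(&p->version) ||      /* data changed   */
                     m != atomic_load(&p->migrate_count)); /* vCPU changed   */
            return val;
    }

    int main(void)
    {
            struct pvti p = { .tsc = 42 };
            printf("%llu\n", read_stable(&p));
            return 0;
    }

The inner do/while the patch adds plays the same role as the first line inside
this loop: it makes sure pvti and migrate_count were sampled on the vCPU the
task was actually running on.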
-
-
-From 0e625b6df5ac57968c7ab197e916ea03f70e4a24 Mon Sep 17 00:00:00 2001
-From: Len Brown <len.brown@intel.com>
-Date: Wed, 15 Jan 2014 00:37:34 -0500
-Subject: [PATCH 015/219] sched/idle/x86: Restore mwait_idle() to fix boot
- hangs, to improve power savings and to improve performance
-Cc: mpagano@gentoo.org
-
-commit b253149b843f89cd300cbdbea27ce1f847506f99 upstream.
-
-In Linux-3.9 we removed the mwait_idle() loop:
-
-  69fb3676df33 ("x86 idle: remove mwait_idle() and "idle=mwait" cmdline param")
-
-The reasoning was that modern machines should be sufficiently
-happy during the boot process using the default_idle() HALT
-loop, until cpuidle loads and either acpi_idle or intel_idle
-invoke the newer MWAIT-with-hints idle loop.
-
-But two machines reported problems:
-
- 1. Certain Core2-era machines support MWAIT-C1 and HALT only.
-    MWAIT-C1 is preferred for optimal power and performance.
-    But if they support just C1, cpuidle never loads and
-    so they use the boot-time default idle loop forever.
-
- 2. Some laptops will boot-hang if HALT is used,
-    but will boot successfully if MWAIT is used.
-    This appears to be a hidden assumption in BIOS SMI,
-    that is presumably valid on the proprietary OS
-    where the BIOS was validated.
-
-       https://bugzilla.kernel.org/show_bug.cgi?id=60770
-
-So here we effectively revert the patch above, restoring
-the mwait_idle() loop.  However, we don't bother restoring
-the idle=mwait cmdline parameter, since it appears to add
-no value.
-
-Maintainer notes:
-
-  For 3.9, simply revert 69fb3676df.
-  For 3.10, patch -F3 applies; fuzz is needed due to __cpuinit use in
-  the context. For 3.11, 3.12 and 3.13, this patch applies cleanly.
-
-Tested-by: Mike Galbraith <bitbucket@online.de>
-Signed-off-by: Len Brown <len.brown@intel.com>
-Acked-by: Mike Galbraith <bitbucket@online.de>
-Cc: Borislav Petkov <bp@alien8.de>
-Cc: H. Peter Anvin <hpa@zytor.com>
-Cc: Ian Malone <ibmalone@gmail.com>
-Cc: Josh Boyer <jwboyer@redhat.com>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Cc: Mike Galbraith <efault@gmx.de>
-Cc: Peter Zijlstra <peterz@infradead.org>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Link: http://lkml.kernel.org/r/345254a551eb5a6a866e048d7ab570fd2193aca4.1389763084.git.len.brown@intel.com
-[ Ported to recent kernels. ]
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/include/asm/mwait.h |  8 ++++++++
- arch/x86/kernel/process.c    | 47 ++++++++++++++++++++++++++++++++++++++++++++
- 2 files changed, 55 insertions(+)
-
-diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
-index a1410db..653dfa7 100644
---- a/arch/x86/include/asm/mwait.h
-+++ b/arch/x86/include/asm/mwait.h
-@@ -30,6 +30,14 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
- 		     :: "a" (eax), "c" (ecx));
- }
+ #endif /* CONFIG_EVA */
  
-+static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
-+{
-+	trace_hardirqs_on();
-+	/* "mwait %eax, %ecx;" */
-+	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
-+		     :: "a" (eax), "c" (ecx));
-+}
-+
- /*
-  * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
-  * which can obviate IPI to trigger checking of need_resched.
-diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
-index 046e2d6..65e1a90 100644
---- a/arch/x86/kernel/process.c
-+++ b/arch/x86/kernel/process.c
-@@ -24,6 +24,7 @@
- #include <asm/syscalls.h>
- #include <asm/idle.h>
- #include <asm/uaccess.h>
-+#include <asm/mwait.h>
- #include <asm/i387.h>
- #include <asm/fpu-internal.h>
- #include <asm/debugreg.h>
-@@ -399,6 +400,49 @@ static void amd_e400_idle(void)
- 		default_idle();
- }
+ #else /* __ASSEMBLY__ */
  
-+/*
-+ * Intel Core2 and older machines prefer MWAIT over HALT for C1.
-+ * We can't rely on cpuidle installing MWAIT, because it will not load
-+ * on systems that support only C1 -- so the boot default must be MWAIT.
-+ *
-+ * Some AMD machines are the opposite, they depend on using HALT.
-+ *
-+ * So for default C1, which is used during boot until cpuidle loads,
-+ * use MWAIT-C1 on Intel HW that has it, else use HALT.
-+ */
-+static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
-+{
-+	if (c->x86_vendor != X86_VENDOR_INTEL)
-+		return 0;
-+
-+	if (!cpu_has(c, X86_FEATURE_MWAIT))
-+		return 0;
-+
-+	return 1;
-+}
++#define kernel_cache(op, base)		cache op, base
++#define kernel_ll(reg, addr)		ll reg, addr
++#define kernel_sc(reg, addr)		sc reg, addr
++#define kernel_lw(reg, addr)		lw reg, addr
++#define kernel_lwl(reg, addr)		lwl reg, addr
++#define kernel_lwr(reg, addr)		lwr reg, addr
++#define kernel_lh(reg, addr)		lh reg, addr
++#define kernel_lb(reg, addr)		lb reg, addr
++#define kernel_lbu(reg, addr)		lbu reg, addr
++#define kernel_sw(reg, addr)		sw reg, addr
++#define kernel_swl(reg, addr)		swl reg, addr
++#define kernel_swr(reg, addr)		swr reg, addr
++#define kernel_sh(reg, addr)		sh reg, addr
++#define kernel_sb(reg, addr)		sb reg, addr
 +
++#ifdef CONFIG_32BIT
 +/*
-+ * MONITOR/MWAIT with no hints, used for the default C1 state.
-+ * This invokes MWAIT with interrupts enabled and no flags,
-+ * which is backwards compatible with the original MWAIT implementation.
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
 + */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		sd reg, addr
++#define kernel_ld(reg, addr)		ld reg, addr
++#endif /* CONFIG_32BIT */
 +
-+static void mwait_idle(void)
-+{
-+	if (!need_resched()) {
-+		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR))
-+			clflush((void *)&current_thread_info()->flags);
-+
-+		__monitor((void *)&current_thread_info()->flags, 0, 0);
-+		smp_mb();
-+		if (!need_resched())
-+			__sti_mwait(0, 0);
-+		else
-+			local_irq_enable();
-+	} else
-+		local_irq_enable();
-+}
-+
- void select_idle_routine(const struct cpuinfo_x86 *c)
- {
- #ifdef CONFIG_SMP
-@@ -412,6 +456,9 @@ void select_idle_routine(const struct cpuinfo_x86 *c)
- 		/* E400: APIC timer interrupt does not wake up CPU from C1e */
- 		pr_info("using AMD E400 aware idle routine\n");
- 		x86_idle = amd_e400_idle;
-+	} else if (prefer_mwait_c1_over_halt(c)) {
-+		pr_info("using mwait in idle threads\n");
-+		x86_idle = mwait_idle;
- 	} else
- 		x86_idle = default_idle;
- }
--- 
-2.3.6
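prefer_mwait_c1_over_halt() reduces to two questions that can also be asked
from userspace: is the vendor Intel, and does CPUID advertise MONITOR/MWAIT
(CPUID.01H:ECX bit 3)? A quick x86-only probe along those lines, using GCC or
Clang's <cpuid.h> (this merely mirrors the test; it cannot observe which idle
routine the kernel actually picked):

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            unsigned eax, ebx, ecx, edx;
            char vendor[13] = { 0 };

            __get_cpuid(0, &eax, &ebx, &ecx, &edx);
            memcpy(vendor + 0, &ebx, 4);   /* vendor string order is */
            memcpy(vendor + 4, &edx, 4);   /* EBX, EDX, ECX          */
            memcpy(vendor + 8, &ecx, 4);

            __get_cpuid(1, &eax, &ebx, &ecx, &edx);
            printf("vendor=%s mwait=%s\n", vendor,
                   (ecx & (1u << 3)) ? "yes" : "no");
            return 0;
    }

On a Core2-era box this prints GenuineIntel with mwait=yes, which is exactly
the class of machine the restored mwait_idle() loop is aimed at.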
-
-
-From aaa51337c5819599af0d1f6aba6a31639dd1c0a6 Mon Sep 17 00:00:00 2001
-From: Mike Galbraith <bitbucket@online.de>
-Date: Sat, 18 Jan 2014 17:14:44 +0100
-Subject: [PATCH 016/219] sched/idle/x86: Optimize unnecessary mwait_idle()
- resched IPIs
-Cc: mpagano@gentoo.org
-
-commit f8e617f4582995f7c25ef25b4167213120ad122b upstream.
-
-To fully take advantage of MWAIT, apparently the CLFLUSH instruction needs
-another quirk on certain CPUs: proper barriers around it on certain machines.
-
-On a Q6600 SMP system, pipe-test scheduling performance, cross core,
-improves significantly:
-
-  3.8.13                   487.2 KHz    1.000
-  3.13.0-master            415.5 KHz     .852
-  3.13.0-master+           415.2 KHz     .852     + restore mwait_idle
-  3.13.0-master++          488.5 KHz    1.002     + restore mwait_idle + IPI fix
-
-Since X86_BUG_CLFLUSH_MONITOR is already a quirk, don't create a separate
-quirk for the extra smp_mb()s.
-
-Signed-off-by: Mike Galbraith <bitbucket@online.de>
-Cc: Borislav Petkov <bp@alien8.de>
-Cc: H. Peter Anvin <hpa@zytor.com>
-Cc: Ian Malone <ibmalone@gmail.com>
-Cc: Josh Boyer <jwboyer@redhat.com>
-Cc: Len Brown <len.brown@intel.com>
-Cc: Len Brown <lenb@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Cc: Mike Galbraith <efault@gmx.de>
-Cc: Peter Zijlstra <peterz@infradead.org>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Link: http://lkml.kernel.org/r/1390061684.5566.4.camel@marge.simpson.net
-[ Ported to recent kernel, added comments about the quirk. ]
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/kernel/process.c | 12 ++++++++----
- 1 file changed, 8 insertions(+), 4 deletions(-)
-
-diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
-index 65e1a90..a388bb8 100644
---- a/arch/x86/kernel/process.c
-+++ b/arch/x86/kernel/process.c
-@@ -429,18 +429,22 @@ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
- 
- static void mwait_idle(void)
- {
--	if (!need_resched()) {
--		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR))
-+	if (!current_set_polling_and_test()) {
-+		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
-+			smp_mb(); /* quirk */
- 			clflush((void *)&current_thread_info()->flags);
-+			smp_mb(); /* quirk */
-+		}
- 
- 		__monitor((void *)&current_thread_info()->flags, 0, 0);
--		smp_mb();
- 		if (!need_resched())
- 			__sti_mwait(0, 0);
- 		else
- 			local_irq_enable();
--	} else
-+	} else {
- 		local_irq_enable();
-+	}
-+	__current_clr_polling();
- }
- 
- void select_idle_routine(const struct cpuinfo_x86 *c)
--- 
-2.3.6
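The IPI saving relies on a handshake: an idle CPU that has armed MONITOR on
its thread flags advertises the fact, so a waker only has to store to those
flags (the address MWAIT is watching) instead of raising a resched IPI. A
pthread analog of that handshake, with a busy-wait standing in for
MONITOR/MWAIT (toy code, not the kernel's current_set_polling_and_test()
machinery):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int need_resched_flag;
    static atomic_int polling;

    static void *idle_cpu(void *arg)
    {
            (void)arg;
            atomic_store(&polling, 1);              /* set_polling...      */
            if (!atomic_load(&need_resched_flag))   /* ...and_test         */
                    while (!atomic_load(&need_resched_flag))
                            ;                       /* stands in for mwait */
            atomic_store(&polling, 0);              /* clr_polling         */
            return NULL;
    }

    static void wake(void)
    {
            atomic_store(&need_resched_flag, 1);    /* the monitored store */
            if (!atomic_load(&polling))
                    puts("slow path: would send a resched IPI here");
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, idle_cpu, NULL);
            wake();
            pthread_join(t, NULL);
            return 0;
    }

The quirk barriers around clflush() in the patch exist for the same reason the
flag operations are paired here: the waker's store and the idler's arming of
the monitor must not be allowed to miss each other.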
-
-
-From 6e4dd840cca3053125c3f55650df1a9313b91615 Mon Sep 17 00:00:00 2001
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Sat, 11 Apr 2015 12:16:22 +0200
-Subject: [PATCH 017/219] perf/x86/intel: Fix Core2,Atom,NHM,WSM cycles:pp
- events
-Cc: mpagano@gentoo.org
-
-commit 517e6341fa123ec3a2f9ea78ad547be910529881 upstream.
-
-Ingo reported that cycles:pp didn't work for him on some machines.
-
-It turns out that in this commit:
-
-  af4bdcf675cf perf/x86/intel: Disallow flags for most Core2/Atom/Nehalem/Westmere events
-
-Andi forgot to explicitly allow that event when he
-disabled event flags for PEBS on those uarchs.
-
-Reported-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
-Cc: Jiri Olsa <jolsa@redhat.com>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Cc: Peter Zijlstra <peterz@infradead.org>
-Fixes: af4bdcf675cf ("perf/x86/intel: Disallow flags for most Core2/Atom/Nehalem/Westmere events")
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/kernel/cpu/perf_event_intel_ds.c | 8 ++++++++
- 1 file changed, 8 insertions(+)
-
-diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
-index 0739833..666bcf1 100644
---- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
-+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
-@@ -557,6 +557,8 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
-+	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
-+	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
- 	EVENT_CONSTRAINT_END
- };
- 
-@@ -564,6 +566,8 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c0, 0x1), /* INST_RETIRED.ANY */
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
-+	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
-+	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
- 	EVENT_CONSTRAINT_END
- };
+ #ifdef CONFIG_EVA
  
-@@ -587,6 +591,8 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
-+	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
-+	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
- 	EVENT_CONSTRAINT_END
- };
+ #define __BUILD_EVA_INSN(insn, reg, addr)			\
+@@ -101,31 +154,27 @@
+ #define user_sd(reg, addr)		user_sw(reg, addr)
+ #else
  
-@@ -602,6 +608,8 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
- 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
- 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
-+	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
-+	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
- 	EVENT_CONSTRAINT_END
- };
+-#define user_cache(op, base)		cache op, base
+-#define user_ll(reg, addr)		ll reg, addr
+-#define user_sc(reg, addr)		sc reg, addr
+-#define user_lw(reg, addr)		lw reg, addr
+-#define user_lwl(reg, addr)		lwl reg, addr
+-#define user_lwr(reg, addr)		lwr reg, addr
+-#define user_lh(reg, addr)		lh reg, addr
+-#define user_lb(reg, addr)		lb reg, addr
+-#define user_lbu(reg, addr)		lbu reg, addr
+-#define user_sw(reg, addr)		sw reg, addr
+-#define user_swl(reg, addr)		swl reg, addr
+-#define user_swr(reg, addr)		swr reg, addr
+-#define user_sh(reg, addr)		sh reg, addr
+-#define user_sb(reg, addr)		sb reg, addr
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
  
--- 
-2.3.6
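The whitelisted constant is an ordinary x86 raw event once decoded: event
select 0xc0 (INST_RETIRED.ANY_P) in bits [7:0], the invert bit at bit 23 and a
counter mask of 16 in bits [31:24], which is how perf encodes cycles:p on
these microarchitectures. A hypothetical userspace probe that rebuilds the
constant and requests it through perf_event_open(2) (whether the open succeeds
still depends on the CPU and the running kernel):

    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            struct perf_event_attr attr;
            /* 0xc0 | 1 << 23 | 16 << 24 == 0x108000c0 */
            unsigned long long config = 0xc0 | (1ULL << 23) | (16ULL << 24);

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_RAW;
            attr.config = config;          /* the cycles:p encoding */
            attr.precise_ip = 1;           /* ask for PEBS          */
            attr.disabled = 1;

            int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            printf("config=%#llx fd=%d\n", config, fd);
            if (fd >= 0)
                    close(fd);
            return 0;
    }

Before this patch, the PEBS constraint tables on Core2/Atom/Nehalem/Westmere
rejected that flag combination outright, which is why cycles:pp stopped
working there.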
-
-
-From 5c966c4f563f8b10e276e43579c0f27ea2a3cef2 Mon Sep 17 00:00:00 2001
-From: Linus Torvalds <torvalds@linux-foundation.org>
-Date: Thu, 23 Apr 2015 08:33:59 -0700
-Subject: [PATCH 018/219] x86: fix special __probe_kernel_write() tail zeroing
- case
-Cc: mpagano@gentoo.org
-
-commit d869844bd081081bf537e806a44811884230643e upstream.
-
-Commit cae2a173fe94 ("x86: clean up/fix 'copy_in_user()' tail zeroing")
-fixed the failure case tail zeroing of one special case of the x86-64
-generic user-copy routine, namely when used for the user-to-user case
-("copy_in_user()").
-
-But in the process it broke an even more unusual case: using the user
-copy routine for kernel-to-kernel copying.
-
-Now, normally kernel-kernel copies are obviously done using memcpy(),
-but we have a couple of special cases when we use the user-copy
-functions.  One is when we pass a kernel buffer to a regular user-buffer
-routine, using set_fs(KERNEL_DS).  That's a "normal" case, and continued
-to work fine, because it never takes any faults (with the possible
-exception of a silent and successful vmalloc fault).
-
-But Jan Beulich pointed out another, very unusual, special case: when we
-use the user-copy routines not because it's a path that expects a user
-pointer, but for a couple of ftrace/kgdb cases that want to do a kernel
-copy, but do so using "unsafe" buffers, and use the user-copy routine to
-gracefully handle faults.  IOW, for probe_kernel_write().
-
-And that broke for the case of a faulting kernel destination, because we
-saw the kernel destination and wanted to try to clear the tail of the
-buffer.  Which doesn't work, since that's what faults.
-
-This only triggers for things like kgdb and ftrace users (eg trying
-setting a breakpoint on read-only memory), but it's definitely a bug.
-The fix is to not compare against the kernel address start (TASK_SIZE),
-but instead use the same limits "access_ok()" uses.
-
-Reported-and-tested-by: Jan Beulich <jbeulich@suse.com>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/lib/usercopy_64.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
-index 1f33b3d..0a42327 100644
---- a/arch/x86/lib/usercopy_64.c
-+++ b/arch/x86/lib/usercopy_64.c
-@@ -82,7 +82,7 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
- 	clac();
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		sd reg, addr
+-#define user_ld(reg, addr)		ld reg, addr
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_ld(reg, addr)
+ #endif /* CONFIG_32BIT */
  
- 	/* If the destination is a kernel buffer, we always clear the end */
--	if ((unsigned long)to >= TASK_SIZE_MAX)
-+	if (!__addr_ok(to))
- 		memset(to, 0, len);
- 	return len;
- }
--- 
-2.3.6
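The subtle part is that "is this a kernel buffer?" stops being a fixed-boundary
question once set_fs(KERNEL_DS) is involved: probe_kernel_write() raises the
thread's address limit, so the tail-clearing test must ask the same question
access_ok() asks, against the movable limit. An illustrative stand-alone model
with made-up constants (the real __addr_ok() compares against
current_thread_info()->addr_limit; nothing below is the kernel's code):

    #include <stdio.h>

    #define TASK_SIZE_MAX   0x7ffffffff000ULL     /* illustrative     */
    #define KERNEL_DS_LIMIT 0xffffffffffffffffULL /* "everything ok"  */

    static unsigned long long addr_limit = TASK_SIZE_MAX; /* USER_DS  */

    static int addr_ok(unsigned long long p)  /* the __addr_ok() idea */
    {
            return p < addr_limit;
    }

    int main(void)
    {
            unsigned long long kptr = 0xffffffff81000000ULL; /* kernel text */

            addr_limit = KERNEL_DS_LIMIT;  /* probe_kernel_write() context */

            printf("old test would memset: %d\n",
                   kptr >= TASK_SIZE_MAX);  /* 1: memset faults again  */
            printf("new test would memset: %d\n",
                   !addr_ok(kptr));         /* 0: leave it alone       */
            return 0;
    }

Under KERNEL_DS every address passes the access_ok()-style test, so the fixed
copy_user_handle_tail() never tries to memset() a destination that has just
faulted.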
-
-
-From 47b34f8519e8a009d3ba8506ea8c5e7fe4314a6d Mon Sep 17 00:00:00 2001
-From: Nadav Amit <namit@cs.technion.ac.il>
-Date: Sun, 12 Apr 2015 21:47:15 +0300
-Subject: [PATCH 019/219] KVM: x86: Fix MSR_IA32_BNDCFGS in msrs_to_save
-Cc: mpagano@gentoo.org
-
-commit 9e9c3fe40bcd28e3f98f0ad8408435f4503f2781 upstream.
-
-kvm_init_msr_list is currently called before hardware_setup. As a result,
-vmx_mpx_supported always returns false when kvm_init_msr_list checks whether to
-save MSR_IA32_BNDCFGS.
-
-Move kvm_init_msr_list after vmx_hardware_setup is called to fix this issue.
-
-Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Message-Id: <1428864435-4732-1-git-send-email-namit@cs.technion.ac.il>
-Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/kvm/x86.c | 10 ++++++++--
- 1 file changed, 8 insertions(+), 2 deletions(-)
-
-diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
-index 32bf19e..e222ba5 100644
---- a/arch/x86/kvm/x86.c
-+++ b/arch/x86/kvm/x86.c
-@@ -5775,7 +5775,6 @@ int kvm_arch_init(void *opaque)
- 	kvm_set_mmio_spte_mask();
+ #endif /* CONFIG_EVA */
+diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h
+index dd083e9..9f26b07 100644
+--- a/arch/mips/include/asm/fpu.h
++++ b/arch/mips/include/asm/fpu.h
+@@ -170,6 +170,7 @@ static inline void lose_fpu(int save)
+ 		}
+ 		disable_msa();
+ 		clear_thread_flag(TIF_USEDMSA);
++		__disable_fpu();
+ 	} else if (is_fpu_owner()) {
+ 		if (save)
+ 			_save_fp(current);
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index ac4fc71..f722b05 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -322,6 +322,7 @@ enum mips_mmu_types {
+ #define T_TRAP			13	/* Trap instruction */
+ #define T_VCEI			14	/* Virtual coherency exception */
+ #define T_FPE			15	/* Floating point exception */
++#define T_MSADIS		21	/* MSA disabled exception */
+ #define T_WATCH			23	/* Watch address reference */
+ #define T_VCED			31	/* Virtual coherency data */
  
- 	kvm_x86_ops = ops;
--	kvm_init_msr_list();
+@@ -578,6 +579,7 @@ struct kvm_mips_callbacks {
+ 	int (*handle_syscall)(struct kvm_vcpu *vcpu);
+ 	int (*handle_res_inst)(struct kvm_vcpu *vcpu);
+ 	int (*handle_break)(struct kvm_vcpu *vcpu);
++	int (*handle_msa_disabled)(struct kvm_vcpu *vcpu);
+ 	int (*vm_init)(struct kvm *kvm);
+ 	int (*vcpu_init)(struct kvm_vcpu *vcpu);
+ 	int (*vcpu_setup)(struct kvm_vcpu *vcpu);
+diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
+index bbb6969..7659da2 100644
+--- a/arch/mips/kernel/unaligned.c
++++ b/arch/mips/kernel/unaligned.c
+@@ -109,10 +109,11 @@ static u32 unaligned_action;
+ extern void show_registers(struct pt_regs *regs);
  
- 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
- 			PT_DIRTY_MASK, PT64_NX_MASK, 0);
-@@ -7209,7 +7208,14 @@ void kvm_arch_hardware_disable(void)
+ #ifdef __BIG_ENDIAN
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "0(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -127,13 +128,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
- int kvm_arch_hardware_setup(void)
- {
--	return kvm_x86_ops->hardware_setup();
-+	int r;
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -146,21 +149,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
-+	r = kvm_x86_ops->hardware_setup();
-+	if (r != 0)
-+		return r;
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "0(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "0(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -178,14 +184,17 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
-+	kvm_init_msr_list();
-+	return 0;
- }
- 
- void kvm_arch_hardware_unsetup(void)
--- 
-2.3.6
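Stripped of the KVM specifics, this is a plain initialization-ordering bug: a
list built from feature predicates is only as good as the setup code that
computes those predicates. Reduced to a toy with stand-in names (not the KVM
functions):

    #include <stdio.h>

    static int mpx_supported;    /* stands in for vmx_mpx_supported() */

    static void hardware_setup(void)
    {
            mpx_supported = 1;   /* discovered while probing hardware */
    }

    static void init_msr_list(void)
    {
            printf("save BNDCFGS: %s\n", mpx_supported ? "yes" : "no");
    }

    int main(void)
    {
            init_msr_list();     /* old order: wrongly prints "no" */
            hardware_setup();
            init_msr_list();     /* fixed order: prints "yes"      */
            return 0;
    }

The patch applies the second ordering: kvm_init_msr_list() now runs only after
kvm_x86_ops->hardware_setup() has succeeded.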
-
-
-From 7362dcdba904cf6a1c3791c253f25f85390d45c0 Mon Sep 17 00:00:00 2001
-From: Filipe Manana <fdmanana@suse.com>
-Date: Mon, 23 Mar 2015 14:07:40 +0000
-Subject: [PATCH 020/219] Btrfs: fix log tree corruption when fs mounted with
- -o discard
-Cc: mpagano@gentoo.org
-
-commit dcc82f4783ad91d4ab654f89f37ae9291cdc846a upstream.
-
-While committing a transaction we free the log roots before we write the
-new super block. Freeing the log roots implies marking the disk location
-of every node/leaf (metadata extent) as pinned before the new super block
-is written. This is to prevent the disk location of log metadata extents
-from being reused before the new super block is written, otherwise we
-would have a corrupted log tree if before the new super block is written
-a crash/reboot happens and the location of any log tree metadata extent
-ended up being reused and rewritten.
-
-Even though we pinned the log tree's metadata extents, we were issuing a
-discard against them if the fs was mounted with the -o discard option,
-resulting in corruption of the log tree if a crash/reboot happened before
-writing the new super block - the next time the fs was mounted, during
-the log replay process we would find nodes/leaves of the log btree whose
-content was full of zeroes, causing the process to fail and requiring the
-btrfs-zero-log tool to wipe out the log tree (losing all previously
-fsynced data forever).
-
-Fix this by not doing a discard when pinning an extent. The discard will
-be done later when it's safe (after the new super block is committed) at
-extent-tree.c:btrfs_finish_extent_commit().
-
-Fixes: e688b7252f78 (Btrfs: fix extent pinning bugs in the tree log)
-Signed-off-by: Filipe Manana <fdmanana@suse.com>
-Signed-off-by: Chris Mason <clm@fb.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/btrfs/extent-tree.c | 5 ++---
- 1 file changed, 2 insertions(+), 3 deletions(-)
-
-diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
-index 8b353ad..0a795c9 100644
---- a/fs/btrfs/extent-tree.c
-+++ b/fs/btrfs/extent-tree.c
-@@ -6956,12 +6956,11 @@ static int __btrfs_free_reserved_extent(struct btrfs_root *root,
- 		return -ENOSPC;
- 	}
+ #endif /* CONFIG_CPU_MIPSR6 */
  
--	if (btrfs_test_opt(root, DISCARD))
--		ret = btrfs_discard_extent(root, start, len, NULL);
--
- 	if (pin)
- 		pin_down_extent(root, cache, start, len, 1);
- 	else {
-+		if (btrfs_test_opt(root, DISCARD))
-+			ret = btrfs_discard_extent(root, start, len, NULL);
- 		btrfs_add_free_space(cache, start, len);
- 		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
- 	}
--- 
-2.3.6
-
-
-From 1f6719c298def2c3440dc5e9ca9532053877fff7 Mon Sep 17 00:00:00 2001
-From: David Sterba <dsterba@suse.cz>
-Date: Wed, 25 Mar 2015 19:26:41 +0100
-Subject: [PATCH 021/219] btrfs: don't accept bare namespace as a valid xattr
-Cc: mpagano@gentoo.org
-
-commit 3c3b04d10ff1811a27f86684ccd2f5ba6983211d upstream.
-
-Due to insufficient check in btrfs_is_valid_xattr, this unexpectedly
-works:
-
- $ touch file
- $ setfattr -n user. -v 1 file
- $ getfattr -d file
-user.="1"
-
-ie. the missing attribute name after the namespace.
-
-Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=94291
-Reported-by: William Douglas <william.douglas@intel.com>
-Signed-off-by: David Sterba <dsterba@suse.cz>
-Signed-off-by: Chris Mason <clm@fb.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/btrfs/xattr.c | 53 +++++++++++++++++++++++++++++++++++++++--------------
- 1 file changed, 39 insertions(+), 14 deletions(-)
-
-diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
-index 883b936..45ea704 100644
---- a/fs/btrfs/xattr.c
-+++ b/fs/btrfs/xattr.c
-@@ -364,22 +364,42 @@ const struct xattr_handler *btrfs_xattr_handlers[] = {
- /*
-  * Check if the attribute is in a supported namespace.
-  *
-- * This applied after the check for the synthetic attributes in the system
-+ * This is applied after the check for the synthetic attributes in the system
-  * namespace.
-  */
--static bool btrfs_is_valid_xattr(const char *name)
-+static int btrfs_is_valid_xattr(const char *name)
- {
--	return !strncmp(name, XATTR_SECURITY_PREFIX,
--			XATTR_SECURITY_PREFIX_LEN) ||
--	       !strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) ||
--	       !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) ||
--	       !strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) ||
--		!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN);
-+	int len = strlen(name);
-+	int prefixlen = 0;
-+
-+	if (!strncmp(name, XATTR_SECURITY_PREFIX,
-+			XATTR_SECURITY_PREFIX_LEN))
-+		prefixlen = XATTR_SECURITY_PREFIX_LEN;
-+	else if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
-+		prefixlen = XATTR_SYSTEM_PREFIX_LEN;
-+	else if (!strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
-+		prefixlen = XATTR_TRUSTED_PREFIX_LEN;
-+	else if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
-+		prefixlen = XATTR_USER_PREFIX_LEN;
-+	else if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
-+		prefixlen = XATTR_BTRFS_PREFIX_LEN;
-+	else
-+		return -EOPNOTSUPP;
-+
-+	/*
-+	 * The name cannot consist of just prefix
-+	 */
-+	if (len <= prefixlen)
-+		return -EINVAL;
-+
-+	return 0;
- }
- 
- ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
- 		       void *buffer, size_t size)
- {
-+	int ret;
-+
- 	/*
- 	 * If this is a request for a synthetic attribute in the system.*
- 	 * namespace use the generic infrastructure to resolve a handler
-@@ -388,8 +408,9 @@ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
- 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
- 		return generic_getxattr(dentry, name, buffer, size);
- 
--	if (!btrfs_is_valid_xattr(name))
--		return -EOPNOTSUPP;
-+	ret = btrfs_is_valid_xattr(name);
-+	if (ret)
-+		return ret;
- 	return __btrfs_getxattr(dentry->d_inode, name, buffer, size);
- }
- 
-@@ -397,6 +418,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
- 		   size_t size, int flags)
- {
- 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
-+	int ret;
- 
- 	/*
- 	 * The permission on security.* and system.* is not checked
-@@ -413,8 +435,9 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
- 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
- 		return generic_setxattr(dentry, name, value, size, flags);
- 
--	if (!btrfs_is_valid_xattr(name))
--		return -EOPNOTSUPP;
-+	ret = btrfs_is_valid_xattr(name);
-+	if (ret)
-+		return ret;
- 
- 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
- 		return btrfs_set_prop(dentry->d_inode, name,
-@@ -430,6 +453,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
- int btrfs_removexattr(struct dentry *dentry, const char *name)
- {
- 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
-+	int ret;
- 
- 	/*
- 	 * The permission on security.* and system.* is not checked
-@@ -446,8 +470,9 @@ int btrfs_removexattr(struct dentry *dentry, const char *name)
- 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
- 		return generic_removexattr(dentry, name);
- 
--	if (!btrfs_is_valid_xattr(name))
--		return -EOPNOTSUPP;
-+	ret = btrfs_is_valid_xattr(name);
-+	if (ret)
-+		return ret;
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "0(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -201,13 +210,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
- 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
- 		return btrfs_set_prop(dentry->d_inode, name,
--- 
-2.3.6
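Seen from userspace, the change is a different errno: a bare namespace prefix
now fails with EINVAL instead of quietly creating an attribute literally named
"user.". A small check using setxattr(2) (the file is assumed to already exist
on a btrfs mount):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/xattr.h>

    int main(void)
    {
            const char *path = "file";

            if (setxattr(path, "user.test", "1", 1, 0) != 0)
                    perror("user.test");   /* should succeed */
            if (setxattr(path, "user.", "1", 1, 0) != 0)
                    perror("user.");       /* expected: Invalid argument */
            return 0;
    }

Unknown prefixes still return EOPNOTSUPP; only the "known prefix, empty name"
case is turned into EINVAL by the stricter btrfs_is_valid_xattr().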
-
-
-From 9301d5068d8732a0f2d787240270a1426d09ecf5 Mon Sep 17 00:00:00 2001
-From: Filipe Manana <fdmanana@suse.com>
-Date: Mon, 30 Mar 2015 18:23:59 +0100
-Subject: [PATCH 022/219] Btrfs: fix inode eviction infinite loop after cloning
- into it
-Cc: mpagano@gentoo.org
-
-commit ccccf3d67294714af2d72a6fd6fd7d73b01c9329 upstream.
-
-If we attempt to clone a 0 length region into a file we can end up
-inserting a range in the inode's extent_io tree with a start offset
-that is greater then the end offset, which triggers immediately the
-following warning:
-
-[ 3914.619057] WARNING: CPU: 17 PID: 4199 at fs/btrfs/extent_io.c:435 insert_state+0x4b/0x10b [btrfs]()
-[ 3914.620886] BTRFS: end < start 4095 4096
-(...)
-[ 3914.638093] Call Trace:
-[ 3914.638636]  [<ffffffff81425fd9>] dump_stack+0x4c/0x65
-[ 3914.639620]  [<ffffffff81045390>] warn_slowpath_common+0xa1/0xbb
-[ 3914.640789]  [<ffffffffa03ca44f>] ? insert_state+0x4b/0x10b [btrfs]
-[ 3914.642041]  [<ffffffff810453f0>] warn_slowpath_fmt+0x46/0x48
-[ 3914.643236]  [<ffffffffa03ca44f>] insert_state+0x4b/0x10b [btrfs]
-[ 3914.644441]  [<ffffffffa03ca729>] __set_extent_bit+0x107/0x3f4 [btrfs]
-[ 3914.645711]  [<ffffffffa03cb256>] lock_extent_bits+0x65/0x1bf [btrfs]
-[ 3914.646914]  [<ffffffff8142b2fb>] ? _raw_spin_unlock+0x28/0x33
-[ 3914.648058]  [<ffffffffa03cbac4>] ? test_range_bit+0xcc/0xde [btrfs]
-[ 3914.650105]  [<ffffffffa03cb3c3>] lock_extent+0x13/0x15 [btrfs]
-[ 3914.651361]  [<ffffffffa03db39e>] lock_extent_range+0x3d/0xcd [btrfs]
-[ 3914.652761]  [<ffffffffa03de1fe>] btrfs_ioctl_clone+0x278/0x388 [btrfs]
-[ 3914.654128]  [<ffffffff811226dd>] ? might_fault+0x58/0xb5
-[ 3914.655320]  [<ffffffffa03e0909>] btrfs_ioctl+0xb51/0x2195 [btrfs]
-(...)
-[ 3914.669271] ---[ end trace 14843d3e2e622fc1 ]---
-
-This later makes the inode eviction handler enter an infinite loop that
-keeps dumping the following warning over and over:
-
-[ 3915.117629] WARNING: CPU: 22 PID: 4228 at fs/btrfs/extent_io.c:435 insert_state+0x4b/0x10b [btrfs]()
-[ 3915.119913] BTRFS: end < start 4095 4096
-(...)
-[ 3915.137394] Call Trace:
-[ 3915.137913]  [<ffffffff81425fd9>] dump_stack+0x4c/0x65
-[ 3915.139154]  [<ffffffff81045390>] warn_slowpath_common+0xa1/0xbb
-[ 3915.140316]  [<ffffffffa03ca44f>] ? insert_state+0x4b/0x10b [btrfs]
-[ 3915.141505]  [<ffffffff810453f0>] warn_slowpath_fmt+0x46/0x48
-[ 3915.142709]  [<ffffffffa03ca44f>] insert_state+0x4b/0x10b [btrfs]
-[ 3915.143849]  [<ffffffffa03ca729>] __set_extent_bit+0x107/0x3f4 [btrfs]
-[ 3915.145120]  [<ffffffffa038c1e3>] ? btrfs_kill_super+0x17/0x23 [btrfs]
-[ 3915.146352]  [<ffffffff811548f6>] ? deactivate_locked_super+0x3b/0x50
-[ 3915.147565]  [<ffffffffa03cb256>] lock_extent_bits+0x65/0x1bf [btrfs]
-[ 3915.148785]  [<ffffffff8142b7e2>] ? _raw_write_unlock+0x28/0x33
-[ 3915.149931]  [<ffffffffa03bc325>] btrfs_evict_inode+0x196/0x482 [btrfs]
-[ 3915.151154]  [<ffffffff81168904>] evict+0xa0/0x148
-[ 3915.152094]  [<ffffffff811689e5>] dispose_list+0x39/0x43
-[ 3915.153081]  [<ffffffff81169564>] evict_inodes+0xdc/0xeb
-[ 3915.154062]  [<ffffffff81154418>] generic_shutdown_super+0x49/0xef
-[ 3915.155193]  [<ffffffff811546d1>] kill_anon_super+0x13/0x1e
-[ 3915.156274]  [<ffffffffa038c1e3>] btrfs_kill_super+0x17/0x23 [btrfs]
-(...)
-[ 3915.167404] ---[ end trace 14843d3e2e622fc2 ]---
-
-So just bail out of the clone ioctl if the length of the region to clone
-is zero, without locking any extent range, in order to prevent this issue
-(same behaviour as a pwrite with a 0 length for example).
-
-This is trivial to reproduce. For example, the steps for the test I just
-made for fstests:
-
-  mkfs.btrfs -f SCRATCH_DEV
-  mount SCRATCH_DEV $SCRATCH_MNT
-
-  touch $SCRATCH_MNT/foo
-  touch $SCRATCH_MNT/bar
-
-  $CLONER_PROG -s 0 -d 4096 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar
-  umount $SCRATCH_MNT
-
-A test case for fstests follows soon.
-
-Signed-off-by: Filipe Manana <fdmanana@suse.com>
-Reviewed-by: Omar Sandoval <osandov@osandov.com>
-Signed-off-by: Chris Mason <clm@fb.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/btrfs/ioctl.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
-index 74609b9..a09d3b8 100644
---- a/fs/btrfs/ioctl.c
-+++ b/fs/btrfs/ioctl.c
-@@ -3626,6 +3626,11 @@ static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd,
- 	if (off + len == src->i_size)
- 		len = ALIGN(src->i_size, bs) - off;
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -222,9 +233,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
-+	if (len == 0) {
-+		ret = 0;
-+		goto out_unlock;
-+	}
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, (%2)\n"               \
+ 			"2:\tldr\t%0, 7(%2)\n\t"            \
+@@ -240,21 +253,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
- 	/* verify the end result is block aligned */
- 	if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) ||
- 	    !IS_ALIGNED(destoff, bs))
--- 
-2.3.6
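The fstests reproducer quoted above boils down to one clone ioctl whose length
is zero. A sketch of that call against the uapi clone-range interface (struct
and ioctl names as found in <linux/btrfs.h>; error handling is elided and the
two files are assumed to exist on btrfs):

    #include <fcntl.h>
    #include <linux/btrfs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
            int src = open("foo", O_RDONLY);
            int dst = open("bar", O_WRONLY);
            struct btrfs_ioctl_clone_range_args args = {
                    .src_fd      = src,
                    .src_offset  = 0,
                    .src_length  = 0,     /* the degenerate case */
                    .dest_offset = 4096,
            };

            /* with the fix: a no-op returning 0 instead of locking the
             * bogus extent range [4096, 4095] */
            int ret = ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args);
            printf("ret=%d\n", ret);
            close(src);
            close(dst);
            return 0;
    }

This matches the behaviour of a zero-length pwrite(), which is the precedent
the commit message cites.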
-
-
-From 68ea2629745f61ddf8a603970e74b294737bc5d7 Mon Sep 17 00:00:00 2001
-From: Filipe Manana <fdmanana@suse.com>
-Date: Mon, 30 Mar 2015 18:26:47 +0100
-Subject: [PATCH 023/219] Btrfs: fix inode eviction infinite loop after
- extent_same ioctl
-Cc: mpagano@gentoo.org
-
-commit 113e8283869b9855c8b999796aadd506bbac155f upstream.
-
-If we pass a length of 0 to the extent_same ioctl, we end up locking an
-extent range with a start offset greater then its end offset (if the
-destination file's offset is greater than zero). This results in a warning
-from extent_io.c:insert_state through the following call chain:
-
-  btrfs_extent_same()
-    btrfs_double_lock()
-      lock_extent_range()
-        lock_extent(inode->io_tree, offset, offset + len - 1)
-          lock_extent_bits()
-            __set_extent_bit()
-              insert_state()
-                --> WARN_ON(end < start)
-
-This leads to an infinite loop when evicting the inode. This is the same
-problem that my previous patch titled
-"Btrfs: fix inode eviction infinite loop after cloning into it" addressed
-but for the extent_same ioctl instead of the clone ioctl.
-
-Signed-off-by: Filipe Manana <fdmanana@suse.com>
-Reviewed-by: Omar Sandoval <osandov@osandov.com>
-Signed-off-by: Chris Mason <clm@fb.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/btrfs/ioctl.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
-index a09d3b8..f23d4be 100644
---- a/fs/btrfs/ioctl.c
-+++ b/fs/btrfs/ioctl.c
-@@ -2897,6 +2897,9 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 len,
- 	if (src == dst)
- 		return -EINVAL;
+ #else
+ /* MIPSR6 has no lwl or ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "0(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "0(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -272,9 +288,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
-+	if (len == 0)
-+		return 0;
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -319,16 +337,19 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
- 	btrfs_double_lock(src, loff, dst, dst_loff, len);
+ #endif /* CONFIG_CPU_MIPSR6 */
  
- 	ret = extent_same_check_offsets(src, loff, len);
--- 
-2.3.6
-
-
-From 5683056e4853891106ae0a99938c96dfdc8fa881 Mon Sep 17 00:00:00 2001
-From: Gerald Schaefer <gerald.schaefer@de.ibm.com>
-Date: Tue, 14 Apr 2015 15:42:30 -0700
-Subject: [PATCH 024/219] mm/hugetlb: use pmd_page() in follow_huge_pmd()
-Cc: mpagano@gentoo.org
-
-commit 97534127012f0e396eddea4691f4c9b170aed74b upstream.
-
-Commit 61f77eda9bbf ("mm/hugetlb: reduce arch dependent code around
-follow_huge_*") broke follow_huge_pmd() on s390, where pmd and pte
-layout differ and using pte_page() on a huge pmd will return wrong
-results.  Using pmd_page() instead fixes this.
-
-All architectures that were touched by that commit have pmd_page()
-defined, so this should not break anything on other architectures.
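-
-For intuition, the offset arithmetic in the fixed line can be checked in
-isolation (a sketch assuming x86-style 4K pages and 2M PMDs; the real
-constants are per-architecture):
-
-  #include <stdio.h>
-
-  #define PAGE_SHIFT 12
-  #define PMD_SHIFT  21
-  #define PMD_MASK   (~((1UL << PMD_SHIFT) - 1))
-
-  int main(void)
-  {
-          /* an address 5 small pages into a 2M huge page */
-          unsigned long address = 0x200000UL + 5 * 0x1000UL;
-
-          /* index of the constituent 4K page, added to pmd_page() */
-          printf("subpage index = %lu\n",
-                 (address & ~PMD_MASK) >> PAGE_SHIFT);  /* prints 5 */
-          return 0;
-  }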
-
-Fixes: 61f77eda "mm/hugetlb: reduce arch dependent code around follow_huge_*"
-Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
-Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
-Cc: Hugh Dickins <hughd@google.com>
-Cc: Michal Hocko <mhocko@suse.cz>
-Cc: Andrea Arcangeli <aarcange@redhat.com>
-Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
-Acked-by: David Rientjes <rientjes@google.com>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- mm/hugetlb.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
-diff --git a/mm/hugetlb.c b/mm/hugetlb.c
-index c41b2a0..caad3c5 100644
---- a/mm/hugetlb.c
-+++ b/mm/hugetlb.c
-@@ -3735,8 +3735,7 @@ retry:
- 	if (!pmd_huge(*pmd))
- 		goto out;
- 	if (pmd_present(*pmd)) {
--		page = pte_page(*(pte_t *)pmd) +
--			((address & ~PMD_MASK) >> PAGE_SHIFT);
-+		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
- 		if (flags & FOLL_GET)
- 			get_page(page);
- 	} else {
--- 
-2.3.6
-
-
-From 5cb46afa0f6d4c48714951dc856c404d79315a39 Mon Sep 17 00:00:00 2001
-From: Scott Wood <scottwood@freescale.com>
-Date: Fri, 10 Apr 2015 19:37:34 -0500
-Subject: [PATCH 025/219] powerpc/hugetlb: Call mm_dec_nr_pmds() in
- hugetlb_free_pmd_range()
-Cc: mpagano@gentoo.org
-
-commit 50c6a665b383cb5839e45d04e36faeeefaffa052 upstream.
-
-Commit dc6c9a35b66b5 ("mm: account pmd page tables to the process")
-added a counter that is incremented whenever a PMD is allocated and
-decremented whenever a PMD is freed.  For hugepages on PPC, common code
-is used to allocate PMDs, but arch-specific code is used to free PMDs.
-
-This results in kernel output such as "BUG: non-zero nr_pmds on freeing
-mm: 1" when using hugepages.
-
-Update the PPC hugepage PMD freeing code to decrement the count, just
-as the above commit did for free_pmd_range().
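-
-The invariant is easy to model outside the kernel (a toy counter, not the
-mm code itself): every allocation path must be matched by a decrement on
-every free path.
-
-  #include <assert.h>
-  #include <stdio.h>
-
-  static long nr_pmds;
-
-  static void pmd_alloc(void)        { nr_pmds++; }  /* common code */
-  static void pmd_free_common(void)  { nr_pmds--; }  /* free_pmd_range() */
-  static void pmd_free_hugetlb(void) { nr_pmds--; }  /* the added decrement */
-
-  int main(void)
-  {
-          pmd_alloc(); pmd_free_common();
-          pmd_alloc(); pmd_free_hugetlb();
-          assert(nr_pmds == 0);  /* without the fix: "non-zero nr_pmds" */
-          printf("nr_pmds balanced\n");
-          return 0;
-  }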
-
-Fixes: dc6c9a35b66b5 ("mm: account pmd page tables to the process")
-Signed-off-by: Scott Wood <scottwood@freescale.com>
-Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/mm/hugetlbpage.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
-index 7e408bf..cecbe00 100644
---- a/arch/powerpc/mm/hugetlbpage.c
-+++ b/arch/powerpc/mm/hugetlbpage.c
-@@ -581,6 +581,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
- 	pmd = pmd_offset(pud, start);
- 	pud_clear(pud);
- 	pmd_free_tlb(tlb, pmd, start);
-+	mm_dec_nr_pmds(tlb->mm);
- }
  
- static void hugetlb_free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
--- 
-2.3.6
-
-
-From 9297ed24421df19f5c5085d65ee2575a63524447 Mon Sep 17 00:00:00 2001
-From: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
-Date: Tue, 3 Mar 2015 10:52:05 +0100
-Subject: [PATCH 026/219] usb: gadget: printer: enqueue printer's response for
- setup request
-Cc: mpagano@gentoo.org
-
-commit eb132ccbdec5df46e29c9814adf76075ce83576b upstream.
-
-Function-specific setup requests should be handled in such a way that,
-apart from filling in the data buffer, the requests are also actually
-enqueued: if function-specific setup is called from composite_setup(),
-the "usb_ep_queue()" block of code in composite_setup() is skipped.
-
-The printer function lacks this part and it results in e.g. get device id
-requests failing: the host expects some response, the device prepares it
-but does not enqueue it for sending to the host, so the host eventually
-times out.
-
-This patch adds enqueueing the prepared responses.
-
-Fixes: 2e87edf49227: "usb: gadget: make g_printer use composite"
-Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/gadget/legacy/printer.c | 9 +++++++++
- 1 file changed, 9 insertions(+)
-
-diff --git a/drivers/usb/gadget/legacy/printer.c b/drivers/usb/gadget/legacy/printer.c
-index 9054598..6385c19 100644
---- a/drivers/usb/gadget/legacy/printer.c
-+++ b/drivers/usb/gadget/legacy/printer.c
-@@ -1031,6 +1031,15 @@ unknown:
- 		break;
- 	}
- 	/* host either stalls (value < 0) or reports success */
-+	if (value >= 0) {
-+		req->length = value;
-+		req->zero = value < wLength;
-+		value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
-+		if (value < 0) {
-+			ERROR(dev, "%s:%d Error!\n", __func__, __LINE__);
-+			req->status = 0;
-+		}
-+	}
- 	return value;
- }
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "1(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
+ 			"srl\t$1, %1, 0x8\n"                \
+-			"2:\t"user_sb("$1", "0(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "0(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -342,13 +363,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
  
--- 
-2.3.6
-
-
-From bcdd54ffac32205938fa2cdd656604973275214b Mon Sep 17 00:00:00 2001
-From: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Date: Wed, 4 Feb 2015 15:53:42 +0100
-Subject: [PATCH 027/219] KVM: s390: fix handling of write errors in the tpi
- handler
-Cc: mpagano@gentoo.org
-
-commit 261520dcfcba93ca5dfe671b88ffab038cd940c8 upstream.
-
-If the I/O interrupt could not be written to the guest provided
-area (e.g. access exception), a program exception was injected into the
-guest but "inti" wasn't freed, therefore resulting in a memory leak.
-
-In addition, the I/O interrupt wasn't reinjected. Therefore the dequeued
-interrupt is lost.
-
-This patch fixes the problem while cleaning up the function and making the
-cc and rc logic easier to handle.
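-
-The ownership rule behind the cleanup can be sketched generically (toy
-code, not the s390 implementation): once dequeued, an interrupt is either
-delivered and freed, or handed back, never dropped.
-
-  #include <stdio.h>
-  #include <stdlib.h>
-
-  struct inti { int data; };
-
-  static int deliver(struct inti *i)   { (void)i; return -1; /* fault */ }
-  static void reinject(struct inti *i) { printf("reinjected %d\n", i->data);
-                                         free(i); /* toy: "list" frees */ }
-
-  int main(void)
-  {
-          struct inti *i = malloc(sizeof(*i));
-
-          if (!i)
-                  return 1;
-          i->data = 7;
-          if (deliver(i) == 0)
-                  free(i);      /* delivered: drop our reference */
-          else
-                  reinject(i);  /* failed: hand it back, no leak, no loss */
-          return 0;
-  }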
-
-Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/s390/kvm/priv.c | 40 +++++++++++++++++++++++-----------------
- 1 file changed, 23 insertions(+), 17 deletions(-)
-
-diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
-index 3511169..767149a 100644
---- a/arch/s390/kvm/priv.c
-+++ b/arch/s390/kvm/priv.c
-@@ -229,18 +229,19 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
- 	struct kvm_s390_interrupt_info *inti;
- 	unsigned long len;
- 	u32 tpi_data[3];
--	int cc, rc;
-+	int rc;
- 	u64 addr;
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "(%2)")"\n"    \
+-			"2:\t"user_swr("%1", "3(%2)")"\n\t" \
++			"1:\t"type##_swl("%1", "(%2)")"\n"  \
++			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -361,9 +384,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
  
--	rc = 0;
- 	addr = kvm_s390_get_base_disp_s(vcpu);
- 	if (addr & 3)
- 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
--	cc = 0;
-+
- 	inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->arch.sie_block->gcr[6], 0);
--	if (!inti)
--		goto no_interrupt;
--	cc = 1;
-+	if (!inti) {
-+		kvm_s390_set_psw_cc(vcpu, 0);
-+		return 0;
-+	}
-+
- 	tpi_data[0] = inti->io.subchannel_id << 16 | inti->io.subchannel_nr;
- 	tpi_data[1] = inti->io.io_int_parm;
- 	tpi_data[2] = inti->io.io_int_word;
-@@ -251,30 +252,35 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
- 		 */
- 		len = sizeof(tpi_data) - 4;
- 		rc = write_guest(vcpu, addr, &tpi_data, len);
--		if (rc)
--			return kvm_s390_inject_prog_cond(vcpu, rc);
-+		if (rc) {
-+			rc = kvm_s390_inject_prog_cond(vcpu, rc);
-+			goto reinject_interrupt;
-+		}
- 	} else {
- 		/*
- 		 * Store the three-word I/O interruption code into
- 		 * the appropriate lowcore area.
- 		 */
- 		len = sizeof(tpi_data);
--		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len))
-+		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len)) {
-+			/* failed writes to the low core are not recoverable */
- 			rc = -EFAULT;
-+			goto reinject_interrupt;
-+		}
- 	}
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1,(%2)\n"                \
+ 			"2:\tsdr\t%1, 7(%2)\n\t"            \
+@@ -379,20 +404,23 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
 +
-+	/* irq was successfully handed to the guest */
-+	kfree(inti);
-+	kvm_s390_set_psw_cc(vcpu, 1);
-+	return 0;
-+reinject_interrupt:
- 	/*
- 	 * If we encounter a problem storing the interruption code, the
- 	 * instruction is suppressed from the guest's view: reinject the
- 	 * interrupt.
- 	 */
--	if (!rc)
--		kfree(inti);
--	else
--		kvm_s390_reinject_io_int(vcpu->kvm, inti);
--no_interrupt:
--	/* Set condition code and we're done. */
--	if (!rc)
--		kvm_s390_set_psw_cc(vcpu, cc);
-+	kvm_s390_reinject_io_int(vcpu->kvm, inti);
-+	/* don't set the cc, a pgm irq was injected or we drop to user space */
- 	return rc ? -EFAULT : 0;
- }
- 
--- 
-2.3.6
-
-
-From 98529eff3f93a3179a35f9ae459e21f64e8be813 Mon Sep 17 00:00:00 2001
-From: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Date: Wed, 4 Feb 2015 15:59:11 +0100
-Subject: [PATCH 028/219] KVM: s390: reinjection of irqs can fail in the tpi
- handler
-Cc: mpagano@gentoo.org
-
-commit 15462e37ca848abac7477dece65f8af25febd744 upstream.
-
-The reinjection of an I/O interrupt can fail if the list is at the limit
-and, between the dequeue and the reinjection, another I/O interrupt is
-injected (e.g. if user space floods kvm with I/O interrupts).
-
-This patch avoids this memory leak and returns -EFAULT in this special
-case. This error is not recoverable, so let's fail hard. This can later
-be avoided by not dequeuing the interrupt but working directly on the
-locked list.
-
-Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/s390/kvm/interrupt.c | 4 ++--
- arch/s390/kvm/kvm-s390.h  | 4 ++--
- arch/s390/kvm/priv.c      | 5 ++++-
- 3 files changed, 8 insertions(+), 5 deletions(-)
-
-diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
-index 073b5f3..e7a46e8 100644
---- a/arch/s390/kvm/interrupt.c
-+++ b/arch/s390/kvm/interrupt.c
-@@ -1332,10 +1332,10 @@ int kvm_s390_inject_vm(struct kvm *kvm,
- 	return rc;
- }
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "3(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "3(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "2(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "1(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "0(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "0(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -409,9 +437,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
  
--void kvm_s390_reinject_io_int(struct kvm *kvm,
-+int kvm_s390_reinject_io_int(struct kvm *kvm,
- 			      struct kvm_s390_interrupt_info *inti)
- {
--	__inject_vm(kvm, inti);
-+	return __inject_vm(kvm, inti);
- }
+ #define     StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -451,15 +481,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
  
- int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
-diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
-index c34109a..6995a30 100644
---- a/arch/s390/kvm/kvm-s390.h
-+++ b/arch/s390/kvm/kvm-s390.h
-@@ -151,8 +151,8 @@ int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
- int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
- struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
- 						    u64 cr6, u64 schid);
--void kvm_s390_reinject_io_int(struct kvm *kvm,
--			      struct kvm_s390_interrupt_info *inti);
-+int kvm_s390_reinject_io_int(struct kvm *kvm,
-+			     struct kvm_s390_interrupt_info *inti);
- int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked);
+ #else /* __BIG_ENDIAN */
  
- /* implemented in intercept.c */
-diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
-index 767149a..613e9f0 100644
---- a/arch/s390/kvm/priv.c
-+++ b/arch/s390/kvm/priv.c
-@@ -279,7 +279,10 @@ reinject_interrupt:
- 	 * instruction is suppressed from the guest's view: reinject the
- 	 * interrupt.
- 	 */
--	kvm_s390_reinject_io_int(vcpu->kvm, inti);
-+	if (kvm_s390_reinject_io_int(vcpu->kvm, inti)) {
-+		kfree(inti);
-+		rc = -EFAULT;
-+	}
- 	/* don't set the cc, a pgm irq was injected or we drop to user space */
- 	return rc ? -EFAULT : 0;
- }
--- 
-2.3.6
-
-
-From 7f1a4ebee923455bb5f50ab4ce832194dff859a7 Mon Sep 17 00:00:00 2001
-From: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
-Date: Tue, 3 Mar 2015 09:54:41 +0100
-Subject: [PATCH 029/219] KVM: s390: Zero out current VMDB of STSI before
- including level3 data.
-Cc: mpagano@gentoo.org
-
-commit b75f4c9afac2604feb971441116c07a24ecca1ec upstream.
-
-s390 documentation requires words 0 and 10-15 to be reserved and stored as
-zeros. As we fill out all other fields, we can memset the full structure.
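-
-This is the usual zero-then-fill pattern for structures with reserved
-fields (a user-space sketch with a made-up struct, not the real sysinfo
-layout):
-
-  #include <stdio.h>
-  #include <string.h>
-
-  struct vmdb { unsigned int reserved; int cpus_total; };
-
-  int main(void)
-  {
-          struct vmdb vm;
-
-          memset(&vm, 0, sizeof(vm));  /* reserved words guaranteed zero */
-          vm.cpus_total = 4;           /* then fill the defined fields */
-          printf("%u %d\n", vm.reserved, vm.cpus_total);
-          return 0;
-  }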
-
-Signed-off-by: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
-Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/s390/kvm/priv.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
-index 613e9f0..b982fbc 100644
---- a/arch/s390/kvm/priv.c
-+++ b/arch/s390/kvm/priv.c
-@@ -476,6 +476,7 @@ static void handle_stsi_3_2_2(struct kvm_vcpu *vcpu, struct sysinfo_3_2_2 *mem)
- 	for (n = mem->count - 1; n > 0 ; n--)
- 		memcpy(&mem->vm[n], &mem->vm[n - 1], sizeof(mem->vm[0]));
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "1(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -474,13 +507,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
-+	memset(&mem->vm[0], 0, sizeof(mem->vm[0]));
- 	mem->vm[0].cpus_total = cpus;
- 	mem->vm[0].cpus_configured = cpus;
- 	mem->vm[0].cpus_standby = 0;
--- 
-2.3.6
-
-
-From 4756129f7d1bf8fa4ff6011a39f729f5d3bc64c4 Mon Sep 17 00:00:00 2001
-From: Jens Freimann <jfrei@linux.vnet.ibm.com>
-Date: Mon, 16 Mar 2015 12:17:13 +0100
-Subject: [PATCH 030/219] KVM: s390: fix get_all_floating_irqs
-Cc: mpagano@gentoo.org
-
-commit 94aa033efcac47b09db22cb561e135baf37b7887 upstream.
-
-This fixes a bug introduced with commit c05c4186bbe4 ("KVM: s390:
-add floating irq controller").
-
-get_all_floating_irqs() does copy_to_user() while holding
-a spin lock. Let's fix this by filling a temporary buffer
-first and copy it to userspace after giving up the lock.
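-
-The same discipline can be shown in user space (a pthread analog, not the
-KVM code): take only a cheap snapshot under the spinlock and do anything
-that may block after dropping it.
-
-  #include <pthread.h>
-  #include <stdio.h>
-  #include <string.h>
-
-  static pthread_spinlock_t lock;
-  static int shared[4] = { 1, 2, 3, 4 };
-
-  int main(void)
-  {
-          int snapshot[4];
-
-          pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
-          pthread_spin_lock(&lock);
-          memcpy(snapshot, shared, sizeof(snapshot));  /* never blocks */
-          pthread_spin_unlock(&lock);
-
-          for (int i = 0; i < 4; i++)  /* may block: done without the lock */
-                  printf("%d\n", snapshot[i]);
-          return 0;
-  }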
-
-Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
-Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- Documentation/virtual/kvm/devices/s390_flic.txt |  3 ++
- arch/s390/kvm/interrupt.c                       | 58 ++++++++++++++-----------
- 2 files changed, 35 insertions(+), 26 deletions(-)
-
-diff --git a/Documentation/virtual/kvm/devices/s390_flic.txt b/Documentation/virtual/kvm/devices/s390_flic.txt
-index 4ceef53..d1ad9d5 100644
---- a/Documentation/virtual/kvm/devices/s390_flic.txt
-+++ b/Documentation/virtual/kvm/devices/s390_flic.txt
-@@ -27,6 +27,9 @@ Groups:
-     Copies all floating interrupts into a buffer provided by userspace.
-     When the buffer is too small it returns -ENOMEM, which is the indication
-     for userspace to try again with a bigger buffer.
-+    -ENOBUFS is returned when the allocation of a kernelspace buffer has
-+    failed.
-+    -EFAULT is returned when copying data to userspace failed.
-     All interrupts remain pending, i.e. are not deleted from the list of
-     currently pending interrupts.
-     attr->addr contains the userspace address of the buffer into which all
-diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
-index e7a46e8..e7bc2fd 100644
---- a/arch/s390/kvm/interrupt.c
-+++ b/arch/s390/kvm/interrupt.c
-@@ -17,6 +17,7 @@
- #include <linux/signal.h>
- #include <linux/slab.h>
- #include <linux/bitmap.h>
-+#include <linux/vmalloc.h>
- #include <asm/asm-offsets.h>
- #include <asm/uaccess.h>
- #include <asm/sclp.h>
-@@ -1455,61 +1456,66 @@ void kvm_s390_clear_float_irqs(struct kvm *kvm)
- 	spin_unlock(&fi->lock);
- }
- 
--static inline int copy_irq_to_user(struct kvm_s390_interrupt_info *inti,
--				   u8 *addr)
-+static void inti_to_irq(struct kvm_s390_interrupt_info *inti,
-+		       struct kvm_s390_irq *irq)
- {
--	struct kvm_s390_irq __user *uptr = (struct kvm_s390_irq __user *) addr;
--	struct kvm_s390_irq irq = {0};
--
--	irq.type = inti->type;
-+	irq->type = inti->type;
- 	switch (inti->type) {
- 	case KVM_S390_INT_PFAULT_INIT:
- 	case KVM_S390_INT_PFAULT_DONE:
- 	case KVM_S390_INT_VIRTIO:
- 	case KVM_S390_INT_SERVICE:
--		irq.u.ext = inti->ext;
-+		irq->u.ext = inti->ext;
- 		break;
- 	case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX:
--		irq.u.io = inti->io;
-+		irq->u.io = inti->io;
- 		break;
- 	case KVM_S390_MCHK:
--		irq.u.mchk = inti->mchk;
-+		irq->u.mchk = inti->mchk;
- 		break;
--	default:
--		return -EINVAL;
- 	}
--
--	if (copy_to_user(uptr, &irq, sizeof(irq)))
--		return -EFAULT;
--
--	return 0;
- }
- 
--static int get_all_floating_irqs(struct kvm *kvm, __u8 *buf, __u64 len)
-+static int get_all_floating_irqs(struct kvm *kvm, u8 __user *usrbuf, u64 len)
- {
- 	struct kvm_s390_interrupt_info *inti;
- 	struct kvm_s390_float_interrupt *fi;
-+	struct kvm_s390_irq *buf;
-+	int max_irqs;
- 	int ret = 0;
- 	int n = 0;
- 
-+	if (len > KVM_S390_FLIC_MAX_BUFFER || len == 0)
-+		return -EINVAL;
-+
-+	/*
-+	 * We are already using -ENOMEM to signal
-+	 * userspace it may retry with a bigger buffer,
-+	 * so we need to use something else for this case
-+	 */
-+	buf = vzalloc(len);
-+	if (!buf)
-+		return -ENOBUFS;
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -493,21 +528,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
-+	max_irqs = len / sizeof(struct kvm_s390_irq);
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "3(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "3(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -525,15 +563,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
 +
- 	fi = &kvm->arch.float_int;
- 	spin_lock(&fi->lock);
--
- 	list_for_each_entry(inti, &fi->list, list) {
--		if (len < sizeof(struct kvm_s390_irq)) {
-+		if (n == max_irqs) {
- 			/* signal userspace to try again */
- 			ret = -ENOMEM;
- 			break;
- 		}
--		ret = copy_irq_to_user(inti, buf);
--		if (ret)
--			break;
--		buf += sizeof(struct kvm_s390_irq);
--		len -= sizeof(struct kvm_s390_irq);
-+		inti_to_irq(inti, &buf[n]);
- 		n++;
- 	}
--
- 	spin_unlock(&fi->lock);
-+	if (!ret && n > 0) {
-+		if (copy_to_user(usrbuf, buf, sizeof(struct kvm_s390_irq) * n))
-+			ret = -EFAULT;
-+	}
-+	vfree(buf);
- 
- 	return ret < 0 ? ret : n;
- }
-@@ -1520,7 +1526,7 @@ static int flic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
+ #endif /* CONFIG_CPU_MIPSR6 */
  
- 	switch (attr->group) {
- 	case KVM_DEV_FLIC_GET_ALL_IRQS:
--		r = get_all_floating_irqs(dev->kvm, (u8 *) attr->addr,
-+		r = get_all_floating_irqs(dev->kvm, (u8 __user *) attr->addr,
- 					  attr->attr);
- 		break;
- 	default:
--- 
-2.3.6
-
-
-From 654de1f9fd289e10a3de1daf0806051f05f57d92 Mon Sep 17 00:00:00 2001
-From: Heiko Carstens <heiko.carstens@de.ibm.com>
-Date: Wed, 25 Mar 2015 10:13:33 +0100
-Subject: [PATCH 031/219] s390/hibernate: fix save and restore of kernel text
- section
-Cc: mpagano@gentoo.org
-
-commit d74419495633493c9cd3f2bbeb7f3529d0edded6 upstream.
-
-Sebastian reported a crash caused by a jump label mismatch after resume.
-This happens because we do not save the kernel text section during suspend
-and therefore also do not restore it during resume, but use the kernel image
-that restores the old system.
-
-This means that after a suspend/resume cycle we lose all modifications done
-to the kernel text section.
-The reason for this is the pfn_is_nosave() function, which incorrectly
-returns that read-only pages don't need to be saved. This is incorrect since
-we mark the kernel text section read-only.
-We still need to make sure to not save and restore pages contained within
-NSS and DCSS segment.
-To fix this add an extra case for the kernel text section and only save
-those pages if they are not contained within an NSS segment.
-
-Fixes the following crash (and the above bugs as well):
-
-Jump label code mismatch at netif_receive_skb_internal+0x28/0xd0
-Found:    c0 04 00 00 00 00
-Expected: c0 f4 00 00 00 11
-New:      c0 04 00 00 00 00
-Kernel panic - not syncing: Corrupted kernel text
-CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.19.0-01975-gb1b096e70f23 #4
-Call Trace:
-  [<0000000000113972>] show_stack+0x72/0xf0
-  [<000000000081f15e>] dump_stack+0x6e/0x90
-  [<000000000081c4e8>] panic+0x108/0x2b0
-  [<000000000081be64>] jump_label_bug.isra.2+0x104/0x108
-  [<0000000000112176>] __jump_label_transform+0x9e/0xd0
-  [<00000000001121e6>] __sm_arch_jump_label_transform+0x3e/0x50
-  [<00000000001d1136>] multi_cpu_stop+0x12e/0x170
-  [<00000000001d1472>] cpu_stopper_thread+0xb2/0x168
-  [<000000000015d2ac>] smpboot_thread_fn+0x134/0x1b0
-  [<0000000000158baa>] kthread+0x10a/0x110
-  [<0000000000824a86>] kernel_thread_starter+0x6/0xc
-
-Reported-and-tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
-Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/s390/kernel/suspend.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/arch/s390/kernel/suspend.c b/arch/s390/kernel/suspend.c
-index 1c4c5ac..d3236c9 100644
---- a/arch/s390/kernel/suspend.c
-+++ b/arch/s390/kernel/suspend.c
-@@ -138,6 +138,8 @@ int pfn_is_nosave(unsigned long pfn)
- {
- 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
- 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
-+	unsigned long eshared_pfn = PFN_DOWN(__pa(&_eshared)) - 1;
-+	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
  
- 	/* Always save lowcore pages (LC protection might be enabled). */
- 	if (pfn <= LC_PAGES)
-@@ -145,6 +147,8 @@ int pfn_is_nosave(unsigned long pfn)
- 	if (pfn >= nosave_begin_pfn && pfn < nosave_end_pfn)
- 		return 1;
- 	/* Skip memory holes and read-only pages (NSS, DCSS, ...). */
-+	if (pfn >= stext_pfn && pfn <= eshared_pfn)
-+		return ipl_info.type == IPL_TYPE_NSS ? 1 : 0;
- 	if (tprot(PFN_PHYS(pfn)))
- 		return 1;
- 	return 0;
--- 
-2.3.6
-
-
-From 15254fde3f5d723bd591a73d88296e9aecdd6bb7 Mon Sep 17 00:00:00 2001
-From: Radim Krčmář <rkrcmar@redhat.com>
-Date: Wed, 8 Apr 2015 14:16:48 +0200
-Subject: [PATCH 032/219] KVM: use slowpath for cross page cached accesses
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit ca3f0874723fad81d0c701b63ae3a17a408d5f25 upstream.
-
-kvm_write_guest_cached() does not mark all written pages as dirty and
-code comments in kvm_gfn_to_hva_cache_init() talk about NULL memslot
-with cross page accesses.  Fix all the easy way.
-
-The check is '<= 1' to have the same result for 'len = 0' cache anywhere
-in the page.  (nr_pages_needed is 0 on page boundary.)
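-
-The boundary case is easy to check by hand, assuming the usual start/end
-gfn computation (the exact expression in kvm_gfn_to_hva_cache_init() may
-differ slightly):
-
-  #include <stdio.h>
-
-  #define PAGE_SHIFT 12
-
-  static long pages_needed(unsigned long gpa, unsigned long len)
-  {
-          long start = gpa >> PAGE_SHIFT;
-          long end = (gpa + len - 1) >> PAGE_SHIFT;
-          return end - start + 1;
-  }
-
-  int main(void)
-  {
-          printf("%ld\n", pages_needed(0x2000, 0));   /* 0: len 0, boundary */
-          printf("%ld\n", pages_needed(0x2000, 16));  /* 1: within one page */
-          printf("%ld\n", pages_needed(0x2ff8, 16));  /* 2: crosses -> slow */
-          return 0;
-  }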
-
-Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
-Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-Message-Id: <20150408121648.GA3519@potion.brq.redhat.com>
-Reviewed-by: Wanpeng Li <wanpeng.li@linux.intel.com>
-Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- virt/kvm/kvm_main.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
-index cc6a25d..f8f3f5f 100644
---- a/virt/kvm/kvm_main.c
-+++ b/virt/kvm/kvm_main.c
-@@ -1653,8 +1653,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
- 	ghc->generation = slots->generation;
- 	ghc->len = len;
- 	ghc->memslot = gfn_to_memslot(kvm, start_gfn);
--	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
--	if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
-+	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
-+	if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
- 		ghc->hva += offset;
- 	} else {
- 		/*
--- 
-2.3.6
-
-
-From fb124f8c695ec8ddc72f19a8b3247b5ee872422f Mon Sep 17 00:00:00 2001
-From: Andre Przywara <andre.przywara@arm.com>
-Date: Fri, 10 Apr 2015 16:17:59 +0100
-Subject: [PATCH 033/219] KVM: arm/arm64: check IRQ number on userland
- injection
-Cc: mpagano@gentoo.org
-
-commit fd1d0ddf2ae92fb3df42ed476939861806c5d785 upstream.
-
-When userland injects an SPI via the KVM_IRQ_LINE ioctl we currently
-only check it against a fixed limit, which historically is set
-to 127. With the new dynamic IRQ allocation the effective limit may
-actually be smaller (64).
-So when a malicious or buggy userland injects an SPI in that
-range, we spill over on our VGIC bitmaps and bytemaps memory.
-I could trigger a host kernel NULL pointer dereference with current
-mainline by injecting some bogus IRQ number from a hacked kvmtool:
------------------
-....
-DEBUG: kvm_vgic_inject_irq(kvm, cpu=0, irq=114, level=1)
-DEBUG: vgic_update_irq_pending(kvm, cpu=0, irq=114, level=1)
-DEBUG: IRQ #114 still in the game, writing to bytemap now...
-Unable to handle kernel NULL pointer dereference at virtual address 00000000
-pgd = ffffffc07652e000
-[00000000] *pgd=00000000f658b003, *pud=00000000f658b003, *pmd=0000000000000000
-Internal error: Oops: 96000006 [#1] PREEMPT SMP
-Modules linked in:
-CPU: 1 PID: 1053 Comm: lkvm-msi-irqinj Not tainted 4.0.0-rc7+ #3027
-Hardware name: FVP Base (DT)
-task: ffffffc0774e9680 ti: ffffffc0765a8000 task.ti: ffffffc0765a8000
-PC is at kvm_vgic_inject_irq+0x234/0x310
-LR is at kvm_vgic_inject_irq+0x30c/0x310
-pc : [<ffffffc0000ae0a8>] lr : [<ffffffc0000ae180>] pstate: 80000145
-.....
-
-So this patch fixes this by checking the SPI number against the
-actual limit. Also we remove the former legacy hard limit of
-127 in the ioctl code.
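-
-In isolation the added guard is just a range check against the runtime
-limit (hypothetical numbers matching the report above):
-
-  #include <stdio.h>
-
-  int main(void)
-  {
-          unsigned int nr_irqs = 64;   /* dynamically sized VGIC */
-          unsigned int irq_num = 114;  /* bogus SPI from userland */
-
-          if (irq_num >= nr_irqs) {
-                  printf("irq %u rejected: -EINVAL\n", irq_num);
-                  return 0;
-          }
-          printf("irq %u accepted\n", irq_num);
-          return 0;
-  }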
-
-Signed-off-by: Andre Przywara <andre.przywara@arm.com>
-Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
-[maz: wrap KVM_ARM_IRQ_GIC_MAX with #ifndef __KERNEL__,
-as suggested by Christopher Covington]
-Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/include/uapi/asm/kvm.h   | 8 +++++++-
- arch/arm/kvm/arm.c                | 3 +--
- arch/arm64/include/uapi/asm/kvm.h | 8 +++++++-
- virt/kvm/arm/vgic.c               | 3 +++
- 4 files changed, 18 insertions(+), 4 deletions(-)
-
-diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
-index 0db25bc..3a42ac6 100644
---- a/arch/arm/include/uapi/asm/kvm.h
-+++ b/arch/arm/include/uapi/asm/kvm.h
-@@ -195,8 +195,14 @@ struct kvm_arch_memory_slot {
- #define KVM_ARM_IRQ_CPU_IRQ		0
- #define KVM_ARM_IRQ_CPU_FIQ		1
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "1(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -549,13 +590,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
--/* Highest supported SPI, from VGIC_NR_IRQS */
-+/*
-+ * This used to hold the highest supported SPI, but it is now obsolete
-+ * and only here to provide source code level compatibility with older
-+ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
-+ */
-+#ifndef __KERNEL__
- #define KVM_ARM_IRQ_GIC_MAX		127
-+#endif
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -570,9 +613,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
- /* PSCI interface */
- #define KVM_PSCI_FN_BASE		0x95c1ba5e
-diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
-index 5560f74..b652af5 100644
---- a/arch/arm/kvm/arm.c
-+++ b/arch/arm/kvm/arm.c
-@@ -651,8 +651,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
- 		if (!irqchip_in_kernel(kvm))
- 			return -ENXIO;
- 
--		if (irq_num < VGIC_NR_PRIVATE_IRQS ||
--		    irq_num > KVM_ARM_IRQ_GIC_MAX)
-+		if (irq_num < VGIC_NR_PRIVATE_IRQS)
- 			return -EINVAL;
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, 7(%2)\n"              \
+ 			"2:\tldr\t%0, (%2)\n\t"             \
+@@ -588,21 +633,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has no lwl and ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "3(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "3(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -620,9 +668,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
  
- 		return kvm_vgic_inject_irq(kvm, 0, irq_num, level);
-diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
-index 3ef77a4..bc49a18 100644
---- a/arch/arm64/include/uapi/asm/kvm.h
-+++ b/arch/arm64/include/uapi/asm/kvm.h
-@@ -188,8 +188,14 @@ struct kvm_arch_memory_slot {
- #define KVM_ARM_IRQ_CPU_IRQ		0
- #define KVM_ARM_IRQ_CPU_FIQ		1
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -667,15 +717,17 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ #endif /* CONFIG_CPU_MIPSR6 */
  
--/* Highest supported SPI, from VGIC_NR_IRQS */
-+/*
-+ * This used to hold the highest supported SPI, but it is now obsolete
-+ * and only here to provide source code level compatibility with older
-+ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
-+ */
-+#ifndef __KERNEL__
- #define KVM_ARM_IRQ_GIC_MAX		127
-+#endif
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "0(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
+ 			"srl\t$1,%1, 0x8\n"                 \
+-			"2:\t"user_sb("$1", "1(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "1(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -689,12 +741,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
++
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "3(%2)")"\n"   \
+-			"2:\t"user_swr("%1", "(%2)")"\n\t"  \
++			"1:\t"type##_swl("%1", "3(%2)")"\n" \
++			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -707,9 +762,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
  
- /* PSCI interface */
- #define KVM_PSCI_FN_BASE		0x95c1ba5e
-diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
-index c9f60f5..e5abe7c 100644
---- a/virt/kvm/arm/vgic.c
-+++ b/virt/kvm/arm/vgic.c
-@@ -1371,6 +1371,9 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
- 			goto out;
- 	}
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1, 7(%2)\n"              \
+ 			"2:\tsdr\t%1, (%2)\n\t"             \
+@@ -725,20 +782,23 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
++
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "0(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "0(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "1(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "2(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "3(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "3(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -755,9 +815,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
  
-+	if (irq_num >= kvm->arch.vgic.nr_irqs)
-+		return -EINVAL;
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -797,10 +859,28 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
 +
- 	vcpu_id = vgic_update_irq_pending(kvm, cpuid, irq_num, level);
- 	if (vcpu_id >= 0) {
- 		/* kick the specified vcpu */
--- 
-2.3.6
-
-
-From 9656af0b6cee1496640cfd6dc321e216ff650d37 Mon Sep 17 00:00:00 2001
-From: Ben Serebrin <serebrin@google.com>
-Date: Thu, 16 Apr 2015 11:58:05 -0700
-Subject: [PATCH 034/219] KVM: VMX: Preserve host CR4.MCE value while in guest
- mode.
-Cc: mpagano@gentoo.org
-
-commit 085e68eeafbf76e21848ad5bafaecec88a11dd64 upstream.
-
-The host's decision to enable machine check exceptions should remain
-in force during non-root mode.  KVM was writing 0 to cr4 on VCPU reset
-and passed a slightly-modified 0 to the vmcs.guest_cr4 value.
-
-Tested: Built.
-On earlier version, tested by injecting machine check
-while a guest is spinning.
-
-Before the change, if guest CR4.MCE==0, then the machine check is
-escalated to Catastrophic Error (CATERR) and the machine dies.
-If guest CR4.MCE==1, then the machine check causes VMEXIT and is
-handled normally by host Linux. After the change, injecting a machine
-check causes normal Linux machine check handling.
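-
-The bit merge itself is plain masking (a stand-alone check; CR4.MCE is
-bit 6 on x86):
-
-  #include <stdio.h>
-
-  #define X86_CR4_MCE (1UL << 6)
-
-  int main(void)
-  {
-          unsigned long host_cr4 = X86_CR4_MCE;  /* host enabled MCE */
-          unsigned long guest_cr4 = 0;           /* guest asked for MCE off */
-          unsigned long hw_cr4 = (host_cr4 & X86_CR4_MCE) |
-                                 (guest_cr4 & ~X86_CR4_MCE);
-
-          printf("hw_cr4 MCE = %lu\n", (hw_cr4 >> 6) & 1);  /* stays 1 */
-          return 0;
-  }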
-
-Signed-off-by: Ben Serebrin <serebrin@google.com>
-Reviewed-by: Venkatesh Srinivas <venkateshs@google.com>
-Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/x86/kvm/vmx.c | 12 ++++++++++--
- 1 file changed, 10 insertions(+), 2 deletions(-)
-
-diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
-index ae4f6d3..a60bd3a 100644
---- a/arch/x86/kvm/vmx.c
-+++ b/arch/x86/kvm/vmx.c
-@@ -3621,8 +3621,16 @@ static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ #endif /* CONFIG_CPU_MIPSR6 */
+ #endif
  
- static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
++#define LoadHWU(addr, value, res)	_LoadHWU(addr, value, res, kernel)
++#define LoadHWUE(addr, value, res)	_LoadHWU(addr, value, res, user)
++#define LoadWU(addr, value, res)	_LoadWU(addr, value, res, kernel)
++#define LoadWUE(addr, value, res)	_LoadWU(addr, value, res, user)
++#define LoadHW(addr, value, res)	_LoadHW(addr, value, res, kernel)
++#define LoadHWE(addr, value, res)	_LoadHW(addr, value, res, user)
++#define LoadW(addr, value, res)		_LoadW(addr, value, res, kernel)
++#define LoadWE(addr, value, res)	_LoadW(addr, value, res, user)
++#define LoadDW(addr, value, res)	_LoadDW(addr, value, res)
++
++#define StoreHW(addr, value, res)	_StoreHW(addr, value, res, kernel)
++#define StoreHWE(addr, value, res)	_StoreHW(addr, value, res, user)
++#define StoreW(addr, value, res)	_StoreW(addr, value, res, kernel)
++#define StoreWE(addr, value, res)	_StoreW(addr, value, res, user)
++#define StoreDW(addr, value, res)	_StoreDW(addr, value, res)
++
+ static void emulate_load_store_insn(struct pt_regs *regs,
+ 	void __user *addr, unsigned int __user *pc)
  {
--	unsigned long hw_cr4 = cr4 | (to_vmx(vcpu)->rmode.vm86_active ?
--		    KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
-+	/*
-+	 * Pass through host's Machine Check Enable value to hw_cr4, which
-+	 * is in force while we are in guest mode.  Do not let guests control
-+	 * this bit, even if host CR4.MCE == 0.
-+	 */
-+	unsigned long hw_cr4 =
-+		(cr4_read_shadow() & X86_CR4_MCE) |
-+		(cr4 & ~X86_CR4_MCE) |
-+		(to_vmx(vcpu)->rmode.vm86_active ?
-+		 KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
- 
- 	if (cr4 & X86_CR4_VMXE) {
- 		/*
--- 
-2.3.6
-
-
-From 7e5ed3d726c9333bdb3f23c3de7ff2f9e9902508 Mon Sep 17 00:00:00 2001
-From: James Hogan <james.hogan@imgtec.com>
-Date: Fri, 6 Feb 2015 11:11:56 +0000
-Subject: [PATCH 035/219] MIPS: KVM: Handle MSA Disabled exceptions from guest
-Cc: mpagano@gentoo.org
-
-commit 98119ad53376885819d93dfb8737b6a9a61ca0ba upstream.
-
-Guest user mode can generate a guest MSA Disabled exception on an MSA
-capable core by simply trying to execute an MSA instruction. Since this
-exception is unknown to KVM it will be passed on to the guest kernel.
-However guest Linux kernels prior to v3.15 do not set up an exception
-handler for the MSA Disabled exception as they don't support any MSA
-capable cores. This results in a guest OS panic.
-
-Since an older processor ID may be emulated, and MSA support is
-not advertised to the guest, the correct behaviour is to generate a
-Reserved Instruction exception in the guest kernel so it can send the
-guest process an illegal instruction signal (SIGILL), as would happen
-with a non-MSA-capable core.
-
-Fix this as minimally as reasonably possible by preventing
-kvm_mips_check_privilege() from relaying MSA Disabled exceptions from
-guest user mode to the guest kernel, and handling the MSA Disabled
-exception by emulating a Reserved Instruction exception in the guest,
-via a new handle_msa_disabled() KVM callback.
-
-Signed-off-by: James Hogan <james.hogan@imgtec.com>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Cc: Paul Burton <paul.burton@imgtec.com>
-Cc: Ralf Baechle <ralf@linux-mips.org>
-Cc: Gleb Natapov <gleb@kernel.org>
-Cc: linux-mips@linux-mips.org
-Cc: kvm@vger.kernel.org
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/include/asm/kvm_host.h |  2 ++
- arch/mips/kvm/emulate.c          |  1 +
- arch/mips/kvm/mips.c             |  4 ++++
- arch/mips/kvm/trap_emul.c        | 28 ++++++++++++++++++++++++++++
- 4 files changed, 35 insertions(+)
-
-diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
-index ac4fc71..f722b05 100644
---- a/arch/mips/include/asm/kvm_host.h
-+++ b/arch/mips/include/asm/kvm_host.h
-@@ -322,6 +322,7 @@ enum mips_mmu_types {
- #define T_TRAP			13	/* Trap instruction */
- #define T_VCEI			14	/* Virtual coherency exception */
- #define T_FPE			15	/* Floating point exception */
-+#define T_MSADIS		21	/* MSA disabled exception */
- #define T_WATCH			23	/* Watch address reference */
- #define T_VCED			31	/* Virtual coherency data */
- 
-@@ -578,6 +579,7 @@ struct kvm_mips_callbacks {
- 	int (*handle_syscall)(struct kvm_vcpu *vcpu);
- 	int (*handle_res_inst)(struct kvm_vcpu *vcpu);
- 	int (*handle_break)(struct kvm_vcpu *vcpu);
-+	int (*handle_msa_disabled)(struct kvm_vcpu *vcpu);
- 	int (*vm_init)(struct kvm *kvm);
- 	int (*vcpu_init)(struct kvm_vcpu *vcpu);
- 	int (*vcpu_setup)(struct kvm_vcpu *vcpu);
-diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
-index fb3e8df..838d3a6 100644
---- a/arch/mips/kvm/emulate.c
-+++ b/arch/mips/kvm/emulate.c
-@@ -2176,6 +2176,7 @@ enum emulation_result kvm_mips_check_privilege(unsigned long cause,
- 		case T_SYSCALL:
- 		case T_BREAK:
+@@ -872,7 +952,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHW(addr, value, res);
++			LoadHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -885,7 +965,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-				LoadW(addr, value, res);
++				LoadWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -898,7 +978,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHWU(addr, value, res);
++			LoadHWUE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -913,7 +993,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreHW(addr, value, res);
++			StoreHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -926,7 +1006,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreW(addr, value, res);
++			StoreWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -943,7 +1023,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHW(addr, value, res);
++			else
++				LoadHWE(addr, value, res);
++		} else {
++			LoadHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -954,7 +1042,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 4))
+ 			goto sigbus;
+ 
+-		LoadW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadW(addr, value, res);
++			else
++				LoadWE(addr, value, res);
++		} else {
++			LoadW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -965,7 +1061,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHWU(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHWU(addr, value, res);
++			else
++				LoadHWUE(addr, value, res);
++		} else {
++			LoadHWU(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -1024,7 +1128,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreHW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreHW(addr, value, res);
++			else
++				StoreHWE(addr, value, res);
++		} else {
++			StoreHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
+@@ -1035,7 +1148,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreW(addr, value, res);
++			else
++				StoreWE(addr, value, res);
++		} else {
++			StoreW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index fb3e8df..838d3a6 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -2176,6 +2176,7 @@ enum emulation_result kvm_mips_check_privilege(unsigned long cause,
+ 		case T_SYSCALL:
+ 		case T_BREAK:
  		case T_RES_INST:
 +		case T_MSADIS:
  			break;
@@ -3038,87 +1535,18 @@ index fd7257b..4372cc8 100644
  
  	.vm_init = kvm_trap_emul_vm_init,
  	.vcpu_init = kvm_trap_emul_vcpu_init,
--- 
-2.3.6
-
-
-From facbd0f25d07e3448d472d679aafefe7580990b2 Mon Sep 17 00:00:00 2001
-From: James Hogan <james.hogan@imgtec.com>
-Date: Wed, 25 Feb 2015 13:08:05 +0000
-Subject: [PATCH 036/219] MIPS: lose_fpu(): Disable FPU when MSA enabled
-Cc: mpagano@gentoo.org
-
-commit acaf6a97d623af123314c2f8ce4cf7254f6b2fc1 upstream.
-
-The lose_fpu() function only disables the FPU in CP0_Status.CU1 if the
-FPU is in use and MSA isn't enabled.
-
-This isn't necessarily a problem because KSTK_STATUS(current), the
-version of CP0_Status stored on the kernel stack on entry from user
-mode, does always get updated and gets restored when returning to user
-mode, but I don't think it was intended, and it is inconsistent with the
-case of only the FPU being in use. Sometimes leaving the FPU enabled may
-also mask kernel bugs where FPU operations are executed when the FPU
-might not be enabled.
-
-So lets disable the FPU in the MSA case too.
-
-Fixes: 33c771ba5c5d ("MIPS: save/disable MSA in lose_fpu")
-Signed-off-by: James Hogan <james.hogan@imgtec.com>
-Cc: Ralf Baechle <ralf@linux-mips.org>
-Cc: Paul Burton <paul.burton@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9323/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/include/asm/fpu.h | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h
-index dd083e9..9f26b07 100644
---- a/arch/mips/include/asm/fpu.h
-+++ b/arch/mips/include/asm/fpu.h
-@@ -170,6 +170,7 @@ static inline void lose_fpu(int save)
- 		}
- 		disable_msa();
- 		clear_thread_flag(TIF_USEDMSA);
-+		__disable_fpu();
- 	} else if (is_fpu_owner()) {
- 		if (save)
- 			_save_fp(current);
--- 
-2.3.6
-
-
-From 0668432d35a9e96ee500cbe1b3f7df6c4fe29b09 Mon Sep 17 00:00:00 2001
-From: Markos Chandras <markos.chandras@imgtec.com>
-Date: Fri, 27 Feb 2015 07:51:32 +0000
-Subject: [PATCH 037/219] MIPS: Malta: Detect and fix bad memsize values
-Cc: mpagano@gentoo.org
-
-commit f7f8aea4b97c4d48e42f02cb37026bee445f239f upstream.
-
-memsize denotes the amount of RAM we can access from kseg{0,1} and
-that should be up to 256M. In case the bootloader reports a value
-higher than that (perhaps reporting all the available RAM) it's best
-if we fix it ourselves and just warn the user about that. This is
-usually a problem with the bootloader and/or its environment.
-
-[ralf@linux-mips.org: Remove useless parens as suggested by Sergei.
-Reformat long pr_warn statement to fit into 80 column limit.]
-
-Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9362/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/mti-malta/malta-memory.c | 6 ++++++
- 1 file changed, 6 insertions(+)
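The check itself is a clamp plus a warning; a sketch under the commit message's assumptions (the limit constant and the message wording are illustrative):

	#define KSEG01_RAM_LIMIT	0x10000000	/* 256M visible via kseg{0,1} */

	if (memsize > KSEG01_RAM_LIMIT) {
		pr_warn("Unsupported memsize value (0x%lx) detected! Using 0x10000000 (256M) instead\n",
			memsize);
		memsize = KSEG01_RAM_LIMIT;
	}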
-
+diff --git a/arch/mips/loongson/loongson-3/irq.c b/arch/mips/loongson/loongson-3/irq.c
+index 21221ed..0f75b6b 100644
+--- a/arch/mips/loongson/loongson-3/irq.c
++++ b/arch/mips/loongson/loongson-3/irq.c
+@@ -44,6 +44,7 @@ void mach_irq_dispatch(unsigned int pending)
+ 
+ static struct irqaction cascade_irqaction = {
+ 	.handler = no_action,
++	.flags = IRQF_NO_SUSPEND,
+ 	.name = "cascade",
+ };
+ 
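IRQF_NO_SUSPEND keeps the cascade action enabled while suspend_device_irqs() masks everything else, so interrupts routed through the cascade can still wake the machine. Registration is unchanged; e.g. (the IRQ number variable is hypothetical):

	setup_irq(LOONGSON_CASCADE_IRQ, &cascade_irqaction);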
 diff --git a/arch/mips/mti-malta/malta-memory.c b/arch/mips/mti-malta/malta-memory.c
 index 8fddd2cd..efe366d 100644
 --- a/arch/mips/mti-malta/malta-memory.c
@@ -3136,12971 +1564,6606 @@ index 8fddd2cd..efe366d 100644
  		/* If ememsize is set, then set physical_memsize to that */
  		physical_memsize = ememsize ? : memsize;
  	}
--- 
-2.3.6
-
-
-From e52a20fcbf2ae06dc538b953c065bd6ae0b5f4ad Mon Sep 17 00:00:00 2001
-From: Markos Chandras <markos.chandras@imgtec.com>
-Date: Mon, 9 Mar 2015 14:54:49 +0000
-Subject: [PATCH 038/219] MIPS: asm: asm-eva: Introduce kernel load/store
- variants
-Cc: mpagano@gentoo.org
-
-commit 60cd7e08e453bc6828ac4b539f949e4acd80f143 upstream.
-
-Introduce new macros for kernel load/store variants which will be
-used to perform regular kernel space load/store operations in EVA
-mode.
-
-Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9500/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/include/asm/asm-eva.h | 137 +++++++++++++++++++++++++++-------------
- 1 file changed, 93 insertions(+), 44 deletions(-)
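In the C (!__ASSEMBLY__) half, each macro pastes a fragment into the asm template string, so the variant is chosen at preprocessing time. A minimal sketch of a caller, assuming value/res are outputs %0/%1 and addr is input %2:

	int value, res;
	/* kernel_lw("%0", "0(%2)") pastes to the string  lw %0, 0(%2)\n  */
	__asm__ __volatile__ (
		kernel_lw("%0", "0(%2)")
		"li %1, 0\n"
		: "=&r" (value), "=r" (res)
		: "r" (addr), "i" (-EFAULT));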
-
-diff --git a/arch/mips/include/asm/asm-eva.h b/arch/mips/include/asm/asm-eva.h
-index e41c56e..1e38f0e 100644
---- a/arch/mips/include/asm/asm-eva.h
-+++ b/arch/mips/include/asm/asm-eva.h
-@@ -11,6 +11,36 @@
- #define __ASM_ASM_EVA_H
+diff --git a/arch/mips/power/hibernate.S b/arch/mips/power/hibernate.S
+index 32a7c82..e7567c8 100644
+--- a/arch/mips/power/hibernate.S
++++ b/arch/mips/power/hibernate.S
+@@ -30,6 +30,8 @@ LEAF(swsusp_arch_suspend)
+ END(swsusp_arch_suspend)
  
- #ifndef __ASSEMBLY__
-+
-+/* Kernel variants */
-+
-+#define kernel_cache(op, base)		"cache " op ", " base "\n"
-+#define kernel_ll(reg, addr)		"ll " reg ", " addr "\n"
-+#define kernel_sc(reg, addr)		"sc " reg ", " addr "\n"
-+#define kernel_lw(reg, addr)		"lw " reg ", " addr "\n"
-+#define kernel_lwl(reg, addr)		"lwl " reg ", " addr "\n"
-+#define kernel_lwr(reg, addr)		"lwr " reg ", " addr "\n"
-+#define kernel_lh(reg, addr)		"lh " reg ", " addr "\n"
-+#define kernel_lb(reg, addr)		"lb " reg ", " addr "\n"
-+#define kernel_lbu(reg, addr)		"lbu " reg ", " addr "\n"
-+#define kernel_sw(reg, addr)		"sw " reg ", " addr "\n"
-+#define kernel_swl(reg, addr)		"swl " reg ", " addr "\n"
-+#define kernel_swr(reg, addr)		"swr " reg ", " addr "\n"
-+#define kernel_sh(reg, addr)		"sh " reg ", " addr "\n"
-+#define kernel_sb(reg, addr)		"sb " reg ", " addr "\n"
-+
-+#ifdef CONFIG_32BIT
-+/*
-+ * No 'sd' or 'ld' instructions in 32-bit but the code will
-+ * do the correct thing
-+ */
-+#define kernel_sd(reg, addr)		user_sw(reg, addr)
-+#define kernel_ld(reg, addr)		user_lw(reg, addr)
-+#else
-+#define kernel_sd(reg, addr)		"sd " reg", " addr "\n"
-+#define kernel_ld(reg, addr)		"ld " reg", " addr "\n"
-+#endif /* CONFIG_32BIT */
-+
- #ifdef CONFIG_EVA
+ LEAF(swsusp_arch_resume)
++	/* Avoid TLB mismatch during and after kernel resume */
++	jal local_flush_tlb_all
+ 	PTR_L t0, restore_pblist
+ 0:
+ 	PTR_L t1, PBE_ADDRESS(t0)   /* source */
+@@ -43,7 +45,6 @@ LEAF(swsusp_arch_resume)
+ 	bne t1, t3, 1b
+ 	PTR_L t0, PBE_NEXT(t0)
+ 	bnez t0, 0b
+-	jal local_flush_tlb_all /* Avoid TLB mismatch after kernel resume */
+ 	PTR_LA t0, saved_regs
+ 	PTR_L ra, PT_R31(t0)
+ 	PTR_L sp, PT_R29(t0)
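The fix is pure ordering: flush the TLB before the copy loop rewrites pages that stale translations may still map, not after. In C-like pseudocode (the assembly above is the real implementation; helper names are illustrative):

	local_flush_tlb_all();				/* moved up: run first */
	for (pbe = restore_pblist; pbe; pbe = pbe->next)
		copy_page(pbe->orig_address, pbe->address);	/* dst, src */
	restore_saved_registers();			/* then reload ra/sp/... */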
+diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
+index ae77b7e..c641983 100644
+--- a/arch/powerpc/kernel/cacheinfo.c
++++ b/arch/powerpc/kernel/cacheinfo.c
+@@ -61,12 +61,22 @@ struct cache_type_info {
+ };
  
- #define __BUILD_EVA_INSN(insn, reg, addr)				\
-@@ -41,37 +71,60 @@
+ /* These are used to index the cache_type_info array. */
+-#define CACHE_TYPE_UNIFIED     0
+-#define CACHE_TYPE_INSTRUCTION 1
+-#define CACHE_TYPE_DATA        2
++#define CACHE_TYPE_UNIFIED     0 /* cache-size, cache-block-size, etc. */
++#define CACHE_TYPE_UNIFIED_D   1 /* d-cache-size, d-cache-block-size, etc */
++#define CACHE_TYPE_INSTRUCTION 2
++#define CACHE_TYPE_DATA        3
  
- #else
+ static const struct cache_type_info cache_type_info[] = {
+ 	{
++		/* Embedded systems that use cache-size, cache-block-size,
++		 * etc. for the Unified (typically L2) cache. */
++		.name            = "Unified",
++		.size_prop       = "cache-size",
++		.line_size_props = { "cache-line-size",
++				     "cache-block-size", },
++		.nr_sets_prop    = "cache-sets",
++	},
++	{
+ 		/* PowerPC Processor binding says the [di]-cache-*
+ 		 * must be equal on unified caches, so just use
+ 		 * d-cache properties. */
+@@ -293,7 +303,8 @@ static struct cache *cache_find_first_sibling(struct cache *cache)
+ {
+ 	struct cache *iter;
  
--#define user_cache(op, base)		"cache " op ", " base "\n"
--#define user_ll(reg, addr)		"ll " reg ", " addr "\n"
--#define user_sc(reg, addr)		"sc " reg ", " addr "\n"
--#define user_lw(reg, addr)		"lw " reg ", " addr "\n"
--#define user_lwl(reg, addr)		"lwl " reg ", " addr "\n"
--#define user_lwr(reg, addr)		"lwr " reg ", " addr "\n"
--#define user_lh(reg, addr)		"lh " reg ", " addr "\n"
--#define user_lb(reg, addr)		"lb " reg ", " addr "\n"
--#define user_lbu(reg, addr)		"lbu " reg ", " addr "\n"
--#define user_sw(reg, addr)		"sw " reg ", " addr "\n"
--#define user_swl(reg, addr)		"swl " reg ", " addr "\n"
--#define user_swr(reg, addr)		"swr " reg ", " addr "\n"
--#define user_sh(reg, addr)		"sh " reg ", " addr "\n"
--#define user_sb(reg, addr)		"sb " reg ", " addr "\n"
-+#define user_cache(op, base)		kernel_cache(op, base)
-+#define user_ll(reg, addr)		kernel_ll(reg, addr)
-+#define user_sc(reg, addr)		kernel_sc(reg, addr)
-+#define user_lw(reg, addr)		kernel_lw(reg, addr)
-+#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
-+#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
-+#define user_lh(reg, addr)		kernel_lh(reg, addr)
-+#define user_lb(reg, addr)		kernel_lb(reg, addr)
-+#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
-+#define user_sw(reg, addr)		kernel_sw(reg, addr)
-+#define user_swl(reg, addr)		kernel_swl(reg, addr)
-+#define user_swr(reg, addr)		kernel_swr(reg, addr)
-+#define user_sh(reg, addr)		kernel_sh(reg, addr)
-+#define user_sb(reg, addr)		kernel_sb(reg, addr)
- 
- #ifdef CONFIG_32BIT
--/*
-- * No 'sd' or 'ld' instructions in 32-bit but the code will
-- * do the correct thing
-- */
--#define user_sd(reg, addr)		user_sw(reg, addr)
--#define user_ld(reg, addr)		user_lw(reg, addr)
-+#define user_sd(reg, addr)		kernel_sw(reg, addr)
-+#define user_ld(reg, addr)		kernel_lw(reg, addr)
- #else
--#define user_sd(reg, addr)		"sd " reg", " addr "\n"
--#define user_ld(reg, addr)		"ld " reg", " addr "\n"
-+#define user_sd(reg, addr)		kernel_sd(reg, addr)
-+#define user_ld(reg, addr)		kernel_ld(reg, addr)
- #endif /* CONFIG_32BIT */
+-	if (cache->type == CACHE_TYPE_UNIFIED)
++	if (cache->type == CACHE_TYPE_UNIFIED ||
++	    cache->type == CACHE_TYPE_UNIFIED_D)
+ 		return cache;
  
- #endif /* CONFIG_EVA */
+ 	list_for_each_entry(iter, &cache_list, list)
+@@ -324,16 +335,29 @@ static bool cache_node_is_unified(const struct device_node *np)
+ 	return of_get_property(np, "cache-unified", NULL);
+ }
  
- #else /* __ASSEMBLY__ */
+-static struct cache *cache_do_one_devnode_unified(struct device_node *node,
+-						  int level)
++/*
++ * Unified caches can have two different sets of tags.  Most embedded
++ * systems use cache-size, etc. for the unified cache size, but Open
++ * Firmware systems use d-cache-size, etc.  Check on initialization which
++ * type we have and return the appropriate structure type.  Assume it's
++ * embedded if it isn't Open Firmware.  If a third type turns up, entries
++ * will be missing under /sys/devices/system/cpu/cpu0/cache/index2/ and
++ * this code will need extending.
++ */
++static int cache_is_unified_d(const struct device_node *np)
+ {
+-	struct cache *cache;
++	return of_get_property(np,
++		cache_type_info[CACHE_TYPE_UNIFIED_D].size_prop, NULL) ?
++		CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
++}
  
-+#define kernel_cache(op, base)		cache op, base
-+#define kernel_ll(reg, addr)		ll reg, addr
-+#define kernel_sc(reg, addr)		sc reg, addr
-+#define kernel_lw(reg, addr)		lw reg, addr
-+#define kernel_lwl(reg, addr)		lwl reg, addr
-+#define kernel_lwr(reg, addr)		lwr reg, addr
-+#define kernel_lh(reg, addr)		lh reg, addr
-+#define kernel_lb(reg, addr)		lb reg, addr
-+#define kernel_lbu(reg, addr)		lbu reg, addr
-+#define kernel_sw(reg, addr)		sw reg, addr
-+#define kernel_swl(reg, addr)		swl reg, addr
-+#define kernel_swr(reg, addr)		swr reg, addr
-+#define kernel_sh(reg, addr)		sh reg, addr
-+#define kernel_sb(reg, addr)		sb reg, addr
-+
-+#ifdef CONFIG_32BIT
 +/*
-+ * No 'sd' or 'ld' instructions in 32-bit but the code will
-+ * do the correct thing
 + */
-+#define kernel_sd(reg, addr)		user_sw(reg, addr)
-+#define kernel_ld(reg, addr)		user_lw(reg, addr)
-+#else
-+#define kernel_sd(reg, addr)		sd reg, addr
-+#define kernel_ld(reg, addr)		ld reg, addr
-+#endif /* CONFIG_32BIT */
-+
- #ifdef CONFIG_EVA
++static struct cache *cache_do_one_devnode_unified(struct device_node *node, int level)
++{
+ 	pr_debug("creating L%d ucache for %s\n", level, node->full_name);
  
- #define __BUILD_EVA_INSN(insn, reg, addr)			\
-@@ -101,31 +154,27 @@
- #define user_sd(reg, addr)		user_sw(reg, addr)
- #else
+-	cache = new_cache(CACHE_TYPE_UNIFIED, level, node);
+-
+-	return cache;
++	return new_cache(cache_is_unified_d(node), level, node);
+ }
  
--#define user_cache(op, base)		cache op, base
--#define user_ll(reg, addr)		ll reg, addr
--#define user_sc(reg, addr)		sc reg, addr
--#define user_lw(reg, addr)		lw reg, addr
--#define user_lwl(reg, addr)		lwl reg, addr
--#define user_lwr(reg, addr)		lwr reg, addr
--#define user_lh(reg, addr)		lh reg, addr
--#define user_lb(reg, addr)		lb reg, addr
--#define user_lbu(reg, addr)		lbu reg, addr
--#define user_sw(reg, addr)		sw reg, addr
--#define user_swl(reg, addr)		swl reg, addr
--#define user_swr(reg, addr)		swr reg, addr
--#define user_sh(reg, addr)		sh reg, addr
--#define user_sb(reg, addr)		sb reg, addr
-+#define user_cache(op, base)		kernel_cache(op, base)
-+#define user_ll(reg, addr)		kernel_ll(reg, addr)
-+#define user_sc(reg, addr)		kernel_sc(reg, addr)
-+#define user_lw(reg, addr)		kernel_lw(reg, addr)
-+#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
-+#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
-+#define user_lh(reg, addr)		kernel_lh(reg, addr)
-+#define user_lb(reg, addr)		kernel_lb(reg, addr)
-+#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
-+#define user_sw(reg, addr)		kernel_sw(reg, addr)
-+#define user_swl(reg, addr)		kernel_swl(reg, addr)
-+#define user_swr(reg, addr)		kernel_swr(reg, addr)
-+#define user_sh(reg, addr)		kernel_sh(reg, addr)
-+#define user_sb(reg, addr)		kernel_sb(reg, addr)
+ static struct cache *cache_do_one_devnode_split(struct device_node *node,
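Which array slot applies is decided purely by which property spelling the device-tree node carries. As a C comment, the two flavours side by side (node names illustrative; the property names are the ones the patch tests):

	/*
	 *   l2-cache {                         l2-cache {
	 *       cache-unified;                     cache-unified;
	 *       cache-size = <0x80000>;            d-cache-size = <0x80000>;
	 *       cache-sets = <1024>;               d-cache-sets = <1024>;
	 *   };                                 };
	 *
	 * cache_is_unified_d() returns CACHE_TYPE_UNIFIED for the left form
	 * and CACHE_TYPE_UNIFIED_D for the right one.
	 */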
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 7e408bf..cecbe00 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -581,6 +581,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+ 	pmd = pmd_offset(pud, start);
+ 	pud_clear(pud);
+ 	pmd_free_tlb(tlb, pmd, start);
++	mm_dec_nr_pmds(tlb->mm);
+ }
  
- #ifdef CONFIG_32BIT
--/*
-- * No 'sd' or 'ld' instructions in 32-bit but the code will
-- * do the correct thing
-- */
--#define user_sd(reg, addr)		user_sw(reg, addr)
--#define user_ld(reg, addr)		user_lw(reg, addr)
-+#define user_sd(reg, addr)		kernel_sw(reg, addr)
-+#define user_ld(reg, addr)		kernel_lw(reg, addr)
- #else
--#define user_sd(reg, addr)		sd reg, addr
--#define user_ld(reg, addr)		ld reg, addr
-+#define user_sd(reg, addr)		kernel_sd(reg, addr)
-+#define user_ld(reg, addr)		kernel_sd(reg, addr)
- #endif /* CONFIG_32BIT */
+ static void hugetlb_free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
+diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
+index 2396dda..ead5535 100644
+--- a/arch/powerpc/perf/callchain.c
++++ b/arch/powerpc/perf/callchain.c
+@@ -243,7 +243,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
+ 	sp = regs->gpr[1];
+ 	perf_callchain_store(entry, next_ip);
  
- #endif /* CONFIG_EVA */
--- 
-2.3.6
-
-
-From 88a82d60a26013483a22b19035517fec54b7dee5 Mon Sep 17 00:00:00 2001
-From: Markos Chandras <markos.chandras@imgtec.com>
-Date: Mon, 9 Mar 2015 14:54:50 +0000
-Subject: [PATCH 039/219] MIPS: unaligned: Prevent EVA instructions on kernel
- unaligned accesses
-Cc: mpagano@gentoo.org
-
-commit eeb538950367e3966cbf0237ab1a1dc30e059818 upstream.
-
-Commit c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned
-accesses for EVA") allowed unaligned accesses to be emulated for
-EVA. However, when emulating regular load/store unaligned accesses,
-we need to use the appropriate "address space" instructions for that.
-Previously, an unaligned load/store instruction in kernel space would
-have used the corresponding EVA instructions to emulate it which led to
-segmentation faults because of the address translation that happens
-with EVA instructions. This is now fixed by using the EVA instruction
-only when emulating EVA unaligned accesses.
-
-Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-Fixes: c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9501/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/kernel/unaligned.c | 172 +++++++++++++++++++++++--------------------
- 1 file changed, 94 insertions(+), 78 deletions(-)
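Mechanically, the rework renames each macro to a leading-underscore form taking a type argument and selects the instruction family by token pasting; thin wrappers then pin each call site to the kernel or EVA (user) flavour. Reduced to the mechanism (sketch):

	#define _LOAD(reg, addr, type)	type##_lw(reg, addr)	/* pastes the prefix */

	#define LoadW(reg, addr)	_LOAD(reg, addr, kernel)	/* -> kernel_lw */
	#define LoadWE(reg, addr)	_LOAD(reg, addr, user)		/* -> user_lw   */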
-
-diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
-index bbb6969..7a5707e 100644
---- a/arch/mips/kernel/unaligned.c
-+++ b/arch/mips/kernel/unaligned.c
-@@ -109,10 +109,10 @@ static u32 unaligned_action;
- extern void show_registers(struct pt_regs *regs);
+-	for (;;) {
++	while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ 		fp = (unsigned long __user *) sp;
+ 		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+ 			return;
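perf_callchain_store() bumps entry->nr for every ip it records, so the new loop condition is a hard iteration bound even when a corrupt user stack forms a cycle. Its contract, paraphrased:

	static inline int perf_callchain_store(struct perf_callchain_entry *entry,
					       u64 ip)
	{
		if (entry->nr < PERF_MAX_STACK_DEPTH) {
			entry->ip[entry->nr++] = ip;
			return 0;
		}
		return -1;	/* buffer full: caller should stop walking */
	}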
+diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
+index 4c11421..3af8324 100644
+--- a/arch/powerpc/platforms/cell/interrupt.c
++++ b/arch/powerpc/platforms/cell/interrupt.c
+@@ -163,7 +163,7 @@ static unsigned int iic_get_irq(void)
  
- #ifdef __BIG_ENDIAN
--#define     LoadHW(addr, value, res)  \
-+#define     _LoadHW(addr, value, res, type)  \
- 		__asm__ __volatile__ (".set\tnoat\n"        \
--			"1:\t"user_lb("%0", "0(%2)")"\n"    \
--			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
-+			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
-+			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
- 			"sll\t%0, 0x8\n\t"                  \
- 			"or\t%0, $1\n\t"                    \
- 			"li\t%1, 0\n"                       \
-@@ -130,10 +130,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
+ void iic_setup_cpu(void)
+ {
+-	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
++	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
+ }
  
- #ifndef CONFIG_CPU_MIPSR6
--#define     LoadW(addr, value, res)   \
-+#define     _LoadW(addr, value, res, type)   \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_lwl("%0", "(%2)")"\n"    \
--			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
-+			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
-+			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
- 			"li\t%1, 0\n"                       \
- 			"3:\n\t"                            \
- 			".insn\n\t"                         \
-@@ -149,18 +149,18 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has no lwl instruction */
--#define     LoadW(addr, value, res) \
-+#define     _LoadW(addr, value, res, type) \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n"			    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_lb("%0", "0(%2)")"\n\t"    \
--			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
-+			"1:"type##_lb("%0", "0(%2)")"\n\t"  \
-+			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
-+			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
-+			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
- 			"li\t%1, 0\n"			    \
-@@ -181,11 +181,11 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #endif /* CONFIG_CPU_MIPSR6 */
+ u8 iic_get_target_id(int cpu)
+diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
+index c7c8720..63db1b0 100644
+--- a/arch/powerpc/platforms/cell/iommu.c
++++ b/arch/powerpc/platforms/cell/iommu.c
+@@ -197,7 +197,7 @@ static int tce_build_cell(struct iommu_table *tbl, long index, long npages,
  
--#define     LoadHWU(addr, value, res) \
-+#define     _LoadHWU(addr, value, res, type) \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
--			"1:\t"user_lbu("%0", "0(%2)")"\n"   \
--			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
-+			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
-+			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
- 			"sll\t%0, 0x8\n\t"                  \
- 			"or\t%0, $1\n\t"                    \
- 			"li\t%1, 0\n"                       \
-@@ -204,10 +204,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
+ 	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);
  
- #ifndef CONFIG_CPU_MIPSR6
--#define     LoadWU(addr, value, res)  \
-+#define     _LoadWU(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_lwl("%0", "(%2)")"\n"    \
--			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
-+			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
-+			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
- 			"dsll\t%0, %0, 32\n\t"              \
- 			"dsrl\t%0, %0, 32\n\t"              \
- 			"li\t%1, 0\n"                       \
-@@ -224,7 +224,7 @@ extern void show_registers(struct pt_regs *regs);
- 			: "=&r" (value), "=r" (res)         \
- 			: "r" (addr), "i" (-EFAULT));
+-	for (i = 0; i < npages; i++, uaddr += tbl->it_page_shift)
++	for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift))
+ 		io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask);
  
--#define     LoadDW(addr, value, res)  \
-+#define     _LoadDW(addr, value, res)  \
- 		__asm__ __volatile__ (                      \
- 			"1:\tldl\t%0, (%2)\n"               \
- 			"2:\tldr\t%0, 7(%2)\n\t"            \
-@@ -243,18 +243,18 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has not lwl and ldl instructions */
--#define	    LoadWU(addr, value, res) \
-+#define	    _LoadWU(addr, value, res, type) \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_lbu("%0", "0(%2)")"\n\t"   \
--			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
-+			"1:"type##_lbu("%0", "0(%2)")"\n\t" \
-+			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
-+			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
-+			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
- 			"li\t%1, 0\n"			    \
-@@ -274,7 +274,7 @@ extern void show_registers(struct pt_regs *regs);
- 			: "=&r" (value), "=r" (res)	    \
- 			: "r" (addr), "i" (-EFAULT));
+ 	mb();
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 6c9ff2b..1d9369e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -1777,7 +1777,8 @@ static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
+ 				region.start += phb->ioda.io_segsize;
+ 				index++;
+ 			}
+-		} else if (res->flags & IORESOURCE_MEM) {
++		} else if ((res->flags & IORESOURCE_MEM) &&
++			   !pnv_pci_is_mem_pref_64(res->flags)) {
+ 			region.start = res->start -
+ 				       hose->mem_offset[0] -
+ 				       phb->ioda.m32_pci_base;
+diff --git a/arch/s390/kernel/suspend.c b/arch/s390/kernel/suspend.c
+index 1c4c5ac..d3236c9 100644
+--- a/arch/s390/kernel/suspend.c
++++ b/arch/s390/kernel/suspend.c
+@@ -138,6 +138,8 @@ int pfn_is_nosave(unsigned long pfn)
+ {
+ 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
+ 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
++	unsigned long eshared_pfn = PFN_DOWN(__pa(&_eshared)) - 1;
++	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
  
--#define     LoadDW(addr, value, res)  \
-+#define     _LoadDW(addr, value, res)  \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -323,12 +323,12 @@ extern void show_registers(struct pt_regs *regs);
- #endif /* CONFIG_CPU_MIPSR6 */
+ 	/* Always save lowcore pages (LC protection might be enabled). */
+ 	if (pfn <= LC_PAGES)
+@@ -145,6 +147,8 @@ int pfn_is_nosave(unsigned long pfn)
+ 	if (pfn >= nosave_begin_pfn && pfn < nosave_end_pfn)
+ 		return 1;
+ 	/* Skip memory holes and read-only pages (NSS, DCSS, ...). */
++	if (pfn >= stext_pfn && pfn <= eshared_pfn)
++		return ipl_info.type == IPL_TYPE_NSS ? 1 : 0;
+ 	if (tprot(PFN_PHYS(pfn)))
+ 		return 1;
+ 	return 0;
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 073b5f3..e7bc2fd 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -17,6 +17,7 @@
+ #include <linux/signal.h>
+ #include <linux/slab.h>
+ #include <linux/bitmap.h>
++#include <linux/vmalloc.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/uaccess.h>
+ #include <asm/sclp.h>
+@@ -1332,10 +1333,10 @@ int kvm_s390_inject_vm(struct kvm *kvm,
+ 	return rc;
+ }
  
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
++int kvm_s390_reinject_io_int(struct kvm *kvm,
+ 			      struct kvm_s390_interrupt_info *inti)
+ {
+-	__inject_vm(kvm, inti);
++	return __inject_vm(kvm, inti);
+ }
  
--#define     StoreHW(addr, value, res) \
-+#define     _StoreHW(addr, value, res, type) \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
--			"1:\t"user_sb("%1", "1(%2)")"\n"    \
-+			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
- 			"srl\t$1, %1, 0x8\n"                \
--			"2:\t"user_sb("$1", "0(%2)")"\n"    \
-+			"2:\t"type##_sb("$1", "0(%2)")"\n"  \
- 			".set\tat\n\t"                      \
- 			"li\t%0, 0\n"                       \
- 			"3:\n\t"                            \
-@@ -345,10 +345,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (value), "r" (addr), "i" (-EFAULT));
+ int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
+@@ -1455,61 +1456,66 @@ void kvm_s390_clear_float_irqs(struct kvm *kvm)
+ 	spin_unlock(&fi->lock);
+ }
  
- #ifndef CONFIG_CPU_MIPSR6
--#define     StoreW(addr, value, res)  \
-+#define     _StoreW(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_swl("%1", "(%2)")"\n"    \
--			"2:\t"user_swr("%1", "3(%2)")"\n\t" \
-+			"1:\t"type##_swl("%1", "(%2)")"\n"  \
-+			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
- 			"li\t%0, 0\n"                       \
- 			"3:\n\t"                            \
- 			".insn\n\t"                         \
-@@ -363,7 +363,7 @@ extern void show_registers(struct pt_regs *regs);
- 		: "=r" (res)                                \
- 		: "r" (value), "r" (addr), "i" (-EFAULT));
+-static inline int copy_irq_to_user(struct kvm_s390_interrupt_info *inti,
+-				   u8 *addr)
++static void inti_to_irq(struct kvm_s390_interrupt_info *inti,
++		       struct kvm_s390_irq *irq)
+ {
+-	struct kvm_s390_irq __user *uptr = (struct kvm_s390_irq __user *) addr;
+-	struct kvm_s390_irq irq = {0};
+-
+-	irq.type = inti->type;
++	irq->type = inti->type;
+ 	switch (inti->type) {
+ 	case KVM_S390_INT_PFAULT_INIT:
+ 	case KVM_S390_INT_PFAULT_DONE:
+ 	case KVM_S390_INT_VIRTIO:
+ 	case KVM_S390_INT_SERVICE:
+-		irq.u.ext = inti->ext;
++		irq->u.ext = inti->ext;
+ 		break;
+ 	case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX:
+-		irq.u.io = inti->io;
++		irq->u.io = inti->io;
+ 		break;
+ 	case KVM_S390_MCHK:
+-		irq.u.mchk = inti->mchk;
++		irq->u.mchk = inti->mchk;
+ 		break;
+-	default:
+-		return -EINVAL;
+ 	}
+-
+-	if (copy_to_user(uptr, &irq, sizeof(irq)))
+-		return -EFAULT;
+-
+-	return 0;
+ }
  
--#define     StoreDW(addr, value, res) \
-+#define     _StoreDW(addr, value, res) \
- 		__asm__ __volatile__ (                      \
- 			"1:\tsdl\t%1,(%2)\n"                \
- 			"2:\tsdr\t%1, 7(%2)\n\t"            \
-@@ -382,17 +382,17 @@ extern void show_registers(struct pt_regs *regs);
- 		: "r" (value), "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has no swl and sdl instructions */
--#define     StoreW(addr, value, res)  \
-+#define     _StoreW(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_sb("%1", "3(%2)")"\n\t"    \
-+			"1:"type##_sb("%1", "3(%2)")"\n\t"  \
- 			"srl\t$1, %1, 0x8\n\t"		    \
--			"2:"user_sb("$1", "2(%2)")"\n\t"    \
-+			"2:"type##_sb("$1", "2(%2)")"\n\t"  \
- 			"srl\t$1, $1,  0x8\n\t"		    \
--			"3:"user_sb("$1", "1(%2)")"\n\t"    \
-+			"3:"type##_sb("$1", "1(%2)")"\n\t"  \
- 			"srl\t$1, $1, 0x8\n\t"		    \
--			"4:"user_sb("$1", "0(%2)")"\n\t"    \
-+			"4:"type##_sb("$1", "0(%2)")"\n\t"  \
- 			".set\tpop\n\t"			    \
- 			"li\t%0, 0\n"			    \
- 			"10:\n\t"			    \
-@@ -456,10 +456,10 @@ extern void show_registers(struct pt_regs *regs);
+-static int get_all_floating_irqs(struct kvm *kvm, __u8 *buf, __u64 len)
++static int get_all_floating_irqs(struct kvm *kvm, u8 __user *usrbuf, u64 len)
+ {
+ 	struct kvm_s390_interrupt_info *inti;
+ 	struct kvm_s390_float_interrupt *fi;
++	struct kvm_s390_irq *buf;
++	int max_irqs;
+ 	int ret = 0;
+ 	int n = 0;
  
- #else /* __BIG_ENDIAN */
++	if (len > KVM_S390_FLIC_MAX_BUFFER || len == 0)
++		return -EINVAL;
++
++	/*
++	 * We are already using -ENOMEM to signal
++	 * userspace it may retry with a bigger buffer,
++	 * so we need to use something else for this case
++	 */
++	buf = vzalloc(len);
++	if (!buf)
++		return -ENOBUFS;
++
++	max_irqs = len / sizeof(struct kvm_s390_irq);
++
+ 	fi = &kvm->arch.float_int;
+ 	spin_lock(&fi->lock);
+-
+ 	list_for_each_entry(inti, &fi->list, list) {
+-		if (len < sizeof(struct kvm_s390_irq)) {
++		if (n == max_irqs) {
+ 			/* signal userspace to try again */
+ 			ret = -ENOMEM;
+ 			break;
+ 		}
+-		ret = copy_irq_to_user(inti, buf);
+-		if (ret)
+-			break;
+-		buf += sizeof(struct kvm_s390_irq);
+-		len -= sizeof(struct kvm_s390_irq);
++		inti_to_irq(inti, &buf[n]);
+ 		n++;
+ 	}
+-
+ 	spin_unlock(&fi->lock);
++	if (!ret && n > 0) {
++		if (copy_to_user(usrbuf, buf, sizeof(struct kvm_s390_irq) * n))
++			ret = -EFAULT;
++	}
++	vfree(buf);
  
--#define     LoadHW(addr, value, res)  \
-+#define     _LoadHW(addr, value, res, type)  \
- 		__asm__ __volatile__ (".set\tnoat\n"        \
--			"1:\t"user_lb("%0", "1(%2)")"\n"    \
--			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
-+			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
-+			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
- 			"sll\t%0, 0x8\n\t"                  \
- 			"or\t%0, $1\n\t"                    \
- 			"li\t%1, 0\n"                       \
-@@ -477,10 +477,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- 
- #ifndef CONFIG_CPU_MIPSR6
--#define     LoadW(addr, value, res)   \
-+#define     _LoadW(addr, value, res, type)   \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
--			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
-+			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
-+			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
- 			"li\t%1, 0\n"                       \
- 			"3:\n\t"                            \
- 			".insn\n\t"                         \
-@@ -496,18 +496,18 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has no lwl instruction */
--#define     LoadW(addr, value, res) \
-+#define     _LoadW(addr, value, res, type) \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n"			    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_lb("%0", "3(%2)")"\n\t"    \
--			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
-+			"1:"type##_lb("%0", "3(%2)")"\n\t"  \
-+			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
-+			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
-+			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
- 			"li\t%1, 0\n"			    \
-@@ -529,11 +529,11 @@ extern void show_registers(struct pt_regs *regs);
- #endif /* CONFIG_CPU_MIPSR6 */
+ 	return ret < 0 ? ret : n;
+ }
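The shape is the standard snapshot-then-copy pattern: copy_to_user() may fault and sleep, so it cannot run under fi->lock; gather into a vmalloc'd buffer while locked, unlock, then copy once. Skeleton (error handling trimmed):

	buf = vzalloc(len);
	if (!buf)
		return -ENOBUFS;	/* -ENOMEM already means "retry bigger" */

	spin_lock(&fi->lock);
	list_for_each_entry(inti, &fi->list, list) {
		if (n == max_irqs) {
			ret = -ENOMEM;
			break;
		}
		inti_to_irq(inti, &buf[n++]);
	}
	spin_unlock(&fi->lock);

	if (!ret && n > 0 && copy_to_user(usrbuf, buf, sizeof(*buf) * n))
		ret = -EFAULT;
	vfree(buf);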
+@@ -1520,7 +1526,7 @@ static int flic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
  
+ 	switch (attr->group) {
+ 	case KVM_DEV_FLIC_GET_ALL_IRQS:
+-		r = get_all_floating_irqs(dev->kvm, (u8 *) attr->addr,
++		r = get_all_floating_irqs(dev->kvm, (u8 __user *) attr->addr,
+ 					  attr->attr);
+ 		break;
+ 	default:
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index c34109a..6995a30 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -151,8 +151,8 @@ int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
+ int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
+ struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
+ 						    u64 cr6, u64 schid);
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
+-			      struct kvm_s390_interrupt_info *inti);
++int kvm_s390_reinject_io_int(struct kvm *kvm,
++			     struct kvm_s390_interrupt_info *inti);
+ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked);
  
--#define     LoadHWU(addr, value, res) \
-+#define     _LoadHWU(addr, value, res, type) \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
--			"1:\t"user_lbu("%0", "1(%2)")"\n"   \
--			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
-+			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
-+			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
- 			"sll\t%0, 0x8\n\t"                  \
- 			"or\t%0, $1\n\t"                    \
- 			"li\t%1, 0\n"                       \
-@@ -552,10 +552,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
+ /* implemented in intercept.c */
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 3511169..b982fbc 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -229,18 +229,19 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 	struct kvm_s390_interrupt_info *inti;
+ 	unsigned long len;
+ 	u32 tpi_data[3];
+-	int cc, rc;
++	int rc;
+ 	u64 addr;
  
- #ifndef CONFIG_CPU_MIPSR6
--#define     LoadWU(addr, value, res)  \
-+#define     _LoadWU(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
--			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
-+			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
-+			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
- 			"dsll\t%0, %0, 32\n\t"              \
- 			"dsrl\t%0, %0, 32\n\t"              \
- 			"li\t%1, 0\n"                       \
-@@ -572,7 +572,7 @@ extern void show_registers(struct pt_regs *regs);
- 			: "=&r" (value), "=r" (res)         \
- 			: "r" (addr), "i" (-EFAULT));
+-	rc = 0;
+ 	addr = kvm_s390_get_base_disp_s(vcpu);
+ 	if (addr & 3)
+ 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+-	cc = 0;
++
+ 	inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->arch.sie_block->gcr[6], 0);
+-	if (!inti)
+-		goto no_interrupt;
+-	cc = 1;
++	if (!inti) {
++		kvm_s390_set_psw_cc(vcpu, 0);
++		return 0;
++	}
++
+ 	tpi_data[0] = inti->io.subchannel_id << 16 | inti->io.subchannel_nr;
+ 	tpi_data[1] = inti->io.io_int_parm;
+ 	tpi_data[2] = inti->io.io_int_word;
+@@ -251,30 +252,38 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 		 */
+ 		len = sizeof(tpi_data) - 4;
+ 		rc = write_guest(vcpu, addr, &tpi_data, len);
+-		if (rc)
+-			return kvm_s390_inject_prog_cond(vcpu, rc);
++		if (rc) {
++			rc = kvm_s390_inject_prog_cond(vcpu, rc);
++			goto reinject_interrupt;
++		}
+ 	} else {
+ 		/*
+ 		 * Store the three-word I/O interruption code into
+ 		 * the appropriate lowcore area.
+ 		 */
+ 		len = sizeof(tpi_data);
+-		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len))
++		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len)) {
++			/* failed writes to the low core are not recoverable */
+ 			rc = -EFAULT;
++			goto reinject_interrupt;
++		}
+ 	}
++
++	/* irq was successfully handed to the guest */
++	kfree(inti);
++	kvm_s390_set_psw_cc(vcpu, 1);
++	return 0;
++reinject_interrupt:
+ 	/*
+ 	 * If we encounter a problem storing the interruption code, the
+ 	 * instruction is suppressed from the guest's view: reinject the
+ 	 * interrupt.
+ 	 */
+-	if (!rc)
++	if (kvm_s390_reinject_io_int(vcpu->kvm, inti)) {
+ 		kfree(inti);
+-	else
+-		kvm_s390_reinject_io_int(vcpu->kvm, inti);
+-no_interrupt:
+-	/* Set condition code and we're done. */
+-	if (!rc)
+-		kvm_s390_set_psw_cc(vcpu, cc);
++		rc = -EFAULT;
++	}
++	/* don't set the cc, a pgm irq was injected or we drop to user space */
+ 	return rc ? -EFAULT : 0;
+ }
  
--#define     LoadDW(addr, value, res)  \
-+#define     _LoadDW(addr, value, res)  \
- 		__asm__ __volatile__ (                      \
- 			"1:\tldl\t%0, 7(%2)\n"              \
- 			"2:\tldr\t%0, (%2)\n\t"             \
-@@ -591,18 +591,18 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has not lwl and ldl instructions */
--#define	    LoadWU(addr, value, res) \
-+#define	    _LoadWU(addr, value, res, type) \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_lbu("%0", "3(%2)")"\n\t"   \
--			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
-+			"1:"type##_lbu("%0", "3(%2)")"\n\t" \
-+			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
-+			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
--			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
-+			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
- 			"sll\t%0, 0x8\n\t"		    \
- 			"or\t%0, $1\n\t"		    \
- 			"li\t%1, 0\n"			    \
-@@ -622,7 +622,7 @@ extern void show_registers(struct pt_regs *regs);
- 			: "=&r" (value), "=r" (res)	    \
- 			: "r" (addr), "i" (-EFAULT));
+@@ -467,6 +476,7 @@ static void handle_stsi_3_2_2(struct kvm_vcpu *vcpu, struct sysinfo_3_2_2 *mem)
+ 	for (n = mem->count - 1; n > 0 ; n--)
+ 		memcpy(&mem->vm[n], &mem->vm[n - 1], sizeof(mem->vm[0]));
  
--#define     LoadDW(addr, value, res)  \
-+#define     _LoadDW(addr, value, res)  \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -670,12 +670,12 @@ extern void show_registers(struct pt_regs *regs);
- 			: "r" (addr), "i" (-EFAULT));
- #endif /* CONFIG_CPU_MIPSR6 */
++	memset(&mem->vm[0], 0, sizeof(mem->vm[0]));
+ 	mem->vm[0].cpus_total = cpus;
+ 	mem->vm[0].cpus_configured = cpus;
+ 	mem->vm[0].cpus_standby = 0;
+diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
+index 47f29b1..e7814b7 100644
+--- a/arch/x86/include/asm/insn.h
++++ b/arch/x86/include/asm/insn.h
+@@ -69,7 +69,7 @@ struct insn {
+ 	const insn_byte_t *next_byte;
+ };
  
--#define     StoreHW(addr, value, res) \
-+#define     _StoreHW(addr, value, res, type) \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
--			"1:\t"user_sb("%1", "0(%2)")"\n"    \
-+			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
- 			"srl\t$1,%1, 0x8\n"                 \
--			"2:\t"user_sb("$1", "1(%2)")"\n"    \
-+			"2:\t"type##_sb("$1", "1(%2)")"\n"  \
- 			".set\tat\n\t"                      \
- 			"li\t%0, 0\n"                       \
- 			"3:\n\t"                            \
-@@ -691,10 +691,10 @@ extern void show_registers(struct pt_regs *regs);
- 			: "=r" (res)                        \
- 			: "r" (value), "r" (addr), "i" (-EFAULT));
- #ifndef CONFIG_CPU_MIPSR6
--#define     StoreW(addr, value, res)  \
-+#define     _StoreW(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
--			"1:\t"user_swl("%1", "3(%2)")"\n"   \
--			"2:\t"user_swr("%1", "(%2)")"\n\t"  \
-+			"1:\t"type##_swl("%1", "3(%2)")"\n" \
-+			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
- 			"li\t%0, 0\n"                       \
- 			"3:\n\t"                            \
- 			".insn\n\t"                         \
-@@ -709,7 +709,7 @@ extern void show_registers(struct pt_regs *regs);
- 		: "=r" (res)                                \
- 		: "r" (value), "r" (addr), "i" (-EFAULT));
+-#define MAX_INSN_SIZE	16
++#define MAX_INSN_SIZE	15
  
--#define     StoreDW(addr, value, res) \
-+#define     _StoreDW(addr, value, res) \
- 		__asm__ __volatile__ (                      \
- 			"1:\tsdl\t%1, 7(%2)\n"              \
- 			"2:\tsdr\t%1, (%2)\n\t"             \
-@@ -728,17 +728,17 @@ extern void show_registers(struct pt_regs *regs);
- 		: "r" (value), "r" (addr), "i" (-EFAULT));
- #else
- /* MIPSR6 has no swl and sdl instructions */
--#define     StoreW(addr, value, res)  \
-+#define     _StoreW(addr, value, res, type)  \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
--			"1:"user_sb("%1", "0(%2)")"\n\t"    \
-+			"1:"type##_sb("%1", "0(%2)")"\n\t"  \
- 			"srl\t$1, %1, 0x8\n\t"		    \
--			"2:"user_sb("$1", "1(%2)")"\n\t"    \
-+			"2:"type##_sb("$1", "1(%2)")"\n\t"  \
- 			"srl\t$1, $1,  0x8\n\t"		    \
--			"3:"user_sb("$1", "2(%2)")"\n\t"    \
-+			"3:"type##_sb("$1", "2(%2)")"\n\t"  \
- 			"srl\t$1, $1, 0x8\n\t"		    \
--			"4:"user_sb("$1", "3(%2)")"\n\t"    \
-+			"4:"type##_sb("$1", "3(%2)")"\n\t"  \
- 			".set\tpop\n\t"			    \
- 			"li\t%0, 0\n"			    \
- 			"10:\n\t"			    \
-@@ -757,7 +757,7 @@ extern void show_registers(struct pt_regs *regs);
- 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
- 		: "memory");
+ #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
+ #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index a1410db..653dfa7 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -30,6 +30,14 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ 		     :: "a" (eax), "c" (ecx));
+ }
  
--#define     StoreDW(addr, value, res) \
-+#define     _StoreDW(addr, value, res) \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -801,6 +801,22 @@ extern void show_registers(struct pt_regs *regs);
- #endif /* CONFIG_CPU_MIPSR6 */
- #endif
- 
-+#define LoadHWU(addr, value, res)	_LoadHWU(addr, value, res, kernel)
-+#define LoadHWUE(addr, value, res)	_LoadHWU(addr, value, res, user)
-+#define LoadWU(addr, value, res)	_LoadWU(addr, value, res, kernel)
-+#define LoadWUE(addr, value, res)	_LoadWU(addr, value, res, user)
-+#define LoadHW(addr, value, res)	_LoadHW(addr, value, res, kernel)
-+#define LoadHWE(addr, value, res)	_LoadHW(addr, value, res, user)
-+#define LoadW(addr, value, res)		_LoadW(addr, value, res, kernel)
-+#define LoadWE(addr, value, res)	_LoadW(addr, value, res, user)
-+#define LoadDW(addr, value, res)	_LoadDW(addr, value, res)
-+
-+#define StoreHW(addr, value, res)	_StoreHW(addr, value, res, kernel)
-+#define StoreHWE(addr, value, res)	_StoreHW(addr, value, res, user)
-+#define StoreW(addr, value, res)	_StoreW(addr, value, res, kernel)
-+#define StoreWE(addr, value, res)	_StoreW(addr, value, res, user)
-+#define StoreDW(addr, value, res)	_StoreDW(addr, value, res)
-+
- static void emulate_load_store_insn(struct pt_regs *regs,
- 	void __user *addr, unsigned int __user *pc)
- {
-@@ -872,7 +888,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 				set_fs(seg);
- 				goto sigbus;
- 			}
--			LoadHW(addr, value, res);
-+			LoadHWE(addr, value, res);
- 			if (res) {
- 				set_fs(seg);
- 				goto fault;
-@@ -885,7 +901,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 				set_fs(seg);
- 				goto sigbus;
- 			}
--				LoadW(addr, value, res);
-+				LoadWE(addr, value, res);
- 			if (res) {
- 				set_fs(seg);
- 				goto fault;
-@@ -898,7 +914,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 				set_fs(seg);
- 				goto sigbus;
- 			}
--			LoadHWU(addr, value, res);
-+			LoadHWUE(addr, value, res);
- 			if (res) {
- 				set_fs(seg);
- 				goto fault;
-@@ -913,7 +929,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 			}
- 			compute_return_epc(regs);
- 			value = regs->regs[insn.spec3_format.rt];
--			StoreHW(addr, value, res);
-+			StoreHWE(addr, value, res);
- 			if (res) {
- 				set_fs(seg);
- 				goto fault;
-@@ -926,7 +942,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 			}
- 			compute_return_epc(regs);
- 			value = regs->regs[insn.spec3_format.rt];
--			StoreW(addr, value, res);
-+			StoreWE(addr, value, res);
- 			if (res) {
- 				set_fs(seg);
- 				goto fault;
--- 
-2.3.6
-
-
-From ae0a145ca5b6c135e068a08f859e3f10ad2242d9 Mon Sep 17 00:00:00 2001
-From: Markos Chandras <markos.chandras@imgtec.com>
-Date: Mon, 9 Mar 2015 14:54:51 +0000
-Subject: [PATCH 040/219] MIPS: unaligned: Surround load/store macros in do {}
- while statements
-Cc: mpagano@gentoo.org
-
-commit 3563c32d6532ece53c9dd8905a8e41983ef9952f upstream.
-
-It's best to surround such complex macros with do {} while statements
-so they can appear as independent logical blocks when used within other
-control blocks.
-
-Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9502/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/kernel/unaligned.c | 116 +++++++++++++++++++++++++++++++++----------
- 1 file changed, 90 insertions(+), 26 deletions(-)
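The motivation is the usual one for multi-statement macros: without do { } while (0) the expansion is not a single statement, so a braceless if/else call site breaks. A hypothetical two-statement macro shows it:

	#define BAD_LOAD(p, v, r)	v = *(p); r = 0
	#define GOOD_LOAD(p, v, r)	do { v = *(p); r = 0; } while (0)

	if (ok)
		BAD_LOAD(p, v, r);	/* only "v = *(p);" is guarded, and   */
	else				/* the stray ";" orphans this "else"  */
		r = -EFAULT;		/* -> syntax error                    */

	if (ok)
		GOOD_LOAD(p, v, r);	/* expands to exactly one statement   */
	else
		r = -EFAULT;		/* compiles and behaves as intended   */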
-
-diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
-index 7a5707e..ab47590 100644
---- a/arch/mips/kernel/unaligned.c
-+++ b/arch/mips/kernel/unaligned.c
-@@ -110,6 +110,7 @@ extern void show_registers(struct pt_regs *regs);
- 
- #ifdef __BIG_ENDIAN
- #define     _LoadHW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (".set\tnoat\n"        \
- 			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
- 			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
-@@ -127,10 +128,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
- 
- #ifndef CONFIG_CPU_MIPSR6
- #define     _LoadW(addr, value, res, type)   \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
- 			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
-@@ -146,10 +149,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
-+
- #else
- /* MIPSR6 has no lwl instruction */
- #define     _LoadW(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n"			    \
- 			".set\tnoat\n\t"		    \
-@@ -178,10 +184,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t4b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
++static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
++{
++	trace_hardirqs_on();
++	/* "mwait %eax, %ecx;" */
++	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
++		     :: "a" (eax), "c" (ecx));
++}
 +
- #endif /* CONFIG_CPU_MIPSR6 */
- 
- #define     _LoadHWU(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
- 			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
-@@ -201,10 +210,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ /*
+  * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
+  * which can obviate IPI to trigger checking of need_resched.
+diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
+index d6b078e..25b1cc0 100644
+--- a/arch/x86/include/asm/pvclock.h
++++ b/arch/x86/include/asm/pvclock.h
+@@ -95,6 +95,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
  
- #ifndef CONFIG_CPU_MIPSR6
- #define     _LoadWU(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
- 			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
-@@ -222,9 +233,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ struct pvclock_vsyscall_time_info {
+ 	struct pvclock_vcpu_time_info pvti;
++	u32 migrate_count;
+ } __attribute__((__aligned__(SMP_CACHE_BYTES)));
  
- #define     _LoadDW(addr, value, res)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\tldl\t%0, (%2)\n"               \
- 			"2:\tldr\t%0, 7(%2)\n\t"            \
-@@ -240,10 +253,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
-+
- #else
- /* MIPSR6 has not lwl and ldl instructions */
- #define	    _LoadWU(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -272,9 +288,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t4b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ #define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+index 0739833..666bcf1 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+@@ -557,6 +557,8 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
  
- #define     _LoadDW(addr, value, res)  \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -319,11 +337,14 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t8b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
-+
- #endif /* CONFIG_CPU_MIPSR6 */
+@@ -564,6 +566,8 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c0, 0x1), /* INST_RETIRED.ANY */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
  
+@@ -587,6 +591,8 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
  
- #define     _StoreHW(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
- 			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
-@@ -342,10 +363,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=r" (res)                        \
--			: "r" (value), "r" (addr), "i" (-EFAULT));
-+			: "r" (value), "r" (addr), "i" (-EFAULT));\
-+} while(0)
+@@ -602,6 +608,8 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
  
- #ifndef CONFIG_CPU_MIPSR6
- #define     _StoreW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_swl("%1", "(%2)")"\n"  \
- 			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
-@@ -361,9 +384,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 		: "=r" (res)                                \
--		: "r" (value), "r" (addr), "i" (-EFAULT));
-+		: "r" (value), "r" (addr), "i" (-EFAULT));  \
-+} while(0)
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 046e2d6..a388bb8 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -24,6 +24,7 @@
+ #include <asm/syscalls.h>
+ #include <asm/idle.h>
+ #include <asm/uaccess.h>
++#include <asm/mwait.h>
+ #include <asm/i387.h>
+ #include <asm/fpu-internal.h>
+ #include <asm/debugreg.h>
+@@ -399,6 +400,53 @@ static void amd_e400_idle(void)
+ 		default_idle();
+ }
  
- #define     _StoreDW(addr, value, res) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\tsdl\t%1,(%2)\n"                \
- 			"2:\tsdr\t%1, 7(%2)\n\t"            \
-@@ -379,10 +404,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 		: "=r" (res)                                \
--		: "r" (value), "r" (addr), "i" (-EFAULT));
-+		: "r" (value), "r" (addr), "i" (-EFAULT));  \
-+} while(0)
-+
- #else
- /* MIPSR6 has no swl and sdl instructions */
- #define     _StoreW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -409,9 +437,11 @@ extern void show_registers(struct pt_regs *regs);
- 			".previous"			    \
- 		: "=&r" (res)			    	    \
- 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
--		: "memory");
-+		: "memory");                                \
-+} while(0)
++/*
++ * Intel Core2 and older machines prefer MWAIT over HALT for C1.
++ * We can't rely on cpuidle installing MWAIT, because it will not load
++ * on systems that support only C1 -- so the boot default must be MWAIT.
++ *
++ * Some AMD machines are the opposite, they depend on using HALT.
++ *
++ * So for default C1, which is used during boot until cpuidle loads,
++ * use MWAIT-C1 on Intel HW that has it, else use HALT.
++ */
++static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
++{
++	if (c->x86_vendor != X86_VENDOR_INTEL)
++		return 0;
++
++	if (!cpu_has(c, X86_FEATURE_MWAIT))
++		return 0;
++
++	return 1;
++}
++
++/*
++ * MONITOR/MWAIT with no hints, used for the default C1 state.
++ * This invokes MWAIT with interrupts enabled and no flags,
++ * which is backwards compatible with the original MWAIT implementation.
++ */
++
++static void mwait_idle(void)
++{
++	if (!current_set_polling_and_test()) {
++		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
++			smp_mb(); /* quirk */
++			clflush((void *)&current_thread_info()->flags);
++			smp_mb(); /* quirk */
++		}
++
++		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		if (!need_resched())
++			__sti_mwait(0, 0);
++		else
++			local_irq_enable();
++	} else {
++		local_irq_enable();
++	}
++	__current_clr_polling();
++}
++
+ void select_idle_routine(const struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+@@ -412,6 +460,9 @@ void select_idle_routine(const struct cpuinfo_x86 *c)
+ 		/* E400: APIC timer interrupt does not wake up CPU from C1e */
+ 		pr_info("using AMD E400 aware idle routine\n");
+ 		x86_idle = amd_e400_idle;
++	} else if (prefer_mwait_c1_over_halt(c)) {
++		pr_info("using mwait in idle threads\n");
++		x86_idle = mwait_idle;
+ 	} else
+ 		x86_idle = default_idle;
+ }
+diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
+index 2f355d2..e5ecd20 100644
+--- a/arch/x86/kernel/pvclock.c
++++ b/arch/x86/kernel/pvclock.c
+@@ -141,7 +141,46 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
+ 	set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
+ }
  
- #define     StoreDW(addr, value, res) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -451,12 +481,15 @@ extern void show_registers(struct pt_regs *regs);
- 			".previous"			    \
- 		: "=&r" (res)			    	    \
- 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
--		: "memory");
-+		: "memory");                                \
-+} while(0)
++static struct pvclock_vsyscall_time_info *pvclock_vdso_info;
 +
- #endif /* CONFIG_CPU_MIPSR6 */
++static struct pvclock_vsyscall_time_info *
++pvclock_get_vsyscall_user_time_info(int cpu)
++{
++	if (!pvclock_vdso_info) {
++		BUG();
++		return NULL;
++	}
++
++	return &pvclock_vdso_info[cpu];
++}
++
++struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu)
++{
++	return &pvclock_get_vsyscall_user_time_info(cpu)->pvti;
++}
++
+ #ifdef CONFIG_X86_64
++static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l,
++			        void *v)
++{
++	struct task_migration_notifier *mn = v;
++	struct pvclock_vsyscall_time_info *pvti;
++
++	pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu);
++
++	/* this is NULL when pvclock vsyscall is not initialized */
++	if (unlikely(pvti == NULL))
++		return NOTIFY_DONE;
++
++	pvti->migrate_count++;
++
++	return NOTIFY_DONE;
++}
++
++static struct notifier_block pvclock_migrate = {
++	.notifier_call = pvclock_task_migrate,
++};
++
+ /*
+  * Initialize the generic pvclock vsyscall state.  This will allocate
+  * a/some page(s) for the per-vcpu pvclock information, set up a
+@@ -155,12 +194,17 @@ int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
  
- #else /* __BIG_ENDIAN */
+ 	WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE);
  
- #define     _LoadHW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (".set\tnoat\n"        \
- 			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
- 			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
-@@ -474,10 +507,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
++	pvclock_vdso_info = i;
++
+ 	for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) {
+ 		__set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
+ 			     __pa(i) + (idx*PAGE_SIZE),
+ 			     PAGE_KERNEL_VVAR);
+ 	}
  
- #ifndef CONFIG_CPU_MIPSR6
- #define     _LoadW(addr, value, res, type)   \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
- 			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
-@@ -493,10 +528,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
 +
- #else
- /* MIPSR6 has no lwl instruction */
- #define     _LoadW(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n"			    \
- 			".set\tnoat\n\t"		    \
-@@ -525,11 +563,14 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t4b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
++	register_task_migration_notifier(&pvclock_migrate);
 +
- #endif /* CONFIG_CPU_MIPSR6 */
+ 	return 0;
+ }
+ #endif
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index ae4f6d3..a60bd3a 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3621,8 +3621,16 @@ static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
  
+ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+-	unsigned long hw_cr4 = cr4 | (to_vmx(vcpu)->rmode.vm86_active ?
+-		    KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
++	/*
++	 * Pass through host's Machine Check Enable value to hw_cr4, which
++	 * is in force while we are in guest mode.  Do not let guests control
++	 * this bit, even if host CR4.MCE == 0.
++	 */
++	unsigned long hw_cr4 =
++		(cr4_read_shadow() & X86_CR4_MCE) |
++		(cr4 & ~X86_CR4_MCE) |
++		(to_vmx(vcpu)->rmode.vm86_active ?
++		 KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
  
- #define     _LoadHWU(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
- 			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
-@@ -549,10 +590,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ 	if (cr4 & X86_CR4_VMXE) {
+ 		/*
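The hw_cr4 expression above is a forced-bit merge: one bit (CR4.MCE) always tracks the host, everything else tracks the guest. A standalone sketch of the idiom (the X86_CR4_MCE value matches the kernel's definition; the helper name is invented):

#include <assert.h>

#define X86_CR4_MCE (1UL << 6)	/* Machine Check Enable */

static unsigned long merge_cr4(unsigned long host, unsigned long guest)
{
	/* MCE comes from the host word, all other bits from the guest */
	return (host & X86_CR4_MCE) | (guest & ~X86_CR4_MCE);
}

int main(void)
{
	/* Guest cannot clear MCE while the host has it set... */
	assert(merge_cr4(X86_CR4_MCE, 0) == X86_CR4_MCE);
	/* ...and cannot set it while the host has it clear. */
	assert(merge_cr4(0, X86_CR4_MCE) == 0);
	return 0;
}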
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 32bf19e..e222ba5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5775,7 +5775,6 @@ int kvm_arch_init(void *opaque)
+ 	kvm_set_mmio_spte_mask();
  
- #ifndef CONFIG_CPU_MIPSR6
- #define     _LoadWU(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
- 			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
-@@ -570,9 +613,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ 	kvm_x86_ops = ops;
+-	kvm_init_msr_list();
  
- #define     _LoadDW(addr, value, res)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\tldl\t%0, 7(%2)\n"              \
- 			"2:\tldr\t%0, (%2)\n\t"             \
-@@ -588,10 +633,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=&r" (value), "=r" (res)         \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
-+
- #else
- /* MIPSR6 has not lwl and ldl instructions */
- #define	    _LoadWU(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -620,9 +668,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t4b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
+ 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
+ 			PT_DIRTY_MASK, PT64_NX_MASK, 0);
+@@ -7209,7 +7208,14 @@ void kvm_arch_hardware_disable(void)
  
- #define     _LoadDW(addr, value, res)  \
-+do {                                                        \
- 		__asm__ __volatile__ (			    \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -667,10 +717,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t8b, 11b\n\t"		    \
- 			".previous"			    \
- 			: "=&r" (value), "=r" (res)	    \
--			: "r" (addr), "i" (-EFAULT));
-+			: "r" (addr), "i" (-EFAULT));       \
-+} while(0)
- #endif /* CONFIG_CPU_MIPSR6 */
- 
- #define     _StoreHW(addr, value, res, type) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tnoat\n"                      \
- 			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
-@@ -689,9 +741,12 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 			: "=r" (res)                        \
--			: "r" (value), "r" (addr), "i" (-EFAULT));
-+			: "r" (value), "r" (addr), "i" (-EFAULT));\
-+} while(0)
+ int kvm_arch_hardware_setup(void)
+ {
+-	return kvm_x86_ops->hardware_setup();
++	int r;
 +
- #ifndef CONFIG_CPU_MIPSR6
- #define     _StoreW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\t"type##_swl("%1", "3(%2)")"\n" \
- 			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
-@@ -707,9 +762,11 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 		: "=r" (res)                                \
--		: "r" (value), "r" (addr), "i" (-EFAULT));
-+		: "r" (value), "r" (addr), "i" (-EFAULT));  \
-+} while(0)
- 
- #define     _StoreDW(addr, value, res) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			"1:\tsdl\t%1, 7(%2)\n"              \
- 			"2:\tsdr\t%1, (%2)\n\t"             \
-@@ -725,10 +782,13 @@ extern void show_registers(struct pt_regs *regs);
- 			STR(PTR)"\t2b, 4b\n\t"              \
- 			".previous"                         \
- 		: "=r" (res)                                \
--		: "r" (value), "r" (addr), "i" (-EFAULT));
-+		: "r" (value), "r" (addr), "i" (-EFAULT));  \
-+} while(0)
++	r = kvm_x86_ops->hardware_setup();
++	if (r != 0)
++		return r;
 +
- #else
- /* MIPSR6 has no swl and sdl instructions */
- #define     _StoreW(addr, value, res, type)  \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -755,9 +815,11 @@ extern void show_registers(struct pt_regs *regs);
- 			".previous"			    \
- 		: "=&r" (res)			    	    \
- 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
--		: "memory");
-+		: "memory");                                \
-+} while(0)
++	kvm_init_msr_list();
++	return 0;
+ }
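Note the ordering this hunk establishes: kvm_init_msr_list() now runs only after the vendor hardware_setup() has succeeded, so a setup failure can no longer leave a half-built MSR list behind. The shape of the pattern, as a self-contained sketch with stubbed helpers (all names here are hypothetical):

#include <stdio.h>

static int backend_ready;

static int backend_setup(void)	{ backend_ready = 1; return 0; /* or -errno */ }
static void build_msr_list(void) { printf("msr list built (ready=%d)\n", backend_ready); }

static int hardware_setup(void)
{
	int r = backend_setup();

	if (r != 0)
		return r;	/* dependent init is skipped on failure */
	build_msr_list();	/* guaranteed to see the backend's state */
	return 0;
}

int main(void) { return hardware_setup(); }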
  
- #define     _StoreDW(addr, value, res) \
-+do {                                                        \
- 		__asm__ __volatile__ (                      \
- 			".set\tpush\n\t"		    \
- 			".set\tnoat\n\t"		    \
-@@ -797,7 +859,9 @@ extern void show_registers(struct pt_regs *regs);
- 			".previous"			    \
- 		: "=&r" (res)			    	    \
- 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
--		: "memory");
-+		: "memory");                                \
-+} while(0)
+ void kvm_arch_hardware_unsetup(void)
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index 1313ae6..85994f5 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -52,6 +52,13 @@
+  */
+ void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
+ {
++	/*
++	 * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid
++	 * even if the input buffer is long enough to hold them.
++	 */
++	if (buf_len > MAX_INSN_SIZE)
++		buf_len = MAX_INSN_SIZE;
 +
- #endif /* CONFIG_CPU_MIPSR6 */
- #endif
+ 	memset(insn, 0, sizeof(*insn));
+ 	insn->kaddr = kaddr;
+ 	insn->end_kaddr = kaddr + buf_len;
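The clamp added to insn_init() above is the standard defense for length fields derived from oversized buffers: cap them at the architectural maximum before they feed pointer arithmetic. A toy mirror of the approach (sketch only; the struct and names are invented for illustration):

#include <stdio.h>
#include <string.h>

#define MAX_INSN_SIZE 15	/* longest legal x86 instruction */

struct toy_insn {
	const unsigned char *kaddr;
	const unsigned char *end_kaddr;
};

static void toy_insn_init(struct toy_insn *insn, const unsigned char *buf, int buf_len)
{
	if (buf_len > MAX_INSN_SIZE)	/* clamp before computing end_kaddr */
		buf_len = MAX_INSN_SIZE;
	memset(insn, 0, sizeof(*insn));
	insn->kaddr = buf;
	insn->end_kaddr = buf + buf_len;
}

int main(void)
{
	unsigned char page[64] = { 0x90 };	/* nop */
	struct toy_insn insn;

	toy_insn_init(&insn, page, (int)sizeof(page));
	printf("decode window = %td bytes\n", insn.end_kaddr - insn.kaddr);
	return 0;
}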
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 1f33b3d..0a42327 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -82,7 +82,7 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
+ 	clac();
  
--- 
-2.3.6
-
-
-From e239cb24f08477d187a5bb831088de60f70e3ade Mon Sep 17 00:00:00 2001
-From: Markos Chandras <markos.chandras@imgtec.com>
-Date: Mon, 9 Mar 2015 14:54:52 +0000
-Subject: [PATCH 041/219] MIPS: unaligned: Fix regular load/store instruction
- emulation for EVA
-Cc: mpagano@gentoo.org
-
-commit 6eae35485b26f9e51ab896eb8a936bed9908fdf6 upstream.
-
-When emulating a regular lh/lw/lhu/sh/sw we need to use the appropriate
-instruction if we are in EVA mode. This is necessary for userspace
-applications which trigger alignment exceptions. In such cases, the
-userspace load/store instruction needs to be emulated with the correct
-eva/non-eva instruction by the kernel emulator.
-
-Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-Fixes: c1771216ab48 ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
-Cc: linux-mips@linux-mips.org
-Patchwork: https://patchwork.linux-mips.org/patch/9503/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/kernel/unaligned.c | 52 +++++++++++++++++++++++++++++++++++++++-----
- 1 file changed, 47 insertions(+), 5 deletions(-)
-
-diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
-index ab47590..7659da2 100644
---- a/arch/mips/kernel/unaligned.c
-+++ b/arch/mips/kernel/unaligned.c
-@@ -1023,7 +1023,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 		if (!access_ok(VERIFY_READ, addr, 2))
- 			goto sigbus;
+ 	/* If the destination is a kernel buffer, we always clear the end */
+-	if ((unsigned long)to >= TASK_SIZE_MAX)
++	if (!__addr_ok(to))
+ 		memset(to, 0, len);
+ 	return len;
+ }
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 9793322..40d2473 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -82,18 +82,15 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 	cycle_t ret;
+ 	u64 last;
+ 	u32 version;
++	u32 migrate_count;
+ 	u8 flags;
+ 	unsigned cpu, cpu1;
  
--		LoadHW(addr, value, res);
-+		if (config_enabled(CONFIG_EVA)) {
-+			if (segment_eq(get_fs(), get_ds()))
-+				LoadHW(addr, value, res);
-+			else
-+				LoadHWE(addr, value, res);
-+		} else {
-+			LoadHW(addr, value, res);
-+		}
-+
- 		if (res)
- 			goto fault;
- 		compute_return_epc(regs);
-@@ -1034,7 +1042,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 		if (!access_ok(VERIFY_READ, addr, 4))
- 			goto sigbus;
  
--		LoadW(addr, value, res);
-+		if (config_enabled(CONFIG_EVA)) {
-+			if (segment_eq(get_fs(), get_ds()))
-+				LoadW(addr, value, res);
-+			else
-+				LoadWE(addr, value, res);
-+		} else {
-+			LoadW(addr, value, res);
-+		}
-+
- 		if (res)
- 			goto fault;
- 		compute_return_epc(regs);
-@@ -1045,7 +1061,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
- 		if (!access_ok(VERIFY_READ, addr, 2))
- 			goto sigbus;
+ 	/*
+-	 * Note: hypervisor must guarantee that:
+-	 * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
+-	 * 2. that per-CPU pvclock time info is updated if the
+-	 *    underlying CPU changes.
+-	 * 3. that version is increased whenever underlying CPU
+-	 *    changes.
+-	 *
++	 * When looping to get a consistent (time-info, tsc) pair, we
++	 * also need to deal with the possibility we can switch vcpus,
++	 * so make sure we always re-fetch time-info for the current vcpu.
+ 	 */
+ 	do {
+ 		cpu = __getcpu() & VGETCPU_CPU_MASK;
+@@ -102,20 +99,27 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 		 * __getcpu() calls (Gleb).
+ 		 */
  
--		LoadHWU(addr, value, res);
-+		if (config_enabled(CONFIG_EVA)) {
-+			if (segment_eq(get_fs(), get_ds()))
-+				LoadHWU(addr, value, res);
-+			else
-+				LoadHWUE(addr, value, res);
-+		} else {
-+			LoadHWU(addr, value, res);
-+		}
+-		pvti = get_pvti(cpu);
++		/* Make sure migrate_count will change if we leave the VCPU. */
++		do {
++			pvti = get_pvti(cpu);
++			migrate_count = pvti->migrate_count;
 +
- 		if (res)
- 			goto fault;
- 		compute_return_epc(regs);
-@@ -1104,7 +1128,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
++			cpu1 = cpu;
++			cpu = __getcpu() & VGETCPU_CPU_MASK;
++		} while (unlikely(cpu != cpu1));
  
- 		compute_return_epc(regs);
- 		value = regs->regs[insn.i_format.rt];
--		StoreHW(addr, value, res);
-+
-+		if (config_enabled(CONFIG_EVA)) {
-+			if (segment_eq(get_fs(), get_ds()))
-+				StoreHW(addr, value, res);
-+			else
-+				StoreHWE(addr, value, res);
-+		} else {
-+			StoreHW(addr, value, res);
-+		}
-+
- 		if (res)
- 			goto fault;
- 		break;
-@@ -1115,7 +1148,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
  
- 		compute_return_epc(regs);
- 		value = regs->regs[insn.i_format.rt];
--		StoreW(addr, value, res);
-+
-+		if (config_enabled(CONFIG_EVA)) {
-+			if (segment_eq(get_fs(), get_ds()))
-+				StoreW(addr, value, res);
-+			else
-+				StoreWE(addr, value, res);
-+		} else {
-+			StoreW(addr, value, res);
-+		}
-+
- 		if (res)
- 			goto fault;
- 		break;
--- 
-2.3.6
-
-
-From 9da8705189d48b9d74724d5ae37c5a3a486fcfef Mon Sep 17 00:00:00 2001
-From: Huacai Chen <chenhc@lemote.com>
-Date: Thu, 12 Mar 2015 11:51:06 +0800
-Subject: [PATCH 042/219] MIPS: Loongson-3: Add IRQF_NO_SUSPEND to Cascade
- irqaction
-Cc: mpagano@gentoo.org
-
-commit 0add9c2f1cff9f3f1f2eb7e9babefa872a9d14b9 upstream.
-
-HPET irq is routed to i8259 and then to the MIPS CPU irq (cascade). After
-commit a3e6c1eff5 (MIPS: IRQ: Fix disable_irq on CPU IRQs), HPET
-interrupts are lost during suspend if cascade_irqaction lacks
-IRQF_NO_SUSPEND. As a result, the machine cannot be woken up.
-
-Signed-off-by: Huacai Chen <chenhc@lemote.com>
-Cc: Steven J. Hill <Steven.Hill@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Cc: Fuxin Zhang <zhangfx@lemote.com>
-Cc: Zhangjin Wu <wuzhangjin@gmail.com>
-Patchwork: https://patchwork.linux-mips.org/patch/9528/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/loongson/loongson-3/irq.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/arch/mips/loongson/loongson-3/irq.c b/arch/mips/loongson/loongson-3/irq.c
-index 21221ed..0f75b6b 100644
---- a/arch/mips/loongson/loongson-3/irq.c
-+++ b/arch/mips/loongson/loongson-3/irq.c
-@@ -44,6 +44,7 @@ void mach_irq_dispatch(unsigned int pending)
+ 		/*
+ 		 * Test we're still on the cpu as well as the version.
+-		 * We could have been migrated just after the first
+-		 * vgetcpu but before fetching the version, so we
+-		 * wouldn't notice a version change.
++		 * - We must read TSC of pvti's VCPU.
++		 * - KVM doesn't follow the versioning protocol, so data could
++		 *   change before version if we left the VCPU.
+ 		 */
+-		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
+-	} while (unlikely(cpu != cpu1 ||
+-			  (pvti->pvti.version & 1) ||
+-			  pvti->pvti.version != version));
++		smp_rmb();
++	} while (unlikely((pvti->pvti.version & 1) ||
++			  pvti->pvti.version != version ||
++			  pvti->migrate_count != migrate_count));
  
- static struct irqaction cascade_irqaction = {
- 	.handler = no_action,
-+	.flags = IRQF_NO_SUSPEND,
- 	.name = "cascade",
- };
+ 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+ 		*mode = VCLOCK_NONE;
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index e31d494..87be10e 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -428,6 +428,36 @@ config DEFAULT_MEM_SIZE
  
--- 
-2.3.6
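The vread_pvclock() change shown above turns the usual seqcount retry loop into a two-condition one: retry if the writer was mid-update (odd version, or version changed) or if the task left the vcpu (migrate_count changed). A compressed user-space rendering of that loop shape -- purely illustrative, with C11 atomics standing in for the kernel's barriers:

#include <stdatomic.h>
#include <stdio.h>

struct pvti_sketch {
	atomic_uint version;		/* odd while an update is in flight */
	atomic_uint migrate_count;	/* bumped when the task changes vcpu */
	unsigned long long tsc;		/* payload guarded by version */
};

static unsigned long long read_stable(struct pvti_sketch *p)
{
	unsigned int ver, mig;
	unsigned long long val;

	do {
		mig = atomic_load(&p->migrate_count);
		ver = atomic_load(&p->version);
		val = p->tsc;
		atomic_thread_fence(memory_order_acquire);	/* ~smp_rmb() */
	} while ((ver & 1) ||
		 ver != atomic_load(&p->version) ||
		 mig != atomic_load(&p->migrate_count));
	return val;
}

int main(void)
{
	struct pvti_sketch p = { 2, 0, 42 };

	printf("%llu\n", read_stable(&p));
	return 0;
}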
-
-
-From 6fbe5c7cd4d50582ba22c0a979131e347ec7b132 Mon Sep 17 00:00:00 2001
-From: Huacai Chen <chenhc@lemote.com>
-Date: Sun, 29 Mar 2015 10:54:05 +0800
-Subject: [PATCH 043/219] MIPS: Hibernate: flush TLB entries earlier
-Cc: mpagano@gentoo.org
-
-commit a843d00d038b11267279e3b5388222320f9ddc1d upstream.
-
-We found that a TLB mismatch not only happens after kernel resume, but
-also during snapshot restore. So move the TLB flush to the beginning
-of swsusp_arch_resume(), where it covers both cases.
-
-Signed-off-by: Huacai Chen <chenhc@lemote.com>
-Cc: Steven J. Hill <Steven.Hill@imgtec.com>
-Cc: linux-mips@linux-mips.org
-Cc: Fuxin Zhang <zhangfx@lemote.com>
-Cc: Zhangjin Wu <wuzhangjin@gmail.com>
-Patchwork: https://patchwork.linux-mips.org/patch/9621/
-Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/mips/power/hibernate.S | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/arch/mips/power/hibernate.S b/arch/mips/power/hibernate.S
-index 32a7c82..e7567c8 100644
---- a/arch/mips/power/hibernate.S
-+++ b/arch/mips/power/hibernate.S
-@@ -30,6 +30,8 @@ LEAF(swsusp_arch_suspend)
- END(swsusp_arch_suspend)
+ 	  If unsure, leave the default value here.
  
- LEAF(swsusp_arch_resume)
-+	/* Avoid TLB mismatch during and after kernel resume */
-+	jal local_flush_tlb_all
- 	PTR_L t0, restore_pblist
- 0:
- 	PTR_L t1, PBE_ADDRESS(t0)   /* source */
-@@ -43,7 +45,6 @@ LEAF(swsusp_arch_resume)
- 	bne t1, t3, 1b
- 	PTR_L t0, PBE_NEXT(t0)
- 	bnez t0, 0b
--	jal local_flush_tlb_all /* Avoid TLB mismatch after kernel resume */
- 	PTR_LA t0, saved_regs
- 	PTR_L ra, PT_R31(t0)
- 	PTR_L sp, PT_R29(t0)
--- 
-2.3.6
-
-
-From f0ce3bf7fa069f614101c819576cb0344076e95c Mon Sep 17 00:00:00 2001
-From: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
-Date: Tue, 24 Mar 2015 16:29:32 +0530
-Subject: [PATCH 044/219] staging: panel: fix lcd type
-Cc: mpagano@gentoo.org
-
-commit 2c20d92dad5db6440cfa88d811b69fd605240ce4 upstream.
-
-The LCD type as defined in the Kconfig does not match the code;
-as a result the rs, rw and en pins were getting interchanged.
-Kconfig defines the value of PANEL_LCD to be 1 if we select a custom
-configuration, but in the code LCD_TYPE_CUSTOM is defined as 5.
-
-My hardware is LCD_TYPE_CUSTOM, but the pins were assigned to it
-as pins of LCD_TYPE_OLD, and it was not working.
-Now the values are corrected with reference to the values defined in
-Kconfig and it is working.
-Checked on a JHD204A LCD with the LCD_TYPE_CUSTOM configuration.
-
-Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
-Acked-by: Willy Tarreau <w@1wt.eu>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/staging/panel/panel.c | 12 ++++++------
- 1 file changed, 6 insertions(+), 6 deletions(-)
-
-diff --git a/drivers/staging/panel/panel.c b/drivers/staging/panel/panel.c
-index 6ed35b6..04fc217 100644
---- a/drivers/staging/panel/panel.c
-+++ b/drivers/staging/panel/panel.c
-@@ -335,11 +335,11 @@ static unsigned char lcd_bits[LCD_PORTS][LCD_BITS][BIT_STATES];
-  * LCD types
-  */
- #define LCD_TYPE_NONE		0
--#define LCD_TYPE_OLD		1
--#define LCD_TYPE_KS0074		2
--#define LCD_TYPE_HANTRONIX	3
--#define LCD_TYPE_NEXCOM		4
--#define LCD_TYPE_CUSTOM		5
-+#define LCD_TYPE_CUSTOM		1
-+#define LCD_TYPE_OLD		2
-+#define LCD_TYPE_KS0074		3
-+#define LCD_TYPE_HANTRONIX	4
-+#define LCD_TYPE_NEXCOM		5
++config XTFPGA_LCD
++	bool "Enable XTFPGA LCD driver"
++	depends on XTENSA_PLATFORM_XTFPGA
++	default n
++	help
++	  There's a 2x16 LCD on most XTFPGA boards; the kernel may output
++	  progress messages there during bootup/shutdown. It may be useful
++	  during board bringup.
++
++	  If unsure, say N.
++
++config XTFPGA_LCD_BASE_ADDR
++	hex "XTFPGA LCD base address"
++	depends on XTFPGA_LCD
++	default "0x0d0c0000"
++	help
++	  Base address of the LCD controller inside the KIO region.
++	  Different boards from the XTFPGA family have the LCD controller
++	  at different addresses. Please consult the prototyping user guide
++	  for your board for the correct address. A wrong address here may
++	  lead to a hardware lockup.
++
++config XTFPGA_LCD_8BIT_ACCESS
++	bool "Use 8-bit access to XTFPGA LCD"
++	depends on XTFPGA_LCD
++	default n
++	help
++	  The LCD may be connected with a 4- or 8-bit interface; 8-bit access
++	  may only be used with the 8-bit interface. Please consult the
++	  prototyping user guide for your board for the correct interface
++	  width.
++
+ endmenu
  
- /*
-  * keypad types
-@@ -502,7 +502,7 @@ MODULE_PARM_DESC(keypad_type,
- static int lcd_type = NOT_SET;
- module_param(lcd_type, int, 0000);
- MODULE_PARM_DESC(lcd_type,
--		 "LCD type: 0=none, 1=old //, 2=serial ks0074, 3=hantronix //, 4=nexcom //, 5=compiled-in");
-+		 "LCD type: 0=none, 1=compiled-in, 2=old, 3=serial ks0074, 4=hantronix, 5=nexcom");
+ menu "Executable file formats"
+diff --git a/arch/xtensa/include/uapi/asm/unistd.h b/arch/xtensa/include/uapi/asm/unistd.h
+index db5bb72..62d8465 100644
+--- a/arch/xtensa/include/uapi/asm/unistd.h
++++ b/arch/xtensa/include/uapi/asm/unistd.h
+@@ -715,7 +715,7 @@ __SYSCALL(323, sys_process_vm_writev, 6)
+ __SYSCALL(324, sys_name_to_handle_at, 5)
+ #define __NR_open_by_handle_at			325
+ __SYSCALL(325, sys_open_by_handle_at, 3)
+-#define __NR_sync_file_range			326
++#define __NR_sync_file_range2			326
+ __SYSCALL(326, sys_sync_file_range2, 6)
+ #define __NR_perf_event_open			327
+ __SYSCALL(327, sys_perf_event_open, 5)
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index d05f8fe..17b1ef3 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -349,8 +349,8 @@ static void iss_net_timer(unsigned long priv)
+ {
+ 	struct iss_net_private *lp = (struct iss_net_private *)priv;
  
- static int lcd_height = NOT_SET;
- module_param(lcd_height, int, 0000);
--- 
-2.3.6
-
-
-From da01c0cfb196bef048fcb16727d646138d257ce3 Mon Sep 17 00:00:00 2001
-From: Alistair Strachan <alistair.strachan@imgtec.com>
-Date: Tue, 24 Mar 2015 14:51:31 -0700
-Subject: [PATCH 045/219] staging: android: sync: Fix memory corruption in
- sync_timeline_signal().
-Cc: mpagano@gentoo.org
-
-commit 8e43c9c75faf2902955bd2ecd7a50a8cc41cb00a upstream.
-
-The android_fence_release() function checks for active sync points
-by calling list_empty() on the list head embedded on the sync
-point. However, it is only valid to use list_empty() on nodes that
-have been initialized with INIT_LIST_HEAD() or list_del_init().
-
-Because the list entry has likely been removed from the active list
-by sync_timeline_signal(), there is a good chance that this
-WARN_ON_ONCE() will be hit due to dangling pointers pointing at
-freed memory (even though the sync drivers did nothing wrong)
-and memory corruption will ensue as the list entry is removed for
-a second time, corrupting the active list.
-
-This problem can be reproduced quite easily with CONFIG_DEBUG_LIST=y
-and fences with more than one sync point.
-
-Signed-off-by: Alistair Strachan <alistair.strachan@imgtec.com>
-Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Cc: Colin Cross <ccross@google.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/staging/android/sync.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
-index 7bdb62b..f83e00c 100644
---- a/drivers/staging/android/sync.c
-+++ b/drivers/staging/android/sync.c
-@@ -114,7 +114,7 @@ void sync_timeline_signal(struct sync_timeline *obj)
- 	list_for_each_entry_safe(pt, next, &obj->active_list_head,
- 				 active_list) {
- 		if (fence_is_signaled_locked(&pt->base))
--			list_del(&pt->active_list);
-+			list_del_init(&pt->active_list);
- 	}
+-	spin_lock(&lp->lock);
+ 	iss_net_poll();
++	spin_lock(&lp->lock);
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 	spin_unlock(&lp->lock);
+ }
+@@ -361,7 +361,7 @@ static int iss_net_open(struct net_device *dev)
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	int err;
  
- 	spin_unlock_irqrestore(&obj->child_list_lock, flags);
--- 
-2.3.6
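The one-liner above matters because list_empty() on a node is only meaningful after list_del_init(), which re-points the node at itself; plain list_del() leaves the node's own links stale. A minimal demonstration with a hand-rolled list (same layout as the kernel's struct list_head, but standalone):

#include <stdio.h>

struct node { struct node *next, *prev; };

static void del(struct node *e)		/* like list_del() */
{
	e->next->prev = e->prev;
	e->prev->next = e->next;	/* e's own links are left dangling */
}

static void del_init(struct node *e)	/* like list_del_init() */
{
	del(e);
	e->next = e->prev = e;		/* self-linked == "not on a list" */
}

static int empty(const struct node *e) { return e->next == e; }

int main(void)
{
	struct node head = { &head, &head };
	struct node n = { &head, &head };

	head.next = head.prev = &n;
	del(&n);
	printf("after del:      empty(&n) = %d (stale links!)\n", empty(&n));

	head.next = head.prev = &n;
	n.next = n.prev = &head;
	del_init(&n);
	printf("after del_init: empty(&n) = %d\n", empty(&n));
	return 0;
}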
-
-
-From c373916a7434a49607ece05dbf0f60c697ad7291 Mon Sep 17 00:00:00 2001
-From: Malcolm Priestley <tvboxspy@gmail.com>
-Date: Wed, 1 Apr 2015 22:32:52 +0100
-Subject: [PATCH 046/219] staging: vt6655: use ieee80211_tx_info to select
- packet type.
-Cc: mpagano@gentoo.org
-
-commit a6388e68321a1e0a0f408379c2a36396807745b3 upstream.
-
-Information for the packet type is in ieee80211_tx_info:
-
-band IEEE80211_BAND_5GHZ selects PK_TYPE_11A.
-
-IEEE80211_TX_RC_USE_CTS_PROTECT via the tx_rate flags selects PK_TYPE_11GB.
-
-This ensures that the packet is always the right type.
-
-Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/staging/vt6655/rxtx.c | 14 +++++++++++---
- 1 file changed, 11 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/staging/vt6655/rxtx.c b/drivers/staging/vt6655/rxtx.c
-index 07ce3fd..fdf5c56 100644
---- a/drivers/staging/vt6655/rxtx.c
-+++ b/drivers/staging/vt6655/rxtx.c
-@@ -1308,10 +1308,18 @@ int vnt_generate_fifo_header(struct vnt_private *priv, u32 dma_idx,
- 			    priv->hw->conf.chandef.chan->hw_value);
- 	}
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
  
--	if (current_rate > RATE_11M)
--		pkt_type = (u8)priv->byPacketType;
--	else
-+	if (current_rate > RATE_11M) {
-+		if (info->band == IEEE80211_BAND_5GHZ) {
-+			pkt_type = PK_TYPE_11A;
-+		} else {
-+			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
-+				pkt_type = PK_TYPE_11GB;
-+			else
-+				pkt_type = PK_TYPE_11GA;
-+		}
-+	} else {
- 		pkt_type = PK_TYPE_11B;
-+	}
+ 	err = lp->tp.open(lp);
+ 	if (err < 0)
+@@ -376,9 +376,11 @@ static int iss_net_open(struct net_device *dev)
+ 	while ((err = iss_net_rx(dev)) > 0)
+ 		;
  
- 	/*Set fifo controls */
- 	if (pkt_type == PK_TYPE_11A)
--- 
-2.3.6
-
-
-From a89d16cbd3a2838b54e404d7f8dd0af60667fa21 Mon Sep 17 00:00:00 2001
-From: NeilBrown <neilb@suse.de>
-Date: Fri, 10 Apr 2015 13:19:04 +1000
-Subject: [PATCH 047/219] md/raid0: fix bug with chunksize not a power of 2.
-Cc: mpagano@gentoo.org
-
-commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd upstream.
-
-Since commit 20d0189b1012a37d2533a87fb451f7852f2418d1
-in v3.14-rc1 RAID0 has performed incorrect calculations
-when the chunksize is not a power of 2.
-
-This happens because "sector_div()" modifies its first argument, but
-this wasn't taken into account in the patch.
-
-So restore that first arg before re-using the variable.
-
-Reported-by: Joe Landman <joe.landman@gmail.com>
-Reported-by: Dave Chinner <david@fromorbit.com>
-Fixes: 20d0189b1012a37d2533a87fb451f7852f2418d1
-Signed-off-by: NeilBrown <neilb@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/md/raid0.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
-index 3ed9f42..3b5d7f7 100644
---- a/drivers/md/raid0.c
-+++ b/drivers/md/raid0.c
-@@ -313,7 +313,7 @@ static struct strip_zone *find_zone(struct r0conf *conf,
+-	spin_lock(&opened_lock);
++	spin_unlock_bh(&lp->lock);
++	spin_lock_bh(&opened_lock);
+ 	list_add(&lp->opened_list, &opened);
+-	spin_unlock(&opened_lock);
++	spin_unlock_bh(&opened_lock);
++	spin_lock_bh(&lp->lock);
  
- /*
-  * remaps the bio to the target device. we separate two flows.
-- * power 2 flow and a general flow for the sake of perfromance
-+ * power 2 flow and a general flow for the sake of performance
- */
- static struct md_rdev *map_sector(struct mddev *mddev, struct strip_zone *zone,
- 				sector_t sector, sector_t *sector_offset)
-@@ -524,6 +524,7 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
- 			split = bio;
- 		}
+ 	init_timer(&lp->timer);
+ 	lp->timer_val = ISS_NET_TIMER_VALUE;
+@@ -387,7 +389,7 @@ static int iss_net_open(struct net_device *dev)
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
  
-+		sector = bio->bi_iter.bi_sector;
- 		zone = find_zone(mddev->private, &sector);
- 		tmp_dev = map_sector(mddev, zone, sector, &sector);
- 		split->bi_bdev = tmp_dev->bdev;
--- 
-2.3.6
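The raid0 fix works because the kernel's sector_div() divides its first argument in place (returning the remainder), so the original LBA must be re-fetched before the second lookup. A user-space sketch of the hazard (sector_div here is a simplified stand-in, not the kernel macro):

#include <stdio.h>

typedef unsigned long long sector_t;

/* in-place divide, returns the remainder -- destroys *sector, like the kernel's */
static unsigned int sector_div(sector_t *sector, unsigned int div)
{
	unsigned int rem = (unsigned int)(*sector % div);

	*sector /= div;
	return rem;
}

int main(void)
{
	sector_t bi_sector = 1000003;	/* start LBA; 24 is not a power of 2 */
	sector_t sector = bi_sector;

	sector_div(&sector, 24);	/* zone lookup: 'sector' is now a quotient */
	printf("clobbered: %llu\n", sector);

	sector = bi_sector;		/* the fix: restore before the next lookup */
	printf("restored:  %llu\n", sector);
	return 0;
}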
-
-
-From a3ec48fa3f64ea293bfe691a02c17c0a7d2887e1 Mon Sep 17 00:00:00 2001
-From: Christoph Hellwig <hch@infradead.org>
-Date: Wed, 15 Apr 2015 09:44:37 -0700
-Subject: [PATCH 048/219] megaraid_sas: use raw_smp_processor_id()
-Cc: mpagano@gentoo.org
-
-commit 16b8528d20607925899b1df93bfd8fbab98d267c upstream.
-
-We only want to steer the I/O completion towards a queue, but don't
-actually access any per-CPU data, so the raw_ version is fine to use
-and avoids the warnings when using smp_processor_id().
-
-Signed-off-by: Christoph Hellwig <hch@lst.de>
-Reported-by: Andy Lutomirski <luto@kernel.org>
-Tested-by: Andy Lutomirski <luto@kernel.org>
-Acked-by: Sumit Saxena <sumit.saxena@avagotech.com>
-Signed-off-by: James Bottomley <JBottomley@Odin.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/scsi/megaraid/megaraid_sas_fusion.c | 9 ++++++---
- 1 file changed, 6 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
-index 675b5e7..5a0800d 100644
---- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
-+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
-@@ -1584,11 +1584,11 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
- 			fp_possible = io_info.fpOkForIo;
- 	}
+ out:
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return err;
+ }
  
--	/* Use smp_processor_id() for now until cmd->request->cpu is CPU
-+	/* Use raw_smp_processor_id() for now until cmd->request->cpu is CPU
- 	   id by default, not CPU group id, otherwise all MSI-X queues won't
- 	   be utilized */
- 	cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
--		smp_processor_id() % instance->msix_vectors : 0;
-+		raw_smp_processor_id() % instance->msix_vectors : 0;
+@@ -395,7 +397,7 @@ static int iss_net_close(struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	netif_stop_queue(dev);
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
  
- 	if (fp_possible) {
- 		megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
-@@ -1693,7 +1693,10 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
- 			<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
- 		cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
- 		cmd->request_desc->SCSIIO.MSIxIndex =
--			instance->msix_vectors ? smp_processor_id() % instance->msix_vectors : 0;
-+			instance->msix_vectors ?
-+				raw_smp_processor_id() %
-+					instance->msix_vectors :
-+				0;
- 		os_timeout_value = scmd->request->timeout / HZ;
+ 	spin_lock(&opened_lock);
+ 	list_del(&opened);
+@@ -405,18 +407,17 @@ static int iss_net_close(struct net_device *dev)
  
- 		if (instance->secure_jbod_support &&
--- 
-2.3.6
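raw_smp_processor_id() is safe in the hunk above because the CPU number is used only to spread completions across MSI-X vectors; a stale value steers to a slightly less ideal queue rather than corrupting per-CPU state. The same reasoning in a user-space sketch, with glibc's sched_getcpu() playing the raw_ role:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static unsigned int pick_msix_queue(unsigned int vectors)
{
	int cpu = sched_getcpu();	/* may be stale by the time we use it */

	if (vectors == 0 || cpu < 0)
		return 0;
	return (unsigned int)cpu % vectors;	/* any answer is still a valid queue */
}

int main(void)
{
	printf("completion queue = %u of 4\n", pick_msix_queue(4));
	return 0;
}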
-
-
-From e654ded279c44285d07a31fe6d6c6fb74a9b5465 Mon Sep 17 00:00:00 2001
-From: Sudeep Holla <sudeep.holla@arm.com>
-Date: Tue, 17 Mar 2015 17:28:46 +0000
-Subject: [PATCH 049/219] drivers/base: cacheinfo: validate device node for all
- the caches
-Cc: mpagano@gentoo.org
-
-commit 8a7d95f95c95f396decbd4cda6d4903fc4664946 upstream.
-
-On architectures that depend on DT for obtaining the cache hierarchy, we
-need to validate the device node for all the cache indices; failing to do
-so might result in wrong information being exposed to userspace.
-
-This is quite possible on initial/incomplete versions of the device
-trees. In such cases, it's better to bail out if all the required device
-nodes are not present.
-
-This patch adds checks for the validation of device node for all the
-caches and doesn't initialise the cacheinfo if there's any error.
-
-Reported-by: Mark Rutland <mark.rutland@arm.com>
-Acked-by: Mark Rutland <mark.rutland@arm.com>
-Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/base/cacheinfo.c | 13 +++++++++++--
- 1 file changed, 11 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
-index 6e64563..9c2ba1c 100644
---- a/drivers/base/cacheinfo.c
-+++ b/drivers/base/cacheinfo.c
-@@ -62,15 +62,21 @@ static int cache_setup_of_node(unsigned int cpu)
- 		return -ENOENT;
- 	}
+ 	lp->tp.close(lp);
  
--	while (np && index < cache_leaves(cpu)) {
-+	while (index < cache_leaves(cpu)) {
- 		this_leaf = this_cpu_ci->info_list + index;
- 		if (this_leaf->level != 1)
- 			np = of_find_next_cache_node(np);
- 		else
- 			np = of_node_get(np);/* cpu node itself */
-+		if (!np)
-+			break;
- 		this_leaf->of_node = np;
- 		index++;
- 	}
-+
-+	if (index != cache_leaves(cpu)) /* not all OF nodes populated */
-+		return -ENOENT;
-+
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
  	return 0;
  }
  
-@@ -189,8 +195,11 @@ static int detect_cache_attributes(unsigned int cpu)
- 	 * will be set up here only if they are not populated already
- 	 */
- 	ret = cache_shared_cpu_map_setup(cpu);
--	if (ret)
-+	if (ret) {
-+		pr_warn("Unable to detect cache hierarcy from DT for CPU %d\n",
-+			cpu);
- 		goto free_ci;
-+	}
- 	return 0;
- 
- free_ci:
--- 
-2.3.6
-
-
-From 766f84104c3a294da5c4f1660589b3d167c5b1c6 Mon Sep 17 00:00:00 2001
-From: Oliver Neukum <oneukum@suse.de>
-Date: Fri, 20 Mar 2015 14:29:34 +0100
-Subject: [PATCH 050/219] cdc-wdm: fix endianness bug in debug statements
-Cc: mpagano@gentoo.org
-
-commit 323ece54e0761198946ecd0c2091f1d2bfdfcb64 upstream.
-
-Values directly from descriptors given in debug statements
-must be converted to native endianness.
-
-Signed-off-by: Oliver Neukum <oneukum@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/class/cdc-wdm.c | 12 +++++++-----
- 1 file changed, 7 insertions(+), 5 deletions(-)
-
-diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
-index a051a7a..a81f9dd 100644
---- a/drivers/usb/class/cdc-wdm.c
-+++ b/drivers/usb/class/cdc-wdm.c
-@@ -245,7 +245,7 @@ static void wdm_int_callback(struct urb *urb)
- 	case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
- 		dev_dbg(&desc->intf->dev,
- 			"NOTIFY_RESPONSE_AVAILABLE received: index %d len %d",
--			dr->wIndex, dr->wLength);
-+			le16_to_cpu(dr->wIndex), le16_to_cpu(dr->wLength));
- 		break;
- 
- 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
-@@ -262,7 +262,9 @@ static void wdm_int_callback(struct urb *urb)
- 		clear_bit(WDM_POLL_RUNNING, &desc->flags);
- 		dev_err(&desc->intf->dev,
- 			"unknown notification %d received: index %d len %d\n",
--			dr->bNotificationType, dr->wIndex, dr->wLength);
-+			dr->bNotificationType,
-+			le16_to_cpu(dr->wIndex),
-+			le16_to_cpu(dr->wLength));
- 		goto exit;
- 	}
+ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+-	unsigned long flags;
+ 	int len;
  
-@@ -408,7 +410,7 @@ static ssize_t wdm_write
- 			     USB_RECIP_INTERFACE);
- 	req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
- 	req->wValue = 0;
--	req->wIndex = desc->inum;
-+	req->wIndex = desc->inum; /* already converted */
- 	req->wLength = cpu_to_le16(count);
- 	set_bit(WDM_IN_USE, &desc->flags);
- 	desc->outbuf = buf;
-@@ -422,7 +424,7 @@ static ssize_t wdm_write
- 		rv = usb_translate_errors(rv);
- 	} else {
- 		dev_dbg(&desc->intf->dev, "Tx URB has been submitted index=%d",
--			req->wIndex);
-+			le16_to_cpu(req->wIndex));
- 	}
- out:
- 	usb_autopm_put_interface(desc->intf);
-@@ -820,7 +822,7 @@ static int wdm_create(struct usb_interface *intf, struct usb_endpoint_descriptor
- 	desc->irq->bRequestType = (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
- 	desc->irq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
- 	desc->irq->wValue = 0;
--	desc->irq->wIndex = desc->inum;
-+	desc->irq->wIndex = desc->inum; /* already converted */
- 	desc->irq->wLength = cpu_to_le16(desc->wMaxCommand);
+ 	netif_stop_queue(dev);
+-	spin_lock_irqsave(&lp->lock, flags);
++	spin_lock_bh(&lp->lock);
  
- 	usb_fill_control_urb(
--- 
-2.3.6
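The rule the cdc-wdm fix enforces: multi-byte descriptor fields are little-endian on the wire, so they must pass through le16_to_cpu() before being printed or compared. A portable illustration of what that conversion does (a sketch; the kernel helper itself compiles to a no-op on little-endian hosts):

#include <stdint.h>
#include <stdio.h>

/* assemble a wire-order (little-endian) u16 regardless of host endianness */
static uint16_t le16_to_cpu_sketch(const uint8_t wire[2])
{
	return (uint16_t)(wire[0] | ((uint16_t)wire[1] << 8));
}

int main(void)
{
	const uint8_t wIndex[2] = { 0x02, 0x01 };	/* 0x0102 on the wire */

	printf("wIndex = 0x%04x\n", le16_to_cpu_sketch(wIndex));
	return 0;
}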
-
-
-From 7df0c5a403d2e9a1698a6ebdcf6e37a0639aad85 Mon Sep 17 00:00:00 2001
-From: Geert Uytterhoeven <geert+renesas@glider.be>
-Date: Wed, 18 Feb 2015 17:34:59 +0100
-Subject: [PATCH 051/219] mmc: tmio: Remove bogus un-initialization in
- tmio_mmc_host_free()
-Cc: mpagano@gentoo.org
-
-commit 13a6a2ed1f5e77ae47c2b1a8e3bf22b2fa2d56ba upstream.
-
-If CONFIG_DEBUG_SLAB=y:
-
-    sh_mobile_sdhi ee100000.sd: Got CD GPIO
-    sh_mobile_sdhi ee100000.sd: Got WP GPIO
-    platform ee100000.sd: Driver sh_mobile_sdhi requests probe deferral
-    ...
-    Slab corruption (Not tainted): kmalloc-1024 start=ed8b3c00, len=1024
-    2d0: 00 00 00 00 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  ....kkkkkkkkkkkk
-    Prev obj: start=ed8b3800, len=1024
-    000: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    010: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-
-Struct tmio_mmc_host is embedded inside struct mmc_host, and thus is
-freed by the call to mmc_free_host(). Hence it must not be written to
-afterwards, as that will corrupt freed (and perhaps already reused)
-memory.
-
-Fixes: 94b110aff8679b14 ("mmc: tmio: add tmio_mmc_host_alloc/free()")
-Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
-Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mmc/host/tmio_mmc_pio.c | 2 --
- 1 file changed, 2 deletions(-)
-
-diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
-index a31c357..dba7e1c 100644
---- a/drivers/mmc/host/tmio_mmc_pio.c
-+++ b/drivers/mmc/host/tmio_mmc_pio.c
-@@ -1073,8 +1073,6 @@ EXPORT_SYMBOL(tmio_mmc_host_alloc);
- void tmio_mmc_host_free(struct tmio_mmc_host *host)
- {
- 	mmc_free_host(host->mmc);
--
--	host->mmc = NULL;
- }
- EXPORT_SYMBOL(tmio_mmc_host_free);
+ 	len = lp->tp.write(lp, &skb);
  
--- 
-2.3.6
-
-
-From 85895968a9444e810f96cc951c6b5fc7dd183296 Mon Sep 17 00:00:00 2001
-From: Chen-Yu Tsai <wens@csie.org>
-Date: Tue, 3 Mar 2015 09:44:40 +0800
-Subject: [PATCH 052/219] mmc: sunxi: Use devm_reset_control_get_optional() for
- reset control
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 9e71c589e44ddf2b86f361c81e360c6b0d0354b1 upstream.
-
-The reset control for the sunxi mmc controller is optional. Some
-newer platforms (sun6i, sun8i, sun9i) have it, while older ones
-(sun4i, sun5i, sun7i) don't.
-
-Use the properly stubbed _optional version so the driver does not
-fail to compile when RESET_CONTROLLER=n.
-
-This patch also adds a check for deferred probing on the reset
-control.
-
-Signed-off-by: Chen-Yu Tsai <wens@csie.org>
-Acked-by: David Lanzendörfer <david.lanzendoerfer@o2s.ch>
-Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mmc/host/sunxi-mmc.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
-index e8a4218..459ed1b 100644
---- a/drivers/mmc/host/sunxi-mmc.c
-+++ b/drivers/mmc/host/sunxi-mmc.c
-@@ -930,7 +930,9 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
- 		return PTR_ERR(host->clk_sample);
+@@ -438,7 +439,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		pr_err("%s: %s failed(%d)\n", dev->name, __func__, len);
  	}
  
--	host->reset = devm_reset_control_get(&pdev->dev, "ahb");
-+	host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
-+	if (PTR_ERR(host->reset) == -EPROBE_DEFER)
-+		return PTR_ERR(host->reset);
- 
- 	ret = clk_prepare_enable(host->clk_ahb);
- 	if (ret) {
--- 
-2.3.6
-
-
-From 662552a3bf88447e8985bdad78fc7e548487416b Mon Sep 17 00:00:00 2001
-From: Lucas Stach <l.stach@pengutronix.de>
-Date: Wed, 1 Apr 2015 10:46:15 +0200
-Subject: [PATCH 053/219] spi: imx: read back the RX/TX watermark levels
- earlier
-Cc: mpagano@gentoo.org
-
-commit f511ab09dfb0fe7b2335eccac51ff9f001a32e4a upstream.
-
-They are used to decide if the controller can do DMA on a buffer
-of a specific length and thus are needed before any transfer is attempted.
-
-This fixes a memory leak where the SPI core uses the driver's can_dma()
-callback to determine if a buffer needs to be mapped. As the watermark
-levels aren't correct at that point, the driver falsely claims to be able
-to DMA the buffer when in fact it can't.
-After the transfer has been done the core uses the same callback to
-determine if it needs to unmap the buffers. As the driver now correctly
-claims to not being able to DMA the buffer the core doesn't attempt to
-unmap the buffer which leaves the SGT leaking.
-
-Fixes: f62caccd12c17e4 (spi: spi-imx: add DMA support)
-Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/spi/spi-imx.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
-index 6fea4af..aea3a67 100644
---- a/drivers/spi/spi-imx.c
-+++ b/drivers/spi/spi-imx.c
-@@ -370,8 +370,6 @@ static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx,
- 	if (spi_imx->dma_is_inited) {
- 		dma = readl(spi_imx->base + MX51_ECSPI_DMA);
+-	spin_unlock_irqrestore(&lp->lock, flags);
++	spin_unlock_bh(&lp->lock);
  
--		spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
--		spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
- 		spi_imx->rxt_wml = spi_imx_get_fifosize(spi_imx) / 2;
- 		rx_wml_cfg = spi_imx->rx_wml << MX51_ECSPI_DMA_RX_WML_OFFSET;
- 		tx_wml_cfg = spi_imx->tx_wml << MX51_ECSPI_DMA_TX_WML_OFFSET;
-@@ -868,6 +866,8 @@ static int spi_imx_sdma_init(struct device *dev, struct spi_imx_data *spi_imx,
- 	master->max_dma_len = MAX_SDMA_BD_BYTES;
- 	spi_imx->bitbang.master->flags = SPI_MASTER_MUST_RX |
- 					 SPI_MASTER_MUST_TX;
-+	spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
-+	spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
- 	spi_imx->dma_is_inited = 1;
+ 	dev_kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+@@ -466,9 +467,9 @@ static int iss_net_set_mac(struct net_device *dev, void *addr)
  
+ 	if (!is_valid_ether_addr(hwaddr->sa_data))
+ 		return -EADDRNOTAVAIL;
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 	memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
  	return 0;
--- 
-2.3.6
-
-
-From 721669bff3eaa852476783845293dca50431ce5b Mon Sep 17 00:00:00 2001
-From: Ian Abbott <abbotti@mev.co.uk>
-Date: Mon, 23 Mar 2015 17:50:27 +0000
-Subject: [PATCH 054/219] spi: spidev: fix possible arithmetic overflow for
- multi-transfer message
-Cc: mpagano@gentoo.org
-
-commit f20fbaad7620af2df36a1f9d1c9ecf48ead5b747 upstream.
-
-`spidev_message()` sums the lengths of the individual SPI transfers to
-determine the overall SPI message length.  It restricts the total
-length, returning an error if too long, but it does not check for
-arithmetic overflow.  For example, if the SPI message consisted of two
-transfers and the first has a length of 10 and the second has a length
-of (__u32)(-1), the total length would be seen as 9, even though the
-second transfer is actually very long.  If the second transfer specifies
-a null `rx_buf` and a non-null `tx_buf`, the `copy_from_user()` could
-overrun the spidev's pre-allocated tx buffer before it reaches an
-invalid user memory address.  Fix it by checking that neither the total
-nor the individual transfer lengths exceed the maximum allowed value.
-
-Thanks to Dan Carpenter for reporting the potential integer overflow.
-
-Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/spi/spidev.c | 5 ++++-
- 1 file changed, 4 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
-index 4eb7a98..7bf5186 100644
---- a/drivers/spi/spidev.c
-+++ b/drivers/spi/spidev.c
-@@ -245,7 +245,10 @@ static int spidev_message(struct spidev_data *spidev,
- 		k_tmp->len = u_tmp->len;
- 
- 		total += k_tmp->len;
--		if (total > bufsiz) {
-+		/* Check total length of transfers.  Also check each
-+		 * transfer length to avoid arithmetic overflow.
-+		 */
-+		if (total > bufsiz || k_tmp->len > bufsiz) {
- 			status = -EMSGSIZE;
- 			goto done;
- 		}
--- 
-2.3.6
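The spidev check works because the per-transfer test catches any single huge len before the running sum can wrap: with u32 lengths, 10 + (u32)-1 overflows to 9 and would sneak past a total-only check. A compact demonstration (illustrative; bufsiz is picked arbitrarily):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint32_t bufsiz = 4096;
	const uint32_t lens[2] = { 10, UINT32_MAX };	/* second transfer is huge */
	uint32_t total = 0;
	int rejected = 0;

	for (int i = 0; i < 2; i++) {
		total += lens[i];			/* wraps: 10 + 0xffffffff == 9 */
		if (total > bufsiz || lens[i] > bufsiz)	/* per-item check saves us */
			rejected = 1;
	}
	printf("total=%u (wrapped), rejected=%d\n", total, rejected);
	return 0;
}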
-
-
-From 855715fa0e283d4ff8280c79ac2c531116bc3290 Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Thu, 12 Mar 2015 08:43:59 +0100
-Subject: [PATCH 055/219] compal-laptop: Fix leaking hwmon device
-Cc: mpagano@gentoo.org
-
-commit ad774702f1705c04e5fa492b793d8d477a504fa6 upstream.
-
-The commit c2be45f09bb0 ("compal-laptop: Use
-devm_hwmon_device_register_with_groups") wanted to change the
-registration of the hwmon device to the resource-managed version. It
-mostly did, except for the main thing: it forgot to use the devm-like
-function, so the hwmon device leaked after device removal or probe
-failure.
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: c2be45f09bb0 ("compal-laptop: Use devm_hwmon_device_register_with_groups")
-Acked-by: Guenter Roeck <linux@roeck-us.net>
-Acked-by: Darren Hart <dvhart@linux.intel.com>
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/platform/x86/compal-laptop.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
-index 15c0fab..eb9885e 100644
---- a/drivers/platform/x86/compal-laptop.c
-+++ b/drivers/platform/x86/compal-laptop.c
-@@ -1026,9 +1026,9 @@ static int compal_probe(struct platform_device *pdev)
- 	if (err)
- 		return err;
+ }
  
--	hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
--						      "compal", data,
--						      compal_hwmon_groups);
-+	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
-+							   "compal", data,
-+							   compal_hwmon_groups);
- 	if (IS_ERR(hwmon_dev)) {
- 		err = PTR_ERR(hwmon_dev);
- 		goto remove;
--- 
-2.3.6
-
-
-From 7d91365ba6ce7256b1afb1197aecf3dd0dca6e65 Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Thu, 12 Mar 2015 08:44:00 +0100
-Subject: [PATCH 056/219] compal-laptop: Check return value of
- power_supply_register
-Cc: mpagano@gentoo.org
-
-commit 1915a718b1872edffcb13e5436a9f7302d3d36f0 upstream.
-
-The return value of power_supply_register() call was not checked and
-even on error probe() function returned 0. If registering failed then
-during unbind the driver tried to unregister power supply which was not
-actually registered.
-
-This could lead to memory corruption because power_supply_unregister()
-unconditionally cleans up given power supply.
-
-Fix this by checking return status of power_supply_register() call. In
-case of failure, clean up sysfs entries and fail the probe.
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: 9be0fcb5ed46 ("compal-laptop: add JHL90, battery & hwmon interface")
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/platform/x86/compal-laptop.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
-index eb9885e..bceb30b 100644
---- a/drivers/platform/x86/compal-laptop.c
-+++ b/drivers/platform/x86/compal-laptop.c
-@@ -1036,7 +1036,9 @@ static int compal_probe(struct platform_device *pdev)
+@@ -520,11 +521,11 @@ static int iss_net_configure(int index, char *init)
+ 	*lp = (struct iss_net_private) {
+ 		.device_list		= LIST_HEAD_INIT(lp->device_list),
+ 		.opened_list		= LIST_HEAD_INIT(lp->opened_list),
+-		.lock			= __SPIN_LOCK_UNLOCKED(lp.lock),
+ 		.dev			= dev,
+ 		.index			= index,
+-		};
++	};
  
- 	/* Power supply */
- 	initialize_power_supply_data(data);
--	power_supply_register(&compal_device->dev, &data->psy);
-+	err = power_supply_register(&compal_device->dev, &data->psy);
-+	if (err < 0)
-+		goto remove;
++	spin_lock_init(&lp->lock);
+ 	/*
+ 	 * If this name ends up conflicting with an existing registered
+ 	 * netdevice, that is OK, register_netdev{,ice}() will notice this
+diff --git a/arch/xtensa/platforms/xtfpga/Makefile b/arch/xtensa/platforms/xtfpga/Makefile
+index b9ae206..7839d38 100644
+--- a/arch/xtensa/platforms/xtfpga/Makefile
++++ b/arch/xtensa/platforms/xtfpga/Makefile
+@@ -6,4 +6,5 @@
+ #
+ # Note 2! The CFLAGS definitions are in the main makefile...
  
- 	platform_set_drvdata(pdev, data);
+-obj-y			= setup.o lcd.o
++obj-y			+= setup.o
++obj-$(CONFIG_XTFPGA_LCD) += lcd.o
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+index 6edd20b..4e0af26 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+@@ -40,9 +40,6 @@
  
--- 
-2.3.6
-
-
-From 676ee802b67bf6ea0287ab5b25ae3f551cf27f74 Mon Sep 17 00:00:00 2001
-From: Steven Rostedt <rostedt@goodmis.org>
-Date: Tue, 17 Mar 2015 10:40:38 -0400
-Subject: [PATCH 057/219] ring-buffer: Replace this_cpu_*() with __this_cpu_*()
-Cc: mpagano@gentoo.org
-
-commit 80a9b64e2c156b6523e7a01f2ba6e5d86e722814 upstream.
-
-It has come to my attention that this_cpu_read/write are horrible on
-architectures other than x86. Worse yet, they actually disable
-preemption or interrupts! This caused some unexpected tracing results
-on ARM.
-
-   101.356868: preempt_count_add <-ring_buffer_lock_reserve
-   101.356870: preempt_count_sub <-ring_buffer_lock_reserve
-
-The ring_buffer_lock_reserve has recursion protection that requires
-accessing a per-cpu variable. But since preempt_disable() is traced, it
-too got traced while accessing the variable that is supposed to prevent
-recursion like this.
-
-The generic version of this_cpu_read() and write() are:
-
- #define this_cpu_generic_read(pcp)					\
- ({	typeof(pcp) ret__;						\
-	preempt_disable();						\
-	ret__ = *this_cpu_ptr(&(pcp));					\
-	preempt_enable();						\
-	ret__;								\
- })
-
- #define this_cpu_generic_to_op(pcp, val, op)				\
- do {									\
-	unsigned long flags;						\
-	raw_local_irq_save(flags);					\
-	*__this_cpu_ptr(&(pcp)) op val;					\
-	raw_local_irq_restore(flags);					\
- } while (0)
-
-Which is unacceptable for locations that know they are within preempt
-disabled or interrupt disabled locations.
-
-Paul McKenney stated that __this_cpu_() versions produce much better code on
-other architectures than this_cpu_() does, if we know that the call is done in
-a preempt disabled location.
-
-I also changed the recursive_unlock() to use two local variables instead
-of accessing the per_cpu variable twice.
-
-Link: http://lkml.kernel.org/r/20150317114411.GE3589@linux.vnet.ibm.com
-Link: http://lkml.kernel.org/r/20150317104038.312e73d1@gandalf.local.home
-
-Acked-by: Christoph Lameter <cl@linux.com>
-Reported-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
-Tested-by: Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>
-Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- kernel/trace/ring_buffer.c | 11 +++++------
- 1 file changed, 5 insertions(+), 6 deletions(-)
-
-diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
-index 5040d44..922048a 100644
---- a/kernel/trace/ring_buffer.c
-+++ b/kernel/trace/ring_buffer.c
-@@ -2679,7 +2679,7 @@ static DEFINE_PER_CPU(unsigned int, current_context);
+ /* UART */
+ #define DUART16552_PADDR	(XCHAL_KIO_PADDR + 0x0D050020)
+-/* LCD instruction and data addresses. */
+-#define LCD_INSTR_ADDR		((char *)IOADDR(0x0D040000))
+-#define LCD_DATA_ADDR		((char *)IOADDR(0x0D040004))
  
- static __always_inline int trace_recursive_lock(void)
- {
--	unsigned int val = this_cpu_read(current_context);
-+	unsigned int val = __this_cpu_read(current_context);
- 	int bit;
+ /* Misc. */
+ #define XTFPGA_FPGAREGS_VADDR	IOADDR(0x0D020000)
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+index 0e43564..4c8541e 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+@@ -11,10 +11,25 @@
+ #ifndef __XTENSA_XTAVNET_LCD_H
+ #define __XTENSA_XTAVNET_LCD_H
  
- 	if (in_interrupt()) {
-@@ -2696,18 +2696,17 @@ static __always_inline int trace_recursive_lock(void)
- 		return 1;
++#ifdef CONFIG_XTFPGA_LCD
+ /* Display string STR at position POS on the LCD. */
+ void lcd_disp_at_pos(char *str, unsigned char pos);
  
- 	val |= (1 << bit);
--	this_cpu_write(current_context, val);
-+	__this_cpu_write(current_context, val);
+ /* Shift the contents of the LCD display left or right. */
+ void lcd_shiftleft(void);
+ void lcd_shiftright(void);
++#else
++static inline void lcd_disp_at_pos(char *str, unsigned char pos)
++{
++}
++
++static inline void lcd_shiftleft(void)
++{
++}
++
++static inline void lcd_shiftright(void)
++{
++}
++#endif
++
+ #endif
+diff --git a/arch/xtensa/platforms/xtfpga/lcd.c b/arch/xtensa/platforms/xtfpga/lcd.c
+index 2872301..4dc0c1b 100644
+--- a/arch/xtensa/platforms/xtfpga/lcd.c
++++ b/arch/xtensa/platforms/xtfpga/lcd.c
+@@ -1,50 +1,63 @@
+ /*
+- * Driver for the LCD display on the Tensilica LX60 Board.
++ * Driver for the LCD display on the Tensilica XTFPGA board family.
++ * http://www.mytechcorp.com/cfdata/productFile/File1/MOC-16216B-B-A0A04.pdf
+  *
+  * This file is subject to the terms and conditions of the GNU General Public
+  * License.  See the file "COPYING" in the main directory of this archive
+  * for more details.
+  *
+  * Copyright (C) 2001, 2006 Tensilica Inc.
++ * Copyright (C) 2015 Cadence Design Systems Inc.
+  */
  
- 	return 0;
- }
+-/*
+- *
+- * FIXME: this code is from the examples from the LX60 user guide.
+- *
+- * The lcd_pause function does busy waiting, which is probably not
+- * great. Maybe the code could be changed to use kernel timers, or
+- * change the hardware to not need to wait.
+- */
+-
++#include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
  
- static __always_inline void trace_recursive_unlock(void)
- {
--	unsigned int val = this_cpu_read(current_context);
-+	unsigned int val = __this_cpu_read(current_context);
+ #include <platform/hardware.h>
+ #include <platform/lcd.h>
+-#include <linux/delay.h>
  
--	val--;
--	val &= this_cpu_read(current_context);
--	this_cpu_write(current_context, val);
-+	val &= val & (val - 1);
-+	__this_cpu_write(current_context, val);
- }
+-#define LCD_PAUSE_ITERATIONS	4000
++/* LCD instruction and data addresses. */
++#define LCD_INSTR_ADDR		((char *)IOADDR(CONFIG_XTFPGA_LCD_BASE_ADDR))
++#define LCD_DATA_ADDR		(LCD_INSTR_ADDR + 4)
++
+ #define LCD_CLEAR		0x1
+ #define LCD_DISPLAY_ON		0xc
  
- #else
--- 
-2.3.6
-
-
-From 85020c092b437aaceec966678ec5fd9f7792b547 Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Fri, 20 Feb 2015 14:32:22 +0100
-Subject: [PATCH 058/219] power_supply: twl4030_madc: Check return value of
- power_supply_register
-Cc: mpagano@gentoo.org
-
-commit 68c3ed6fa7e0d69529ced772d650ab128916a81d upstream.
-
-The return value of the power_supply_register() call was not checked,
-and even on error the probe() function returned 0. If registering failed
-then during unbind the driver tried to unregister a power supply which
-was not actually registered.
-
-This could lead to memory corruption because power_supply_unregister()
-unconditionally cleans up the given power supply.
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: da0a00ebc239 ("power: Add twl4030_madc battery driver.")
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/power/twl4030_madc_battery.c | 7 +++++--
- 1 file changed, 5 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/power/twl4030_madc_battery.c b/drivers/power/twl4030_madc_battery.c
-index 7ef445a..cf90760 100644
---- a/drivers/power/twl4030_madc_battery.c
-+++ b/drivers/power/twl4030_madc_battery.c
-@@ -192,6 +192,7 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
- {
- 	struct twl4030_madc_battery *twl4030_madc_bat;
- 	struct twl4030_madc_bat_platform_data *pdata = pdev->dev.platform_data;
-+	int ret = 0;
+ /* 8bit and 2 lines display */
+ #define LCD_DISPLAY_MODE8BIT	0x38
++#define LCD_DISPLAY_MODE4BIT	0x28
+ #define LCD_DISPLAY_POS		0x80
+ #define LCD_SHIFT_LEFT		0x18
+ #define LCD_SHIFT_RIGHT		0x1c
  
- 	twl4030_madc_bat = kzalloc(sizeof(*twl4030_madc_bat), GFP_KERNEL);
- 	if (!twl4030_madc_bat)
-@@ -216,9 +217,11 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
++static void lcd_put_byte(u8 *addr, u8 data)
++{
++#ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*addr) = data;
++#else
++	ACCESS_ONCE(*addr) = data & 0xf0;
++	ACCESS_ONCE(*addr) = (data << 4) & 0xf0;
++#endif
++}
++
+ static int __init lcd_init(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	mdelay(5);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	udelay(200);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
++	udelay(50);
++#ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE4BIT;
++	udelay(50);
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_ON;
++#endif
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_ON);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_CLEAR;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_CLEAR);
+ 	mdelay(10);
+ 	lcd_disp_at_pos("XTENSA LINUX", 0);
+ 	return 0;
+@@ -52,10 +65,10 @@ static int __init lcd_init(void)
  
- 	twl4030_madc_bat->pdata = pdata;
- 	platform_set_drvdata(pdev, twl4030_madc_bat);
--	power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
-+	ret = power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
-+	if (ret < 0)
-+		kfree(twl4030_madc_bat);
+ void lcd_disp_at_pos(char *str, unsigned char pos)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_POS | pos;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_POS | pos);
+ 	udelay(100);
+ 	while (*str != 0) {
+-		*LCD_DATA_ADDR = *str;
++		lcd_put_byte(LCD_DATA_ADDR, *str);
+ 		udelay(200);
+ 		str++;
+ 	}
+@@ -63,13 +76,13 @@ void lcd_disp_at_pos(char *str, unsigned char pos)
  
--	return 0;
-+	return ret;
+ void lcd_shiftleft(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_LEFT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_LEFT);
+ 	udelay(50);
  }
  
- static int twl4030_madc_battery_remove(struct platform_device *pdev)
--- 
-2.3.6
-
-
-From e7b8d14c9be1ddb14796569a636807647e30724c Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Fri, 20 Feb 2015 14:32:25 +0100
-Subject: [PATCH 059/219] power_supply: lp8788-charger: Fix leaked power supply
- on probe fail
-Cc: mpagano@gentoo.org
-
-commit a7117f81e8391e035c49b3440792f7e6cea28173 upstream.
-
-The driver forgot to unregister the charger power supply if registering
-the battery supply failed in probe(). In that case the memory associated
-with the power supply leaked.
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: 98a276649358 ("power_supply: Add new lp8788 charger driver")
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/power/lp8788-charger.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/power/lp8788-charger.c b/drivers/power/lp8788-charger.c
-index 21fc233..176dab2 100644
---- a/drivers/power/lp8788-charger.c
-+++ b/drivers/power/lp8788-charger.c
-@@ -417,8 +417,10 @@ static int lp8788_psy_register(struct platform_device *pdev,
- 	pchg->battery.num_properties = ARRAY_SIZE(lp8788_battery_prop);
- 	pchg->battery.get_property = lp8788_battery_get_property;
+ void lcd_shiftright(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_RIGHT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_RIGHT);
+ 	udelay(50);
+ }
  
--	if (power_supply_register(&pdev->dev, &pchg->battery))
-+	if (power_supply_register(&pdev->dev, &pchg->battery)) {
-+		power_supply_unregister(&pchg->charger);
- 		return -EPERM;
-+	}
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index 5ed064e..ccf7932 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -92,6 +92,7 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
+ 		ACPI_SET_BIT(gpe_register_info->enable_for_run,
+ 			     (u8)register_bit);
+ 	}
++	gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
  
- 	return 0;
+ 	return_ACPI_STATUS(AE_OK);
  }
--- 
-2.3.6
-
-
-From a8cb866f5168eaec313528f7059b0025b859cccf Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Fri, 20 Feb 2015 14:32:23 +0100
-Subject: [PATCH 060/219] power_supply: ipaq_micro_battery: Fix leaking
- workqueue
-Cc: mpagano@gentoo.org
-
-commit f852ec461e24504690445e7d281cbe806df5ccef upstream.
-
-The driver allocates a singlethread workqueue in probe() but never
-destroys it during removal.
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: 00a588f9d27f ("power: add driver for battery reading on iPaq h3xxx")
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/power/ipaq_micro_battery.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
-index 9d69460..698cf16 100644
---- a/drivers/power/ipaq_micro_battery.c
-+++ b/drivers/power/ipaq_micro_battery.c
-@@ -251,6 +251,7 @@ static int micro_batt_remove(struct platform_device *pdev)
- 	power_supply_unregister(&micro_ac_power);
- 	power_supply_unregister(&micro_batt_power);
- 	cancel_delayed_work_sync(&mb->update);
-+	destroy_workqueue(mb->wq);
+@@ -123,7 +124,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
  
- 	return 0;
+ 	/* Enable the requested GPE */
+ 
+-	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE_SAVE);
++	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
  }
--- 
-2.3.6
-
-
-From 640e9bd83b3a3bc313eb0ade22effbab5c135a76 Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Fri, 20 Feb 2015 14:32:24 +0100
-Subject: [PATCH 061/219] power_supply: ipaq_micro_battery: Check return values
- in probe
-Cc: mpagano@gentoo.org
-
-commit a2c1d531854c4319610f1d83351213b47a633969 upstream.
-
-The return values of the create_singlethread_workqueue() and
-power_supply_register() calls were not checked, and even on error the
-probe() function returned 0.
-
-1. If allocation of the workqueue failed (returning NULL) then further
-   accesses could lead to a NULL pointer dereference, since
-   queue_delayed_work() expects the workqueue to be non-NULL.
-
-2. If registration of the power supply failed then during unbind the
-   driver tried to unregister a power supply which was not actually
-   registered. This could lead to memory corruption because
-   power_supply_unregister() unconditionally cleans up the given power
-   supply.
-
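The three power_supply fixes in this batch all converge on the same
probe() shape: check every registration and unwind completed steps in
reverse order on failure. A hedged sketch of that shape (the driver
name, struct, and labels are illustrative, not taken from any of the
drivers above; the embedded struct power_supply matches the 4.0-era API):

#include <linux/platform_device.h>
#include <linux/power_supply.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

struct example {			/* illustrative driver state */
	struct workqueue_struct *wq;
	struct power_supply batt;
	struct power_supply ac;
};

static int example_probe(struct platform_device *pdev)
{
	struct example *ex;
	int ret;

	ex = devm_kzalloc(&pdev->dev, sizeof(*ex), GFP_KERNEL);
	if (!ex)
		return -ENOMEM;

	ex->wq = create_singlethread_workqueue("example-wq");
	if (!ex->wq)		/* allocation can fail; never use it unchecked */
		return -ENOMEM;

	ret = power_supply_register(&pdev->dev, &ex->batt);
	if (ret < 0)
		goto batt_err;

	ret = power_supply_register(&pdev->dev, &ex->ac);
	if (ret < 0)
		goto ac_err;

	platform_set_drvdata(pdev, ex);
	return 0;

ac_err:		/* undo completed steps in reverse order of setup */
	power_supply_unregister(&ex->batt);
batt_err:
	destroy_workqueue(ex->wq);
	return ret;
}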
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Fixes: 00a588f9d27f ("power: add driver for battery reading on iPaq h3xxx")
-Signed-off-by: Sebastian Reichel <sre@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/power/ipaq_micro_battery.c | 21 +++++++++++++++++++--
- 1 file changed, 19 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
-index 698cf16..96b15e0 100644
---- a/drivers/power/ipaq_micro_battery.c
-+++ b/drivers/power/ipaq_micro_battery.c
-@@ -226,6 +226,7 @@ static struct power_supply micro_ac_power = {
- static int micro_batt_probe(struct platform_device *pdev)
- {
- 	struct micro_battery *mb;
-+	int ret;
  
- 	mb = devm_kzalloc(&pdev->dev, sizeof(*mb), GFP_KERNEL);
- 	if (!mb)
-@@ -233,14 +234,30 @@ static int micro_batt_probe(struct platform_device *pdev)
+@@ -202,7 +203,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
+ 		if (ACPI_SUCCESS(status)) {
+ 			status =
+ 			    acpi_hw_low_set_gpe(gpe_event_info,
+-						ACPI_GPE_DISABLE_SAVE);
++						ACPI_GPE_DISABLE);
+ 		}
  
- 	mb->micro = dev_get_drvdata(pdev->dev.parent);
- 	mb->wq = create_singlethread_workqueue("ipaq-battery-wq");
-+	if (!mb->wq)
-+		return -ENOMEM;
-+
- 	INIT_DELAYED_WORK(&mb->update, micro_battery_work);
- 	platform_set_drvdata(pdev, mb);
- 	queue_delayed_work(mb->wq, &mb->update, 1);
--	power_supply_register(&pdev->dev, &micro_batt_power);
--	power_supply_register(&pdev->dev, &micro_ac_power);
-+
-+	ret = power_supply_register(&pdev->dev, &micro_batt_power);
-+	if (ret < 0)
-+		goto batt_err;
-+
-+	ret = power_supply_register(&pdev->dev, &micro_ac_power);
-+	if (ret < 0)
-+		goto ac_err;
+ 		if (ACPI_FAILURE(status)) {
+diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
+index 84bc550..af6514e 100644
+--- a/drivers/acpi/acpica/hwgpe.c
++++ b/drivers/acpi/acpica/hwgpe.c
+@@ -89,6 +89,8 @@ u32 acpi_hw_get_gpe_register_bit(struct acpi_gpe_event_info *gpe_event_info)
+  * RETURN:	Status
+  *
+  * DESCRIPTION: Enable or disable a single GPE in the parent enable register.
++ *              The enable_mask field of the involved GPE register must be
++ *              updated by the caller if necessary.
+  *
+  ******************************************************************************/
  
- 	dev_info(&pdev->dev, "iPAQ micro battery driver\n");
- 	return 0;
-+
-+ac_err:
-+	power_supply_unregister(&micro_ac_power);
-+batt_err:
-+	cancel_delayed_work_sync(&mb->update);
-+	destroy_workqueue(mb->wq);
-+	return ret;
- }
+@@ -119,7 +121,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Set or clear just the bit that corresponds to this GPE */
  
- static int micro_batt_remove(struct platform_device *pdev)
--- 
-2.3.6
-
-
-From 4fc2e2c56db0c05c62444ed7bc8d285704155386 Mon Sep 17 00:00:00 2001
-From: Oliver Neukum <oneukum@suse.de>
-Date: Wed, 25 Mar 2015 15:13:36 +0100
-Subject: [PATCH 062/219] HID: add HP OEM mouse to quirk ALWAYS_POLL
-Cc: mpagano@gentoo.org
-
-commit 7a8e53c414c8183e8735e3b08d9a776200e6e665 upstream.
-
-This mouse needs QUIRK_ALWAYS_POLL.
-
-Signed-off-by: Oliver Neukum <oneukum@suse.de>
-Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/hid/hid-ids.h           | 3 +++
- drivers/hid/usbhid/hid-quirks.c | 1 +
- 2 files changed, 4 insertions(+)
-
-diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
-index 9c47867..7ace715 100644
---- a/drivers/hid/hid-ids.h
-+++ b/drivers/hid/hid-ids.h
-@@ -459,6 +459,9 @@
- #define USB_DEVICE_ID_UGCI_FLYING	0x0020
- #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 	register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
+-	switch (action & ~ACPI_GPE_SAVE_MASK) {
++	switch (action) {
+ 	case ACPI_GPE_CONDITIONAL_ENABLE:
  
-+#define USB_VENDOR_ID_HP		0x03f0
-+#define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
-+
- #define USB_VENDOR_ID_HUION		0x256c
- #define USB_DEVICE_ID_HUION_TABLET	0x006e
+ 		/* Only enable if the corresponding enable_mask bit is set */
+@@ -149,9 +151,6 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Write the updated enable mask */
  
-diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
-index a821277..fe6c60d 100644
---- a/drivers/hid/usbhid/hid-quirks.c
-+++ b/drivers/hid/usbhid/hid-quirks.c
-@@ -78,6 +78,7 @@ static const struct hid_blacklist {
- 	{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
- 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
- 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
-+	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
- 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
- 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
- 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
--- 
-2.3.6
-
-
-From 66997b1d6c47e793556da41877262f5ac92e8d4d Mon Sep 17 00:00:00 2001
-From: Oliver Neukum <oneukum@suse.de>
-Date: Wed, 25 Mar 2015 15:38:31 +0100
-Subject: [PATCH 063/219] HID: add quirk for PIXART OEM mouse used by HP
-Cc: mpagano@gentoo.org
-
-commit b70b82580248b5393241c986082842ec05a2b7d7 upstream.
-
-This mouse is also known under other IDs. It needs the quirk or it will
-disconnect in runlevel 1 or 3.
-
-Signed-off-by: Oliver Neukum <oneukum@suse.de>
-Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/hid/hid-ids.h           | 1 +
- drivers/hid/usbhid/hid-quirks.c | 1 +
- 2 files changed, 2 insertions(+)
-
-diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
-index 7ace715..7fe5590 100644
---- a/drivers/hid/hid-ids.h
-+++ b/drivers/hid/hid-ids.h
-@@ -461,6 +461,7 @@
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
+-		gpe_register_info->enable_mask = (u8)enable_mask;
+-	}
+ 	return (status);
+ }
  
- #define USB_VENDOR_ID_HP		0x03f0
- #define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
-+#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
- 
- #define USB_VENDOR_ID_HUION		0x256c
- #define USB_DEVICE_ID_HUION_TABLET	0x006e
-diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
-index fe6c60d..4e3ae9f 100644
---- a/drivers/hid/usbhid/hid-quirks.c
-+++ b/drivers/hid/usbhid/hid-quirks.c
-@@ -79,6 +79,7 @@ static const struct hid_blacklist {
- 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
- 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
- 	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
-+	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
- 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
- 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
- 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
--- 
-2.3.6
-
-
-From 3bc3783ea692a04256e2cf027bfd98bf7b8d82a6 Mon Sep 17 00:00:00 2001
-From: Andrew Elble <aweits@rit.edu>
-Date: Mon, 23 Feb 2015 08:51:24 -0500
-Subject: [PATCH 064/219] NFS: fix BUG() crash in notify_change() with patch to
- chown_common()
-Cc: mpagano@gentoo.org
-
-commit c1b8940b42bb6487b10f2267a96b486276ce9ff7 upstream.
-
-We have observed a BUG() crash in fs/attr.c:notify_change(). The crash
-occurs during an rsync into a filesystem that is exported via NFS.
-
-1.) fs/attr.c:notify_change() modifies the caller's version of attr.
-2.) 6de0ec00ba8d ("VFS: make notify_change pass ATTR_KILL_S*ID to
-    setattr operations") introduced a BUG() restriction such that "no
-    function will ever call notify_change() with both ATTR_MODE and
-    ATTR_KILL_S*ID set". Under some circumstances though, it will have
-    assisted in setting the caller's version of attr to this very
-    combination.
-3.) 27ac0ffeac80 ("locks: break delegations on any attribute
-    modification") introduced code to handle breaking
-    delegations. This can result in notify_change() being re-called. attr
-    _must_ be explicitly reset to avoid triggering the BUG() established
-    in #2.
-4.) The path that triggers this is via fs/open.c:chown_common().
-    The combination of attr flags set here and in the first call to
-    notify_change() along with a later failed break_deleg_wait()
-    results in notify_change() being called again via retry_deleg
-    without resetting attr.
-
-The solution is to move retry_deleg in chown_common() a bit further up
-to ensure attr is completely reset.
-
-There are other places where this seemingly could occur, such as
-fs/utimes.c:utimes_common(), but the attr flags are not initially
-set in such a way to trigger this.
-
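A compact way to see the fix: the retry label must sit above the code
that rebuilds attr, so nothing the callee wrote into it can leak into
the next attempt. A self-contained sketch of that pattern (the bit
values and helper below are made up for illustration):

#include <stdbool.h>

struct iattr { unsigned int ia_valid; };

#define ATTR_CTIME     (1u << 0)	/* illustrative bit values */
#define ATTR_KILL_SUID (1u << 1)
#define ATTR_MODE      (1u << 2)

static int calls;

/* notify_change()-like callee: it may rewrite the caller's attr,
 * e.g. turning ATTR_KILL_SUID into ATTR_MODE, and may ask for a
 * retry when a delegation has to be broken. */
static bool do_change(struct iattr *attr)
{
	if (attr->ia_valid & ATTR_KILL_SUID) {
		attr->ia_valid &= ~ATTR_KILL_SUID;
		attr->ia_valid |= ATTR_MODE;
	}
	return calls++ == 0;	/* request exactly one retry */
}

int chown_like(void)
{
	struct iattr newattrs;

retry_deleg:
	/* Rebuild ia_valid from scratch on every retry, so the previous
	 * call's rewrite (ATTR_MODE) cannot survive alongside a freshly
	 * added ATTR_KILL_* bit. */
	newattrs.ia_valid = ATTR_CTIME | ATTR_KILL_SUID;
	if (do_change(&newattrs))
		goto retry_deleg;
	return 0;
}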
-Fixes: 27ac0ffeac80 ("locks: break delegations on any attribute modification")
-Reported-by: Eric Meddaugh <etmsys@rit.edu>
-Tested-by: Eric Meddaugh <etmsys@rit.edu>
-Signed-off-by: Andrew Elble <aweits@rit.edu>
-Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/open.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fs/open.c b/fs/open.c
-index 33f9cbf..44a3be1 100644
---- a/fs/open.c
-+++ b/fs/open.c
-@@ -570,6 +570,7 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
- 	uid = make_kuid(current_user_ns(), user);
- 	gid = make_kgid(current_user_ns(), group);
- 
-+retry_deleg:
- 	newattrs.ia_valid =  ATTR_CTIME;
- 	if (user != (uid_t) -1) {
- 		if (!uid_valid(uid))
-@@ -586,7 +587,6 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
- 	if (!S_ISDIR(inode->i_mode))
- 		newattrs.ia_valid |=
- 			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
--retry_deleg:
- 	mutex_lock(&inode->i_mutex);
- 	error = security_path_chown(path, uid, gid);
- 	if (!error)
--- 
-2.3.6
-
-
-From 46d09e1c86167373dcb343cfd6c901c78624ff01 Mon Sep 17 00:00:00 2001
-From: Russell King <rmk+kernel@arm.linux.org.uk>
-Date: Wed, 1 Apr 2015 16:20:39 +0100
-Subject: [PATCH 065/219] ARM: fix broken hibernation
-Cc: mpagano@gentoo.org
-
-commit 767bf7e7a1e82a81c59778348d156993d0a6175d upstream.
-
-Normally, when a CPU wants to clear a cache line to zero in the external
-L2 cache, it would generate bus cycles to write each word as it would do
-with any other data access.
-
-However, a Cortex A9 connected to an L2C-310 has a specific feature where
-the CPU can detect this operation, and signal that it wants to zero an
-entire cache line.  This feature, known as Full Line of Zeros (FLZ),
-involves a non-standard AXI signalling mechanism which only the L2C-310
-can properly interpret.
-
-There are separate enable bits in both the L2C-310 and the Cortex A9 -
-the L2C-310 needs to be enabled and have the FLZ enable bit set in the
-auxiliary control register before the Cortex A9 has this feature
-enabled.
-
-Unfortunately, the suspend code was not respecting this - it's not
-obvious from the code:
-
-swsusp_arch_suspend()
- cpu_suspend() /* saves the Cortex A9 auxiliary control register */
-  arch_save_image()
-  soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
-   cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
-
-At this point, we end up with the L2C disabled, but the Cortex A9 with
-FLZ enabled - which means any memset() or zeroing of a full cache line
-will fail to take effect.
-
-A similar issue exists in the resume path, but it's slightly more
-complex:
-
-swsusp_arch_suspend()
- cpu_suspend() /* saves the Cortex A9 auxiliary control register */
-  arch_save_image() /* image with A9 auxcr saved */
-...
-swsusp_arch_resume()
- call_with_stack()
-  arch_restore_image() /* restores image with A9 auxcr saved above */
-  soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
-   cpu_resume() /* restores the Cortex A9 registers, inc auxcr */
-
-Again, here we end up with the L2C disabled, but Cortex A9 FLZ enabled.
-
-There's no need to turn off the L2C in either of these two paths; there
-are benefits from not doing so - for example, the page copies will be
-faster with the L2C enabled.
-
-Hence, fix this by providing a variant of soft_restart() which can be
-used without turning the L2 cache controller off, and use it in both
-of these paths to keep the L2C enabled across the respective resume
-transitions.
-
-Fixes: 8ef418c7178f ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
-Reported-by: Sean Cross <xobs@kosagi.com>
-Tested-by: Sean Cross <xobs@kosagi.com>
-Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/kernel/hibernate.c |  5 +++--
- arch/arm/kernel/process.c   | 10 ++++++++--
- arch/arm/kernel/reboot.h    |  6 ++++++
- 3 files changed, 17 insertions(+), 4 deletions(-)
- create mode 100644 arch/arm/kernel/reboot.h
-
-diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c
-index c4cc50e..cfb354f 100644
---- a/arch/arm/kernel/hibernate.c
-+++ b/arch/arm/kernel/hibernate.c
-@@ -22,6 +22,7 @@
- #include <asm/suspend.h>
- #include <asm/memory.h>
- #include <asm/sections.h>
-+#include "reboot.h"
- 
- int pfn_is_nosave(unsigned long pfn)
+@@ -286,10 +285,8 @@ acpi_hw_gpe_enable_write(u8 enable_mask,
  {
-@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused)
+ 	acpi_status status;
  
- 	ret = swsusp_save();
- 	if (ret == 0)
--		soft_restart(virt_to_phys(cpu_resume));
-+		_soft_restart(virt_to_phys(cpu_resume), false);
- 	return ret;
++	gpe_register_info->enable_mask = enable_mask;
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status)) {
+-		gpe_register_info->enable_mask = enable_mask;
+-	}
+ 	return (status);
  }
  
-@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused)
- 	for (pbe = restore_pblist; pbe; pbe = pbe->next)
- 		copy_page(pbe->orig_address, pbe->address);
+diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
+index 9bad45e..7fbc2b9 100644
+--- a/drivers/acpi/acpica/tbinstal.c
++++ b/drivers/acpi/acpica/tbinstal.c
+@@ -346,7 +346,6 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 				 */
+ 				acpi_tb_uninstall_table(&new_table_desc);
+ 				*table_index = i;
+-				(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
+ 				return_ACPI_STATUS(AE_OK);
+ 			}
+ 		}
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index bbca783..349f4fd 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -298,7 +298,11 @@ bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent)
+ 	struct acpi_device_physical_node *pn;
+ 	bool offline = true;
  
--	soft_restart(virt_to_phys(cpu_resume));
-+	_soft_restart(virt_to_phys(cpu_resume), false);
- }
+-	mutex_lock(&adev->physical_node_lock);
++	/*
++	 * acpi_container_offline() calls this for all of the container's
++	 * children under the container's physical_node_lock lock.
++	 */
++	mutex_lock_nested(&adev->physical_node_lock, SINGLE_DEPTH_NESTING);
  
- static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
-diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
-index fdfa3a7..2bf1a16 100644
---- a/arch/arm/kernel/process.c
-+++ b/arch/arm/kernel/process.c
-@@ -41,6 +41,7 @@
- #include <asm/system_misc.h>
- #include <asm/mach/time.h>
- #include <asm/tls.h>
-+#include "reboot.h"
+ 	list_for_each_entry(pn, &adev->physical_node_list, node)
+ 		if (device_supports_offline(pn->dev) && !pn->dev->offline) {
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 876bae5..79bc203 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -515,11 +515,11 @@ int bus_add_device(struct device *dev)
+ 			goto out_put;
+ 		error = device_add_groups(dev, bus->dev_groups);
+ 		if (error)
+-			goto out_groups;
++			goto out_id;
+ 		error = sysfs_create_link(&bus->p->devices_kset->kobj,
+ 						&dev->kobj, dev_name(dev));
+ 		if (error)
+-			goto out_id;
++			goto out_groups;
+ 		error = sysfs_create_link(&dev->kobj,
+ 				&dev->bus->p->subsys.kobj, "subsystem");
+ 		if (error)
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 6e64563..9c2ba1c 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -62,15 +62,21 @@ static int cache_setup_of_node(unsigned int cpu)
+ 		return -ENOENT;
+ 	}
  
- #ifdef CONFIG_CC_STACKPROTECTOR
- #include <linux/stackprotector.h>
-@@ -95,7 +96,7 @@ static void __soft_restart(void *addr)
- 	BUG();
+-	while (np && index < cache_leaves(cpu)) {
++	while (index < cache_leaves(cpu)) {
+ 		this_leaf = this_cpu_ci->info_list + index;
+ 		if (this_leaf->level != 1)
+ 			np = of_find_next_cache_node(np);
+ 		else
+ 			np = of_node_get(np);/* cpu node itself */
++		if (!np)
++			break;
+ 		this_leaf->of_node = np;
+ 		index++;
+ 	}
++
++	if (index != cache_leaves(cpu)) /* not all OF nodes populated */
++		return -ENOENT;
++
+ 	return 0;
  }
  
--void soft_restart(unsigned long addr)
-+void _soft_restart(unsigned long addr, bool disable_l2)
- {
- 	u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
+@@ -189,8 +195,11 @@ static int detect_cache_attributes(unsigned int cpu)
+ 	 * will be set up here only if they are not populated already
+ 	 */
+ 	ret = cache_shared_cpu_map_setup(cpu);
+-	if (ret)
++	if (ret) {
+		pr_warn("Unable to detect cache hierarchy from DT for CPU %d\n",
++			cpu);
+ 		goto free_ci;
++	}
+ 	return 0;
  
-@@ -104,7 +105,7 @@ void soft_restart(unsigned long addr)
- 	local_fiq_disable();
+ free_ci:
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index 9421fed..e68ab79 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -101,6 +101,15 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ 	}
  
- 	/* Disable the L2 if we're the last man standing. */
--	if (num_online_cpus() == 1)
-+	if (disable_l2)
- 		outer_disable();
+ 	r = platform_get_resource(dev, IORESOURCE_IRQ, num);
++	/*
++	 * The resources may pass trigger flags to the irqs that need
++	 * to be set up. It so happens that the trigger flags for
++	 * IORESOURCE_BITS correspond 1-to-1 to the IRQF_TRIGGER*
++	 * settings.
++	 */
++	if (r && r->flags & IORESOURCE_BITS)
++		irqd_set_trigger_type(irq_get_irq_data(r->start),
++				      r->flags & IORESOURCE_BITS);
  
- 	/* Change to the new stack and continue with the reset. */
-@@ -114,6 +115,11 @@ void soft_restart(unsigned long addr)
- 	BUG();
+ 	return r ? r->start : -ENXIO;
+ #endif
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index de4c849..288547a 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -65,6 +65,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	/* Atheros AR3011 with sflash firmware*/
+ 	{ USB_DEVICE(0x0489, 0xE027) },
+ 	{ USB_DEVICE(0x0489, 0xE03D) },
++	{ USB_DEVICE(0x04F2, 0xAFF1) },
+ 	{ USB_DEVICE(0x0930, 0x0215) },
+ 	{ USB_DEVICE(0x0CF3, 0x3002) },
+ 	{ USB_DEVICE(0x0CF3, 0xE019) },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 8bfc4c2..2c527da 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -159,6 +159,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Atheros 3011 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE },
++	{ USB_DEVICE(0x04f2, 0xaff1), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0xe019), .driver_info = BTUSB_IGNORE },
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index e096e9c..283f00a 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -170,6 +170,41 @@ static void tpm_dev_del_device(struct tpm_chip *chip)
+ 	device_unregister(&chip->dev);
  }
  
-+void soft_restart(unsigned long addr)
++static int tpm1_chip_register(struct tpm_chip *chip)
 +{
-+	_soft_restart(addr, num_online_cpus() == 1);
-+}
++	int rc;
 +
- /*
-  * Function pointers to optional machine specific functions
-  */
-diff --git a/arch/arm/kernel/reboot.h b/arch/arm/kernel/reboot.h
-new file mode 100644
-index 0000000..c87f058
---- /dev/null
-+++ b/arch/arm/kernel/reboot.h
-@@ -0,0 +1,6 @@
-+#ifndef REBOOT_H
-+#define REBOOT_H
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return 0;
 +
-+extern void _soft_restart(unsigned long addr, bool disable_l2);
++	rc = tpm_sysfs_add_device(chip);
++	if (rc)
++		return rc;
++
++	rc = tpm_add_ppi(chip);
++	if (rc) {
++		tpm_sysfs_del_device(chip);
++		return rc;
++	}
++
++	chip->bios_dir = tpm_bios_log_setup(chip->devname);
 +
-+#endif
--- 
-2.3.6
-
-
-From c5528d2a0edcbbc3ceba739ec70133e2594486c4 Mon Sep 17 00:00:00 2001
-From: Andrey Ryabinin <a.ryabinin@samsung.com>
-Date: Fri, 20 Mar 2015 15:42:27 +0100
-Subject: [PATCH 066/219] ARM: 8320/1: fix integer overflow in ELF_ET_DYN_BASE
-Cc: mpagano@gentoo.org
-
-commit 8defb3367fcd19d1af64c07792aade0747b54e0f upstream.
-
-Usually ELF_ET_DYN_BASE is 2/3 of TASK_SIZE. With 3G/1G user/kernel
-split this is not so, because 2*TASK_SIZE overflows 32 bits,
-so the actual value of ELF_ET_DYN_BASE is:
-	(2 * TASK_SIZE / 3) = 0x2a000000
-
-When ASLR is disabled PIE binaries will load at ELF_ET_DYN_BASE address.
-On 32bit platforms AddressSanitzer uses addresses [0x20000000 - 0x40000000]
-for shadow memory [1]. So ASan doesn't work for PIE binaries when ASLR disabled
-as it fails to map shadow memory.
-Also after Kees's 'split ET_DYN ASLR from mmap ASLR' patchset PIE binaries
-has a high chance of loading somewhere in between [0x2a000000 - 0x40000000]
-even if ASLR enabled. This makes ASan with PIE absolutely incompatible.
-
-Fix overflow by dividing TASK_SIZE prior to multiplying.
-After this patch ELF_ET_DYN_BASE equals to (for CONFIG_VMSPLIT_3G=y):
-	(TASK_SIZE / 3 * 2) = 0x7f555554
-
-[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerAlgorithm#Mapping
-
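The arithmetic is easy to check with 32-bit unsigned math. A small
sketch, assuming TASK_SIZE = 0xbf000000 (a common value for the 3G/1G
split; the exact constant is configuration dependent):

#include <stdio.h>

int main(void)
{
	unsigned int task_size = 0xbf000000u;	/* assumed 3G/1G TASK_SIZE */

	/* Old formula: 2 * TASK_SIZE wraps around in 32 bits first,
	 * 2 * 0xbf000000 -> 0x7e000000, so the result is 0x2a000000. */
	printf("2 * TASK_SIZE / 3 = %#x\n", 2u * task_size / 3u);

	/* New formula: divide first, no intermediate overflow,
	 * giving 0x7f555554, roughly 2/3 of the address space. */
	printf("TASK_SIZE / 3 * 2 = %#x\n", task_size / 3u * 2u);

	return 0;
}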
-Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
-Reported-by: Maria Guseva <m.guseva@samsung.com>
-Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/include/asm/elf.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
-index afb9caf..674d03f 100644
---- a/arch/arm/include/asm/elf.h
-+++ b/arch/arm/include/asm/elf.h
-@@ -115,7 +115,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
-    the loader.  We need to make sure that it is out of the way of the program
-    that it will "exec", and that there is sufficient room for the brk.  */
- 
--#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
-+#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
- 
- /* When the program starts, a1 contains a pointer to a function to be 
-    registered with atexit, as per the SVR4 ABI.  A value of 0 means we 
--- 
-2.3.6
-
-
-From 6ec6b63f4e9d59f78b61944f8c533d9ff029f46f Mon Sep 17 00:00:00 2001
-From: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Date: Fri, 30 Jan 2015 12:34:25 +0100
-Subject: [PATCH 067/219] ARM: mvebu: Disable CPU Idle on Armada 38x
-Cc: mpagano@gentoo.org
-
-commit 548ae94c1cc7fc120848757249b9a542b1080ffb upstream.
-
-On Armada 38x SoCs, under heavy I/O load, the system hangs when CPU
-Idle is enabled. Until a solution to this issue is found, this patch
-disables CPU Idle support for this SoC.
-
-As CPU hotplug support also uses some of the CPU Idle functions, it is
-affected by the same issue. This patch disables it for the Armada 38x
-SoCs as well.
-
-Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/mach-mvebu/pmsu.c | 16 +++++++++++++++-
- 1 file changed, 15 insertions(+), 1 deletion(-)
-
-diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
-index 8b9f5e2..4f4e222 100644
---- a/arch/arm/mach-mvebu/pmsu.c
-+++ b/arch/arm/mach-mvebu/pmsu.c
-@@ -415,6 +415,9 @@ static __init int armada_38x_cpuidle_init(void)
- 	void __iomem *mpsoc_base;
- 	u32 reg;
- 
-+	pr_warn("CPU idle is currently broken on Armada 38x: disabling");
 +	return 0;
++}
 +
- 	np = of_find_compatible_node(NULL, NULL,
- 				     "marvell,armada-380-coherency-fabric");
- 	if (!np)
-@@ -476,6 +479,16 @@ static int __init mvebu_v7_cpu_pm_init(void)
- 		return 0;
- 	of_node_put(np);
- 
-+	/*
-+	 * Currently the CPU idle support for Armada 38x is broken, as
-+	 * the CPU hotplug uses some of the CPU idle functions it is
-+	 * broken too, so let's disable it
-+	 */
-+	if (of_machine_is_compatible("marvell,armada380")) {
-+		cpu_hotplug_disable();
-+		pr_warn("CPU hotplug support is currently broken on Armada 38x: disabling");
-+	}
++static void tpm1_chip_unregister(struct tpm_chip *chip)
++{
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return;
 +
- 	if (of_machine_is_compatible("marvell,armadaxp"))
- 		ret = armada_xp_cpuidle_init();
- 	else if (of_machine_is_compatible("marvell,armada370"))
-@@ -489,7 +502,8 @@ static int __init mvebu_v7_cpu_pm_init(void)
- 		return ret;
++	if (chip->bios_dir)
++		tpm_bios_log_teardown(chip->bios_dir);
++
++	tpm_remove_ppi(chip);
++
++	tpm_sysfs_del_device(chip);
++}
++
+ /*
+  * tpm_chip_register() - create a character device for the TPM chip
+  * @chip: TPM chip to use.
+@@ -185,22 +220,13 @@ int tpm_chip_register(struct tpm_chip *chip)
+ {
+ 	int rc;
  
- 	mvebu_v7_pmsu_enable_l2_powerdown_onidle();
--	platform_device_register(&mvebu_v7_cpuidle_device);
-+	if (mvebu_v7_cpuidle_device.name)
-+		platform_device_register(&mvebu_v7_cpuidle_device);
- 	cpu_pm_register_notifier(&mvebu_v7_cpu_pm_notifier);
+-	/* Populate sysfs for TPM1 devices. */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		rc = tpm_sysfs_add_device(chip);
+-		if (rc)
+-			goto del_misc;
+-
+-		rc = tpm_add_ppi(chip);
+-		if (rc)
+-			goto del_sysfs;
+-
+-		chip->bios_dir = tpm_bios_log_setup(chip->devname);
+-	}
++	rc = tpm1_chip_register(chip);
++	if (rc)
++		return rc;
  
- 	return 0;
--- 
-2.3.6
-
-
-From 3c9d536953582615eb9054c38a5e4de6c711ccb5 Mon Sep 17 00:00:00 2001
-From: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
-Date: Fri, 27 Mar 2015 01:58:08 +0900
-Subject: [PATCH 068/219] ARM: S3C64XX: Use fixed IRQ bases to avoid conflicts
- on Cragganmore
-Cc: mpagano@gentoo.org
-
-commit 4e330ae4ab2915444f1e6dca1358a910aa259362 upstream.
-
-There are two PMICs on Cragganmore; currently one dynamically assigns
-its IRQ base and the other uses a fixed base. It is possible for the
-statically assigned PMIC to fail if its IRQ is taken by the dynamically
-assigned one. Fix this by statically assigning both IRQ bases.
-
-Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
-Signed-off-by: Kukjin Kim <kgene@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/mach-s3c64xx/crag6410.h      | 1 +
- arch/arm/mach-s3c64xx/mach-crag6410.c | 1 +
- 2 files changed, 2 insertions(+)
-
-diff --git a/arch/arm/mach-s3c64xx/crag6410.h b/arch/arm/mach-s3c64xx/crag6410.h
-index 7bc6668..dcbe17f 100644
---- a/arch/arm/mach-s3c64xx/crag6410.h
-+++ b/arch/arm/mach-s3c64xx/crag6410.h
-@@ -14,6 +14,7 @@
- #include <mach/gpio-samsung.h>
+ 	rc = tpm_dev_add_device(chip);
+ 	if (rc)
+-		return rc;
++		goto out_err;
  
- #define GLENFARCLAS_PMIC_IRQ_BASE	IRQ_BOARD_START
-+#define BANFF_PMIC_IRQ_BASE		(IRQ_BOARD_START + 64)
+ 	/* Make the chip available. */
+ 	spin_lock(&driver_lock);
+@@ -210,10 +236,8 @@ int tpm_chip_register(struct tpm_chip *chip)
+ 	chip->flags |= TPM_CHIP_FLAG_REGISTERED;
  
- #define PCA935X_GPIO_BASE		GPIO_BOARD_START
- #define CODEC_GPIO_BASE			(GPIO_BOARD_START + 8)
-diff --git a/arch/arm/mach-s3c64xx/mach-crag6410.c b/arch/arm/mach-s3c64xx/mach-crag6410.c
-index 10b913b..65c426b 100644
---- a/arch/arm/mach-s3c64xx/mach-crag6410.c
-+++ b/arch/arm/mach-s3c64xx/mach-crag6410.c
-@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = {
+ 	return 0;
+-del_sysfs:
+-	tpm_sysfs_del_device(chip);
+-del_misc:
+-	tpm_dev_del_device(chip);
++out_err:
++	tpm1_chip_unregister(chip);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_register);
+@@ -238,13 +262,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 	spin_unlock(&driver_lock);
+ 	synchronize_rcu();
  
- static struct wm831x_pdata crag_pmic_pdata = {
- 	.wm831x_num = 1,
-+	.irq_base = BANFF_PMIC_IRQ_BASE,
- 	.gpio_base = BANFF_PMIC_GPIO_BASE,
- 	.soft_shutdown = true,
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		if (chip->bios_dir)
+-			tpm_bios_log_teardown(chip->bios_dir);
+-		tpm_remove_ppi(chip);
+-		tpm_sysfs_del_device(chip);
+-	}
+-
++	tpm1_chip_unregister(chip);
+ 	tpm_dev_del_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
+index a23ac0c..0b7c3e8 100644
+--- a/drivers/clk/at91/clk-usb.c
++++ b/drivers/clk/at91/clk-usb.c
+@@ -56,22 +56,55 @@ static unsigned long at91sam9x5_clk_usb_recalc_rate(struct clk_hw *hw,
+ 	return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1));
+ }
  
--- 
-2.3.6
-
-
-From 64d90ab58af7a385a7955061e0a319f7f939ddff Mon Sep 17 00:00:00 2001
-From: Nicolas Ferre <nicolas.ferre@atmel.com>
-Date: Tue, 31 Mar 2015 10:56:10 +0200
-Subject: [PATCH 069/219] ARM: at91/dt: sama5d3 xplained: add phy address for
- macb1
-Cc: mpagano@gentoo.org
-
-commit 98b80987c940956da48f0c703f60340128bb8521 upstream.
-
-After 57a38effa598 (net: phy: micrel: disable broadcast for KSZ8081/KSZ8091)
-the macb1 interface refuses to work properly because it tries
-to cling to address 0, which is no longer able to communicate in
-broadcast with the MAC. The Micrel PHY on the board is actually
-configured to show up at address 1.
-Adding the PHY node with its real address fixes the issue.
-
-Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
-Cc: Johan Hovold <johan@kernel.org>
-Signed-off-by: Olof Johansson <olof@lixom.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/boot/dts/at91-sama5d3_xplained.dts | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
-index fec1fca..6c4bc53 100644
---- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
-+++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
-@@ -167,7 +167,13 @@
+-static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate,
+-					  unsigned long *parent_rate)
++static long at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
++					      unsigned long rate,
++					      unsigned long min_rate,
++					      unsigned long max_rate,
++					      unsigned long *best_parent_rate,
++					      struct clk_hw **best_parent_hw)
+ {
+-	unsigned long div;
++	struct clk *parent = NULL;
++	long best_rate = -EINVAL;
++	unsigned long tmp_rate;
++	int best_diff = -1;
++	int tmp_diff;
++	int i;
  
- 			macb1: ethernet@f802c000 {
- 				phy-mode = "rmii";
-+				#address-cells = <1>;
-+				#size-cells = <0>;
- 				status = "okay";
+-	if (!rate)
+-		return -EINVAL;
++	for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
++		int div;
+ 
+-	if (rate >= *parent_rate)
+-		return *parent_rate;
++		parent = clk_get_parent_by_index(hw->clk, i);
++		if (!parent)
++			continue;
 +
-+				ethernet-phy@1 {
-+					reg = <0x1>;
-+				};
- 			};
++		for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
++			unsigned long tmp_parent_rate;
++
++			tmp_parent_rate = rate * div;
++			tmp_parent_rate = __clk_round_rate(parent,
++							   tmp_parent_rate);
++			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
++			if (tmp_rate < rate)
++				tmp_diff = rate - tmp_rate;
++			else
++				tmp_diff = tmp_rate - rate;
++
++			if (best_diff < 0 || best_diff > tmp_diff) {
++				best_rate = tmp_rate;
++				best_diff = tmp_diff;
++				*best_parent_rate = tmp_parent_rate;
++				*best_parent_hw = __clk_get_hw(parent);
++			}
++
++			if (!best_diff || tmp_rate < rate)
++				break;
++		}
  
- 			dbgu: serial@ffffee00 {
--- 
-2.3.6
-
-
-From 5b126c3890f31b1b0e2bbfd94aace90169664e69 Mon Sep 17 00:00:00 2001
-From: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
-Date: Tue, 17 Feb 2015 19:52:04 +0100
-Subject: [PATCH 070/219] ARM: dts: dove: Fix uart[23] reg property
-Cc: mpagano@gentoo.org
-
-commit a74cd13b807029397f7232449df929bac11fb228 upstream.
-
-Fix the register addresses of Dove's uart2 and uart3 nodes, which seem
-to have been broken for ages due to a copy-and-paste error.
-
-Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
-Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/boot/dts/dove.dtsi | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
-index a5441d5..3cc8b83 100644
---- a/arch/arm/boot/dts/dove.dtsi
-+++ b/arch/arm/boot/dts/dove.dtsi
-@@ -154,7 +154,7 @@
+-	div = DIV_ROUND_CLOSEST(*parent_rate, rate);
+-	if (div > SAM9X5_USB_MAX_DIV + 1)
+-		div = SAM9X5_USB_MAX_DIV + 1;
++		if (!best_diff)
++			break;
++	}
  
- 			uart2: serial@12200 {
- 				compatible = "ns16550a";
--				reg = <0x12000 0x100>;
-+				reg = <0x12200 0x100>;
- 				reg-shift = <2>;
- 				interrupts = <9>;
- 				clocks = <&core_clk 0>;
-@@ -163,7 +163,7 @@
+-	return DIV_ROUND_CLOSEST(*parent_rate, div);
++	return best_rate;
+ }
  
- 			uart3: serial@12300 {
- 				compatible = "ns16550a";
--				reg = <0x12100 0x100>;
-+				reg = <0x12300 0x100>;
- 				reg-shift = <2>;
- 				interrupts = <10>;
- 				clocks = <&core_clk 0>;
--- 
-2.3.6
-
-
-From 422be9a5e09ea7d6e84ad2c3d05dfdf01e4a7a3f Mon Sep 17 00:00:00 2001
-From: Andreas Faerber <afaerber@suse.de>
-Date: Wed, 18 Mar 2015 01:25:18 +0900
-Subject: [PATCH 071/219] ARM: dts: fix mmc node updates for exynos5250-spring
-Cc: mpagano@gentoo.org
-
-commit 7e9e20b1faab02357501553d7f4e3efec1b4cfd3 upstream.
-
-Resolve a merge conflict with mmc refactoring aaa25a5a33cb ("ARM: dts:
-unuse the slot-node and deprecate the supports-highspeed for dw-mmc in
-exynos") by dropping the slot@0 nodes, moving its bus-width property to
-the mmc node and replacing supports-highspeed with cap-{mmc,sd}-highspeed,
-matching exynos5250-snow.
-
-Cc: Jaehoon Chung <jh80.chung@samsung.com>
-Fixes: 53dd4138bb0a ("ARM: dts: Add exynos5250-spring device tree")
-Signed-off-by: Andreas Faerber <afaerber@suse.de>
-Reviewed-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
-Signed-off-by: Kukjin Kim <kgene@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm/boot/dts/exynos5250-spring.dts | 16 ++++------------
- 1 file changed, 4 insertions(+), 12 deletions(-)
-
-diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
-index f027754..c41600e 100644
---- a/arch/arm/boot/dts/exynos5250-spring.dts
-+++ b/arch/arm/boot/dts/exynos5250-spring.dts
-@@ -429,7 +429,6 @@
- &mmc_0 {
- 	status = "okay";
- 	num-slots = <1>;
--	supports-highspeed;
- 	broken-cd;
- 	card-detect-delay = <200>;
- 	samsung,dw-mshc-ciu-div = <3>;
-@@ -437,11 +436,8 @@
- 	samsung,dw-mshc-ddr-timing = <1 2>;
- 	pinctrl-names = "default";
- 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_cd &sd0_bus4 &sd0_bus8>;
--
--	slot@0 {
--		reg = <0>;
--		bus-width = <8>;
--	};
-+	bus-width = <8>;
-+	cap-mmc-highspeed;
- };
+ static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index)
+@@ -121,7 +154,7 @@ static int at91sam9x5_clk_usb_set_rate(struct clk_hw *hw, unsigned long rate,
  
- /*
-@@ -451,7 +447,6 @@
- &mmc_1 {
- 	status = "okay";
- 	num-slots = <1>;
--	supports-highspeed;
- 	broken-cd;
- 	card-detect-delay = <200>;
- 	samsung,dw-mshc-ciu-div = <3>;
-@@ -459,11 +454,8 @@
- 	samsung,dw-mshc-ddr-timing = <1 2>;
- 	pinctrl-names = "default";
- 	pinctrl-0 = <&sd1_clk &sd1_cmd &sd1_cd &sd1_bus4>;
--
--	slot@0 {
--		reg = <0>;
--		bus-width = <4>;
--	};
-+	bus-width = <4>;
-+	cap-sd-highspeed;
+ static const struct clk_ops at91sam9x5_usb_ops = {
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.get_parent = at91sam9x5_clk_usb_get_parent,
+ 	.set_parent = at91sam9x5_clk_usb_set_parent,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
+@@ -159,7 +192,7 @@ static const struct clk_ops at91sam9n12_usb_ops = {
+ 	.disable = at91sam9n12_clk_usb_disable,
+ 	.is_enabled = at91sam9n12_clk_usb_is_enabled,
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
  };
  
- &pinctrl_0 {
--- 
-2.3.6
-
-
-From 55db0145ac65aec05c736cddb3a6717b83619d7e Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Mon, 30 Dec 2013 12:33:53 -0600
-Subject: [PATCH 072/219] usb: musb: core: fix TX/RX endpoint order
-Cc: mpagano@gentoo.org
-
-commit e3c93e1a3f35be4cf1493d3ccfb0c6d9209e4922 upstream.
-
-As per Mentor Graphics' documentation, we should
-always handle TX endpoints before RX endpoints.
-
-This patch fixes that error while also updating
-some hard-to-read comments which were scattered
-around musb_interrupt().
-
-This patch should be backported as far back as
-possible since this error has been in the driver
-since its conception.
-
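For reference, the handler walks each endpoint-interrupt mask with bit 0
reserved for endpoint 0 and bits 1-15 for endpoints 1-15. A minimal
userspace sketch of the mask walk and the TX-before-RX ordering (the
function names here are stand-ins, not the musb API):

#include <stdio.h>

static void handle_ep_irqs(unsigned int int_tx, unsigned int int_rx)
{
	unsigned int reg;
	int ep_num;

	/* TX endpoints first, per the documented IRQ ordering. */
	reg = int_tx >> 1;	/* skip endpoint 0, handled separately */
	for (ep_num = 1; reg; reg >>= 1, ep_num++)
		if (reg & 1)
			printf("TX irq on endpoint %d\n", ep_num);

	/* Then RX endpoints. */
	reg = int_rx >> 1;
	for (ep_num = 1; reg; reg >>= 1, ep_num++)
		if (reg & 1)
			printf("RX irq on endpoint %d\n", ep_num);
}

int main(void)
{
	handle_ep_irqs(0x12 /* TX on eps 1,4 */, 0x04 /* RX on ep 2 */);
	return 0;
}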
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/musb/musb_core.c | 44 ++++++++++++++++++++++++++------------------
- 1 file changed, 26 insertions(+), 18 deletions(-)
-
-diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
-index 067920f..461bfe8 100644
---- a/drivers/usb/musb/musb_core.c
-+++ b/drivers/usb/musb/musb_core.c
-@@ -1597,16 +1597,30 @@ irqreturn_t musb_interrupt(struct musb *musb)
- 		is_host_active(musb) ? "host" : "peripheral",
- 		musb->int_usb, musb->int_tx, musb->int_rx);
- 
--	/* the core can interrupt us for multiple reasons; docs have
--	 * a generic interrupt flowchart to follow
-+	/**
-+	 * According to Mentor Graphics' documentation, flowchart on page 98,
-+	 * IRQ should be handled as follows:
-+	 *
-+	 * . Resume IRQ
-+	 * . Session Request IRQ
-+	 * . VBUS Error IRQ
-+	 * . Suspend IRQ
-+	 * . Connect IRQ
-+	 * . Disconnect IRQ
-+	 * . Reset/Babble IRQ
-+	 * . SOF IRQ (we're not using this one)
-+	 * . Endpoint 0 IRQ
-+	 * . TX Endpoints
-+	 * . RX Endpoints
-+	 *
-+	 * We will be following that flowchart in order to avoid any problems
-+	 * that might arise with internal Finite State Machine.
- 	 */
-+
- 	if (musb->int_usb)
- 		retval |= musb_stage0_irq(musb, musb->int_usb,
- 				devctl);
- 
--	/* "stage 1" is handling endpoint irqs */
--
--	/* handle endpoint 0 first */
- 	if (musb->int_tx & 1) {
- 		if (is_host_active(musb))
- 			retval |= musb_h_ep0_irq(musb);
-@@ -1614,37 +1628,31 @@ irqreturn_t musb_interrupt(struct musb *musb)
- 			retval |= musb_g_ep0_irq(musb);
- 	}
- 
--	/* RX on endpoints 1-15 */
--	reg = musb->int_rx >> 1;
-+	reg = musb->int_tx >> 1;
- 	ep_num = 1;
- 	while (reg) {
- 		if (reg & 1) {
--			/* musb_ep_select(musb->mregs, ep_num); */
--			/* REVISIT just retval = ep->rx_irq(...) */
- 			retval = IRQ_HANDLED;
- 			if (is_host_active(musb))
--				musb_host_rx(musb, ep_num);
-+				musb_host_tx(musb, ep_num);
- 			else
--				musb_g_rx(musb, ep_num);
-+				musb_g_tx(musb, ep_num);
- 		}
--
- 		reg >>= 1;
- 		ep_num++;
- 	}
+@@ -179,7 +212,8 @@ at91sam9x5_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9x5_usb_ops;
+ 	init.parent_names = parent_names;
+ 	init.num_parents = num_parents;
+-	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE |
++		     CLK_SET_RATE_PARENT;
  
--	/* TX on endpoints 1-15 */
--	reg = musb->int_tx >> 1;
-+	reg = musb->int_rx >> 1;
- 	ep_num = 1;
- 	while (reg) {
- 		if (reg & 1) {
--			/* musb_ep_select(musb->mregs, ep_num); */
--			/* REVISIT just retval |= ep->tx_irq(...) */
- 			retval = IRQ_HANDLED;
- 			if (is_host_active(musb))
--				musb_host_tx(musb, ep_num);
-+				musb_host_rx(musb, ep_num);
- 			else
--				musb_g_tx(musb, ep_num);
-+				musb_g_rx(musb, ep_num);
- 		}
-+
- 		reg >>= 1;
- 		ep_num++;
- 	}
--- 
-2.3.6
-
-
-From 968986cb57477f487045baa184eee0cf7a82b2e3 Mon Sep 17 00:00:00 2001
-From: Axel Lin <axel.lin@ingics.com>
-Date: Thu, 12 Mar 2015 09:15:28 +0800
-Subject: [PATCH 073/219] usb: phy: Find the right match in devm_usb_phy_match
-Cc: mpagano@gentoo.org
-
-commit 869aee0f31429fa9d94d5aef539602b73ae0cf4b upstream.
-
-The res parameter passed to devm_usb_phy_match() is the location where the
-pointer to the usb_phy is stored, hence it needs to be dereferenced before
-comparing to the match data in order to find the correct match.
-
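The subtlety is that devres hands the match callback the address of the
stored data area, which in this case holds a pointer to the usb_phy, so
the callback receives a usb_phy ** and must dereference once. A
userspace stand-in for the machinery (the shapes here are assumed for
illustration and omit the struct device argument of the real callback):

#include <assert.h>

struct usb_phy { int dummy; };

/* The devres data area holds a pointer to the phy, so the callback
 * receives a usb_phy **, not a usb_phy *. */
static int devm_usb_phy_match(void *res, void *match_data)
{
	struct usb_phy **phy = res;

	return *phy == match_data;
}

int main(void)
{
	struct usb_phy phy;
	struct usb_phy *slot = &phy;	/* what devres actually stores */

	/* The old check compared res itself to match_data; &slot is
	 * never equal to &phy, so it could not find the right entry. */
	assert((void *)&slot != (void *)&phy);

	/* The fixed check dereferences the data area first. */
	assert(devm_usb_phy_match(&slot, &phy));
	return 0;
}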
-Fixes: 410219dcd2ba ("usb: otg: utils: devres: Add API's to associate a device with the phy")
-Signed-off-by: Axel Lin <axel.lin@ingics.com>
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/phy/phy.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c
-index 2f9735b..d1cd6b5 100644
---- a/drivers/usb/phy/phy.c
-+++ b/drivers/usb/phy/phy.c
-@@ -81,7 +81,9 @@ static void devm_usb_phy_release(struct device *dev, void *res)
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
+@@ -207,7 +241,7 @@ at91sam9n12_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9n12_usb_ops;
+ 	init.parent_names = &parent_name;
+ 	init.num_parents = 1;
+-	init.flags = CLK_SET_RATE_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_RATE_PARENT;
  
- static int devm_usb_phy_match(struct device *dev, void *res, void *match_data)
- {
--	return res == match_data;
-+	struct usb_phy **phy = res;
-+
-+	return *phy == match_data;
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
+diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
+index 0039bd7..466f30c 100644
+--- a/drivers/clk/qcom/clk-rcg.c
++++ b/drivers/clk/qcom/clk-rcg.c
+@@ -495,6 +495,57 @@ static int clk_rcg_bypass_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	return __clk_rcg_set_rate(rcg, rcg->freq_tbl);
  }
  
- /**
--- 
-2.3.6
-
-
-From c3f787950225dc61f2a4342601d78d1052d0f8ef Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:34:25 -0600
-Subject: [PATCH 074/219] usb: define a generic USB_RESUME_TIMEOUT macro
-Cc: mpagano@gentoo.org
-
-commit 62f0342de1f012f3e90607d39e20fce811391169 upstream.
-
-Every USB host controller driver should use this new
-macro to define how long resume signalling
-should be driven on the bus.
-
-Currently, almost every single USB controller
-is using a 20ms timeout for resume signalling.
-
-That's problematic for two reasons:
-
-a) sometimes that 20ms timer expires a little
-before 20ms, which makes us fail certification
-
-b) some (many) devices actually need more than
-20ms resume signalling.
-
-Sure, in case of (b) we can state that the device
-is against the USB spec, but the fact is that
-we have no control over which device the certification
-lab will use. We also have no control over which host
-they will use. Most likely they'll be using a Windows
-PC which, again, we have no control over how that
-USB stack is written and how long resume signalling
-they are using.
-
-At the end of the day, we must make sure Linux passes
-electrical compliance when working as Host or as Device
-and currently we don't pass compliance as host because
-we're driving resume signalling for exactly 20ms and
-that confuses the certification test setup, resulting in
-certification failure.
-
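As a usage sketch, a host controller driver arming its resume timer
simply swaps the bare 20 for the macro; the struct and field below are
illustrative, and the musb patch that follows does exactly this for its
root-hub timer:

#include <linux/jiffies.h>
#include <linux/usb.h>

struct my_hcd {			/* illustrative stand-in */
	unsigned long rh_timer;	/* when resume signalling may end */
};

static void my_hcd_start_resume(struct my_hcd *hcd)
{
	/* was: jiffies + msecs_to_jiffies(20) */
	hcd->rh_timer = jiffies + msecs_to_jiffies(USB_RESUME_TIMEOUT);
}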
-Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Acked-by: Peter Chen <peter.chen@freescale.com>
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- include/linux/usb.h | 26 ++++++++++++++++++++++++++
- 1 file changed, 26 insertions(+)
-
-diff --git a/include/linux/usb.h b/include/linux/usb.h
-index 7ee1b5c..447fe29 100644
---- a/include/linux/usb.h
-+++ b/include/linux/usb.h
-@@ -205,6 +205,32 @@ void usb_put_intf(struct usb_interface *intf);
- #define USB_MAXINTERFACES	32
- #define USB_MAXIADS		(USB_MAXINTERFACES/2)
- 
 +/*
-+ * USB Resume Timer: Every Host controller driver should drive the resume
-+ * signalling on the bus for the amount of time defined by this macro.
-+ *
-+ * That way we will have a 'stable' behavior among all HCDs supported by Linux.
-+ *
-+ * Note that the USB Specification states we should drive resume for *at least*
-+ * 20 ms, but it doesn't give an upper bound. This creates two possible
-+ * situations which we want to avoid:
-+ *
-+ * (a) sometimes an msleep(20) might expire slightly before 20 ms, which causes
-+ * us to fail USB Electrical Tests, thus failing Certification
-+ *
-+ * (b) Some (many) devices actually need more than 20 ms of resume signalling,
-+ * and while we can argue that's against the USB Specification, we don't have
-+ * control over which devices a certification laboratory will be using for
-+ * certification. If CertLab uses a device which was tested against Windows and
-+ * that happens to have relaxed resume signalling rules, we might fall into
-+ * situations where we fail interoperability and electrical tests.
-+ *
-+ * In order to avoid both conditions, we're using a 40 ms resume timeout, which
-+ * should cope with both LPJ calibration errors and devices not following every
-+ * detail of the USB Specification.
++ * This type of clock has a glitch-free mux that switches between the output of
++ * the M/N counter and an always on clock source (XO). When clk_set_rate() is
++ * called we need to make sure that we don't switch to the M/N counter if it
++ * isn't clocking, because the mux will get stuck and its output will stop
++ * toggling. This can happen if the framework isn't aware that this
++ * clock is on and so clk_set_rate() doesn't turn on the new parent. To fix
++ * this we switch the mux in the enable/disable ops and reprogram the M/N
++ * counter in the set_rate op. We also make sure to switch away from the M/N
++ * counter in set_rate if software thinks the clock is off.
 + */
-+#define USB_RESUME_TIMEOUT	40 /* ms */
++static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,
++				unsigned long parent_rate)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	const struct freq_tbl *f;
++	int ret;
++	u32 gfm = BIT(10);
 +
- /**
-  * struct usb_interface_cache - long-term representation of a device interface
-  * @num_altsetting: number of altsettings defined.
--- 
-2.3.6
-
-
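
For reference, a host controller driver typically consumes the USB_RESUME_TIMEOUT macro introduced above like this; a minimal sketch, where struct hcd_priv and its fields are invented stand-ins for whatever a specific HCD keeps per port:

#include <linux/bitops.h>
#include <linux/jiffies.h>
#include <linux/usb.h>

struct hcd_priv {				/* illustrative only */
	unsigned long reset_done[8];		/* per-port deadline */
	unsigned long resuming_ports;		/* bitmap of ports   */
};

/* Start resume signalling on a port and record the deadline after
 * which hub_wq should stop that signalling. */
static void hcd_priv_start_resume(struct hcd_priv *priv, int port)
{
	priv->reset_done[port] = jiffies +
		msecs_to_jiffies(USB_RESUME_TIMEOUT);
	set_bit(port, &priv->resuming_ports);
}
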
-From 913916432e9f24d403a51dae54b905b07e509dd9 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:46:27 -0600
-Subject: [PATCH 075/219] usb: musb: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 309be239369609929d5d3833ee043f7c5afc95d1 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Based on original work by Bin Liu <b-liu@ti.com>
-
-Cc: Bin Liu <b-liu@ti.com>
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/musb/musb_core.c    | 7 ++++---
- drivers/usb/musb/musb_virthub.c | 2 +-
- 2 files changed, 5 insertions(+), 4 deletions(-)
-
-diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
-index 461bfe8..ec0ee3b 100644
---- a/drivers/usb/musb/musb_core.c
-+++ b/drivers/usb/musb/musb_core.c
-@@ -99,6 +99,7 @@
- #include <linux/platform_device.h>
- #include <linux/io.h>
- #include <linux/dma-mapping.h>
-+#include <linux/usb.h>
- 
- #include "musb_core.h"
- 
-@@ -562,7 +563,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,
- 						(USB_PORT_STAT_C_SUSPEND << 16)
- 						| MUSB_PORT_STAT_RESUME;
- 				musb->rh_timer = jiffies
--						 + msecs_to_jiffies(20);
-+					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 				musb->need_finish_resume = 1;
++	f = qcom_find_freq(rcg->freq_tbl, rate);
++	if (!f)
++		return -EINVAL;
++
++	/* Switch to XO to avoid glitches */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++	ret = __clk_rcg_set_rate(rcg, f);
++	/* Switch back to M/N if it's clocking */
++	if (__clk_is_enabled(hw->clk))
++		regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++
++	return ret;
++}
++
++static int clk_rcg_lcc_enable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use M/N */
++	return regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++}
++
++static void clk_rcg_lcc_disable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use XO */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++}
++
+ static int __clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate)
+ {
+ 	struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
+@@ -543,6 +594,17 @@ const struct clk_ops clk_rcg_bypass_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);
  
- 				musb->xceiv->otg->state = OTG_STATE_A_HOST;
-@@ -2471,7 +2472,7 @@ static int musb_resume(struct device *dev)
- 	if (musb->need_finish_resume) {
- 		musb->need_finish_resume = 0;
- 		schedule_delayed_work(&musb->finish_resume_work,
--				      msecs_to_jiffies(20));
-+				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
- 	}
++const struct clk_ops clk_rcg_lcc_ops = {
++	.enable = clk_rcg_lcc_enable,
++	.disable = clk_rcg_lcc_disable,
++	.get_parent = clk_rcg_get_parent,
++	.set_parent = clk_rcg_set_parent,
++	.recalc_rate = clk_rcg_recalc_rate,
++	.determine_rate = clk_rcg_determine_rate,
++	.set_rate = clk_rcg_lcc_set_rate,
++};
++EXPORT_SYMBOL_GPL(clk_rcg_lcc_ops);
++
+ const struct clk_ops clk_dyn_rcg_ops = {
+ 	.enable = clk_enable_regmap,
+ 	.is_enabled = clk_is_enabled_regmap,
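
The glitch-free mux (GFM) handling added above boils down to one select bit in the NS register; a condensed sketch, assuming bit 10 is the M/N-vs-XO select as in the code:

#include <linux/bitops.h>
#include <linux/regmap.h>

#define GFM_SEL_MN	BIT(10)	/* NS register: set = M/N counter, clear = XO */

/* Park the mux on the always-on XO source before touching the M/N
 * counter, and route M/N back out only when the clock is known to be
 * running, so the mux can never latch onto a dead input. */
static void lcc_reprogram_mn(struct regmap *map, u32 ns_reg, bool clk_running)
{
	regmap_update_bits(map, ns_reg, GFM_SEL_MN, 0);
	/* ... write the new M/N/D values here ... */
	if (clk_running)
		regmap_update_bits(map, ns_reg, GFM_SEL_MN, GFM_SEL_MN);
}
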
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index 687e41f..d09d06b 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -96,6 +96,7 @@ struct clk_rcg {
  
- 	/*
-@@ -2514,7 +2515,7 @@ static int musb_runtime_resume(struct device *dev)
- 	if (musb->need_finish_resume) {
- 		musb->need_finish_resume = 0;
- 		schedule_delayed_work(&musb->finish_resume_work,
--				msecs_to_jiffies(20));
-+				msecs_to_jiffies(USB_RESUME_TIMEOUT));
- 	}
+ extern const struct clk_ops clk_rcg_ops;
+ extern const struct clk_ops clk_rcg_bypass_ops;
++extern const struct clk_ops clk_rcg_lcc_ops;
  
- 	return 0;
-diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
-index 294e159..5428ed1 100644
---- a/drivers/usb/musb/musb_virthub.c
-+++ b/drivers/usb/musb/musb_virthub.c
-@@ -136,7 +136,7 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
- 		/* later, GetPortStatus will stop RESUME signaling */
- 		musb->port1_status |= MUSB_PORT_STAT_RESUME;
- 		schedule_delayed_work(&musb->finish_resume_work,
--				      msecs_to_jiffies(20));
-+				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
- 	}
- }
+ #define to_clk_rcg(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg, clkr)
  
--- 
-2.3.6
-
-
-From 0e33853a595e4947e416e86c966a2f532084b3ae Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:57:54 -0600
-Subject: [PATCH 076/219] usb: host: oxu210hp: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 84c0d178eb9f3a3ae4d63dc97a440266cf17f7f5 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/oxu210hp-hcd.c | 7 ++++---
- 1 file changed, 4 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
-index ef7efb2..28a2866 100644
---- a/drivers/usb/host/oxu210hp-hcd.c
-+++ b/drivers/usb/host/oxu210hp-hcd.c
-@@ -2500,11 +2500,12 @@ static irqreturn_t oxu210_hcd_irq(struct usb_hcd *hcd)
- 					|| oxu->reset_done[i] != 0)
- 				continue;
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 742acfa..381f274 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -243,7 +243,7 @@ static int clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ 	mask |= CFG_SRC_SEL_MASK | CFG_MODE_MASK;
+ 	cfg = f->pre_div << CFG_SRC_DIV_SHIFT;
+ 	cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT;
+-	if (rcg->mnd_width && f->n)
++	if (rcg->mnd_width && f->n && (f->m != f->n))
+ 		cfg |= CFG_MODE_DUAL_EDGE;
+ 	ret = regmap_update_bits(rcg->clkr.regmap,
+ 			rcg->cmd_rcgr + CFG_REG, mask, cfg);
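
The extra f->m != f->n test matters because an entry whose M and N values are equal turns the M/N counter into a pass-through, and dual-edge mode must not be set for it. Hypothetical entries in the same { freq, src, pre_div, m, n } layout as the tables in this patch:

static const struct freq_tbl example_tbl[] = {
	{ 19200000, P_XO,   1, 0, 0 },	/* n == 0: no M/N counter         */
	{ 50000000, P_PLL8, 4, 1, 1 },	/* m == n: bypass, no dual edge   */
	{ 64000000, P_PLL8, 4, 1, 2 },	/* m != n: dual-edge mode applies */
	{ }
};
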
+diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
+index cbdc31d..a015bb0 100644
+--- a/drivers/clk/qcom/gcc-ipq806x.c
++++ b/drivers/clk/qcom/gcc-ipq806x.c
+@@ -525,8 +525,8 @@ static struct freq_tbl clk_tbl_gsbi_qup[] = {
+ 	{ 10800000, P_PXO,  1, 2,  5 },
+ 	{ 15060000, P_PLL8, 1, 2, 51 },
+ 	{ 24000000, P_PLL8, 4, 1,  4 },
++	{ 25000000, P_PXO,  1, 0,  0 },
+ 	{ 25600000, P_PLL8, 1, 1, 15 },
+-	{ 27000000, P_PXO,  1, 0,  0 },
+ 	{ 48000000, P_PLL8, 4, 1,  2 },
+ 	{ 51200000, P_PLL8, 1, 2, 15 },
+ 	{ }
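
The swapped entry reflects the crystal actually fitted: with src = P_PXO, pre_div = 1 and no M/N counter (m = n = 0) the entry simply passes PXO through, and on ipq806x PXO reportedly runs at 25 MHz (27 MHz is the value used on other MSM parts):

/* { 25000000, P_PXO, 1, 0, 0 }: rate must equal the PXO crystal,
 * since nothing divides or multiplies it on the way out. */
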
+diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
+index c9ff27b..a6d3a67 100644
+--- a/drivers/clk/qcom/lcc-ipq806x.c
++++ b/drivers/clk/qcom/lcc-ipq806x.c
+@@ -294,14 +294,14 @@ static struct clk_regmap_mux pcm_clk = {
+ };
  
--			/* start 20 msec resume signaling from this port,
--			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
-+			/* start USB_RESUME_TIMEOUT resume signaling from this
-+			 * port, and make hub_wq collect PORT_STAT_C_SUSPEND to
- 			 * stop that signaling.
- 			 */
--			oxu->reset_done[i] = jiffies + msecs_to_jiffies(20);
-+			oxu->reset_done[i] = jiffies +
-+				msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			oxu_dbg(oxu, "port %d remote wakeup\n", i + 1);
- 			mod_timer(&hcd->rh_timer, oxu->reset_done[i]);
- 		}
--- 
-2.3.6
-
-
-From 9aeb024dc65fa1c9520c655a36d52d48e4285ab1 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:55:34 -0600
-Subject: [PATCH 077/219] usb: host: fusbh200: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 595227db1f2d98bfc33f02a55842f268e12b247d upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/fusbh200-hcd.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
-diff --git a/drivers/usb/host/fusbh200-hcd.c b/drivers/usb/host/fusbh200-hcd.c
-index a83eefe..ba77e2e 100644
---- a/drivers/usb/host/fusbh200-hcd.c
-+++ b/drivers/usb/host/fusbh200-hcd.c
-@@ -1550,10 +1550,9 @@ static int fusbh200_hub_control (
- 			if ((temp & PORT_PE) == 0)
- 				goto error;
+ static struct freq_tbl clk_tbl_aif_osr[] = {
+-	{  22050, P_PLL4, 1, 147, 20480 },
+-	{  32000, P_PLL4, 1,   1,    96 },
+-	{  44100, P_PLL4, 1, 147, 10240 },
+-	{  48000, P_PLL4, 1,   1,    64 },
+-	{  88200, P_PLL4, 1, 147,  5120 },
+-	{  96000, P_PLL4, 1,   1,    32 },
+-	{ 176400, P_PLL4, 1, 147,  2560 },
+-	{ 192000, P_PLL4, 1,   1,    16 },
++	{  2822400, P_PLL4, 1, 147, 20480 },
++	{  4096000, P_PLL4, 1,   1,    96 },
++	{  5644800, P_PLL4, 1, 147, 10240 },
++	{  6144000, P_PLL4, 1,   1,    64 },
++	{ 11289600, P_PLL4, 1, 147,  5120 },
++	{ 12288000, P_PLL4, 1,   1,    32 },
++	{ 22579200, P_PLL4, 1, 147,  2560 },
++	{ 24576000, P_PLL4, 1,   1,    16 },
+ 	{ },
+ };
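
The corrected values are OSR bit-clock rates rather than raw sample rates. Assuming PLL4 runs at its usual 393.216 MHz on these SoCs, each entry satisfies freq = parent * m / n and works out to 128x the nominal audio rate:

/* Sanity check for the first corrected entry (PLL4 rate assumed):
 *   393216000 Hz * 147 / 20480 = 2822400 Hz = 128 * 22050 Hz
 * so the old "22050" entry was short by the 128x oversampling factor. */
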
  
--			/* resume signaling for 20 msec */
- 			fusbh200_writel(fusbh200, temp | PORT_RESUME, status_reg);
- 			fusbh200->reset_done[wIndex] = jiffies
--					+ msecs_to_jiffies(20);
-+					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			break;
- 		case USB_PORT_FEAT_C_SUSPEND:
- 			clear_bit(wIndex, &fusbh200->port_c_suspend);
--- 
-2.3.6
-
-
-From c8d7235af46783ee3e312ea5c877ac73de8c435d Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:44:17 -0600
-Subject: [PATCH 078/219] usb: host: uhci: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit b8fb6f79f76f478acbbffccc966daa878f172a0a upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/uhci-hub.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/usb/host/uhci-hub.c b/drivers/usb/host/uhci-hub.c
-index 19ba5ea..7b3d1af 100644
---- a/drivers/usb/host/uhci-hub.c
-+++ b/drivers/usb/host/uhci-hub.c
-@@ -166,7 +166,7 @@ static void uhci_check_ports(struct uhci_hcd *uhci)
- 				/* Port received a wakeup request */
- 				set_bit(port, &uhci->resuming_ports);
- 				uhci->ports_timeout = jiffies +
--						msecs_to_jiffies(25);
-+					msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 				usb_hcd_start_port_resume(
- 						&uhci_to_hcd(uhci)->self, port);
+@@ -360,7 +360,7 @@ static struct clk_branch spdif_clk = {
+ };
  
-@@ -338,7 +338,8 @@ static int uhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
- 			uhci_finish_suspend(uhci, port, port_addr);
+ static struct freq_tbl clk_tbl_ahbix[] = {
+-	{ 131072, P_PLL4, 1, 1, 3 },
++	{ 131072000, P_PLL4, 1, 1, 3 },
+ 	{ },
+ };
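
The ahbix entry gets the same treatment; again assuming PLL4 = 393.216 MHz, 393216000 * 1 / 3 = 131072000 Hz, so the old entry had the right ratio but the wrong magnitude (a kHz-scale number in a table that stores Hz):

/* ahbix: 393216000 Hz * m(1) / n(3) = 131072000 Hz */
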
  
- 			/* USB v2.0 7.1.7.5 */
--			uhci->ports_timeout = jiffies + msecs_to_jiffies(50);
-+			uhci->ports_timeout = jiffies +
-+				msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			break;
- 		case USB_PORT_FEAT_POWER:
- 			/* UHCI has no power switching */
--- 
-2.3.6
-
-
-From fb4655758ba685c5aa07b9af45b18895e3df2a26 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:54:38 -0600
-Subject: [PATCH 079/219] usb: host: fotg210: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 7e136bb71a08e8b8be3bc492f041d9b0bea3856d upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/fotg210-hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
-index 475b21f..7a6681f 100644
---- a/drivers/usb/host/fotg210-hcd.c
-+++ b/drivers/usb/host/fotg210-hcd.c
-@@ -1595,7 +1595,7 @@ static int fotg210_hub_control(
- 			/* resume signaling for 20 msec */
- 			fotg210_writel(fotg210, temp | PORT_RESUME, status_reg);
- 			fotg210->reset_done[wIndex] = jiffies
--					+ msecs_to_jiffies(20);
-+					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			break;
- 		case USB_PORT_FEAT_C_SUSPEND:
- 			clear_bit(wIndex, &fotg210->port_c_suspend);
--- 
-2.3.6
-
-
-From 14c69a53b6c0640d94796b04762ed943e9cf3918 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:58:53 -0600
-Subject: [PATCH 080/219] usb: host: r8a66597: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 7a606ac29752a3e571b83f9b3fceb1eaa1d37781 upstream.
-
-While this driver was already using a 50ms resume
-timeout, let's make sure everybody uses the same
-macro so it's easy to fix later should anything
-go wrong.
-
-It also gives a more "stable" expectation to Linux
-users.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/r8a66597-hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c
-index bdc82fe..54a4170 100644
---- a/drivers/usb/host/r8a66597-hcd.c
-+++ b/drivers/usb/host/r8a66597-hcd.c
-@@ -2301,7 +2301,7 @@ static int r8a66597_bus_resume(struct usb_hcd *hcd)
- 		rh->port &= ~USB_PORT_STAT_SUSPEND;
- 		rh->port |= USB_PORT_STAT_C_SUSPEND << 16;
- 		r8a66597_mdfy(r8a66597, RESUME, RESUME | UACT, dvstctr_reg);
--		msleep(50);
-+		msleep(USB_RESUME_TIMEOUT);
- 		r8a66597_mdfy(r8a66597, UACT, RESUME | UACT, dvstctr_reg);
- 	}
+@@ -386,13 +386,12 @@ static struct clk_rcg ahbix_clk = {
+ 	.freq_tbl = clk_tbl_ahbix,
+ 	.clkr = {
+ 		.enable_reg = 0x38,
+-		.enable_mask = BIT(10), /* toggle the gfmux to select mn/pxo */
++		.enable_mask = BIT(11),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "ahbix",
+ 			.parent_names = lcc_pxo_pll4,
+ 			.num_parents = 2,
+-			.ops = &clk_rcg_ops,
+-			.flags = CLK_SET_RATE_GATE,
++			.ops = &clk_rcg_lcc_ops,
+ 		},
+ 	},
+ };
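
A reading of the ahbix change (an interpretation, not spelled out in the patch): the old code abused enable_mask to flip the glitch-free mux, which is why the BIT(10) comment mentioned the gfmux. With clk_rcg_lcc_ops the mux bit is owned by the enable/disable and set_rate ops shown earlier, so enable_mask can point at BIT(11), presumably the clock's real gate bit, and CLK_SET_RATE_GATE is no longer needed because set_rate is safe on a live clock:

/* Assumed register split after this change:
 *   BIT(10)  GFM select  -- toggled by clk_rcg_lcc_enable()/disable()
 *   BIT(11)  clock gate  -- left to enable_reg/enable_mask           */
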
+diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
+index 51462e8..714d6ba 100644
+--- a/drivers/clk/samsung/clk-exynos4.c
++++ b/drivers/clk/samsung/clk-exynos4.c
+@@ -1354,7 +1354,7 @@ static struct samsung_pll_clock exynos4x12_plls[nr_plls] __initdata = {
+ 			VPLL_LOCK, VPLL_CON0, NULL),
+ };
  
--- 
-2.3.6
-
-
-From 34f698795e94955800a8ba8acdea4a725211a20a Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:50:10 -0600
-Subject: [PATCH 081/219] usb: host: isp116x: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 8c0ae6574ccfd3d619876a65829aad74c9d22ba5 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/isp116x-hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
-index 113d0cc..9ef5644 100644
---- a/drivers/usb/host/isp116x-hcd.c
-+++ b/drivers/usb/host/isp116x-hcd.c
-@@ -1490,7 +1490,7 @@ static int isp116x_bus_resume(struct usb_hcd *hcd)
- 	spin_unlock_irq(&isp116x->lock);
+-static void __init exynos4_core_down_clock(enum exynos4_soc soc)
++static void __init exynos4x12_core_down_clock(void)
+ {
+ 	unsigned int tmp;
  
- 	hcd->state = HC_STATE_RESUMING;
--	msleep(20);
-+	msleep(USB_RESUME_TIMEOUT);
+@@ -1373,11 +1373,9 @@ static void __init exynos4_core_down_clock(enum exynos4_soc soc)
+ 	__raw_writel(tmp, reg_base + PWR_CTRL1);
  
- 	/* Go operational */
- 	spin_lock_irq(&isp116x->lock);
--- 
-2.3.6
-
-
-From 9a0a677ad3526bf0914aecab14423c761e5af9e7 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:39:13 -0600
-Subject: [PATCH 082/219] usb: host: xhci: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit b9e451885deb6262dbaf5cd14aa77d192d9ac759 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Acked-by: Mathias Nyman <mathias.nyman@linux.intel.com>
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/xhci-ring.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
-index 73485fa..eeedde8 100644
---- a/drivers/usb/host/xhci-ring.c
-+++ b/drivers/usb/host/xhci-ring.c
-@@ -1574,7 +1574,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
- 		} else {
- 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
- 			bus_state->resume_done[faked_port_index] = jiffies +
--				msecs_to_jiffies(20);
-+				msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			set_bit(faked_port_index, &bus_state->resuming_ports);
- 			mod_timer(&hcd->rh_timer,
- 				  bus_state->resume_done[faked_port_index]);
--- 
-2.3.6
-
-
-From 426c93ea979c24f4f011351af58d5f5319514493 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 14:42:25 -0600
-Subject: [PATCH 083/219] usb: host: ehci: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit ea16328f80ca8d74434352157f37ef60e2f55ce2 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/ehci-hcd.c | 10 +++++-----
- drivers/usb/host/ehci-hub.c |  9 ++++++---
- 2 files changed, 11 insertions(+), 8 deletions(-)
-
-diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
-index 85e56d1..f4d88df 100644
---- a/drivers/usb/host/ehci-hcd.c
-+++ b/drivers/usb/host/ehci-hcd.c
-@@ -792,12 +792,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
- 					ehci->reset_done[i] == 0))
- 				continue;
+ 	/*
+-	 * Disable the clock up feature on Exynos4x12, in case it was
+-	 * enabled by bootloader.
++	 * Disable the clock up feature in case it was enabled by bootloader.
+ 	 */
+-	if (exynos4_soc == EXYNOS4X12)
+-		__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
++	__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
+ }
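
With the SoC check dropped from the helper, the guard moves to the caller (see the exynos4_clk_init hunk below), which keeps the down-clock setup strictly 4x12-specific:

/* Caller-side guard after this refactor (from the hunk below):
 *	if (soc == EXYNOS4X12)
 *		exynos4x12_core_down_clock();
 */
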
  
--			/* start 20 msec resume signaling from this port,
--			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
--			 * stop that signaling.  Use 5 ms extra for safety,
--			 * like usb_port_resume() does.
-+			/* start USB_RESUME_TIMEOUT msec resume signaling from
-+			 * this port, and make hub_wq collect
-+			 * PORT_STAT_C_SUSPEND to stop that signaling.
- 			 */
--			ehci->reset_done[i] = jiffies + msecs_to_jiffies(25);
-+			ehci->reset_done[i] = jiffies +
-+				msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			set_bit(i, &ehci->resuming_ports);
- 			ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
- 			usb_hcd_start_port_resume(&hcd->self, i);
-diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
-index 87cf86f..7354d01 100644
---- a/drivers/usb/host/ehci-hub.c
-+++ b/drivers/usb/host/ehci-hub.c
-@@ -471,10 +471,13 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
- 		ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
- 	}
+ /* register exynos4 clocks */
+@@ -1474,7 +1472,8 @@ static void __init exynos4_clk_init(struct device_node *np,
+ 	samsung_clk_register_alias(ctx, exynos4_aliases,
+ 			ARRAY_SIZE(exynos4_aliases));
  
--	/* msleep for 20ms only if code is trying to resume port */
-+	/*
-+	 * msleep for USB_RESUME_TIMEOUT ms only if code is trying to resume
-+	 * port
-+	 */
- 	if (resume_needed) {
- 		spin_unlock_irq(&ehci->lock);
--		msleep(20);
-+		msleep(USB_RESUME_TIMEOUT);
- 		spin_lock_irq(&ehci->lock);
- 		if (ehci->shutdown)
- 			goto shutdown;
-@@ -942,7 +945,7 @@ int ehci_hub_control(
- 			temp &= ~PORT_WAKE_BITS;
- 			ehci_writel(ehci, temp | PORT_RESUME, status_reg);
- 			ehci->reset_done[wIndex] = jiffies
--					+ msecs_to_jiffies(20);
-+					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			set_bit(wIndex, &ehci->resuming_ports);
- 			usb_hcd_start_port_resume(&hcd->self, wIndex);
- 			break;
--- 
-2.3.6
-
-
-From 6a0ecbeea7d077ae4e49c3a1ef03a38bb91c5218 Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 15:00:38 -0600
-Subject: [PATCH 084/219] usb: host: sl811: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 08debfb13b199716da6153940c31968c556b195d upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/host/sl811-hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
-index 4f4ba1e..9118cd8 100644
---- a/drivers/usb/host/sl811-hcd.c
-+++ b/drivers/usb/host/sl811-hcd.c
-@@ -1259,7 +1259,7 @@ sl811h_hub_control(
- 			sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1);
+-	exynos4_core_down_clock(soc);
++	if (soc == EXYNOS4X12)
++		exynos4x12_core_down_clock();
+ 	exynos4_clk_sleep_init();
  
- 			mod_timer(&sl811->timer, jiffies
--					+ msecs_to_jiffies(20));
-+					+ msecs_to_jiffies(USB_RESUME_TIMEOUT));
- 			break;
- 		case USB_PORT_FEAT_POWER:
- 			port_power(sl811, 0);
--- 
-2.3.6
-
-
-From 8271acf33346951d281a428ae8a40f20750e789f Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 15:03:13 -0600
-Subject: [PATCH 085/219] usb: dwc2: hcd: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 74bd7b69801819707713b88e9d0bc074efa2f5e7 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/dwc2/hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
-index c78c874..758b7e0 100644
---- a/drivers/usb/dwc2/hcd.c
-+++ b/drivers/usb/dwc2/hcd.c
-@@ -1521,7 +1521,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
- 			dev_dbg(hsotg->dev,
- 				"ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
- 			writel(0, hsotg->regs + PCGCTL);
--			usleep_range(20000, 40000);
-+			msleep(USB_RESUME_TIMEOUT);
+ 	samsung_clk_of_add_provider(np, ctx);
+diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
+index 9a893f2..23ce0af 100644
+--- a/drivers/clk/tegra/clk-tegra124.c
++++ b/drivers/clk/tegra/clk-tegra124.c
+@@ -1110,16 +1110,18 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base,
+ 					1, 2);
+ 	clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
  
- 			hprt0 = dwc2_read_hprt0(hsotg);
- 			hprt0 |= HPRT0_RES;
--- 
-2.3.6
-
-
-From b6053a1546ea879b47c346628cf40401bcf9e27e Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 15:04:06 -0600
-Subject: [PATCH 086/219] usb: isp1760: hcd: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit 59c9904cce77b55892e15f40791f1e66e4d3a1e6 upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/isp1760/isp1760-hcd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
-index 3cb98b1..7911b6b 100644
---- a/drivers/usb/isp1760/isp1760-hcd.c
-+++ b/drivers/usb/isp1760/isp1760-hcd.c
-@@ -1869,7 +1869,7 @@ static int isp1760_hub_control(struct usb_hcd *hcd, u16 typeReq,
- 				reg_write32(hcd->regs, HC_PORTSC1,
- 							temp | PORT_RESUME);
- 				priv->reset_done = jiffies +
--					msecs_to_jiffies(20);
-+					msecs_to_jiffies(USB_RESUME_TIMEOUT);
- 			}
- 			break;
- 		case USB_PORT_FEAT_C_SUSPEND:
--- 
-2.3.6
-
-
-From 1eeba7304a3e8070983c3a9f757a6b51236a64de Mon Sep 17 00:00:00 2001
-From: Felipe Balbi <balbi@ti.com>
-Date: Fri, 13 Feb 2015 15:38:33 -0600
-Subject: [PATCH 087/219] usb: core: hub: use new USB_RESUME_TIMEOUT
-Cc: mpagano@gentoo.org
-
-commit bbc78c07a51f6fd29c227b1220a9016e585358ba upstream.
-
-Make sure we're using the new macro, so our
-resume signaling will always pass certification.
-
-Signed-off-by: Felipe Balbi <balbi@ti.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/usb/core/hub.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
-index d7c3d5a..3b71516 100644
---- a/drivers/usb/core/hub.c
-+++ b/drivers/usb/core/hub.c
-@@ -3406,10 +3406,10 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
- 	if (status) {
- 		dev_dbg(&port_dev->dev, "can't resume, status %d\n", status);
- 	} else {
--		/* drive resume for at least 20 msec */
-+		/* drive resume for USB_RESUME_TIMEOUT msec */
- 		dev_dbg(&udev->dev, "usb %sresume\n",
- 				(PMSG_IS_AUTO(msg) ? "auto-" : ""));
--		msleep(25);
-+		msleep(USB_RESUME_TIMEOUT);
+-	clk = clk_register_gate(NULL, "plld_dsi", "plld_out0", 0,
++	clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
+ 				clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
+-	clks[TEGRA124_CLK_PLLD_DSI] = clk;
++	clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
  
- 		/* Virtual root hubs can trigger on GET_PORT_STATUS to
- 		 * stop resume signaling.  Then finish the resume
--- 
-2.3.6
-
-
-From f5a652339c3ff18b6184d0ee02f7f0eef2ebe681 Mon Sep 17 00:00:00 2001
-From: Boris Brezillon <boris.brezillon@free-electrons.com>
-Date: Sun, 29 Mar 2015 03:45:33 +0200
-Subject: [PATCH 088/219] clk: at91: usb: propagate rate modification to the
- parent clk
-Cc: mpagano@gentoo.org
-
-commit 4591243102faa8de92da320edea47219901461e9 upstream.
-
-The at91sam9n12 and at91sam9x5 usb clocks do not propagate rate
-modification requests to their parents.
-This causes a bug when the PLLB is left uninitialized by the bootloader
-(PLL multiplier set to 0, or in other words, PLL rate = 0 Hz).
-
-Implement the determine_rate method and propagate the rate change
-request to the parent clk.
-
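
The shape of the .determine_rate implementation this adds, condensed as a sketch (4.0-era clk_ops signature, error handling omitted):

/* For each candidate parent:
 *   for each divider div in 1..SAM9X5_USB_MAX_DIV+1:
 *     ask the parent to round rate * div, derive tmp_rate = rounded / div,
 *     and keep the (parent, parent_rate, div) combination giving the
 *     smallest |tmp_rate - rate|; propagation happens because the chosen
 *     best_parent_rate is handed back to the framework. */
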
-Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
-Reported-by: Bo Shen <voice.shen@atmel.com>
-Tested-by: Bo Shen <voice.shen@atmel.com>
-Signed-off-by: Michael Turquette <mturquette@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/at91/clk-usb.c | 64 +++++++++++++++++++++++++++++++++++-----------
- 1 file changed, 49 insertions(+), 15 deletions(-)
-
-diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
-index a23ac0c..0b7c3e8 100644
---- a/drivers/clk/at91/clk-usb.c
-+++ b/drivers/clk/at91/clk-usb.c
-@@ -56,22 +56,55 @@ static unsigned long at91sam9x5_clk_usb_recalc_rate(struct clk_hw *hw,
- 	return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1));
- }
+-	clk = tegra_clk_register_periph_gate("dsia", "plld_dsi", 0, clk_base,
+-					     0, 48, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsia", "pll_d_dsi_out", 0,
++					     clk_base, 0, 48,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIA] = clk;
  
--static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate,
--					  unsigned long *parent_rate)
-+static long at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
-+					      unsigned long rate,
-+					      unsigned long min_rate,
-+					      unsigned long max_rate,
-+					      unsigned long *best_parent_rate,
-+					      struct clk_hw **best_parent_hw)
- {
--	unsigned long div;
-+	struct clk *parent = NULL;
-+	long best_rate = -EINVAL;
-+	unsigned long tmp_rate;
-+	int best_diff = -1;
-+	int tmp_diff;
-+	int i;
+-	clk = tegra_clk_register_periph_gate("dsib", "plld_dsi", 0, clk_base,
+-					     0, 82, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsib", "pll_d_dsi_out", 0,
++					     clk_base, 0, 82,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIB] = clk;
  
--	if (!rate)
--		return -EINVAL;
-+	for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
-+		int div;
+ 	/* emc mux */
+diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
+index 9ddb754..7a1df61 100644
+--- a/drivers/clk/tegra/clk.c
++++ b/drivers/clk/tegra/clk.c
+@@ -272,7 +272,7 @@ void __init tegra_add_of_provider(struct device_node *np)
+ 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
  
--	if (rate >= *parent_rate)
--		return *parent_rate;
-+		parent = clk_get_parent_by_index(hw->clk, i);
-+		if (!parent)
-+			continue;
+ 	rst_ctlr.of_node = np;
+-	rst_ctlr.nr_resets = clk_num * 32;
++	rst_ctlr.nr_resets = periph_banks * 32;
+ 	reset_controller_register(&rst_ctlr);
+ }
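
An inference from the identifiers (not spelled out in the patch): Tegra reset lines are grouped 32 per peripheral register bank, so the reset controller should size itself from periph_banks; clk_num counts clock IDs, a different and unrelated quantity:

/* nr_resets = periph_banks * 32: one 32-bit reset bank per periph
 * bank (naming assumed); sizing by clk_num advertised reset IDs that
 * do not exist. */
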
+ 
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 42f95a4..9a28b7e 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -554,15 +554,23 @@ static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
+ 	return err;
+ }
+ 
+-static int omap_aes_check_aligned(struct scatterlist *sg)
++static int omap_aes_check_aligned(struct scatterlist *sg, int total)
+ {
++	int len = 0;
 +
-+		for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
-+			unsigned long tmp_parent_rate;
+ 	while (sg) {
+ 		if (!IS_ALIGNED(sg->offset, 4))
+ 			return -1;
+ 		if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
+ 			return -1;
 +
-+			tmp_parent_rate = rate * div;
-+			tmp_parent_rate = __clk_round_rate(parent,
-+							   tmp_parent_rate);
-+			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
-+			if (tmp_rate < rate)
-+				tmp_diff = rate - tmp_rate;
-+			else
-+				tmp_diff = tmp_rate - rate;
++		len += sg->length;
+ 		sg = sg_next(sg);
+ 	}
 +
-+			if (best_diff < 0 || best_diff > tmp_diff) {
-+				best_rate = tmp_rate;
-+				best_diff = tmp_diff;
-+				*best_parent_rate = tmp_parent_rate;
-+				*best_parent_hw = __clk_get_hw(parent);
-+			}
++	if (len != total)
++		return -1;
 +
-+			if (!best_diff || tmp_rate < rate)
-+				break;
-+		}
+ 	return 0;
+ }
  
--	div = DIV_ROUND_CLOSEST(*parent_rate, rate);
--	if (div > SAM9X5_USB_MAX_DIV + 1)
--		div = SAM9X5_USB_MAX_DIV + 1;
-+		if (!best_diff)
-+			break;
-+	}
+@@ -633,8 +641,8 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
+ 	dd->in_sg = req->src;
+ 	dd->out_sg = req->dst;
  
--	return DIV_ROUND_CLOSEST(*parent_rate, div);
-+	return best_rate;
- }
+-	if (omap_aes_check_aligned(dd->in_sg) ||
+-	    omap_aes_check_aligned(dd->out_sg)) {
++	if (omap_aes_check_aligned(dd->in_sg, dd->total) ||
++	    omap_aes_check_aligned(dd->out_sg, dd->total)) {
+ 		if (omap_aes_copy_sgs(dd))
+ 			pr_err("Failed to copy SGs for unaligned cases\n");
+ 		dd->sgs_copied = 1;
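
The new total argument closes a hole where a scatterlist could be perfectly aligned yet shorter than the byte count the request claims. A generic form of the check, as a sketch (names invented; the driver itself returns -1/0 instead):

#include <linux/kernel.h>
#include <linux/scatterlist.h>

/* Returns true only if every segment is usable for DMA and the chain
 * really covers "total" bytes; short chains must take the copy path. */
static bool sg_chain_usable(struct scatterlist *sg, int total, int align)
{
	int len = 0;

	for (; sg; sg = sg_next(sg)) {
		if (!IS_ALIGNED(sg->offset, 4) ||
		    !IS_ALIGNED(sg->length, align))
			return false;
		len += sg->length;
	}
	return len == total;
}
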
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index d0bc123..1a54205 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -320,11 +320,13 @@ static void mvebu_gpio_edge_irq_mask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
  
- static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index)
-@@ -121,7 +154,7 @@ static int at91sam9x5_clk_usb_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
  
- static const struct clk_ops at91sam9x5_usb_ops = {
- 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
--	.round_rate = at91sam9x5_clk_usb_round_rate,
-+	.determine_rate = at91sam9x5_clk_usb_determine_rate,
- 	.get_parent = at91sam9x5_clk_usb_get_parent,
- 	.set_parent = at91sam9x5_clk_usb_set_parent,
- 	.set_rate = at91sam9x5_clk_usb_set_rate,
-@@ -159,7 +192,7 @@ static const struct clk_ops at91sam9n12_usb_ops = {
- 	.disable = at91sam9n12_clk_usb_disable,
- 	.is_enabled = at91sam9n12_clk_usb_is_enabled,
- 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
--	.round_rate = at91sam9x5_clk_usb_round_rate,
-+	.determine_rate = at91sam9x5_clk_usb_determine_rate,
- 	.set_rate = at91sam9x5_clk_usb_set_rate,
- };
- 
-@@ -179,7 +212,8 @@ at91sam9x5_clk_register_usb(struct at91_pmc *pmc, const char *name,
- 	init.ops = &at91sam9x5_usb_ops;
- 	init.parent_names = parent_names;
- 	init.num_parents = num_parents;
--	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE;
-+	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE |
-+		     CLK_SET_RATE_PARENT;
+@@ -332,11 +334,13 @@ static void mvebu_gpio_edge_irq_unmask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
  
- 	usb->hw.init = &init;
- 	usb->pmc = pmc;
-@@ -207,7 +241,7 @@ at91sam9n12_clk_register_usb(struct at91_pmc *pmc, const char *name,
- 	init.ops = &at91sam9n12_usb_ops;
- 	init.parent_names = &parent_name;
- 	init.num_parents = 1;
--	init.flags = CLK_SET_RATE_GATE;
-+	init.flags = CLK_SET_RATE_GATE | CLK_SET_RATE_PARENT;
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
  
- 	usb->hw.init = &init;
- 	usb->pmc = pmc;
--- 
-2.3.6
-
-
-From ffa5893889612e5d65e456c0b433d0160d46c4eb Mon Sep 17 00:00:00 2001
-From: Yves-Alexis Perez <corsac@debian.org>
-Date: Sat, 11 Apr 2015 09:31:35 +0200
-Subject: [PATCH 089/219] ALSA: hda - Add dock support for ThinkPad X250
- (17aa:2226)
-Cc: mpagano@gentoo.org
-
-commit c0278669fb61596cc1a10ab8686d27c37269c37b upstream.
-
-This model uses the same dock port as the previous generation.
-
-Signed-off-by: Yves-Alexis Perez <corsac@debian.org>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/hda/patch_realtek.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index f9d12c0..3ad85c7 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -5047,6 +5047,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
-+	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
- 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
- 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
--- 
-2.3.6
-
-
-From 0b586ed327f10ed037bf84381cbdb16754d7bdfd Mon Sep 17 00:00:00 2001
-From: Adam Honse <calcprogrammer1@gmail.com>
-Date: Sun, 12 Apr 2015 01:03:07 -0500
-Subject: [PATCH 090/219] ALSA: usb-audio: Don't attempt to get Microsoft
- Lifecam Cinema sample rate
-Cc: mpagano@gentoo.org
-
-commit eef0342cf32689f77d78ee3302999e5caaa6a8f3 upstream.
-
-Adds Microsoft LifeCam Cinema USB ID to the snd_usb_get_sample_rate_quirk list as the Lifecam Cinema does not appear to support getting the sample rate.
-
-Fixes the issue where the LifeCam Cinema would wait for a USB timeout and log the message "cannot get freq at ep 0x82" when accessed.
-
-Addresses bug report https://bugzilla.kernel.org/show_bug.cgi?id=95961.
-
-Signed-off-by: Adam Honse <calcprogrammer1@gmail.com>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/usb/quirks.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
-index 9a28365..32631a8 100644
---- a/sound/usb/quirks.c
-+++ b/sound/usb/quirks.c
-@@ -1115,6 +1115,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+@@ -344,11 +348,13 @@ static void mvebu_gpio_level_irq_mask(struct irq_data *d)
  {
- 	/* devices which do not support reading the sample rate. */
- 	switch (chip->usb_id) {
-+	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
- 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
- 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
- 		return true;
--- 
-2.3.6
-
-
-From 15c97265c67f27eef7d92262964a43e0aff8df61 Mon Sep 17 00:00:00 2001
-From: Michael Gernoth <michael@gernoth.net>
-Date: Thu, 9 Apr 2015 23:42:15 +0200
-Subject: [PATCH 091/219] ALSA: emu10k1: don't deadlock in proc-functions
-Cc: mpagano@gentoo.org
-
-commit 91bf0c2dcb935a87e5c0795f5047456b965fd143 upstream.
-
-The functions snd_emu10k1_proc_spdif_read and snd_emu_proc_emu1010_reg_read
-acquire the emu_lock before accessing the FPGA. The function used
-to access the FPGA (snd_emu1010_fpga_read) also tries to take
-the emu_lock, which causes a deadlock.
-Remove the outer locking in the proc-functions (guarding only the
-already safe FPGA read) to prevent this deadlock.
-
-[removed superfluous flags variables too -- tiwai]
-
-Signed-off-by: Michael Gernoth <michael@gernoth.net>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/emu10k1/emuproc.c | 12 ------------
- 1 file changed, 12 deletions(-)
-
-diff --git a/sound/pci/emu10k1/emuproc.c b/sound/pci/emu10k1/emuproc.c
-index 2ca9f2e..53745f4 100644
---- a/sound/pci/emu10k1/emuproc.c
-+++ b/sound/pci/emu10k1/emuproc.c
-@@ -241,31 +241,22 @@ static void snd_emu10k1_proc_spdif_read(struct snd_info_entry *entry,
- 	struct snd_emu10k1 *emu = entry->private_data;
- 	u32 value;
- 	u32 value2;
--	unsigned long flags;
- 	u32 rate;
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
  
- 	if (emu->card_capabilities->emu_model) {
--		spin_lock_irqsave(&emu->emu_lock, flags);
- 		snd_emu1010_fpga_read(emu, 0x38, &value);
--		spin_unlock_irqrestore(&emu->emu_lock, flags);
- 		if ((value & 0x1) == 0) {
--			spin_lock_irqsave(&emu->emu_lock, flags);
- 			snd_emu1010_fpga_read(emu, 0x2a, &value);
- 			snd_emu1010_fpga_read(emu, 0x2b, &value2);
--			spin_unlock_irqrestore(&emu->emu_lock, flags);
- 			rate = 0x1770000 / (((value << 5) | value2)+1);	
- 			snd_iprintf(buffer, "ADAT Locked : %u\n", rate);
- 		} else {
- 			snd_iprintf(buffer, "ADAT Unlocked\n");
- 		}
--		spin_lock_irqsave(&emu->emu_lock, flags);
- 		snd_emu1010_fpga_read(emu, 0x20, &value);
--		spin_unlock_irqrestore(&emu->emu_lock, flags);
- 		if ((value & 0x4) == 0) {
--			spin_lock_irqsave(&emu->emu_lock, flags);
- 			snd_emu1010_fpga_read(emu, 0x28, &value);
- 			snd_emu1010_fpga_read(emu, 0x29, &value2);
--			spin_unlock_irqrestore(&emu->emu_lock, flags);
- 			rate = 0x1770000 / (((value << 5) | value2)+1);	
- 			snd_iprintf(buffer, "SPDIF Locked : %d\n", rate);
- 		} else {
-@@ -410,14 +401,11 @@ static void snd_emu_proc_emu1010_reg_read(struct snd_info_entry *entry,
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -356,11 +362,13 @@ static void mvebu_gpio_level_irq_unmask(struct irq_data *d)
  {
- 	struct snd_emu10k1 *emu = entry->private_data;
- 	u32 value;
--	unsigned long flags;
- 	int i;
- 	snd_iprintf(buffer, "EMU1010 Registers:\n\n");
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
  
- 	for(i = 0; i < 0x40; i+=1) {
--		spin_lock_irqsave(&emu->emu_lock, flags);
- 		snd_emu1010_fpga_read(emu, i, &value);
--		spin_unlock_irqrestore(&emu->emu_lock, flags);
- 		snd_iprintf(buffer, "%02X: %08X, %02X\n", i, value, (value >> 8) & 0x7f);
- 	}
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
  }
--- 
-2.3.6
-
-
-From 0933e9dd839f4d37d408d9365266940928a73a8c Mon Sep 17 00:00:00 2001
-From: Jo-Philipp Wich <jow@openwrt.org>
-Date: Mon, 13 Apr 2015 12:47:26 +0200
-Subject: [PATCH 092/219] ALSA: hda/realtek - Enable the ALC292 dock fixup on
- the Thinkpad T450
-Cc: mpagano@gentoo.org
-
-commit f2aa111041ce36b94e651d882458dea502e76721 upstream.
-
-The Lenovo Thinkpad T450 requires the ALC292_FIXUP_TPT440_DOCK as well in
-order to get working sound output on the docking stations headphone jack.
-
-Patch tested on a Thinkpad T450 (20BVCTO1WW) using kernel 4.0-rc7 in
-conjunction with a ThinkPad Ultradock.
-
-Signed-off-by: Jo-Philipp Wich <jow@openwrt.org>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/hda/patch_realtek.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index 3ad85c7..f37e4ea 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -5054,6 +5054,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
- 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
-+	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
- 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
- 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
--- 
-2.3.6
-
-
-From cb927a0ae496171966921e084eb7f6c2dc04e43b Mon Sep 17 00:00:00 2001
-From: David Henningsson <david.henningsson@canonical.com>
-Date: Tue, 21 Apr 2015 10:48:46 +0200
-Subject: [PATCH 093/219] ALSA: hda - fix "num_steps = 0" error on ALC256
-Cc: mpagano@gentoo.org
-
-commit 7d1b6e29327428993ba568bdd8c66734070f45e0 upstream.
-
-The ALC256 does not have a mixer nid at 0x0b, and there's no
-loopback path (the output pins are directly connected to the DACs).
-
-This commit fixes a "num_steps = 0 for NID=0xb (ctl = Beep Playback Volume)"
-error (and as a result, problems with amixer/alsamixer).
-
-If there's pcbeep functionality, it certainly isn't controlled by setting an
-amp on 0x0b, so disable beep functionality (at least for now).
-
-BugLink: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1446517
-Signed-off-by: David Henningsson <david.henningsson@canonical.com>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/hda/patch_realtek.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index f37e4ea..b46bb84 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -5565,6 +5565,7 @@ static int patch_alc269(struct hda_codec *codec)
- 		break;
- 	case 0x10ec0256:
- 		spec->codec_variant = ALC269_TYPE_ALC256;
-+		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
- 		break;
- 	}
  
-@@ -5578,8 +5579,8 @@ static int patch_alc269(struct hda_codec *codec)
- 	if (err < 0)
- 		goto error;
+diff --git a/drivers/gpu/drm/exynos/exynos_dp_core.c b/drivers/gpu/drm/exynos/exynos_dp_core.c
+index bf17a60..1dbfba5 100644
+--- a/drivers/gpu/drm/exynos/exynos_dp_core.c
++++ b/drivers/gpu/drm/exynos/exynos_dp_core.c
+@@ -32,10 +32,16 @@
+ #include <drm/bridge/ptn3460.h>
  
--	if (!spec->gen.no_analog && spec->gen.beep_nid)
--		set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT);
-+	if (!spec->gen.no_analog && spec->gen.beep_nid && spec->gen.mixer_nid)
-+		set_beep_amp(spec, spec->gen.mixer_nid, 0x04, HDA_INPUT);
+ #include "exynos_dp_core.h"
++#include "exynos_drm_fimd.h"
  
- 	codec->patch_ops = alc_patch_ops;
- #ifdef CONFIG_PM
--- 
-2.3.6
-
-
-From c7a98726965179726bbd105e5ff6465c1d09ace1 Mon Sep 17 00:00:00 2001
-From: Kailang Yang <kailang@realtek.com>
-Date: Thu, 23 Apr 2015 15:10:53 +0800
-Subject: [PATCH 094/219] ALSA: hda/realtek - Fix Headphone Mic doesn't
- recording for ALC256
-Cc: mpagano@gentoo.org
-
-commit d32b66668c702aed0e330dc5ca186afbadcdacf8 upstream.
-
-Switch default pcbeep path to Line in path.
-
-Signed-off-by: Kailang Yang <kailang@realtek.com>
-Tested-by: Hui Wang <hui.wang@canonical.com>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/hda/patch_realtek.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index b46bb84..2210e1b 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -5566,6 +5566,7 @@ static int patch_alc269(struct hda_codec *codec)
- 	case 0x10ec0256:
- 		spec->codec_variant = ALC269_TYPE_ALC256;
- 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
-+		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
- 		break;
- 	}
+ #define ctx_from_connector(c)	container_of(c, struct exynos_dp_device, \
+ 					connector)
  
--- 
-2.3.6
-
-
-From ca7d80c841febeb3688d5ed57660d37b4baedad5 Mon Sep 17 00:00:00 2001
-From: Hui Wang <hui.wang@canonical.com>
-Date: Fri, 24 Apr 2015 13:39:59 +0800
-Subject: [PATCH 095/219] ALSA: hda - fix headset mic detection problem for one
- more machine
-Cc: mpagano@gentoo.org
-
-commit e8191a8e475551b277d85cd76c3f0f52fdf42e86 upstream.
-
-We have two machines with the ALC256 codec in the pin quirk table, so
-move the common pins to ALC256_STANDARD_PINS.
-
-BugLink: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1447909
-Signed-off-by: Hui Wang <hui.wang@canonical.com>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/pci/hda/patch_realtek.c | 24 +++++++++++++++---------
- 1 file changed, 15 insertions(+), 9 deletions(-)
-
-diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index 2210e1b..2fd490b 100644
---- a/sound/pci/hda/patch_realtek.c
-+++ b/sound/pci/hda/patch_realtek.c
-@@ -5144,6 +5144,16 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
- 	{0x1b, 0x411111f0}, \
- 	{0x1e, 0x411111f0}
++static inline struct exynos_drm_crtc *dp_to_crtc(struct exynos_dp_device *dp)
++{
++	return to_exynos_crtc(dp->encoder->crtc);
++}
++
+ static inline struct exynos_dp_device *
+ display_to_dp(struct exynos_drm_display *d)
+ {
+@@ -1070,6 +1076,8 @@ static void exynos_dp_poweron(struct exynos_dp_device *dp)
+ 		}
+ 	}
  
-+#define ALC256_STANDARD_PINS \
-+	{0x12, 0x90a60140}, \
-+	{0x14, 0x90170110}, \
-+	{0x19, 0x411111f0}, \
-+	{0x1a, 0x411111f0}, \
-+	{0x1b, 0x411111f0}, \
-+	{0x1d, 0x40700001}, \
-+	{0x1e, 0x411111f0}, \
-+	{0x21, 0x02211020}
++	fimd_dp_clock_enable(dp_to_crtc(dp), true);
 +
- #define ALC282_STANDARD_PINS \
- 	{0x14, 0x90170110}, \
- 	{0x18, 0x411111f0}, \
-@@ -5237,15 +5247,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
- 		{0x1d, 0x40700001},
- 		{0x21, 0x02211050}),
- 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
--		{0x12, 0x90a60140},
--		{0x13, 0x40000000},
--		{0x14, 0x90170110},
--		{0x19, 0x411111f0},
--		{0x1a, 0x411111f0},
--		{0x1b, 0x411111f0},
--		{0x1d, 0x40700001},
--		{0x1e, 0x411111f0},
--		{0x21, 0x02211020}),
-+		ALC256_STANDARD_PINS,
-+		{0x13, 0x40000000}),
-+	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
-+		ALC256_STANDARD_PINS,
-+		{0x13, 0x411111f0}),
- 	SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4,
- 		{0x12, 0x90a60130},
- 		{0x13, 0x40000000},
--- 
-2.3.6
-
-
-From 53c20b74579ec9bb7b45b2208fce79df09e8bdfb Mon Sep 17 00:00:00 2001
-From: Ulrik De Bie <ulrik.debie-os@e2big.org>
-Date: Mon, 6 Apr 2015 15:35:38 -0700
-Subject: [PATCH 096/219] Input: elantech - fix absolute mode setting on some
- ASUS laptops
-Cc: mpagano@gentoo.org
-
-commit bd884149aca61de269fd9bad83fe2a4232ffab21 upstream.
-
-On ASUS TP500LN and X750JN, the touchpad absolute mode is reset each
-time set_rate is done.
-
-In order to fix this, we verify the firmware version, and if it
-matches the one in those laptops, the set_rate function is overridden
-with elantech_set_rate_restore_reg_07, which performs the original
-set_rate and then restores reg_07 (the register that selects
-absolute mode on elantech v4 hardware).
-
-Also the ASUS TP500LN and X750JN firmware version, capabilities, and
-button constellation are added to elantech.c.
-
-Reported-and-tested-by: George Moutsopoulos <gmoutso@yahoo.co.uk>
-Signed-off-by: Ulrik De Bie <ulrik.debie-os@e2big.org>
-Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/input/mouse/elantech.c | 22 ++++++++++++++++++++++
- drivers/input/mouse/elantech.h |  1 +
- 2 files changed, 23 insertions(+)
-
-diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
-index 6e22682..991dc6b 100644
---- a/drivers/input/mouse/elantech.c
-+++ b/drivers/input/mouse/elantech.c
-@@ -893,6 +893,21 @@ static psmouse_ret_t elantech_process_byte(struct psmouse *psmouse)
- }
+ 	clk_prepare_enable(dp->clock);
+ 	exynos_dp_phy_init(dp);
+ 	exynos_dp_init_dp(dp);
+@@ -1094,6 +1102,8 @@ static void exynos_dp_poweroff(struct exynos_dp_device *dp)
+ 	exynos_dp_phy_exit(dp);
+ 	clk_disable_unprepare(dp->clock);
  
- /*
-+ * This writes the reg_07 value again to the hardware at the end of every
-+ * set_rate call because the register loses its value. reg_07 allows setting
-+ * absolute mode on v4 hardware
-+ */
-+static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
-+		unsigned int rate)
-+{
-+	struct elantech_data *etd = psmouse->private;
++	fimd_dp_clock_enable(dp_to_crtc(dp), false);
 +
-+	etd->original_set_rate(psmouse, rate);
-+	if (elantech_write_reg(psmouse, 0x07, etd->reg_07))
-+		psmouse_err(psmouse, "restoring reg_07 failed\n");
+ 	if (dp->panel) {
+ 		if (drm_panel_unprepare(dp->panel))
+ 			DRM_ERROR("failed to turnoff the panel\n");
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index 33a10ce..5d58f6c 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -32,6 +32,7 @@
+ #include "exynos_drm_fbdev.h"
+ #include "exynos_drm_crtc.h"
+ #include "exynos_drm_iommu.h"
++#include "exynos_drm_fimd.h"
+ 
+ /*
+  * FIMD stands for Fully Interactive Mobile Display and
+@@ -1233,6 +1234,24 @@ static int fimd_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
++{
++	struct fimd_context *ctx = crtc->ctx;
++	u32 val;
++
++	/*
++	 * Only Exynos 5250, 5260, 5410 and 542x require enabling the DP/MIE
++	 * clock. On these SoCs the bootloader may enable it but any
++	 * power domain off/on will reset it to disable state.
++	 */
++	if (ctx->driver_data != &exynos5_fimd_driver_data)
++		return;
++
++	val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++	writel(val, ctx->regs + DP_MIE_CLKCON);
 +}
++EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
 +
+ struct platform_driver fimd_driver = {
+ 	.probe		= fimd_probe,
+ 	.remove		= fimd_remove,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.h b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+new file mode 100644
+index 0000000..b4fcaa5
+--- /dev/null
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+@@ -0,0 +1,15 @@
 +/*
-  * Put the touchpad into absolute mode
-  */
- static int elantech_set_absolute_mode(struct psmouse *psmouse)
-@@ -1094,6 +1109,8 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse,
-  * Asus K53SV              0x450f01        78, 15, 0c      2 hw buttons
-  * Asus G46VW              0x460f02        00, 18, 0c      2 hw buttons
-  * Asus G750JX             0x360f00        00, 16, 0c      2 hw buttons
-+ * Asus TP500LN            0x381f17        10, 14, 0e      clickpad
-+ * Asus X750JN             0x381f17        10, 14, 0e      clickpad
-  * Asus UX31               0x361f00        20, 15, 0e      clickpad
-  * Asus UX32VD             0x361f02        00, 15, 0e      clickpad
-  * Avatar AVIU-145A2       0x361f00        ?               clickpad
-@@ -1635,6 +1652,11 @@ int elantech_init(struct psmouse *psmouse)
- 		goto init_fail;
- 	}
- 
-+	if (etd->fw_version == 0x381f17) {
-+		etd->original_set_rate = psmouse->set_rate;
-+		psmouse->set_rate = elantech_set_rate_restore_reg_07;
-+	}
++ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
++ *
++ * This program is free software; you can redistribute  it and/or modify it
++ * under  the terms of  the GNU General  Public License as published by the
++ * Free Software Foundation;  either version 2 of the  License, or (at your
++ * option) any later version.
++ */
 +
- 	if (elantech_set_input_params(psmouse)) {
- 		psmouse_err(psmouse, "failed to query touchpad range.\n");
- 		goto init_fail;
-diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h
-index 6f3afec..f965d15 100644
---- a/drivers/input/mouse/elantech.h
-+++ b/drivers/input/mouse/elantech.h
-@@ -142,6 +142,7 @@ struct elantech_data {
- 	struct finger_pos mt[ETP_MAX_FINGERS];
- 	unsigned char parity[256];
- 	int (*send_cmd)(struct psmouse *psmouse, unsigned char c, unsigned char *param);
-+	void (*original_set_rate)(struct psmouse *psmouse, unsigned int rate);
- };
++#ifndef _EXYNOS_DRM_FIMD_H_
++#define _EXYNOS_DRM_FIMD_H_
++
++extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
++
++#endif /* _EXYNOS_DRM_FIMD_H_ */
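
Ordering note for the two exynos_dp hunks above: the DP/MIE gate lives in FIMD register space and is reset by every power-domain cycle, so it is re-enabled before the DP clock and PHY come up, and dropped last on the way down. Condensed:

/* poweron:                              poweroff:
 *   fimd_dp_clock_enable(crtc, true);     exynos_dp_phy_exit(dp);
 *   clk_prepare_enable(dp->clock);        clk_disable_unprepare(dp->clock);
 *   exynos_dp_phy_init(dp);               fimd_dp_clock_enable(crtc, false);
 */
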
+diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
+index fa140e0..60ab1f7 100644
+--- a/drivers/gpu/drm/i2c/adv7511.c
++++ b/drivers/gpu/drm/i2c/adv7511.c
+@@ -33,6 +33,7 @@ struct adv7511 {
  
- #ifdef CONFIG_MOUSE_PS2_ELANTECH
--- 
-2.3.6
-
-
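For reference, the hook pattern the elantech change above relies on can be sketched as a standalone handler (names follow the patch; the actual register restore is elided, so this is an illustrative sketch rather than the verbatim kernel code): save the original set_rate handler, install a wrapper that chains to it and then undoes its side effect.

	static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
						     unsigned int rate)
	{
		struct elantech_data *etd = psmouse->private;

		/* chain to the saved handler, then undo its side effect */
		etd->original_set_rate(psmouse, rate);
		/* re-write register 0x07 here; the rate change clobbers it */
	}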
-From 93ab611572eae4cb426cf006c70a7c7216603cfe Mon Sep 17 00:00:00 2001
-From: Hans de Goede <hdegoede@redhat.com>
-Date: Wed, 8 Apr 2015 09:26:42 -0700
-Subject: [PATCH 097/219] Input: alps - fix touchpad buttons getting stuck when
- used with trackpoint
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 6bcca19f5dcedc3a006ca0bcc3699a437cadee74 upstream.
-
-When the left touchpad button gets pressed, and then the trackpoint is
-moved, and then the button is released, the following happens:
-
-1) touchpad packet is received, touchpad evdev node reports BTN_LEFT 1
-
-2) pointing stick packet is received, the hw will report a BTN_LEFT 1 in
-   this packet because when the trackstick is active it communicates the
-   combined touchpad + pointing stick buttons in the trackstick packet,
-   since alps_report_bare_ps2_packet passes NULL (*) for the dev2 parameter
-   to alps_report_buttons, the combining is not detected and the
-   pointing stick evdev node will also report BTN_LEFT 1
-
-3) on release of the button a pointing stick packet with BTN_LEFT 0 is
-   received and the pointing stick evdev node will report BTN_LEFT 0
-
-Note how, because NULL is passed for dev2, the touchpad evdev node
-will never send BTN_LEFT 0 in this scenario, leading to a stuck mouse button.
-
-This is a regression in 4.0 introduced by commit 04aae283ba6a8
-("Input: ALPS - do not mix trackstick and external PS/2 mouse data")
-
-This commit fixes this by passing in the touchpad evdev as dev2 parameter
-when calling alps_report_buttons for the pointingstick on alps v2 devices,
-so that alps_report_buttons correctly detects that we're already reporting
-the button as pressed via the touchpad evdev node, and will also send the
-release event there.
-
-Reported-by: Hans de Bruin <jmdebruin@xmsnet.nl>
-Signed-off-by: Hans de Goede <hdegoede@redhat.com>
-Acked-by: Pali Rohár <pali.rohar@gmail.com>
-Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/input/mouse/alps.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
-index 27bcdbc..ea6cb64 100644
---- a/drivers/input/mouse/alps.c
-+++ b/drivers/input/mouse/alps.c
-@@ -1159,13 +1159,14 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
- 					bool report_buttons)
- {
- 	struct alps_data *priv = psmouse->private;
--	struct input_dev *dev;
-+	struct input_dev *dev, *dev2 = NULL;
+ 	unsigned int current_edid_segment;
+ 	uint8_t edid_buf[256];
++	bool edid_read;
  
- 	/* Figure out which device to use to report the bare packet */
- 	if (priv->proto_version == ALPS_PROTO_V2 &&
- 	    (priv->flags & ALPS_DUALPOINT)) {
- 		/* On V2 devices the DualPoint Stick reports bare packets */
- 		dev = priv->dev2;
-+		dev2 = psmouse->dev;
- 	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
- 		/* Register dev3 mouse if we received PS/2 packet first time */
- 		if (!IS_ERR(priv->dev3))
-@@ -1177,7 +1178,7 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
- 	}
+ 	wait_queue_head_t wq;
+ 	struct drm_encoder *encoder;
+@@ -379,69 +380,71 @@ static bool adv7511_hpd(struct adv7511 *adv7511)
+ 	return false;
+ }
  
- 	if (report_buttons)
--		alps_report_buttons(dev, NULL,
-+		alps_report_buttons(dev, dev2,
- 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
+-static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+-{
+-	struct adv7511 *adv7511 = devid;
+-
+-	if (adv7511_hpd(adv7511))
+-		drm_helper_hpd_irq_event(adv7511->encoder->dev);
+-
+-	wake_up_all(&adv7511->wq);
+-
+-	return IRQ_HANDLED;
+-}
+-
+-static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
+-						 unsigned int irq)
++static int adv7511_irq_process(struct adv7511 *adv7511)
+ {
+ 	unsigned int irq0, irq1;
+-	unsigned int pending;
+ 	int ret;
  
- 	input_report_rel(dev, REL_X,
--- 
-2.3.6
-
-
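The routing that the dev2 argument enables can be sketched like this (a simplified stand-in for alps_report_buttons(), not the verbatim kernel code): the helper prefers whichever device already has the button down, so the release event is delivered to the same evdev node that reported the press.

	static void report_button(struct input_dev *dev1, struct input_dev *dev2,
				  unsigned int code, int value)
	{
		struct input_dev *dev = dev1;

		/* if the other device already reports the key, use it */
		if (dev2 && test_bit(code, dev2->key))
			dev = dev2;

		input_report_key(dev, code, value);
		input_sync(dev);
	}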
-From 9a7fcd609f2e3eaf2d661ee26ab7601e450cd7a2 Mon Sep 17 00:00:00 2001
-From: Johan Hovold <johan@kernel.org>
-Date: Wed, 25 Mar 2015 12:07:05 +0100
-Subject: [PATCH 098/219] mfd: core: Fix platform-device name collisions
-Cc: mpagano@gentoo.org
-
-commit a77c50b44cfb663ad03faba9800fec19bdf83577 upstream.
-
-Since commit 6e3f62f0793e ("mfd: core: Fix platform-device id
-generation") we honour PLATFORM_DEVID_AUTO and PLATFORM_DEVID_NONE when
-registering mfd-devices.
-
-Unfortunately, some mfd-drivers rely on the old behaviour of generating
-platform-device ids by adding the cell id also to the special value of
-PLATFORM_DEVID_NONE. The resulting platform ids are not only used to
-generate device-unique names, but are also used instead of the cell id
-to identify cells when probing subdevices.
-
-These drivers should be updated to use PLATFORM_DEVID_AUTO, which would
-also allow more than one device to be registered without resorting to
-hacks (see for example wm831x), but let's fix the regression first by
-partially reverting the above-mentioned commit with respect to
-PLATFORM_DEVID_NONE.
-
-Fixes: 6e3f62f0793e ("mfd: core: Fix platform-device id generation")
-Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
-Signed-off-by: Johan Hovold <johan@kernel.org>
-Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
-Signed-off-by: Lee Jones <lee.jones@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mfd/mfd-core.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
-index 2a87f69..1aed3b7 100644
---- a/drivers/mfd/mfd-core.c
-+++ b/drivers/mfd/mfd-core.c
-@@ -128,7 +128,7 @@ static int mfd_add_device(struct device *parent, int id,
- 	int platform_id;
- 	int r;
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
++
++	if (irq0 & ADV7511_INT0_HDP)
++		drm_helper_hpd_irq_event(adv7511->encoder->dev);
++
++	if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
++		adv7511->edid_read = true;
++
++		if (adv7511->i2c_main->irq)
++			wake_up_all(&adv7511->wq);
++	}
++
++	return 0;
++}
  
--	if (id < 0)
-+	if (id == PLATFORM_DEVID_AUTO)
- 		platform_id = id;
- 	else
- 		platform_id = id + cell->id;
--- 
-2.3.6
-
-
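The id handling after the fix condenses into one rule (a sketch, simplified from the mfd_add_device() hunk above): only PLATFORM_DEVID_AUTO passes through unchanged, while PLATFORM_DEVID_NONE once again has the cell id added, restoring the old device names that subdevice probing relies on.

	#include <linux/platform_device.h>

	static int mfd_platform_id(int id, int cell_id)
	{
		if (id == PLATFORM_DEVID_AUTO)	/* -2: core allocates an id */
			return id;
		return id + cell_id;	/* includes PLATFORM_DEVID_NONE (-1) */
	}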
-From 671ea8186b4d894fef503c13745152d9827d7a1b Mon Sep 17 00:00:00 2001
-From: Michael Davidson <md@google.com>
-Date: Tue, 14 Apr 2015 15:47:38 -0700
-Subject: [PATCH 099/219] fs/binfmt_elf.c: fix bug in loading of PIE binaries
-Cc: mpagano@gentoo.org
-
-commit a87938b2e246b81b4fb713edb371a9fa3c5c3c86 upstream.
-
-With CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE enabled, and a normal top-down
-address allocation strategy, load_elf_binary() will attempt to map a PIE
-binary into an address range immediately below mm->mmap_base.
-
-Unfortunately, load_elf_binary() does not take account of the need to
-allocate sufficient space for the entire binary, which means that, while
-the first PT_LOAD segment is mapped below mm->mmap_base, the subsequent
-PT_LOAD segment(s) end up being mapped above mm->mmap_base into the area
-that is supposed to be the "gap" between the stack and the binary.
-
-Since the size of the "gap" on x86_64 is only guaranteed to be 128MB this
-means that binaries with large data segments > 128MB can end up mapping
-part of their data segment over their stack resulting in corruption of the
-stack (and the data segment once the binary starts to run).
-
-Any PIE binary with a data segment > 128MB is vulnerable to this although
-address randomization means that the actual gap between the stack and the
-end of the binary is normally greater than 128MB.  The larger the data
-segment of the binary the higher the probability of failure.
-
-Fix this by calculating the total size of the binary in the same way as
-load_elf_interp().
-
-Signed-off-by: Michael Davidson <md@google.com>
-Cc: Alexander Viro <viro@zeniv.linux.org.uk>
-Cc: Jiri Kosina <jkosina@suse.cz>
-Cc: Kees Cook <keescook@chromium.org>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/binfmt_elf.c | 9 ++++++++-
- 1 file changed, 8 insertions(+), 1 deletion(-)
-
-diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
-index 995986b..d925f55 100644
---- a/fs/binfmt_elf.c
-+++ b/fs/binfmt_elf.c
-@@ -862,6 +862,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
- 	    i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
- 		int elf_prot = 0, elf_flags;
- 		unsigned long k, vaddr;
-+		unsigned long total_size = 0;
+-	pending = (irq1 << 8) | irq0;
++static irqreturn_t adv7511_irq_handler(int irq, void *devid)
++{
++	struct adv7511 *adv7511 = devid;
++	int ret;
  
- 		if (elf_ppnt->p_type != PT_LOAD)
- 			continue;
-@@ -924,10 +925,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
- #else
- 			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
- #endif
-+			total_size = total_mapping_size(elf_phdata,
-+							loc->elf_ex.e_phnum);
-+			if (!total_size) {
-+				error = -EINVAL;
-+				goto out_free_dentry;
-+			}
- 		}
+-	return pending & irq;
++	ret = adv7511_irq_process(adv7511);
++	return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
+ }
  
- 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
--				elf_prot, elf_flags, 0);
-+				elf_prot, elf_flags, total_size);
- 		if (BAD_ADDR(error)) {
- 			retval = IS_ERR((void *)error) ?
- 				PTR_ERR((void*)error) : -EINVAL;
--- 
-2.3.6
-
-
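total_mapping_size() used above computes the span the initial mmap must reserve. A sketch of the calculation (close to the fs/binfmt_elf.c helper, with error handling trimmed):

	static unsigned long total_mapping_size(struct elf_phdr *cmds, int nr)
	{
		int i, first = -1, last = -1;

		for (i = 0; i < nr; i++) {
			if (cmds[i].p_type != PT_LOAD)
				continue;
			if (first == -1)
				first = i;
			last = i;
		}
		if (first == -1)
			return 0;

		/* from the page-aligned start of the first PT_LOAD segment
		 * to the end (including BSS) of the last one */
		return cmds[last].p_vaddr + cmds[last].p_memsz -
			ELF_PAGESTART(cmds[first].p_vaddr);
	}

Passing this size to elf_map() makes the first mapping reserve room for every PT_LOAD segment, so later segments can no longer spill above mm->mmap_base into the stack gap.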
-From 12ea13bf83f15c5cf59b4039295f98b0d7a83881 Mon Sep 17 00:00:00 2001
-From: Oleg Nesterov <oleg@redhat.com>
-Date: Thu, 16 Apr 2015 12:47:29 -0700
-Subject: [PATCH 100/219] ptrace: fix race between ptrace_resume() and
- wait_task_stopped()
-Cc: mpagano@gentoo.org
-
-commit b72c186999e689cb0b055ab1c7b3cd8fffbeb5ed upstream.
-
-ptrace_resume() is called when the tracee is still __TASK_TRACED.  We set
-tracee->exit_code and then wake_up_state() changes tracee->state.  If the
-tracer's sub-thread does wait() in between, task_stopped_code(ptrace => T)
-wrongly looks like another report from tracee.
-
-This confuses the debugger, and since wait_task_stopped() clears ->exit_code
-the tracee can miss a signal.
-
-Test-case:
-
-	#include <stdio.h>
-	#include <unistd.h>
-	#include <sys/wait.h>
-	#include <sys/ptrace.h>
-	#include <pthread.h>
-	#include <assert.h>
-
-	int pid;
-
-	void *waiter(void *arg)
-	{
-		int stat;
-
-		for (;;) {
-			assert(pid == wait(&stat));
-			assert(WIFSTOPPED(stat));
-			if (WSTOPSIG(stat) == SIGHUP)
-				continue;
-
-			assert(WSTOPSIG(stat) == SIGCONT);
-			printf("ERR! extra/wrong report:%x\n", stat);
-		}
-	}
-
-	int main(void)
-	{
-		pthread_t thread;
-
-		pid = fork();
-		if (!pid) {
-			assert(ptrace(PTRACE_TRACEME, 0,0,0) == 0);
-			for (;;)
-				kill(getpid(), SIGHUP);
-		}
-
-		assert(pthread_create(&thread, NULL, waiter, NULL) == 0);
-
-		for (;;)
-			ptrace(PTRACE_CONT, pid, 0, SIGCONT);
-
-		return 0;
-	}
-
-Note for stable: the bug is very old, but without 9899d11f6544 "ptrace:
-ensure arch_ptrace/ptrace_request can never race with SIGKILL" the fix
-should use lock_task_sighand(child).
-
-Signed-off-by: Oleg Nesterov <oleg@redhat.com>
-Reported-by: Pavel Labath <labath@google.com>
-Tested-by: Pavel Labath <labath@google.com>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- kernel/ptrace.c | 20 ++++++++++++++++++++
- 1 file changed, 20 insertions(+)
-
-diff --git a/kernel/ptrace.c b/kernel/ptrace.c
-index 227fec3..9a34bd8 100644
---- a/kernel/ptrace.c
-+++ b/kernel/ptrace.c
-@@ -697,6 +697,8 @@ static int ptrace_peek_siginfo(struct task_struct *child,
- static int ptrace_resume(struct task_struct *child, long request,
- 			 unsigned long data)
- {
-+	bool need_siglock;
+-static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
+-				      int timeout)
++/* -----------------------------------------------------------------------------
++ * EDID retrieval
++ */
 +
- 	if (!valid_signal(data))
- 		return -EIO;
- 
-@@ -724,8 +726,26 @@ static int ptrace_resume(struct task_struct *child, long request,
- 		user_disable_single_step(child);
- 	}
++static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
+ {
+-	unsigned int pending;
+ 	int ret;
  
-+	/*
-+	 * Change ->exit_code and ->state under siglock to avoid the race
-+	 * with wait_task_stopped() in between; a non-zero ->exit_code will
-+	 * wrongly look like another report from tracee.
-+	 *
-+	 * Note that we need siglock even if ->exit_code == data and/or this
-+	 * status was not reported yet, the new status must not be cleared by
-+	 * wait_task_stopped() after resume.
-+	 *
-+	 * If data == 0 we do not care if wait_task_stopped() reports the old
-+	 * status and clears the code too; this can't race with the tracee, it
-+	 * takes siglock after resume.
-+	 */
-+	need_siglock = data && !thread_group_empty(current);
-+	if (need_siglock)
-+		spin_lock_irq(&child->sighand->siglock);
- 	child->exit_code = data;
- 	wake_up_state(child, __TASK_TRACED);
-+	if (need_siglock)
-+		spin_unlock_irq(&child->sighand->siglock);
+ 	if (adv7511->i2c_main->irq) {
+ 		ret = wait_event_interruptible_timeout(adv7511->wq,
+-				adv7511_is_interrupt_pending(adv7511, irq),
+-				msecs_to_jiffies(timeout));
+-		if (ret <= 0)
+-			return 0;
+-		pending = adv7511_is_interrupt_pending(adv7511, irq);
++				adv7511->edid_read, msecs_to_jiffies(timeout));
+ 	} else {
+-		if (timeout < 25)
+-			timeout = 25;
+-		do {
+-			pending = adv7511_is_interrupt_pending(adv7511, irq);
+-			if (pending)
++		for (; timeout > 0; timeout -= 25) {
++			ret = adv7511_irq_process(adv7511);
++			if (ret < 0)
+ 				break;
++
++			if (adv7511->edid_read)
++				break;
++
+ 			msleep(25);
+-			timeout -= 25;
+-		} while (timeout >= 25);
++		}
+ 	}
  
- 	return 0;
+-	return pending;
++	return adv7511->edid_read ? 0 : -EIO;
  }
--- 
-2.3.6
-
-
-From 64b22d90114136c3f66fef541c844bc2deb539c5 Mon Sep 17 00:00:00 2001
-From: Len Brown <len.brown@intel.com>
-Date: Tue, 24 Mar 2015 23:23:20 -0400
-Subject: [PATCH 101/219] intel_idle: Update support for Silvermont Core in
- Baytrail SOC
-Cc: mpagano@gentoo.org
-
-commit d7ef76717322c8e2df7d4360b33faa9466cb1a0d upstream.
-
-On some Silvermont-Core/Baytrail-SOC systems,
-C1E latency is higher than original specifications.
-Although C1E is still enumerated in CPUID.MWAIT.EDX,
-we delete the state from intel_idle to avoid latency impact.
-
-Under some conditions, the latency of the C6N-BYT and C6S-BYT states
-may exceed the specified values of 40 and 140 usec, respectively.
-Increase those values to 300 and 500 usec, to assure
-that the hardware does not violate constraints that may be set
-by the Linux PM_QOS sub-system.
-
-Also increase the C7-BYT target residency to 4.0 ms from 1.5 ms.
-
-Signed-off-by: Len Brown <len.brown@intel.com>
-Cc: Kumar P Mahesh <mahesh.kumar.p@intel.com>
-Cc: Alan Cox <alan@linux.intel.com>
-Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/idle/intel_idle.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
-index b0e5852..44d1d79 100644
---- a/drivers/idle/intel_idle.c
-+++ b/drivers/idle/intel_idle.c
-@@ -218,18 +218,10 @@ static struct cpuidle_state byt_cstates[] = {
- 		.enter = &intel_idle,
- 		.enter_freeze = intel_idle_freeze, },
- 	{
--		.name = "C1E-BYT",
--		.desc = "MWAIT 0x01",
--		.flags = MWAIT2flg(0x01),
--		.exit_latency = 15,
--		.target_residency = 30,
--		.enter = &intel_idle,
--		.enter_freeze = intel_idle_freeze, },
--	{
- 		.name = "C6N-BYT",
- 		.desc = "MWAIT 0x58",
- 		.flags = MWAIT2flg(0x58) | CPUIDLE_FLAG_TLB_FLUSHED,
--		.exit_latency = 40,
-+		.exit_latency = 300,
- 		.target_residency = 275,
- 		.enter = &intel_idle,
- 		.enter_freeze = intel_idle_freeze, },
-@@ -237,7 +229,7 @@ static struct cpuidle_state byt_cstates[] = {
- 		.name = "C6S-BYT",
- 		.desc = "MWAIT 0x52",
- 		.flags = MWAIT2flg(0x52) | CPUIDLE_FLAG_TLB_FLUSHED,
--		.exit_latency = 140,
-+		.exit_latency = 500,
- 		.target_residency = 560,
- 		.enter = &intel_idle,
- 		.enter_freeze = intel_idle_freeze, },
-@@ -246,7 +238,7 @@ static struct cpuidle_state byt_cstates[] = {
- 		.desc = "MWAIT 0x60",
- 		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
- 		.exit_latency = 1200,
--		.target_residency = 1500,
-+		.target_residency = 4000,
- 		.enter = &intel_idle,
- 		.enter_freeze = intel_idle_freeze, },
- 	{
--- 
-2.3.6
-
-
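The exit_latency values matter because cpuidle governors compare them against the PM_QOS latency bound; roughly (a sketch of the idea, not the governor's exact code):

	/* a state may only be entered if its worst-case wakeup latency
	 * fits within the current PM_QOS constraint */
	if (state->exit_latency > pm_qos_latency_limit_us)
		continue;	/* skip this C-state */

Understating the latency (40/140 usec instead of 300/500) therefore let the hardware violate a requested PM_QOS constraint.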
-From 6181a6b2238de82fed39b0568645ea6a1ff2c6fd Mon Sep 17 00:00:00 2001
-From: Nicolas Ferre <nicolas.ferre@atmel.com>
-Date: Tue, 31 Mar 2015 15:02:05 +0200
-Subject: [PATCH 102/219] net/macb: fix the peripheral version test
-Cc: mpagano@gentoo.org
-
-commit 361918970b7426bba97a64678ef2b2679c37199b upstream.
-
-We currently need two checks of the peripheral version in the MACB_MID register.
-One of them got out of sync after modification by 8a013a9c71b2 (net: macb:
-Include multi queue support for xilinx ZynqMP ethernet version).
-Fix this in macb_configure_caps() so that xilinx ZynqMP will be considered
-as a GEM flavor.
-
-Fixes: 8a013a9c71b2 ("net: macb: Include multi queue support for xilinx ZynqMP
-ethernet version")
-
-Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
-Cc: Michal Simek <michal.simek@xilinx.com>
-Cc: Punnaiah Choudary Kalluri <punnaia@xilinx.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/ethernet/cadence/macb.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
-index 81d4153..77bf133 100644
---- a/drivers/net/ethernet/cadence/macb.c
-+++ b/drivers/net/ethernet/cadence/macb.c
-@@ -2165,7 +2165,7 @@ static void macb_configure_caps(struct macb *bp)
- 		}
- 	}
  
--	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) == 0x2)
-+	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2)
- 		bp->caps |= MACB_CAPS_MACB_IS_GEM;
+-/* -----------------------------------------------------------------------------
+- * EDID retrieval
+- */
+-
+ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 				  size_t len)
+ {
+@@ -463,19 +466,14 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 			return ret;
  
- 	if (macb_is_gem(bp)) {
--- 
-2.3.6
-
-
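Both GEM detections now use the same test; as a sketch (using the MACB_BFEXT accessor from the driver):

	/* ZynqMP reports an IDNUM above 0x2, so test with >= rather than == */
	static bool macb_is_gem_flavor(struct macb *bp)
	{
		return MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2;
	}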
-From 95df5a6b8698921ca30cd55853446016a2acb891 Mon Sep 17 00:00:00 2001
-From: Christophe Ricard <christophe.ricard@gmail.com>
-Date: Tue, 31 Mar 2015 08:02:15 +0200
-Subject: [PATCH 103/219] NFC: st21nfcb: Retry i2c_master_send if it returns a
- negative value
-Cc: mpagano@gentoo.org
-
-commit d4a41d10b2cb5890aeda6b2912973b2a754b05b1 upstream.
-
-i2c_master_send may return negative values other than -EREMOTEIO.
-For example, when an i2c transaction is NACK'ed on a Raspberry Pi B+
-running kernel 3.18, -EIO is generated instead.
-
-Signed-off-by: Christophe Ricard <christophe-h.ricard@st.com>
-Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/nfc/st21nfcb/i2c.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/nfc/st21nfcb/i2c.c b/drivers/nfc/st21nfcb/i2c.c
-index eb88693..7b53a5c 100644
---- a/drivers/nfc/st21nfcb/i2c.c
-+++ b/drivers/nfc/st21nfcb/i2c.c
-@@ -109,7 +109,7 @@ static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb)
- 		return phy->ndlc->hard_fault;
+ 		if (status != 2) {
++			adv7511->edid_read = false;
+ 			regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
+ 				     block);
+-			ret = adv7511_wait_for_interrupt(adv7511,
+-					ADV7511_INT0_EDID_READY |
+-					ADV7511_INT1_DDC_ERROR, 200);
+-
+-			if (!(ret & ADV7511_INT0_EDID_READY))
+-				return -EIO;
++			ret = adv7511_wait_for_edid(adv7511, 200);
++			if (ret < 0)
++				return ret;
+ 		}
  
- 	r = i2c_master_send(client, skb->data, skb->len);
--	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
-+	if (r < 0) {  /* Retry, chip was in standby */
- 		usleep_range(1000, 4000);
- 		r = i2c_master_send(client, skb->data, skb->len);
- 	}
-@@ -148,7 +148,7 @@ static int st21nfcb_nci_i2c_read(struct st21nfcb_i2c_phy *phy,
- 	struct i2c_client *client = phy->i2c_dev;
+-		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
+-
+ 		/* Break this apart, hopefully more I2C controllers will
+ 		 * support 64 byte transfers than 256 byte transfers
+ 		 */
+@@ -528,7 +526,9 @@ static int adv7511_get_modes(struct drm_encoder *encoder,
+ 	/* Reading the EDID only works if the device is powered */
+ 	if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		adv7511->current_edid_segment = -1;
+@@ -563,7 +563,9 @@ static void adv7511_encoder_dpms(struct drm_encoder *encoder, int mode)
+ 		adv7511->current_edid_segment = -1;
  
- 	r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
--	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
-+	if (r < 0) {  /* Retry, chip was in standby */
- 		usleep_range(1000, 4000);
- 		r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
- 	}
--- 
-2.3.6
-
-
-From 9e2d43e521a469a50ef03b55cef24e7d260bbdbb Mon Sep 17 00:00:00 2001
-From: Larry Finger <Larry.Finger@lwfinger.net>
-Date: Mon, 23 Mar 2015 18:14:10 -0500
-Subject: [PATCH 104/219] rtlwifi: rtl8192cu: Add new USB ID
-Cc: mpagano@gentoo.org
-
-commit 2f92b314f4daff2117847ac5343c54d3d041bf78 upstream.
-
-USB ID 2001:330d is used for a D-Link DWA-131.
-
-Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
-Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/wireless/rtlwifi/rtl8192cu/sw.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-index 90a714c..6fde250 100644
---- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-+++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-@@ -377,6 +377,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
- 	{RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
- 	{RTL_USB_DEVICE(0x2001, 0x3309, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
- 	{RTL_USB_DEVICE(0x2001, 0x330a, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
-+	{RTL_USB_DEVICE(0x2001, 0x330d, rtl92cu_hal_cfg)}, /*D-Link DWA-131 */
- 	{RTL_USB_DEVICE(0x2019, 0xab2b, rtl92cu_hal_cfg)}, /*Planex -Abocom*/
- 	{RTL_USB_DEVICE(0x20f4, 0x624d, rtl92cu_hal_cfg)}, /*TRENDNet*/
- 	{RTL_USB_DEVICE(0x2357, 0x0100, rtl92cu_hal_cfg)}, /*TP-Link WN8200ND*/
--- 
-2.3.6
-
-
-From a9fe1b9caf0ea4ccada73ce243b23fd6a7e896d3 Mon Sep 17 00:00:00 2001
-From: Marek Vasut <marex@denx.de>
-Date: Thu, 26 Mar 2015 02:16:06 +0100
-Subject: [PATCH 105/219] rtlwifi: rtl8192cu: Add new device ID
-Cc: mpagano@gentoo.org
-
-commit 9374e7d2fdcad3c36dafc8d3effd554bc702c4b6 upstream.
-
-Add new ID for ASUS N10 WiFi dongle.
-
-Signed-off-by: Marek Vasut <marex@denx.de>
-Tested-by: Marek Vasut <marex@denx.de>
-Cc: Larry Finger <Larry.Finger@lwfinger.net>
-Cc: John W. Linville <linville@tuxdriver.com>
-Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
-Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/wireless/rtlwifi/rtl8192cu/sw.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-index 6fde250..23806c2 100644
---- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-+++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
-@@ -321,6 +321,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
- 	{RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
- 	{RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
- 	{RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
-+	{RTL_USB_DEVICE(0x0b05, 0x17ba, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
- 	{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
- 	{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
- 	{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
--- 
-2.3.6
-
-
-From 3536e283ea6797daac8054aebea238cafe9a464c Mon Sep 17 00:00:00 2001
-From: Lukas Czerner <lczerner@redhat.com>
-Date: Fri, 3 Apr 2015 10:46:58 -0400
-Subject: [PATCH 106/219] ext4: make fsync to sync parent dir in no-journal for
- real this time
-Cc: mpagano@gentoo.org
-
-commit e12fb97222fc41e8442896934f76d39ef99b590a upstream.
-
-Previously commit 14ece1028b3ed53ffec1b1213ffc6acaf79ad77c added
-support for syncing the parent directory of newly created inodes to
-make sure that the inode is not lost after a power failure in
-no-journal mode.
-
-However this does not work in majority of cases, namely:
- - if the directory has inline data
- - if the directory is already indexed
- - if the directory already has at least one block and:
-	- the new entry fits into it
-	- or we've successfully converted it to indexed
-
-So in those cases we might lose the inode entirely even after fsync in
-the no-journal mode. This also includes ext2 default mode obviously.
-
-I've noticed this while running xfstest generic/321 and even though the
-test should fail (we need to run fsck after a crash in no-journal mode)
-I could not find newly created entries even when they had been fsynced
-before.
-
-Fix this by adjusting the ext4_add_entry() successful exit paths to set
-the inode EXT4_STATE_NEWENTRY so that fsync has the chance to fsync the
-parent directory as well.
-
-Signed-off-by: Lukas Czerner <lczerner@redhat.com>
-Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-Reviewed-by: Jan Kara <jack@suse.cz>
-Cc: Frank Mayhar <fmayhar@google.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/ext4/namei.c | 20 +++++++++++---------
- 1 file changed, 11 insertions(+), 9 deletions(-)
-
-diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
-index 28fe71a..aae7011 100644
---- a/fs/ext4/namei.c
-+++ b/fs/ext4/namei.c
-@@ -1865,7 +1865,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
- 			  struct inode *inode)
- {
- 	struct inode *dir = dentry->d_parent->d_inode;
--	struct buffer_head *bh;
-+	struct buffer_head *bh = NULL;
- 	struct ext4_dir_entry_2 *de;
- 	struct ext4_dir_entry_tail *t;
- 	struct super_block *sb;
-@@ -1889,14 +1889,14 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
- 			return retval;
- 		if (retval == 1) {
- 			retval = 0;
--			return retval;
-+			goto out;
- 		}
- 	}
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		/*
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 5c66b56..ec4d932 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1042,7 +1042,7 @@ static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		s->lra_limits[i] = I915_READ(GEN7_LRA_LIMITS_BASE + i * 4);
  
- 	if (is_dx(dir)) {
- 		retval = ext4_dx_add_entry(handle, dentry, inode);
- 		if (!retval || (retval != ERR_BAD_DX_DIR))
--			return retval;
-+			goto out;
- 		ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
- 		dx_fallback++;
- 		ext4_mark_inode_dirty(handle, dir);
-@@ -1908,14 +1908,15 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
- 			return PTR_ERR(bh);
+ 	s->media_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
+-	s->gfx_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
++	s->gfx_max_req_count	= I915_READ(GEN7_GFX_MAX_REQ_COUNT);
  
- 		retval = add_dirent_to_buf(handle, dentry, inode, NULL, bh);
--		if (retval != -ENOSPC) {
--			brelse(bh);
--			return retval;
--		}
-+		if (retval != -ENOSPC)
-+			goto out;
+ 	s->render_hwsp		= I915_READ(RENDER_HWS_PGA_GEN7);
+ 	s->ecochk		= I915_READ(GAM_ECOCHK);
+@@ -1124,7 +1124,7 @@ static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		I915_WRITE(GEN7_LRA_LIMITS_BASE + i * 4, s->lra_limits[i]);
  
- 		if (blocks == 1 && !dx_fallback &&
--		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX))
--			return make_indexed_dir(handle, dentry, inode, bh);
-+		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX)) {
-+			retval = make_indexed_dir(handle, dentry, inode, bh);
-+			bh = NULL; /* make_indexed_dir releases bh */
-+			goto out;
-+		}
- 		brelse(bh);
- 	}
- 	bh = ext4_append(handle, dir, &block);
-@@ -1931,6 +1932,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
- 	}
+ 	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->media_max_req_count);
+-	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->gfx_max_req_count);
++	I915_WRITE(GEN7_GFX_MAX_REQ_COUNT, s->gfx_max_req_count);
  
- 	retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
-+out:
- 	brelse(bh);
- 	if (retval == 0)
- 		ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
--- 
-2.3.6
-
-
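The EXT4_STATE_NEWENTRY flag set above is consumed on the fsync path. Roughly (a sketch of the logic in fs/ext4/fsync.c, not verbatim; parent_dir_inode() is a hypothetical stand-in for the dentry walk), in no-journal mode the parent directories are written out for as long as the flag is set:

	/* walk up while each inode carries a new, unsynced directory entry */
	while (ext4_test_inode_state(inode, EXT4_STATE_NEWENTRY)) {
		ext4_clear_inode_state(inode, EXT4_STATE_NEWENTRY);
		inode = parent_dir_inode(inode);	/* hypothetical helper */
		ret = sync_mapping_buffers(inode->i_mapping);
		if (ret)
			break;
	}

This is why the fix matters: every successful exit path of ext4_add_entry() must set the flag, or fsync never reaches the parent.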
-From 1527fbabfa4fdb32f66b47dd48518572fb4e0eaa Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Wed, 24 Dec 2014 07:20:01 -0600
-Subject: [PATCH 107/219] mnt: Improve the umount_tree flags
-Cc: mpagano@gentoo.org
-
-commit e819f152104c9f7c9fe50e1aecce6f5d4bf06d65 upstream.
-
-- Remove the unneeded declaration from pnode.h
-- Mark umount_tree static as it has no callers outside of namespace.c
-- Define an enumeration of umount_tree's flags.
-- Pass umount_tree's flags in by name
-
-This removes the magic numbers 0, 1 and 2, making the code a little
-clearer, and makes it possible for there to be lazy unmounts that don't
-propagate, which is what __detach_mounts actually wants, for example.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 31 ++++++++++++++++---------------
- fs/pnode.h     |  1 -
- 2 files changed, 16 insertions(+), 16 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 82ef140..712b3c5 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1319,14 +1319,15 @@ static inline void namespace_lock(void)
- 	down_write(&namespace_sem);
- }
- 
-+enum umount_tree_flags {
-+	UMOUNT_SYNC = 1,
-+	UMOUNT_PROPAGATE = 2,
-+};
- /*
-  * mount_lock must be held
-  * namespace_sem must be held for write
-- * how = 0 => just this tree, don't propagate
-- * how = 1 => propagate; we know that nobody else has reference to any victims
-- * how = 2 => lazy umount
-  */
--void umount_tree(struct mount *mnt, int how)
-+static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- {
- 	HLIST_HEAD(tmp_list);
- 	struct mount *p;
-@@ -1339,7 +1340,7 @@ void umount_tree(struct mount *mnt, int how)
- 	hlist_for_each_entry(p, &tmp_list, mnt_hash)
- 		list_del_init(&p->mnt_child);
+ 	I915_WRITE(RENDER_HWS_PGA_GEN7,	s->render_hwsp);
+ 	I915_WRITE(GAM_ECOCHK,		s->ecochk);
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index ede5bbb..07320cb 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3718,14 +3718,12 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
+ 		~(I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+ 	I915_WRITE16(IMR, dev_priv->irq_mask);
  
--	if (how)
-+	if (how & UMOUNT_PROPAGATE)
- 		propagate_umount(&tmp_list);
+ 	I915_WRITE16(IER,
+ 		     I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		     I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		     I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		     I915_USER_INTERRUPT);
+ 	POSTING_READ16(IER);
  
- 	while (!hlist_empty(&tmp_list)) {
-@@ -1349,7 +1350,7 @@ void umount_tree(struct mount *mnt, int how)
- 		list_del_init(&p->mnt_list);
- 		__touch_mnt_namespace(p->mnt_ns);
- 		p->mnt_ns = NULL;
--		if (how < 2)
-+		if (how & UMOUNT_SYNC)
- 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+@@ -3887,14 +3885,12 @@ static int i915_irq_postinstall(struct drm_device *dev)
+ 		  I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
  
- 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
-@@ -1447,14 +1448,14 @@ static int do_umount(struct mount *mnt, int flags)
+ 	enable_mask =
+ 		I915_ASLE_INTERRUPT |
+ 		I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		I915_USER_INTERRUPT;
  
- 	if (flags & MNT_DETACH) {
- 		if (!list_empty(&mnt->mnt_list))
--			umount_tree(mnt, 2);
-+			umount_tree(mnt, UMOUNT_PROPAGATE);
- 		retval = 0;
- 	} else {
- 		shrink_submounts(mnt);
- 		retval = -EBUSY;
- 		if (!propagate_mount_busy(mnt, 2)) {
- 			if (!list_empty(&mnt->mnt_list))
--				umount_tree(mnt, 1);
-+				umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
- 			retval = 0;
- 		}
- 	}
-@@ -1486,7 +1487,7 @@ void __detach_mounts(struct dentry *dentry)
- 	lock_mount_hash();
- 	while (!hlist_empty(&mp->m_list)) {
- 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
--		umount_tree(mnt, 2);
-+		umount_tree(mnt, UMOUNT_PROPAGATE);
- 	}
- 	unlock_mount_hash();
- 	put_mountpoint(mp);
-@@ -1648,7 +1649,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
- out:
- 	if (res) {
- 		lock_mount_hash();
--		umount_tree(res, 0);
-+		umount_tree(res, UMOUNT_SYNC);
- 		unlock_mount_hash();
- 	}
- 	return q;
-@@ -1672,7 +1673,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
- {
- 	namespace_lock();
- 	lock_mount_hash();
--	umount_tree(real_mount(mnt), 0);
-+	umount_tree(real_mount(mnt), UMOUNT_SYNC);
- 	unlock_mount_hash();
- 	namespace_unlock();
- }
-@@ -1855,7 +1856,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
-  out_cleanup_ids:
- 	while (!hlist_empty(&tree_list)) {
- 		child = hlist_entry(tree_list.first, struct mount, mnt_hash);
--		umount_tree(child, 0);
-+		umount_tree(child, UMOUNT_SYNC);
- 	}
- 	unlock_mount_hash();
- 	cleanup_group_ids(source_mnt, NULL);
-@@ -2035,7 +2036,7 @@ static int do_loopback(struct path *path, const char *old_name,
- 	err = graft_tree(mnt, parent, mp);
- 	if (err) {
- 		lock_mount_hash();
--		umount_tree(mnt, 0);
-+		umount_tree(mnt, UMOUNT_SYNC);
- 		unlock_mount_hash();
- 	}
- out2:
-@@ -2406,7 +2407,7 @@ void mark_mounts_for_expiry(struct list_head *mounts)
- 	while (!list_empty(&graveyard)) {
- 		mnt = list_first_entry(&graveyard, struct mount, mnt_expire);
- 		touch_mnt_namespace(mnt->mnt_ns);
--		umount_tree(mnt, 1);
-+		umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
- 	}
- 	unlock_mount_hash();
- 	namespace_unlock();
-@@ -2477,7 +2478,7 @@ static void shrink_submounts(struct mount *mnt)
- 			m = list_first_entry(&graveyard, struct mount,
- 						mnt_expire);
- 			touch_mnt_namespace(m->mnt_ns);
--			umount_tree(m, 1);
-+			umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
- 		}
- 	}
+ 	if (I915_HAS_HOTPLUG(dev)) {
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 33b3d0a2..f536ff2 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -1740,6 +1740,7 @@ enum punit_power_well {
+ #define   GMBUS_CYCLE_INDEX	(2<<25)
+ #define   GMBUS_CYCLE_STOP	(4<<25)
+ #define   GMBUS_BYTE_COUNT_SHIFT 16
++#define   GMBUS_BYTE_COUNT_MAX   256U
+ #define   GMBUS_SLAVE_INDEX_SHIFT 8
+ #define   GMBUS_SLAVE_ADDR_SHIFT 1
+ #define   GMBUS_SLAVE_READ	(1<<0)
+diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
+index b31088a..56e437e 100644
+--- a/drivers/gpu/drm/i915/intel_i2c.c
++++ b/drivers/gpu/drm/i915/intel_i2c.c
+@@ -270,18 +270,17 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv)
  }
-diff --git a/fs/pnode.h b/fs/pnode.h
-index 4a24635..16afc3d 100644
---- a/fs/pnode.h
-+++ b/fs/pnode.h
-@@ -47,7 +47,6 @@ int get_dominating_id(struct mount *mnt, const struct path *root);
- unsigned int mnt_get_count(struct mount *mnt);
- void mnt_set_mountpoint(struct mount *, struct mountpoint *,
- 			struct mount *);
--void umount_tree(struct mount *, int);
- struct mount *copy_tree(struct mount *, struct dentry *, int);
- bool is_path_reachable(struct mount *, struct dentry *,
- 			 const struct path *root);
--- 
-2.3.6
-
-
-From a15f7b5e276d1b8f71d3d64d7f3f509e77bee5e4 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Wed, 24 Dec 2014 07:35:10 -0600
-Subject: [PATCH 108/219] mnt: Don't propagate umounts in __detach_mounts
-Cc: mpagano@gentoo.org
-
-commit 8318e667f176f7ea34451a1a530634e293f216ac upstream.
-
-Invoking mount propagation from __detach_mounts is inefficient and
-wrong.
-
-It is inefficient because __detach_mounts already walks the list of
-mounts where something needs to be done, and mount propagation then
-walks some subset of those mounts again.
-
-It is actively wrong because if the dentry that is passed to
-__detach_mounts is not part of the path to a mount, that mount should
-not be affected.
-
-change_mnt_propagation(p, MS_PRIVATE) modifies the mount propagation
-tree of a master mount so its slaves are connected to another master
-if possible.  This means that even removing a mount from the middle of a
-mount tree with __detach_mounts will not deprive any mount of propagated
-mount events.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 712b3c5..616a694 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1487,7 +1487,7 @@ void __detach_mounts(struct dentry *dentry)
- 	lock_mount_hash();
- 	while (!hlist_empty(&mp->m_list)) {
- 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
--		umount_tree(mnt, UMOUNT_PROPAGATE);
-+		umount_tree(mnt, 0);
- 	}
- 	unlock_mount_hash();
- 	put_mountpoint(mp);
--- 
-2.3.6
-
-
-From 953bab2cb35f8f6f2a0183c1b27ff7466f72bccc Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Thu, 18 Dec 2014 13:10:48 -0600
-Subject: [PATCH 109/219] mnt: In umount_tree reuse mnt_list instead of
- mnt_hash
-Cc: mpagano@gentoo.org
-
-commit c003b26ff98ca04a180ff34c38c007a3998d62f9 upstream.
-
-umount_tree builds a list of mounts that need to be unmounted.
-Utilize mnt_list for this purpose instead of mnt_hash.  This begins to
-allow keeping a mount on the mnt_hash after it is unmounted, which is
-necessary for a properly functioning MNT_LOCKED implementation.
-
-The fact that mnt_list is an ordinary list, making list_move
-available, is a nice bonus.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 20 +++++++++++---------
- fs/pnode.c     |  6 +++---
- fs/pnode.h     |  2 +-
- 3 files changed, 15 insertions(+), 13 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 616a694..18df0af 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1329,23 +1329,25 @@ enum umount_tree_flags {
-  */
- static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- {
--	HLIST_HEAD(tmp_list);
-+	LIST_HEAD(tmp_list);
- 	struct mount *p;
- 
--	for (p = mnt; p; p = next_mnt(p, mnt)) {
--		hlist_del_init_rcu(&p->mnt_hash);
--		hlist_add_head(&p->mnt_hash, &tmp_list);
--	}
-+	/* Gather the mounts to umount */
-+	for (p = mnt; p; p = next_mnt(p, mnt))
-+		list_move(&p->mnt_list, &tmp_list);
- 
--	hlist_for_each_entry(p, &tmp_list, mnt_hash)
-+	/* Hide the mounts from lookup_mnt and mnt_mounts */
-+	list_for_each_entry(p, &tmp_list, mnt_list) {
-+		hlist_del_init_rcu(&p->mnt_hash);
- 		list_del_init(&p->mnt_child);
-+	}
- 
-+	/* Add propagated mounts to the tmp_list */
- 	if (how & UMOUNT_PROPAGATE)
- 		propagate_umount(&tmp_list);
  
--	while (!hlist_empty(&tmp_list)) {
--		p = hlist_entry(tmp_list.first, struct mount, mnt_hash);
--		hlist_del_init_rcu(&p->mnt_hash);
-+	while (!list_empty(&tmp_list)) {
-+		p = list_first_entry(&tmp_list, struct mount, mnt_list);
- 		list_del_init(&p->mnt_expire);
- 		list_del_init(&p->mnt_list);
- 		__touch_mnt_namespace(p->mnt_ns);
-diff --git a/fs/pnode.c b/fs/pnode.c
-index 260ac8f..bf012af 100644
---- a/fs/pnode.c
-+++ b/fs/pnode.c
-@@ -384,7 +384,7 @@ static void __propagate_umount(struct mount *mnt)
- 		if (child && list_empty(&child->mnt_mounts)) {
- 			list_del_init(&child->mnt_child);
- 			hlist_del_init_rcu(&child->mnt_hash);
--			hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
-+			list_move_tail(&child->mnt_list, &mnt->mnt_list);
- 		}
- 	}
- }
-@@ -396,11 +396,11 @@ static void __propagate_umount(struct mount *mnt)
-  *
-  * vfsmount lock must be held for write
-  */
--int propagate_umount(struct hlist_head *list)
-+int propagate_umount(struct list_head *list)
+ static int
+-gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
+-		u32 gmbus1_index)
++gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
++		      unsigned short addr, u8 *buf, unsigned int len,
++		      u32 gmbus1_index)
  {
- 	struct mount *mnt;
+ 	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+-	u8 *buf = msg->buf;
  
--	hlist_for_each_entry(mnt, list, mnt_hash)
-+	list_for_each_entry(mnt, list, mnt_list)
- 		__propagate_umount(mnt);
- 	return 0;
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   gmbus1_index |
+ 		   GMBUS_CYCLE_WAIT |
+ 		   (len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_READ | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -303,11 +302,35 @@ gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
  }
-diff --git a/fs/pnode.h b/fs/pnode.h
-index 16afc3d..aa6d65d 100644
---- a/fs/pnode.h
-+++ b/fs/pnode.h
-@@ -40,7 +40,7 @@ static inline void set_mnt_shared(struct mount *mnt)
- void change_mnt_propagation(struct mount *, int);
- int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
- 		struct hlist_head *);
--int propagate_umount(struct hlist_head *);
-+int propagate_umount(struct list_head *);
- int propagate_mount_busy(struct mount *, int);
- void mnt_release_group_id(struct mount *);
- int get_dominating_id(struct mount *mnt, const struct path *root);
--- 
-2.3.6
-
-
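The practical gain from an ordinary list is the O(1) splice: list_move() takes a node off whatever list it is on and puts it on another head in one step, which is exactly what the gather loop above wants.

	LIST_HEAD(tmp_list);

	/* unlink p from its current list and queue it for umount */
	list_move(&p->mnt_list, &tmp_list);

hlist has no equivalent single-step move, which is why mnt_hash could not serve this purpose cleanly.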
-From 7052e71b2d085f76800115d4a212dcaf82b86262 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Mon, 22 Dec 2014 18:30:08 -0600
-Subject: [PATCH 110/219] mnt: Add MNT_UMOUNT flag
-Cc: mpagano@gentoo.org
-
-commit 590ce4bcbfb4e0462a720a4ad901e84416080bba upstream.
-
-In some instances it is necessary to know if the unmounting
-process has begun on a mount.  Add MNT_UMOUNT to make that reliably
-testable.
-
-This fix gets used in fixing locked mounts in MNT_DETACH.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c        | 4 +++-
- fs/pnode.c            | 1 +
- include/linux/mount.h | 1 +
- 3 files changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 18df0af..9f3c7e5 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1333,8 +1333,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 	struct mount *p;
- 
- 	/* Gather the mounts to umount */
--	for (p = mnt; p; p = next_mnt(p, mnt))
-+	for (p = mnt; p; p = next_mnt(p, mnt)) {
-+		p->mnt.mnt_flags |= MNT_UMOUNT;
- 		list_move(&p->mnt_list, &tmp_list);
-+	}
  
- 	/* Hide the mounts from lookup_mnt and mnt_mounts */
- 	list_for_each_entry(p, &tmp_list, mnt_list) {
-diff --git a/fs/pnode.c b/fs/pnode.c
-index bf012af..ac3aa0d 100644
---- a/fs/pnode.c
-+++ b/fs/pnode.c
-@@ -384,6 +384,7 @@ static void __propagate_umount(struct mount *mnt)
- 		if (child && list_empty(&child->mnt_mounts)) {
- 			list_del_init(&child->mnt_child);
- 			hlist_del_init_rcu(&child->mnt_hash);
-+			child->mnt.mnt_flags |= MNT_UMOUNT;
- 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
- 		}
- 	}
-diff --git a/include/linux/mount.h b/include/linux/mount.h
-index c2c561d..564beee 100644
---- a/include/linux/mount.h
-+++ b/include/linux/mount.h
-@@ -61,6 +61,7 @@ struct mnt_namespace;
- #define MNT_DOOMED		0x1000000
- #define MNT_SYNC_UMOUNT		0x2000000
- #define MNT_MARKED		0x4000000
-+#define MNT_UMOUNT		0x8000000
- 
- struct vfsmount {
- 	struct dentry *mnt_root;	/* root of the mounted tree */
--- 
-2.3.6
-
-
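With the flag in place, "has unmounting begun?" becomes a direct test instead of an inference from other state; for example (a sketch of the kind of predicate the later patches build on):

	static inline bool mnt_is_unmounting(const struct mount *m)
	{
		return m->mnt.mnt_flags & MNT_UMOUNT;
	}

The next patch uses exactly this test in __lookup_mnt_last().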
-From 7a9742a65c02e30a62ae42c765eb4dff26b51cc9 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Mon, 22 Dec 2014 19:12:07 -0600
-Subject: [PATCH 111/219] mnt: Delay removal from the mount hash.
-Cc: mpagano@gentoo.org
-
-commit 411a938b5abc9cb126c41cccf5975ae464fe0f3e upstream.
-
-- Modify __lookup_mnt_last to ignore mounts that have MNT_UMOUNT set.
-- Don't remove mounts from the mount hash table in propagate_umount
-- Don't remove mounts from the mount hash table in umount_tree before
-  the entire list of mounts to be umounted is selected.
-- Remove mounts from the mount hash table as the last thing that
-  happens in the case where a mount has a parent in umount_tree.
-  Mounts without parents are not hashed (by definition).
-
-This paves the way for delaying removal from the mount hash table even
-further and fixing the MNT_LOCKED vs MNT_DETACH issue.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 13 ++++++++-----
- fs/pnode.c     |  1 -
- 2 files changed, 8 insertions(+), 6 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 9f3c7e5..6c477be 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -632,14 +632,17 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
-  */
- struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
+ static int
+-gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
++gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
++		u32 gmbus1_index)
  {
--	struct mount *p, *res;
--	res = p = __lookup_mnt(mnt, dentry);
-+	struct mount *p, *res = NULL;
-+	p = __lookup_mnt(mnt, dentry);
- 	if (!p)
- 		goto out;
-+	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
-+		res = p;
- 	hlist_for_each_entry_continue(p, mnt_hash) {
- 		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
- 			break;
--		res = p;
-+		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
-+			res = p;
- 	}
- out:
- 	return res;
-@@ -1338,9 +1341,8 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 		list_move(&p->mnt_list, &tmp_list);
- 	}
- 
--	/* Hide the mounts from lookup_mnt and mnt_mounts */
-+	/* Hide the mounts from mnt_mounts */
- 	list_for_each_entry(p, &tmp_list, mnt_list) {
--		hlist_del_init_rcu(&p->mnt_hash);
- 		list_del_init(&p->mnt_child);
- 	}
- 
-@@ -1367,6 +1369,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 			p->mnt_mountpoint = p->mnt.mnt_root;
- 			p->mnt_parent = p;
- 			p->mnt_mp = NULL;
-+			hlist_del_init_rcu(&p->mnt_hash);
- 		}
- 		change_mnt_propagation(p, MS_PRIVATE);
- 	}
-diff --git a/fs/pnode.c b/fs/pnode.c
-index ac3aa0d..c27ae38 100644
---- a/fs/pnode.c
-+++ b/fs/pnode.c
-@@ -383,7 +383,6 @@ static void __propagate_umount(struct mount *mnt)
- 		 */
- 		if (child && list_empty(&child->mnt_mounts)) {
- 			list_del_init(&child->mnt_child);
--			hlist_del_init_rcu(&child->mnt_hash);
- 			child->mnt.mnt_flags |= MNT_UMOUNT;
- 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
- 		}
--- 
-2.3.6
-
-
-From 397dd1fc1225b478824134ddd5540f889b13809d Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Sat, 3 Jan 2015 05:39:35 -0600
-Subject: [PATCH 112/219] mnt: On an unmount propagate clearing of MNT_LOCKED
-Cc: mpagano@gentoo.org
-
-commit 5d88457eb5b86b475422dc882f089203faaeedb5 upstream.
-
-A prerequisite of calling umount_tree is that the point where the tree
-is mounted is valid to unmount.
-
-If we are propagating the effect of the unmount, clear MNT_LOCKED in
-every instance where the same filesystem is mounted on the same
-mountpoint in the mount tree, as we know (by virtue of the fact
-that umount_tree was called) that it is safe to reveal what
-is at that mountpoint.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c |  3 +++
- fs/pnode.c     | 20 ++++++++++++++++++++
- fs/pnode.h     |  1 +
- 3 files changed, 24 insertions(+)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 6c477be..7d9a69d 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1335,6 +1335,9 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 	LIST_HEAD(tmp_list);
- 	struct mount *p;
- 
-+	if (how & UMOUNT_PROPAGATE)
-+		propagate_mount_unlock(mnt);
+-	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+ 	u8 *buf = msg->buf;
++	unsigned int rx_size = msg->len;
++	unsigned int len;
++	int ret;
 +
- 	/* Gather the mounts to umount */
- 	for (p = mnt; p; p = next_mnt(p, mnt)) {
- 		p->mnt.mnt_flags |= MNT_UMOUNT;
-diff --git a/fs/pnode.c b/fs/pnode.c
-index c27ae38..8989029 100644
---- a/fs/pnode.c
-+++ b/fs/pnode.c
-@@ -362,6 +362,26 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
- }
- 
- /*
-+ * Clear MNT_LOCKED when it can be shown to be safe.
-+ *
-+ * mount_lock lock must be held for write
-+ */
-+void propagate_mount_unlock(struct mount *mnt)
-+{
-+	struct mount *parent = mnt->mnt_parent;
-+	struct mount *m, *child;
++	do {
++		len = min(rx_size, GMBUS_BYTE_COUNT_MAX);
 +
-+	BUG_ON(parent == mnt);
++		ret = gmbus_xfer_read_chunk(dev_priv, msg->addr,
++					    buf, len, gmbus1_index);
++		if (ret)
++			return ret;
 +
-+	for (m = propagation_next(parent, parent); m;
-+			m = propagation_next(m, parent)) {
-+		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
-+		if (child)
-+			child->mnt.mnt_flags &= ~MNT_LOCKED;
-+	}
++		rx_size -= len;
++		buf += len;
++	} while (rx_size != 0);
++
++	return 0;
 +}
 +
-+/*
-  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
-  * parent propagates to.
-  */
-diff --git a/fs/pnode.h b/fs/pnode.h
-index aa6d65d..af47d4b 100644
---- a/fs/pnode.h
-+++ b/fs/pnode.h
-@@ -42,6 +42,7 @@ int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
- 		struct hlist_head *);
- int propagate_umount(struct list_head *);
- int propagate_mount_busy(struct mount *, int);
-+void propagate_mount_unlock(struct mount *);
- void mnt_release_group_id(struct mount *);
- int get_dominating_id(struct mount *mnt, const struct path *root);
- unsigned int mnt_get_count(struct mount *mnt);
--- 
-2.3.6
-
-
-From 928116b22b1eb446c59a0fb93857d7a6d80930af Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Mon, 5 Jan 2015 13:38:04 -0600
-Subject: [PATCH 113/219] mnt: Don't propagate unmounts to locked mounts
-Cc: mpagano@gentoo.org
-
-commit 0c56fe31420ca599c90240315f7959bf1b4eb6ce upstream.
-
-If the first mount in shared subtree is locked don't unmount the
-shared subtree.
-
-This is ensured by walking through the mounts, parents before children,
-and marking a mount as unmountable if it is not locked, or if it is locked
-but its parent is marked.
-
-This allows recursive mount detach to propagate through a set of
-mounts when unmounting them would not reveal what is under any locked
-mount.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/pnode.c | 32 +++++++++++++++++++++++++++++---
- fs/pnode.h |  1 +
- 2 files changed, 30 insertions(+), 3 deletions(-)
-
-diff --git a/fs/pnode.c b/fs/pnode.c
-index 8989029..6367e1e 100644
---- a/fs/pnode.c
-+++ b/fs/pnode.c
-@@ -382,6 +382,26 @@ void propagate_mount_unlock(struct mount *mnt)
- }
++static int
++gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
++		       unsigned short addr, u8 *buf, unsigned int len)
++{
++	int reg_offset = dev_priv->gpio_mmio_base;
++	unsigned int chunk_size = len;
+ 	u32 val, loop;
  
- /*
-+ * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
-+ */
-+static void mark_umount_candidates(struct mount *mnt)
+ 	val = loop = 0;
+@@ -319,8 +342,8 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 	I915_WRITE(GMBUS3 + reg_offset, val);
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   GMBUS_CYCLE_WAIT |
+-		   (msg->len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (chunk_size << GMBUS_BYTE_COUNT_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -337,6 +360,29 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 		if (ret)
+ 			return ret;
+ 	}
++
++	return 0;
++}
++
++static int
++gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
 +{
-+	struct mount *parent = mnt->mnt_parent;
-+	struct mount *m;
++	u8 *buf = msg->buf;
++	unsigned int tx_size = msg->len;
++	unsigned int len;
++	int ret;
 +
-+	BUG_ON(parent == mnt);
++	do {
++		len = min(tx_size, GMBUS_BYTE_COUNT_MAX);
 +
-+	for (m = propagation_next(parent, parent); m;
-+			m = propagation_next(m, parent)) {
-+		struct mount *child = __lookup_mnt_last(&m->mnt,
-+						mnt->mnt_mountpoint);
-+		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
-+			SET_MNT_MARK(child);
-+		}
-+	}
-+}
++		ret = gmbus_xfer_write_chunk(dev_priv, msg->addr, buf, len);
++		if (ret)
++			return ret;
 +
-+/*
-  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
-  * parent propagates to.
-  */
-@@ -398,10 +418,13 @@ static void __propagate_umount(struct mount *mnt)
- 		struct mount *child = __lookup_mnt_last(&m->mnt,
- 						mnt->mnt_mountpoint);
- 		/*
--		 * umount the child only if the child has no
--		 * other children
-+		 * umount the child only if the child has no children
-+		 * and the child is marked safe to unmount.
- 		 */
--		if (child && list_empty(&child->mnt_mounts)) {
-+		if (!child || !IS_MNT_MARKED(child))
-+			continue;
-+		CLEAR_MNT_MARK(child);
-+		if (list_empty(&child->mnt_mounts)) {
- 			list_del_init(&child->mnt_child);
- 			child->mnt.mnt_flags |= MNT_UMOUNT;
- 			list_move_tail(&child->mnt_list, &mnt->mnt_list);
-@@ -420,6 +443,9 @@ int propagate_umount(struct list_head *list)
- {
- 	struct mount *mnt;
- 
-+	list_for_each_entry_reverse(mnt, list, mnt_list)
-+		mark_umount_candidates(mnt);
++		buf += len;
++		tx_size -= len;
++	} while (tx_size != 0);
 +
- 	list_for_each_entry(mnt, list, mnt_list)
- 		__propagate_umount(mnt);
  	return 0;
-diff --git a/fs/pnode.h b/fs/pnode.h
-index af47d4b..0fcdbe7 100644
---- a/fs/pnode.h
-+++ b/fs/pnode.h
-@@ -19,6 +19,7 @@
- #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
- #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
- #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
-+#define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
- 
- #define CL_EXPIRE    		0x01
- #define CL_SLAVE     		0x02
--- 
-2.3.6
-
-
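The marking pass described in the patch above can be shown in miniature:
process mounts parents-before-children, and mark a mount when it is
either not locked or locked beneath an already-marked parent. A minimal
userspace sketch of that rule (struct and function names are invented
for illustration; this is not the kernel code):

  #include <stdbool.h>
  #include <stdio.h>

  struct mnt {
      const char *name;
      struct mnt *parent;   /* NULL at the subtree root */
      bool locked;          /* stands in for MNT_LOCKED */
      bool marked;          /* stands in for MNT_MARKED */
  };

  /* 'order' must list every parent before its children. */
  static void mark_candidates(struct mnt **order, int n)
  {
      int i;

      for (i = 0; i < n; i++) {
          struct mnt *m = order[i];

          /* unmountable if not locked, or locked under a marked parent */
          if (!m->locked || (m->parent && m->parent->marked))
              m->marked = true;
      }
  }

  int main(void)
  {
      struct mnt root  = { "root",  NULL,   false, false };
      struct mnt child = { "child", &root,  true,  false };
      struct mnt grand = { "grand", &child, true,  false };
      struct mnt *order[] = { &root, &child, &grand };
      int i;

      mark_candidates(order, 3);
      for (i = 0; i < 3; i++)
          printf("%-5s -> %s\n", order[i]->name,
                 order[i]->marked ? "unmount" : "keep");
      return 0;
  }

Because the unlocked root is marked first, the mark flows down through
its locked descendants, which is the property that lets a recursive
detach proceed without exposing what sits under a locked mount.
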
-From 92e35ac5954f9f7829ad88066930a4b2b58fe4dd Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Mon, 29 Dec 2014 13:03:41 -0600
-Subject: [PATCH 114/219] mnt: Factor out unhash_mnt from detach_mnt and
- umount_tree
-Cc: mpagano@gentoo.org
-
-commit 7bdb11de8ee4f4ae195e2fa19efd304e0b36c63b upstream.
-
-Create a function unhash_mnt that contains the common code between
-detach_mnt and umount_tree, and use unhash_mnt in place of the common
-code.  This adds an unnecessary list_del_init(mnt->mnt_child) into
-umount_tree, but given that mnt_child is already empty, this extra
-line is a no-op.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 21 ++++++++++++---------
- 1 file changed, 12 insertions(+), 9 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 7d9a69d..0e95c84 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -798,10 +798,8 @@ static void __touch_mnt_namespace(struct mnt_namespace *ns)
- /*
-  * vfsmount lock must be held for write
-  */
--static void detach_mnt(struct mount *mnt, struct path *old_path)
-+static void unhash_mnt(struct mount *mnt)
- {
--	old_path->dentry = mnt->mnt_mountpoint;
--	old_path->mnt = &mnt->mnt_parent->mnt;
- 	mnt->mnt_parent = mnt;
- 	mnt->mnt_mountpoint = mnt->mnt.mnt_root;
- 	list_del_init(&mnt->mnt_child);
-@@ -814,6 +812,16 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
- /*
-  * vfsmount lock must be held for write
-  */
-+static void detach_mnt(struct mount *mnt, struct path *old_path)
-+{
-+	old_path->dentry = mnt->mnt_mountpoint;
-+	old_path->mnt = &mnt->mnt_parent->mnt;
-+	unhash_mnt(mnt);
-+}
-+
-+/*
-+ * vfsmount lock must be held for write
-+ */
- void mnt_set_mountpoint(struct mount *mnt,
- 			struct mountpoint *mp,
- 			struct mount *child_mnt)
-@@ -1364,15 +1372,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ }
  
- 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
- 		if (mnt_has_parent(p)) {
--			hlist_del_init(&p->mnt_mp_list);
--			put_mountpoint(p->mnt_mp);
- 			mnt_add_count(p->mnt_parent, -1);
- 			/* old mountpoint will be dropped when we can do that */
- 			p->mnt_ex_mountpoint = p->mnt_mountpoint;
--			p->mnt_mountpoint = p->mnt.mnt_root;
--			p->mnt_parent = p;
--			p->mnt_mp = NULL;
--			hlist_del_init_rcu(&p->mnt_hash);
-+			unhash_mnt(p);
- 		}
- 		change_mnt_propagation(p, MS_PRIVATE);
- 	}
--- 
-2.3.6
-
-
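The "extra line is a no-op" claim rests on a property of the kernel's
list helpers: list_del_init() on a node that is already self-linked
rewrites its pointers to the values they already hold. A self-contained
sketch with the same semantics (helpers reimplemented here purely for
illustration):

  #include <assert.h>
  #include <stdio.h>

  struct list_head { struct list_head *next, *prev; };

  static void INIT_LIST_HEAD(struct list_head *h)
  {
      h->next = h;
      h->prev = h;
  }

  static void list_del_init(struct list_head *e)
  {
      e->prev->next = e->next;  /* unlink from neighbours ... */
      e->next->prev = e->prev;
      INIT_LIST_HEAD(e);        /* ... and point the node at itself */
  }

  int main(void)
  {
      struct list_head node;

      INIT_LIST_HEAD(&node);
      /* The node is on no list, like mnt_child in umount_tree. */
      list_del_init(&node);
      assert(node.next == &node && node.prev == &node);
      puts("list_del_init() on an initialized node changes nothing");
      return 0;
  }
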
-From 2db706971b3f28b3d59a9af231578803da85def8 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Thu, 15 Jan 2015 22:58:33 -0600
-Subject: [PATCH 115/219] mnt: Factor umount_mnt from umount_tree
-Cc: mpagano@gentoo.org
-
-commit 6a46c5735c29175da55b2fa9d53775182422cdd7 upstream.
-
-For future use, factor out a function umount_mnt from umount_tree.
-This function unhashes a mount and remembers where the mount
-was mounted so that, eventually, when the code makes it to a
-sleeping context, the mountpoint can be dput.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 14 +++++++++++---
- 1 file changed, 11 insertions(+), 3 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 0e95c84..c905e48 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -822,6 +822,16 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
- /*
-  * vfsmount lock must be held for write
-  */
-+static void umount_mnt(struct mount *mnt)
-+{
-+	/* old mountpoint will be dropped when we can do that */
-+	mnt->mnt_ex_mountpoint = mnt->mnt_mountpoint;
-+	unhash_mnt(mnt);
-+}
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 86807ee..9bd5611 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -330,8 +330,10 @@ atombios_set_crtc_dtd_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+@@ -374,8 +376,10 @@ static void atombios_crtc_set_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9c47867..7fe5590 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -459,6 +459,10 @@
+ #define USB_DEVICE_ID_UGCI_FLYING	0x0020
+ #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 
++#define USB_VENDOR_ID_HP		0x03f0
++#define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
++#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
 +
-+/*
-+ * vfsmount lock must be held for write
-+ */
- void mnt_set_mountpoint(struct mount *mnt,
- 			struct mountpoint *mp,
- 			struct mount *child_mnt)
-@@ -1373,9 +1383,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
- 		if (mnt_has_parent(p)) {
- 			mnt_add_count(p->mnt_parent, -1);
--			/* old mountpoint will be dropped when we can do that */
--			p->mnt_ex_mountpoint = p->mnt_mountpoint;
--			unhash_mnt(p);
-+			umount_mnt(p);
- 		}
- 		change_mnt_propagation(p, MS_PRIVATE);
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
+ 
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index a821277..4e3ae9f 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -78,6 +78,8 @@ static const struct hid_blacklist {
+ 	{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+ 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index 2978f5e..00bc30e 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -135,7 +135,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ 			   GFP_KERNEL);
+ 	if (!open_info) {
+ 		err = -ENOMEM;
+-		goto error0;
++		goto error_gpadl;
  	}
--- 
-2.3.6
-
-
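The shape being factored out here is "record under the lock, release
later where sleeping is allowed". A rough userspace analogue of that
split (all names invented; free() stands in for the final dput()):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct mnt_like {
      char *mountpoint;     /* currently referenced */
      char *ex_mountpoint;  /* remembered for deferred release */
  };

  /* The kernel analogue runs under the vfsmount lock: record only. */
  static void umount_mnt_sketch(struct mnt_like *m)
  {
      m->ex_mountpoint = m->mountpoint;
      m->mountpoint = NULL;
  }

  /* Runs later, in a context that is allowed to sleep. */
  static void deferred_release(struct mnt_like *m)
  {
      free(m->ex_mountpoint);
      m->ex_mountpoint = NULL;
  }

  int main(void)
  {
      struct mnt_like m = { strdup("/mnt/data"), NULL };

      umount_mnt_sketch(&m);
      deferred_release(&m);
      puts("mountpoint released outside the locked section");
      return 0;
  }
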
-From 20e62ee6fa3da23a792ca31d4b68069060317260 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Tue, 23 Dec 2014 21:37:03 -0600
-Subject: [PATCH 116/219] mnt: Honor MNT_LOCKED when detaching mounts
-Cc: mpagano@gentoo.org
-
-commit ce07d891a0891d3c0d0c2d73d577490486b809e1 upstream.
-
-Modify umount(MNT_DETACH) to keep mounts in the hash table that are
-locked to their parent mounts, when the parent is lazily unmounted.
-
-In mntput_no_expire, detach the children from the hash table, relying
-on mnt_pin_kill in cleanup_mnt to decrement the mnt_count of the children.
-
-In __detach_mounts, if there are any mounts that have been unmounted
-but are still on the list of mounts of a mountpoint, remove their
-children from the mount hash table and move those children to the
-unmounted list, so they won't linger potentially indefinitely waiting
-for their final mntput, now that the mounts serve no purpose.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 29 ++++++++++++++++++++++++++---
- fs/pnode.h     |  2 ++
- 2 files changed, 28 insertions(+), 3 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index c905e48..24de1e9 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1099,6 +1099,13 @@ static void mntput_no_expire(struct mount *mnt)
- 	rcu_read_unlock();
  
- 	list_del(&mnt->mnt_instance);
-+
-+	if (unlikely(!list_empty(&mnt->mnt_mounts))) {
-+		struct mount *p, *tmp;
-+		list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
-+			umount_mnt(p);
-+		}
-+	}
- 	unlock_mount_hash();
+ 	init_completion(&open_info->waitevent);
+@@ -151,7 +151,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
  
- 	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
-@@ -1372,6 +1379,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 		propagate_umount(&tmp_list);
+ 	if (userdatalen > MAX_USER_DEFINED_BYTES) {
+ 		err = -EINVAL;
+-		goto error0;
++		goto error_gpadl;
+ 	}
  
- 	while (!list_empty(&tmp_list)) {
-+		bool disconnect;
- 		p = list_first_entry(&tmp_list, struct mount, mnt_list);
- 		list_del_init(&p->mnt_expire);
- 		list_del_init(&p->mnt_list);
-@@ -1380,10 +1388,18 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 		if (how & UMOUNT_SYNC)
- 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ 	if (userdatalen)
+@@ -195,6 +195,9 @@ error1:
+ 	list_del(&open_info->msglistentry);
+ 	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
  
--		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
-+		disconnect = !IS_MNT_LOCKED_AND_LAZY(p);
++error_gpadl:
++	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
 +
-+		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
-+				 disconnect ? &unmounted : NULL);
- 		if (mnt_has_parent(p)) {
- 			mnt_add_count(p->mnt_parent, -1);
--			umount_mnt(p);
-+			if (!disconnect) {
-+				/* Don't forget about p */
-+				list_add_tail(&p->mnt_child, &p->mnt_parent->mnt_mounts);
-+			} else {
-+				umount_mnt(p);
-+			}
- 		}
- 		change_mnt_propagation(p, MS_PRIVATE);
- 	}
-@@ -1508,7 +1524,14 @@ void __detach_mounts(struct dentry *dentry)
- 	lock_mount_hash();
- 	while (!hlist_empty(&mp->m_list)) {
- 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
--		umount_tree(mnt, 0);
-+		if (mnt->mnt.mnt_flags & MNT_UMOUNT) {
-+			struct mount *p, *tmp;
-+			list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
-+				hlist_add_head(&p->mnt_umount.s_list, &unmounted);
-+				umount_mnt(p);
-+			}
-+		}
-+		else umount_tree(mnt, 0);
- 	}
- 	unlock_mount_hash();
- 	put_mountpoint(mp);
-diff --git a/fs/pnode.h b/fs/pnode.h
-index 0fcdbe7..7114ce6 100644
---- a/fs/pnode.h
-+++ b/fs/pnode.h
-@@ -20,6 +20,8 @@
- #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
- #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
- #define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
-+#define IS_MNT_LOCKED_AND_LAZY(m) \
-+	(((m)->mnt.mnt_flags & (MNT_LOCKED|MNT_SYNC_UMOUNT)) == MNT_LOCKED)
+ error0:
+ 	free_pages((unsigned long)out,
+ 		get_order(send_ringbuffer_size + recv_ringbuffer_size));
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 5f96b1b..019d542 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -833,7 +833,7 @@ static int rk3x_i2c_xfer(struct i2c_adapter *adap,
+ 	clk_disable(i2c->clk);
+ 	spin_unlock_irqrestore(&i2c->lock, flags);
  
- #define CL_EXPIRE    		0x01
- #define CL_SLAVE     		0x02
--- 
-2.3.6
-
-
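The IS_MNT_LOCKED_AND_LAZY() macro introduced above tests two bits with
a single mask-and-compare: masking with both flags and comparing the
result to MNT_LOCKED alone is true only when MNT_LOCKED is set and
MNT_SYNC_UMOUNT is clear. The idiom in isolation (the flag values here
are made up):

  #include <assert.h>
  #include <stdio.h>

  #define F_LOCKED 0x01  /* stands in for MNT_LOCKED */
  #define F_SYNC   0x02  /* stands in for MNT_SYNC_UMOUNT */

  #define LOCKED_AND_LAZY(flags) \
      (((flags) & (F_LOCKED | F_SYNC)) == F_LOCKED)

  int main(void)
  {
      assert( LOCKED_AND_LAZY(F_LOCKED));           /* locked, lazy */
      assert(!LOCKED_AND_LAZY(F_LOCKED | F_SYNC));  /* locked, sync */
      assert(!LOCKED_AND_LAZY(F_SYNC));             /* not locked   */
      assert(!LOCKED_AND_LAZY(0));                  /* neither bit  */
      puts("one compare distinguishes all four flag combinations");
      return 0;
  }
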
-From c076cbf218f3cb83dffe6982587d2b9751318962 Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Mon, 19 Jan 2015 11:48:45 -0600
-Subject: [PATCH 117/219] mnt: Fix the error check in __detach_mounts
-Cc: mpagano@gentoo.org
-
-commit f53e57975151f54ad8caa1b0ac8a78091cd5700a upstream.
-
-lookup_mountpoint can return either NULL or an error value.
-Update the test in __detach_mounts to test for an error value
-to avoid pathological cases causing a NULL pointer dereference.
-
-The callers of __detach_mounts should prevent it from ever being
-called on an unlinked dentry but don't take any chances.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 24de1e9..9e33895 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1518,7 +1518,7 @@ void __detach_mounts(struct dentry *dentry)
+-	return ret;
++	return ret < 0 ? ret : num;
+ }
  
- 	namespace_lock();
- 	mp = lookup_mountpoint(dentry);
--	if (!mp)
-+	if (IS_ERR_OR_NULL(mp))
- 		goto out_unlock;
+ static u32 rk3x_i2c_func(struct i2c_adapter *adap)
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index edf274c..8143162 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -596,6 +596,7 @@ int i2c_generic_scl_recovery(struct i2c_adapter *adap)
+ 	adap->bus_recovery_info->set_scl(adap, 1);
+ 	return i2c_generic_recovery(adap);
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
  
- 	lock_mount_hash();
--- 
-2.3.6
-
-
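The one-line fix matters because kernel lookup helpers commonly return
one of three things: a valid pointer, NULL ("nothing there"), or an
errno encoded into the pointer with ERR_PTR() -- so a bare NULL test
misses the error case entirely. A userspace re-creation of the
convention, simplified from the kernel's linux/err.h:

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>

  #define MAX_ERRNO 4095

  static void *ERR_PTR(long err) { return (void *)err; }
  static long PTR_ERR(const void *p) { return (long)p; }

  static int IS_ERR(const void *p)
  {
      return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
  }

  static int IS_ERR_OR_NULL(const void *p)
  {
      return !p || IS_ERR(p);
  }

  /* May return a valid pointer, NULL, or ERR_PTR(-errno). */
  static void *lookup(int which)
  {
      static int obj;

      switch (which) {
      case 0:  return &obj;
      case 1:  return NULL;
      default: return ERR_PTR(-ENOMEM);
      }
  }

  int main(void)
  {
      int i;

      for (i = 0; i < 3; i++) {
          void *p = lookup(i);

          /* testing only '!p' here would dereference case 2 */
          if (IS_ERR_OR_NULL(p))
              printf("lookup(%d): skipped (err=%ld)\n", i,
                     IS_ERR(p) ? PTR_ERR(p) : 0L);
          else
              printf("lookup(%d): valid object\n", i);
      }
      return 0;
  }
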
-From 84b78514033ff22c443473214ab6d0508394cf7a Mon Sep 17 00:00:00 2001
-From: "Eric W. Biederman" <ebiederm@xmission.com>
-Date: Wed, 1 Apr 2015 18:30:06 -0500
-Subject: [PATCH 118/219] mnt: Update detach_mounts to leave mounts connected
-Cc: mpagano@gentoo.org
-
-commit e0c9c0afd2fc958ffa34b697972721d81df8a56f upstream.
-
-Now that it is possible to lazily unmount an entire mount tree and
-leave the individual mounts connected to each other, add a new flag,
-UMOUNT_CONNECTED, to umount_tree to force this behavior, and use
-this flag in detach_mounts.
-
-This closes a bug where the deletion of a file or directory could
-trigger an unmount and reveal data under a mount point.
-
-Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namespace.c | 8 ++++++--
- 1 file changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 9e33895..4622ee3 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -1350,6 +1350,7 @@ static inline void namespace_lock(void)
- enum umount_tree_flags {
- 	UMOUNT_SYNC = 1,
- 	UMOUNT_PROPAGATE = 2,
-+	UMOUNT_CONNECTED = 4,
- };
- /*
-  * mount_lock must be held
-@@ -1388,7 +1389,10 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
- 		if (how & UMOUNT_SYNC)
- 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ {
+@@ -610,6 +611,7 @@ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
  
--		disconnect = !IS_MNT_LOCKED_AND_LAZY(p);
-+		disconnect = !(((how & UMOUNT_CONNECTED) &&
-+				mnt_has_parent(p) &&
-+				(p->mnt_parent->mnt.mnt_flags & MNT_UMOUNT)) ||
-+			       IS_MNT_LOCKED_AND_LAZY(p));
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
  
- 		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
- 				 disconnect ? &unmounted : NULL);
-@@ -1531,7 +1535,7 @@ void __detach_mounts(struct dentry *dentry)
- 				umount_mnt(p);
- 			}
- 		}
--		else umount_tree(mnt, 0);
-+		else umount_tree(mnt, UMOUNT_CONNECTED);
- 	}
- 	unlock_mount_hash();
- 	put_mountpoint(mp);
--- 
-2.3.6
-
-
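UMOUNT_CONNECTED slots in beside UMOUNT_SYNC and UMOUNT_PROPAGATE as
one more or-able bit, so a single umount_tree() entry point can serve
several unmount policies. Schematically (behavior reduced to prints;
only the flag names are taken from the patch):

  #include <stdio.h>

  enum umount_tree_flags {
      UMOUNT_SYNC      = 1,
      UMOUNT_PROPAGATE = 2,
      UMOUNT_CONNECTED = 4,
  };

  static void umount_tree_sketch(int how)
  {
      printf("sync=%d propagate=%d keep-connected=%d\n",
             !!(how & UMOUNT_SYNC),
             !!(how & UMOUNT_PROPAGATE),
             !!(how & UMOUNT_CONNECTED));
  }

  int main(void)
  {
      umount_tree_sketch(UMOUNT_SYNC | UMOUNT_PROPAGATE); /* umount    */
      umount_tree_sketch(UMOUNT_PROPAGATE);               /* umount -l */
      umount_tree_sketch(UMOUNT_CONNECTED);               /* detach    */
      return 0;
  }
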
-From 85c75cd8131b5aa9fe4efc6400ae1d0631497720 Mon Sep 17 00:00:00 2001
-From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
-Date: Wed, 18 Mar 2015 08:17:14 +0200
-Subject: [PATCH 119/219] tpm: fix: sanitized code paths in tpm_chip_register()
-Cc: mpagano@gentoo.org
-
-commit 34d47b6322087665be33ca3aa81775b143a4d7ac upstream.
-
-I started to work with the PPI interface so that it would be available
-under the character device sysfs directory, and realized that chip
-registration was still too messy.
-
-In TPM 1.x, in some rare scenarios (errors that almost never occur),
-the wrong order of deinitialization steps was taken in teardown. I
-reproduced these scenarios by manually inserting error codes in the
-place of the corresponding function calls.
-
-The key problem is that the teardown is messy with two separate code
-paths (this was inherited when moving code from tpm-interface.c).
-
-Moved the TPM 1.x specific register/unregister functionality to its own
-helper functions and added a single code path for teardown in
-tpm_chip_register(). Now the code paths have been fixed, and this part
-of the code should be easier to review later on.
-
-Fixes: 7a1d7e6dd76a ("tpm: TPM 2.0 baseline support")
-Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
-Tested-by: Scot Doyle <lkml14@scotdoyle.com>
-Reviewed-by: Peter Huewe <peterhuewe@gmx.de>
-Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/char/tpm/tpm-chip.c | 66 ++++++++++++++++++++++++++++-----------------
- 1 file changed, 42 insertions(+), 24 deletions(-)
-
-diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
-index e096e9c..283f00a 100644
---- a/drivers/char/tpm/tpm-chip.c
-+++ b/drivers/char/tpm/tpm-chip.c
-@@ -170,6 +170,41 @@ static void tpm_dev_del_device(struct tpm_chip *chip)
- 	device_unregister(&chip->dev);
+ int i2c_recover_bus(struct i2c_adapter *adap)
+ {
+@@ -619,6 +621,7 @@ int i2c_recover_bus(struct i2c_adapter *adap)
+ 	dev_dbg(&adap->dev, "Trying i2c bus recovery\n");
+ 	return adap->bus_recovery_info->recover_bus(adap);
  }
++EXPORT_SYMBOL_GPL(i2c_recover_bus);
  
-+static int tpm1_chip_register(struct tpm_chip *chip)
-+{
-+	int rc;
-+
-+	if (chip->flags & TPM_CHIP_FLAG_TPM2)
-+		return 0;
-+
-+	rc = tpm_sysfs_add_device(chip);
-+	if (rc)
-+		return rc;
-+
-+	rc = tpm_add_ppi(chip);
-+	if (rc) {
-+		tpm_sysfs_del_device(chip);
-+		return rc;
-+	}
-+
-+	chip->bios_dir = tpm_bios_log_setup(chip->devname);
-+
-+	return 0;
-+}
-+
-+static void tpm1_chip_unregister(struct tpm_chip *chip)
-+{
-+	if (chip->flags & TPM_CHIP_FLAG_TPM2)
-+		return;
+ static int i2c_device_probe(struct device *dev)
+ {
+@@ -1410,6 +1413,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ 
+ 	dev_dbg(&adap->dev, "adapter [%s] registered\n", adap->name);
+ 
++	pm_runtime_no_callbacks(&adap->dev);
 +
-+	if (chip->bios_dir)
-+		tpm_bios_log_teardown(chip->bios_dir);
-+
-+	tpm_remove_ppi(chip);
-+
-+	tpm_sysfs_del_device(chip);
-+}
-+
- /*
-  * tpm_chip_register() - create a character device for the TPM chip
-  * @chip: TPM chip to use.
-@@ -185,22 +220,13 @@ int tpm_chip_register(struct tpm_chip *chip)
- {
- 	int rc;
+ #ifdef CONFIG_I2C_COMPAT
+ 	res = class_compat_create_link(i2c_adapter_compat_class, &adap->dev,
+ 				       adap->dev.parent);
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 593f7ca..06cc1ff 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -32,8 +32,9 @@ struct i2c_mux_priv {
+ 	struct i2c_algorithm algo;
  
--	/* Populate sysfs for TPM1 devices. */
--	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
--		rc = tpm_sysfs_add_device(chip);
--		if (rc)
--			goto del_misc;
--
--		rc = tpm_add_ppi(chip);
--		if (rc)
--			goto del_sysfs;
--
--		chip->bios_dir = tpm_bios_log_setup(chip->devname);
--	}
-+	rc = tpm1_chip_register(chip);
-+	if (rc)
-+		return rc;
+ 	struct i2c_adapter *parent;
+-	void *mux_priv;	/* the mux chip/device */
+-	u32  chan_id;	/* the channel id */
++	struct device *mux_dev;
++	void *mux_priv;
++	u32 chan_id;
  
- 	rc = tpm_dev_add_device(chip);
- 	if (rc)
--		return rc;
-+		goto out_err;
+ 	int (*select)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+ 	int (*deselect)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+@@ -119,6 +120,7 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
  
- 	/* Make the chip available. */
- 	spin_lock(&driver_lock);
-@@ -210,10 +236,8 @@ int tpm_chip_register(struct tpm_chip *chip)
- 	chip->flags |= TPM_CHIP_FLAG_REGISTERED;
+ 	/* Set up private adapter data */
+ 	priv->parent = parent;
++	priv->mux_dev = mux_dev;
+ 	priv->mux_priv = mux_priv;
+ 	priv->chan_id = chan_id;
+ 	priv->select = select;
+@@ -203,7 +205,7 @@ void i2c_del_mux_adapter(struct i2c_adapter *adap)
+ 	char symlink_name[20];
  
- 	return 0;
--del_sysfs:
--	tpm_sysfs_del_device(chip);
--del_misc:
--	tpm_dev_del_device(chip);
-+out_err:
-+	tpm1_chip_unregister(chip);
- 	return rc;
- }
- EXPORT_SYMBOL_GPL(tpm_chip_register);
-@@ -238,13 +262,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
- 	spin_unlock(&driver_lock);
- 	synchronize_rcu();
+ 	snprintf(symlink_name, sizeof(symlink_name), "channel-%u", priv->chan_id);
+-	sysfs_remove_link(&adap->dev.parent->kobj, symlink_name);
++	sysfs_remove_link(&priv->mux_dev->kobj, symlink_name);
  
--	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
--		if (chip->bios_dir)
--			tpm_bios_log_teardown(chip->bios_dir);
--		tpm_remove_ppi(chip);
--		tpm_sysfs_del_device(chip);
--	}
--
-+	tpm1_chip_unregister(chip);
- 	tpm_dev_del_device(chip);
- }
- EXPORT_SYMBOL_GPL(tpm_chip_unregister);
--- 
-2.3.6
-
-
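The reorganization is the classic "one error path" shape: a
version-specific helper that unwinds only what it itself set up, and a
caller whose failures all funnel through a single label. A stubbed
skeleton of the pattern (every function here is a dummy, not the
driver code):

  #include <stdio.h>

  struct chip { int unused; };

  static int  add_sysfs(struct chip *c) { (void)c; return 0; }
  static void del_sysfs(struct chip *c) { (void)c; }
  static int  add_ppi(struct chip *c)   { (void)c; return 0; }
  static void del_ppi(struct chip *c)   { (void)c; }
  static int  add_dev(struct chip *c)   { (void)c; return -1; }

  static int chip1_register(struct chip *c)
  {
      int rc = add_sysfs(c);

      if (rc)
          return rc;
      rc = add_ppi(c);
      if (rc) {
          del_sysfs(c);  /* unwind only what this helper did */
          return rc;
      }
      return 0;
  }

  static void chip1_unregister(struct chip *c)
  {
      del_ppi(c);
      del_sysfs(c);
  }

  static int chip_register(struct chip *c)
  {
      int rc = chip1_register(c);

      if (rc)
          return rc;
      rc = add_dev(c);   /* fails here, forcing the unwind */
      if (rc)
          goto out_err;  /* the single teardown path */
      return 0;
  out_err:
      chip1_unregister(c);
      return rc;
  }

  int main(void)
  {
      struct chip c;

      printf("register: %d\n", chip_register(&c));
      return 0;
  }
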
-From b0566aa080d2ab7f5810f5bdea53c02dfc78ff16 Mon Sep 17 00:00:00 2001
-From: Vinson Lee <vlee@twitter.com>
-Date: Mon, 9 Feb 2015 16:29:37 -0800
-Subject: [PATCH 120/219] perf symbols: Define STT_GNU_IFUNC for glibc 2.9 and
- older.
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 4e31050f482c02c822b150d71cf1ea5be7c9d6e4 upstream.
-
-The token STT_GNU_IFUNC is not available with glibc 2.9 and older.
-Define this token if it is not already defined.
-
-This patch fixes these build errors with older versions of glibc.
-
-  CC       util/symbol-elf.o
-util/symbol-elf.c: In function ‘elf_sym__is_function’:
-util/symbol-elf.c:75: error: ‘STT_GNU_IFUNC’ undeclared (first use in this function)
-util/symbol-elf.c:75: error: (Each undeclared identifier is reported only once
-util/symbol-elf.c:75: error: for each function it appears in.)
-make: *** [util/symbol-elf.o] Error 1
-
-Signed-off-by: Vinson Lee <vlee@twitter.com>
-Acked-by: Namhyung Kim <namhyung@kernel.org>
-Cc: Adrian Hunter <adrian.hunter@intel.com>
-Cc: Anton Blanchard <anton@samba.org>
-Cc: Avi Kivity <avi@cloudius-systems.com>
-Cc: Jiri Olsa <jolsa@redhat.com>
-Cc: Paul Mackerras <paulus@samba.org>
-Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
-Cc: Stephane Eranian <eranian@google.com>
-Cc: Waiman Long <Waiman.Long@hp.com>
-Link: http://lkml.kernel.org/r/1423528286-13630-1-git-send-email-vlee@twopensource.com
-Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- tools/perf/util/symbol-elf.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
-index 33b7a2a..9bdf007 100644
---- a/tools/perf/util/symbol-elf.c
-+++ b/tools/perf/util/symbol-elf.c
-@@ -74,6 +74,10 @@ static inline uint8_t elf_sym__type(const GElf_Sym *sym)
- 	return GELF_ST_TYPE(sym->st_info);
- }
+ 	sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
+ 	i2c_del_adapter(adap);
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index b0e5852..44d1d79 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -218,18 +218,10 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+-		.name = "C1E-BYT",
+-		.desc = "MWAIT 0x01",
+-		.flags = MWAIT2flg(0x01),
+-		.exit_latency = 15,
+-		.target_residency = 30,
+-		.enter = &intel_idle,
+-		.enter_freeze = intel_idle_freeze, },
+-	{
+ 		.name = "C6N-BYT",
+ 		.desc = "MWAIT 0x58",
+ 		.flags = MWAIT2flg(0x58) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 40,
++		.exit_latency = 300,
+ 		.target_residency = 275,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -237,7 +229,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.name = "C6S-BYT",
+ 		.desc = "MWAIT 0x52",
+ 		.flags = MWAIT2flg(0x52) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 140,
++		.exit_latency = 500,
+ 		.target_residency = 560,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -246,7 +238,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.desc = "MWAIT 0x60",
+ 		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+ 		.exit_latency = 1200,
+-		.target_residency = 1500,
++		.target_residency = 4000,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 8c014b5..38acb3c 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -99,12 +99,15 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ 	if (dmasync)
+ 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
  
-+#ifndef STT_GNU_IFUNC
-+#define STT_GNU_IFUNC 10
-+#endif
++	if (!size)
++		return ERR_PTR(-EINVAL);
 +
- static inline int elf_sym__is_function(const GElf_Sym *sym)
- {
- 	return (elf_sym__type(sym) == STT_FUNC ||
--- 
-2.3.6
-
-
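The guard is the usual compatibility idiom: if the system headers
already define the token, theirs wins; otherwise supply the ABI value
locally. In isolation (Linux/ELF assumed):

  #include <elf.h>
  #include <stdio.h>

  /* glibc <= 2.9 ships an elf.h without this type; 10 is the ABI value. */
  #ifndef STT_GNU_IFUNC
  #define STT_GNU_IFUNC 10
  #endif

  int main(void)
  {
      printf("STT_GNU_IFUNC = %d\n", STT_GNU_IFUNC);
      return 0;
  }
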
-From eefadbaae8af748e25d6fb903b56c6d3e38215b8 Mon Sep 17 00:00:00 2001
-From: "H.J. Lu" <hjl.tools@gmail.com>
-Date: Tue, 17 Mar 2015 15:27:48 -0700
-Subject: [PATCH 121/219] perf tools: Fix perf-read-vdsox32 not building and
- lib64 install dir
-Cc: mpagano@gentoo.org
-
-commit 76aea7731e7050c066943a1d7456ec6510702601 upstream.
-
-Commit:
-
-  c6e5e9fbc3ea ("perf tools: Fix building error in x86_64 when dwarf unwind is on")
-
-removed the definition of IS_X86_64 but not all places using it, with
-the consequence that perf-read-vdsox32 would not be built anymore, and
-the default lib install directory was 'lib' instead of 'lib64'.
-
-Also needs to go to v3.19.
-
-Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
-Acked-by: Adrian Hunter <adrian.hunter@intel.com>
-Acked-by: Jiri Olsa <jolsa@kernel.org>
-Link: http://lkml.kernel.org/r/CAMe9rOqpGVq3D88w+D15ef7sv6G6k57ZeTvxBm46=WFgzo9p1w@mail.gmail.com
-Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- tools/perf/config/Makefile | 4 ++--
- tools/perf/tests/make      | 2 +-
- 2 files changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
-index cc22408..0884d31 100644
---- a/tools/perf/config/Makefile
-+++ b/tools/perf/config/Makefile
-@@ -651,7 +651,7 @@ ifeq (${IS_64_BIT}, 1)
-       NO_PERF_READ_VDSO32 := 1
-     endif
-   endif
--  ifneq (${IS_X86_64}, 1)
-+  ifneq ($(ARCH), x86)
-     NO_PERF_READ_VDSOX32 := 1
-   endif
-   ifndef NO_PERF_READ_VDSOX32
-@@ -699,7 +699,7 @@ sysconfdir = $(prefix)/etc
- ETC_PERFCONFIG = etc/perfconfig
- endif
- ifndef lib
--ifeq ($(IS_X86_64),1)
-+ifeq ($(ARCH)$(IS_64_BIT), x861)
- lib = lib64
- else
- lib = lib
-diff --git a/tools/perf/tests/make b/tools/perf/tests/make
-index 75709d2..bff8532 100644
---- a/tools/perf/tests/make
-+++ b/tools/perf/tests/make
-@@ -5,7 +5,7 @@ include config/Makefile.arch
+ 	/*
+ 	 * If the combination of the addr and size requested for this memory
+ 	 * region causes an integer overflow, return error.
+ 	 */
+-	if ((PAGE_ALIGN(addr + size) <= size) ||
+-	    (PAGE_ALIGN(addr + size) <= addr))
++	if (((addr + size) < addr) ||
++	    PAGE_ALIGN(addr + size) < (addr + size))
+ 		return ERR_PTR(-EINVAL);
  
- # FIXME looks like x86 is the only arch running tests ;-)
- # we need some IS_(32/64) flag to make this generic
--ifeq ($(IS_X86_64),1)
-+ifeq ($(ARCH)$(IS_64_BIT), x861)
- lib = lib64
- else
- lib = lib
--- 
-2.3.6
-
-
-From a245448568a6f791b7d4617e622adf6e7118d174 Mon Sep 17 00:00:00 2001
-From: Vinson Lee <vlee@twitter.com>
-Date: Mon, 23 Mar 2015 12:09:16 -0700
-Subject: [PATCH 122/219] perf tools: Work around lack of sched_getcpu in glibc
- < 2.6.
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit e1e455f4f4d35850c30235747620d0d078fe9f64 upstream.
-
-This patch fixes this build error with glibc < 2.6.
-
-  CC       util/cloexec.o
-cc1: warnings being treated as errors
-util/cloexec.c: In function ‘perf_flag_probe’:
-util/cloexec.c:24: error: implicit declaration of function
-‘sched_getcpu’
-util/cloexec.c:24: error: nested extern declaration of ‘sched_getcpu’
-make: *** [util/cloexec.o] Error 1
-
-Signed-off-by: Vinson Lee <vlee@twitter.com>
-Acked-by: Jiri Olsa <jolsa@kernel.org>
-Acked-by: Namhyung Kim <namhyung@kernel.org>
-Cc: Adrian Hunter <adrian.hunter@intel.com>
-Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
-Cc: Paul Mackerras <paulus@samba.org>
-Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
-Cc: Yann Droneaud <ydroneaud@opteya.com>
-Link: http://lkml.kernel.org/r/1427137761-16119-1-git-send-email-vlee@twopensource.com
-Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- tools/perf/util/cloexec.c | 6 ++++++
- tools/perf/util/cloexec.h | 6 ++++++
- 2 files changed, 12 insertions(+)
-
-diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
-index 6da965b..85b5238 100644
---- a/tools/perf/util/cloexec.c
-+++ b/tools/perf/util/cloexec.c
-@@ -7,6 +7,12 @@
+ 	if (!can_do_mlock())
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index ed2bd67..fbde33a 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -2605,8 +2605,7 @@ static int build_lso_seg(struct mlx4_wqe_lso_seg *wqe, struct ib_send_wr *wr,
  
- static unsigned long flag = PERF_FLAG_FD_CLOEXEC;
+ 	memcpy(wqe->header, wr->wr.ud.header, wr->wr.ud.hlen);
  
-+int __weak sched_getcpu(void)
-+{
-+	errno = ENOSYS;
-+	return -1;
-+}
-+
- static int perf_flag_probe(void)
- {
- 	/* use 'safest' configuration as used in perf_evsel__fallback() */
-diff --git a/tools/perf/util/cloexec.h b/tools/perf/util/cloexec.h
-index 94a5a7d..68888c2 100644
---- a/tools/perf/util/cloexec.h
-+++ b/tools/perf/util/cloexec.h
-@@ -3,4 +3,10 @@
+-	*lso_hdr_sz  = cpu_to_be32((wr->wr.ud.mss - wr->wr.ud.hlen) << 16 |
+-				   wr->wr.ud.hlen);
++	*lso_hdr_sz  = cpu_to_be32(wr->wr.ud.mss << 16 | wr->wr.ud.hlen);
+ 	*lso_seg_len = halign;
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index 20e859a..76eb57b 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -409,8 +409,8 @@ int iser_send_command(struct iscsi_conn *conn,
+ 	if (scsi_prot_sg_count(sc)) {
+ 		prot_buf->buf  = scsi_prot_sglist(sc);
+ 		prot_buf->size = scsi_prot_sg_count(sc);
+-		prot_buf->data_len = data_buf->data_len >>
+-				     ilog2(sc->device->sector_size) * 8;
++		prot_buf->data_len = (data_buf->data_len >>
++				     ilog2(sc->device->sector_size)) * 8;
+ 	}
  
- unsigned long perf_event_open_cloexec_flag(void);
+ 	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 075b19c..147029a 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -222,7 +222,7 @@ fail:
+ static void
+ isert_free_rx_descriptors(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
++	struct ib_device *ib_dev = isert_conn->conn_device->ib_device;
+ 	struct iser_rx_desc *rx_desc;
+ 	int i;
  
-+#ifdef __GLIBC_PREREQ
-+#if !__GLIBC_PREREQ(2, 6)
-+extern int sched_getcpu(void) __THROW;
-+#endif
-+#endif
-+
- #endif /* __PERF_CLOEXEC_H */
--- 
-2.3.6
-
-
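The __weak definition relies on the linker preferring a strong libc
definition when one exists and keeping the stub when it doesn't. The
mechanism in isolation (GCC/Clang on ELF assumed; interaction with a
shared libc has subtleties, so treat this strictly as a sketch):

  #include <errno.h>
  #include <stdio.h>

  /* Fallback used only when no strong sched_getcpu() is available. */
  int __attribute__((weak)) sched_getcpu(void)
  {
      errno = ENOSYS;
      return -1;
  }

  int main(void)
  {
      int cpu = sched_getcpu();

      if (cpu < 0)
          perror("sched_getcpu");
      else
          printf("running on cpu %d\n", cpu);
      return 0;
  }
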
-From beda5943f15926783dc6768e8f821266ae6e8fb3 Mon Sep 17 00:00:00 2001
-From: Anton Blanchard <anton@samba.org>
-Date: Tue, 14 Apr 2015 07:51:03 +1000
-Subject: [PATCH 123/219] powerpc/perf: Cap 64bit userspace backtraces to
- PERF_MAX_STACK_DEPTH
-Cc: mpagano@gentoo.org
-
-commit 9a5cbce421a283e6aea3c4007f141735bf9da8c3 upstream.
-
-We cap 32bit userspace backtraces to PERF_MAX_STACK_DEPTH
-(currently 127), but we forgot to do the same for 64bit backtraces.
-
-Signed-off-by: Anton Blanchard <anton@samba.org>
-Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/perf/callchain.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
-index 2396dda..ead5535 100644
---- a/arch/powerpc/perf/callchain.c
-+++ b/arch/powerpc/perf/callchain.c
-@@ -243,7 +243,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
- 	sp = regs->gpr[1];
- 	perf_callchain_store(entry, next_ip);
+@@ -719,8 +719,8 @@ out:
+ static void
+ isert_connect_release(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
+ 	struct isert_device *device = isert_conn->conn_device;
++	struct ib_device *ib_dev = device->ib_device;
  
--	for (;;) {
-+	while (entry->nr < PERF_MAX_STACK_DEPTH) {
- 		fp = (unsigned long __user *) sp;
- 		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
- 			return;
--- 
-2.3.6
-
-
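The change replaces an unbounded for (;;) walk with one bounded by the
same cap already applied to 32-bit backtraces. Reduced to its core:

  #include <stdio.h>

  #define STACK_DEPTH_CAP 127  /* mirrors PERF_MAX_STACK_DEPTH */

  /* Record frames from a saved chain, never more than the cap. */
  static int walk_user_stack(const unsigned long *chain, int available)
  {
      int depth = 0;

      while (depth < STACK_DEPTH_CAP && depth < available) {
          /* a real walker would validate and read chain[depth] here */
          depth++;
      }
      return depth;
  }

  int main(void)
  {
      unsigned long chain[512] = { 0 };

      printf("recorded %d of %d frames\n",
             walk_user_stack(chain, 512), 512);
      return 0;
  }
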
-From f0289e90ac96271337d6d0f9c9a6ceb2aea62a05 Mon Sep 17 00:00:00 2001
-From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
-Date: Tue, 24 Mar 2015 09:57:55 -0400
-Subject: [PATCH 124/219] tools lib traceevent kbuffer: Remove extra update to
- data pointer in PADDING
-Cc: mpagano@gentoo.org
-
-commit c5e691928bf166ac03430e957038b60adba3cf6c upstream.
-
-When an event PADDING is hit (a deleted event that is still in the ring
-buffer), translate_data() sets the length of the padding and also updates
-the data pointer, which is passed back to the caller.
-
-This is unneeded because the caller also updates the data pointer with
-the passed back length. translate_data() should not update the pointer,
-only set the length.
-
-Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-Cc: Andrew Morton <akpm@linux-foundation.org>
-Cc: Jiri Olsa <jolsa@redhat.com>
-Cc: Namhyung Kim <namhyung@kernel.org>
-Link: http://lkml.kernel.org/r/20150324135923.461431960@goodmis.org
-Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- tools/lib/traceevent/kbuffer-parse.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/tools/lib/traceevent/kbuffer-parse.c b/tools/lib/traceevent/kbuffer-parse.c
-index dcc6652..deb3569 100644
---- a/tools/lib/traceevent/kbuffer-parse.c
-+++ b/tools/lib/traceevent/kbuffer-parse.c
-@@ -372,7 +372,6 @@ translate_data(struct kbuffer *kbuf, void *data, void **rptr,
- 	switch (type_len) {
- 	case KBUFFER_TYPE_PADDING:
- 		*length = read_4(kbuf, data);
--		data += *length;
- 		break;
+ 	isert_dbg("conn %p\n", isert_conn);
  
- 	case KBUFFER_TYPE_TIME_EXTEND:
--- 
-2.3.6
-
-
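The bug class is a cursor advanced twice, once by the callee and once
by the caller, silently skipping data. The fixed contract, sketched:
the length reporter never moves the pointer; only the caller does:

  #include <stdio.h>

  /* Report the record's length; do NOT advance the cursor here. */
  static unsigned int record_length(const unsigned char *data)
  {
      return data[0];  /* pretend the length is the first byte */
  }

  int main(void)
  {
      unsigned char buf[] = { 3, 'a', 'b', 2, 'c', 1 };
      const unsigned char *p = buf;

      while (p < buf + sizeof(buf)) {
          unsigned int len = record_length(p);

          printf("record of %u byte(s)\n", len);
          p += len;  /* the caller advances exactly once */
      }
      return 0;
  }
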
-From e5e82af52cd373fed10be67faba90cd2eed6fb17 Mon Sep 17 00:00:00 2001
-From: Thomas D <whissi@whissi.de>
-Date: Mon, 5 Jan 2015 21:37:23 +0100
-Subject: [PATCH 125/219] tools/power turbostat: Use $(CURDIR) instead of
- $(PWD) and add support for O= option in Makefile
-Cc: mpagano@gentoo.org
-
-commit f82263c6989c31ae9b94cecddffb29dcbec38710 upstream.
-
-Since commit ee0778a30153
-("tools/power: turbostat: make Makefile a bit more capable")
-turbostat's Makefile is using
-
-  [...]
-  BUILD_OUTPUT    := $(PWD)
-  [...]
-
-which obviously causes trouble when building "turbostat" with
-
-  make -C /usr/src/linux/tools/power/x86/turbostat ARCH=x86 turbostat
-
-because GNU make does not update nor guarantee that $PWD is set.
-
-This patch changes the Makefile to use $CURDIR instead, which GNU make
-guarantees to set and update (e.g. when using "make -C ...") and also
-adds support for the O= option (see "make help" in the root of your
-kernel source tree for more details).
-
-Link: https://bugs.gentoo.org/show_bug.cgi?id=533918
-Fixes: ee0778a30153 ("tools/power: turbostat: make Makefile a bit more capable")
-Signed-off-by: Thomas D. <whissi@whissi.de>
-Cc: Mark Asselstine <mark.asselstine@windriver.com>
-Signed-off-by: Len Brown <len.brown@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- tools/power/x86/turbostat/Makefile | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
-index d1b3a36..4039854 100644
---- a/tools/power/x86/turbostat/Makefile
-+++ b/tools/power/x86/turbostat/Makefile
-@@ -1,8 +1,12 @@
- CC		= $(CROSS_COMPILE)gcc
--BUILD_OUTPUT	:= $(PWD)
-+BUILD_OUTPUT	:= $(CURDIR)
- PREFIX		:= /usr
- DESTDIR		:=
+@@ -728,7 +728,8 @@ isert_connect_release(struct isert_conn *isert_conn)
+ 		isert_conn_free_fastreg_pool(isert_conn);
  
-+ifeq ("$(origin O)", "command line")
-+	BUILD_OUTPUT := $(O)
-+endif
-+
- turbostat : turbostat.c
- CFLAGS +=	-Wall
- CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/uapi/asm/msr-index.h"'
--- 
-2.3.6
-
-
-From 67e9563f2e494959696ff3128cf9d5fb1b3dbad7 Mon Sep 17 00:00:00 2001
-From: Brian Norris <computersforpeace@gmail.com>
-Date: Sat, 28 Feb 2015 02:23:25 -0800
-Subject: [PATCH 126/219] UBI: account for bitflips in both the VID header and
- data
-Cc: mpagano@gentoo.org
-
-commit 8eef7d70f7c6772c3490f410ee2bceab3b543fa1 upstream.
-
-We are completely discarding the earlier value of 'bitflips', which
-could reflect a bitflip found in ubi_io_read_vid_hdr(). Let's use the
-bitwise OR of header and data 'bitflip' statuses instead.
-
-Coverity CID #1226856
-
-Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-Signed-off-by: Richard Weinberger <richard@nod.at>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mtd/ubi/attach.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
-index 9d2e16f..b5e1548 100644
---- a/drivers/mtd/ubi/attach.c
-+++ b/drivers/mtd/ubi/attach.c
-@@ -410,7 +410,7 @@ int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb,
- 		second_is_newer = !second_is_newer;
- 	} else {
- 		dbg_bld("PEB %d CRC is OK", pnum);
--		bitflips = !!err;
-+		bitflips |= !!err;
- 	}
- 	mutex_unlock(&ubi->buf_mutex);
+ 	isert_free_rx_descriptors(isert_conn);
+-	rdma_destroy_id(isert_conn->conn_cm_id);
++	if (isert_conn->conn_cm_id)
++		rdma_destroy_id(isert_conn->conn_cm_id);
  
--- 
-2.3.6
-
-
-From 921b47c10b2b18b3562152aa0eacc1b2e56c6996 Mon Sep 17 00:00:00 2001
-From: Brian Norris <computersforpeace@gmail.com>
-Date: Sat, 28 Feb 2015 02:23:26 -0800
-Subject: [PATCH 127/219] UBI: fix out of bounds write
-Cc: mpagano@gentoo.org
-
-commit d74adbdb9abf0d2506a6c4afa534d894f28b763f upstream.
-
-If aeb->lnum >= vol->reserved_pebs, we should not be writing aeb into the
-PEB->LEB mapping.
-
-Caught by Coverity, CID #711212.
-
-Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-Signed-off-by: Richard Weinberger <richard@nod.at>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mtd/ubi/eba.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
-index 16e34b3..8c9a710 100644
---- a/drivers/mtd/ubi/eba.c
-+++ b/drivers/mtd/ubi/eba.c
-@@ -1419,7 +1419,8 @@ int ubi_eba_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
- 				 * during re-size.
- 				 */
- 				ubi_move_aeb_to_list(av, aeb, &ai->erase);
--			vol->eba_tbl[aeb->lnum] = aeb->pnum;
-+			else
-+				vol->eba_tbl[aeb->lnum] = aeb->pnum;
- 		}
- 	}
+ 	if (isert_conn->conn_qp) {
+ 		struct isert_comp *comp = isert_conn->conn_qp->recv_cq->cq_context;
+@@ -878,12 +879,15 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ 	return 0;
+ }
  
--- 
-2.3.6
-
-
-From 5a156e848f96a0f0024ef94a3e19979f8f7e9dbc Mon Sep 17 00:00:00 2001
-From: Brian Norris <computersforpeace@gmail.com>
-Date: Sat, 28 Feb 2015 02:23:27 -0800
-Subject: [PATCH 128/219] UBI: initialize LEB number variable
-Cc: mpagano@gentoo.org
-
-commit f16db8071ce18819fbd705ddcc91c6f392fb61f8 upstream.
-
-In some of the 'out_not_moved' error paths, lnum may be used
-uninitialized. Don't ignore the warning; let's fix it.
-
-This uninitialized variable doesn't have much visible effect in the end,
-since we just schedule the PEB for erasure, and its LEB number doesn't
-really matter (it just gets printed in debug messages). But let's get it
-straight anyway.
-
-Coverity CID #113449
-
-Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-Signed-off-by: Richard Weinberger <richard@nod.at>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mtd/ubi/wl.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
-index 8f7bde6..0bd92d8 100644
---- a/drivers/mtd/ubi/wl.c
-+++ b/drivers/mtd/ubi/wl.c
-@@ -1002,7 +1002,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
- 				int shutdown)
+-static void
++static int
+ isert_connect_error(struct rdma_cm_id *cma_id)
  {
- 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
--	int vol_id = -1, uninitialized_var(lnum);
-+	int vol_id = -1, lnum = -1;
- #ifdef CONFIG_MTD_UBI_FASTMAP
- 	int anchor = wrk->anchor;
- #endif
--- 
-2.3.6
-
-
-From 075831830ff0277572a93633cce3807394955358 Mon Sep 17 00:00:00 2001
-From: Brian Norris <computersforpeace@gmail.com>
-Date: Sat, 28 Feb 2015 02:23:28 -0800
-Subject: [PATCH 129/219] UBI: fix check for "too many bytes"
-Cc: mpagano@gentoo.org
-
-commit 299d0c5b27346a77a0777c993372bf8777d4f2e5 upstream.
-
-The comparison from the previous line seems to have been erroneously
-(partially) copied-and-pasted onto the next. The second line should be
-checking req.bytes, not req.lnum.
-
-Coverity CID #139400
-
-Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-[rw: Fixed comparison]
-Signed-off-by: Richard Weinberger <richard@nod.at>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/mtd/ubi/cdev.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
-index d647e50..d16fccf 100644
---- a/drivers/mtd/ubi/cdev.c
-+++ b/drivers/mtd/ubi/cdev.c
-@@ -455,7 +455,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
- 		/* Validate the request */
- 		err = -EINVAL;
- 		if (req.lnum < 0 || req.lnum >= vol->reserved_pebs ||
--		    req.bytes < 0 || req.lnum >= vol->usable_leb_size)
-+		    req.bytes < 0 || req.bytes > vol->usable_leb_size)
- 			break;
- 
- 		err = get_exclusive(desc);
--- 
-2.3.6
-
-
-From 1d05935b31efb2e398e1772b76a6513b9484574a Mon Sep 17 00:00:00 2001
-From: "K. Y. Srinivasan" <kys@microsoft.com>
-Date: Fri, 27 Mar 2015 00:27:18 -0700
-Subject: [PATCH 130/219] scsi: storvsc: Fix a bug in copy_from_bounce_buffer()
-Cc: mpagano@gentoo.org
-
-commit 8de580742fee8bc34d116f57a20b22b9a5f08403 upstream.
-
-We may exit this function without properly freeing up the mappings
-we may have acquired. Fix the bug.
-
-Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
-Reviewed-by: Long Li <longli@microsoft.com>
-Signed-off-by: James Bottomley <JBottomley@Odin.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/scsi/storvsc_drv.c | 15 ++++++++-------
- 1 file changed, 8 insertions(+), 7 deletions(-)
-
-diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
-index efc6e44..bf8c5c1 100644
---- a/drivers/scsi/storvsc_drv.c
-+++ b/drivers/scsi/storvsc_drv.c
-@@ -746,21 +746,22 @@ static unsigned int copy_to_bounce_buffer(struct scatterlist *orig_sgl,
- 			if (bounce_sgl[j].length == PAGE_SIZE) {
- 				/* full..move to next entry */
- 				sg_kunmap_atomic(bounce_addr);
-+				bounce_addr = 0;
- 				j++;
-+			}
- 
--				/* if we need to use another bounce buffer */
--				if (srclen || i != orig_sgl_count - 1)
--					bounce_addr = sg_kmap_atomic(bounce_sgl,j);
-+			/* if we need to use another bounce buffer */
-+			if (srclen && bounce_addr == 0)
-+				bounce_addr = sg_kmap_atomic(bounce_sgl, j);
- 
--			} else if (srclen == 0 && i == orig_sgl_count - 1) {
--				/* unmap the last bounce that is < PAGE_SIZE */
--				sg_kunmap_atomic(bounce_addr);
--			}
- 		}
- 
- 		sg_kunmap_atomic(src_addr - orig_sgl[i].offset);
- 	}
+ 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
  
-+	if (bounce_addr)
-+		sg_kunmap_atomic(bounce_addr);
++	isert_conn->conn_cm_id = NULL;
+ 	isert_put_conn(isert_conn);
 +
- 	local_irq_restore(flags);
++	return -1;
+ }
  
- 	return total_copied;
--- 
-2.3.6
-
-
-From 7f61df07930dae7b1a94f088365362a191d2f4ec Mon Sep 17 00:00:00 2001
-From: Nicholas Bellinger <nab@linux-iscsi.org>
-Date: Thu, 26 Feb 2015 22:19:15 -0800
-Subject: [PATCH 131/219] iscsi-target: Convert iscsi_thread_set usage to
- kthread.h
-Cc: mpagano@gentoo.org
-
-commit 88dcd2dab5c23b1c9cfc396246d8f476c872f0ca upstream.
-
-This patch converts iscsi-target code to use modern kthread.h API
-callers for creating RX/TX threads for each new iscsi_conn descriptor,
-and releasing associated RX/TX threads during connection shutdown.
-
-This is done using iscsit_start_kthreads() -> kthread_run() to start
-new kthreads from within iscsi_post_login_handler(), and invoking
-kthread_stop() from existing iscsit_close_connection() code.
-
-Also, convert iscsit_logout_post_handler_closesession() code to use
-cmpxchg when determining when iscsit_cause_connection_reinstatement()
-needs to sleep waiting for completion.
-
-Reported-by: Sagi Grimberg <sagig@mellanox.com>
-Tested-by: Sagi Grimberg <sagig@mellanox.com>
-Cc: Slava Shwartsman <valyushash@gmail.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/target/iscsi/iscsi_target.c       | 104 +++++++++++++-----------------
- drivers/target/iscsi/iscsi_target_erl0.c  |  13 ++--
- drivers/target/iscsi/iscsi_target_login.c |  59 +++++++++++++++--
- include/target/iscsi/iscsi_target_core.h  |   7 ++
- 4 files changed, 114 insertions(+), 69 deletions(-)
-
-diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
-index 77d6425..5e35612 100644
---- a/drivers/target/iscsi/iscsi_target.c
-+++ b/drivers/target/iscsi/iscsi_target.c
-@@ -537,7 +537,7 @@ static struct iscsit_transport iscsi_target_transport = {
+ static int
+@@ -912,7 +916,7 @@ isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	case RDMA_CM_EVENT_REJECTED:       /* FALLTHRU */
+ 	case RDMA_CM_EVENT_UNREACHABLE:    /* FALLTHRU */
+ 	case RDMA_CM_EVENT_CONNECT_ERROR:
+-		isert_connect_error(cma_id);
++		ret = isert_connect_error(cma_id);
+ 		break;
+ 	default:
+ 		isert_err("Unhandled RDMA CMA event: %d\n", event->event);
+@@ -1861,11 +1865,13 @@ isert_completion_rdma_read(struct iser_tx_desc *tx_desc,
+ 	cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+ 	spin_unlock_bh(&cmd->istate_lock);
  
- static int __init iscsi_target_init_module(void)
- {
--	int ret = 0;
-+	int ret = 0, size;
+-	if (ret)
++	if (ret) {
++		target_put_sess_cmd(se_cmd->se_sess, se_cmd);
+ 		transport_send_check_condition_and_sense(se_cmd,
+ 							 se_cmd->pi_err, 0);
+-	else
++	} else {
+ 		target_execute_cmd(se_cmd);
++	}
+ }
  
- 	pr_debug("iSCSI-Target "ISCSIT_VERSION"\n");
+ static void
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index 27bcdbc..ea6cb64 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1159,13 +1159,14 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
+ 					bool report_buttons)
+ {
+ 	struct alps_data *priv = psmouse->private;
+-	struct input_dev *dev;
++	struct input_dev *dev, *dev2 = NULL;
  
-@@ -546,6 +546,7 @@ static int __init iscsi_target_init_module(void)
- 		pr_err("Unable to allocate memory for iscsit_global\n");
- 		return -1;
+ 	/* Figure out which device to use to report the bare packet */
+ 	if (priv->proto_version == ALPS_PROTO_V2 &&
+ 	    (priv->flags & ALPS_DUALPOINT)) {
+ 		/* On V2 devices the DualPoint Stick reports bare packets */
+ 		dev = priv->dev2;
++		dev2 = psmouse->dev;
+ 	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
+ 		/* Register dev3 mouse if we received PS/2 packet first time */
+ 		if (!IS_ERR(priv->dev3))
+@@ -1177,7 +1178,7 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
  	}
-+	spin_lock_init(&iscsit_global->ts_bitmap_lock);
- 	mutex_init(&auth_id_lock);
- 	spin_lock_init(&sess_idr_lock);
- 	idr_init(&tiqn_idr);
-@@ -555,15 +556,11 @@ static int __init iscsi_target_init_module(void)
- 	if (ret < 0)
- 		goto out;
  
--	ret = iscsi_thread_set_init();
--	if (ret < 0)
-+	size = BITS_TO_LONGS(ISCSIT_BITMAP_BITS) * sizeof(long);
-+	iscsit_global->ts_bitmap = vzalloc(size);
-+	if (!iscsit_global->ts_bitmap) {
-+		pr_err("Unable to allocate iscsit_global->ts_bitmap\n");
- 		goto configfs_out;
--
--	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
--			TARGET_THREAD_SET_COUNT) {
--		pr_err("iscsi_allocate_thread_sets() returned"
--			" unexpected value!\n");
--		goto ts_out1;
- 	}
+ 	if (report_buttons)
+-		alps_report_buttons(dev, NULL,
++		alps_report_buttons(dev, dev2,
+ 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
  
- 	lio_qr_cache = kmem_cache_create("lio_qr_cache",
-@@ -572,7 +569,7 @@ static int __init iscsi_target_init_module(void)
- 	if (!lio_qr_cache) {
- 		pr_err("nable to kmem_cache_create() for"
- 				" lio_qr_cache\n");
--		goto ts_out2;
-+		goto bitmap_out;
- 	}
+ 	input_report_rel(dev, REL_X,
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 6e22682..991dc6b 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -893,6 +893,21 @@ static psmouse_ret_t elantech_process_byte(struct psmouse *psmouse)
+ }
  
- 	lio_dr_cache = kmem_cache_create("lio_dr_cache",
-@@ -617,10 +614,8 @@ dr_out:
- 	kmem_cache_destroy(lio_dr_cache);
- qr_out:
- 	kmem_cache_destroy(lio_qr_cache);
--ts_out2:
--	iscsi_deallocate_thread_sets();
--ts_out1:
--	iscsi_thread_set_free();
-+bitmap_out:
-+	vfree(iscsit_global->ts_bitmap);
- configfs_out:
- 	iscsi_target_deregister_configfs();
- out:
-@@ -630,8 +625,6 @@ out:
+ /*
++ * This writes the reg_07 value again to the hardware at the end of every
++ * set_rate call because the register loses its value. reg_07 allows setting
++ * absolute mode on v4 hardware
++ */
++static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
++		unsigned int rate)
++{
++	struct elantech_data *etd = psmouse->private;
++
++	etd->original_set_rate(psmouse, rate);
++	if (elantech_write_reg(psmouse, 0x07, etd->reg_07))
++		psmouse_err(psmouse, "restoring reg_07 failed\n");
++}
++
++/*
+  * Put the touchpad into absolute mode
+  */
+ static int elantech_set_absolute_mode(struct psmouse *psmouse)
+@@ -1094,6 +1109,8 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse,
+  * Asus K53SV              0x450f01        78, 15, 0c      2 hw buttons
+  * Asus G46VW              0x460f02        00, 18, 0c      2 hw buttons
+  * Asus G750JX             0x360f00        00, 16, 0c      2 hw buttons
++ * Asus TP500LN            0x381f17        10, 14, 0e      clickpad
++ * Asus X750JN             0x381f17        10, 14, 0e      clickpad
+  * Asus UX31               0x361f00        20, 15, 0e      clickpad
+  * Asus UX32VD             0x361f02        00, 15, 0e      clickpad
+  * Avatar AVIU-145A2       0x361f00        ?               clickpad
+@@ -1635,6 +1652,11 @@ int elantech_init(struct psmouse *psmouse)
+ 		goto init_fail;
+ 	}
  
- static void __exit iscsi_target_cleanup_module(void)
- {
--	iscsi_deallocate_thread_sets();
--	iscsi_thread_set_free();
- 	iscsit_release_discovery_tpg();
- 	iscsit_unregister_transport(&iscsi_target_transport);
- 	kmem_cache_destroy(lio_qr_cache);
-@@ -641,6 +634,7 @@ static void __exit iscsi_target_cleanup_module(void)
++	if (etd->fw_version == 0x381f17) {
++		etd->original_set_rate = psmouse->set_rate;
++		psmouse->set_rate = elantech_set_rate_restore_reg_07;
++	}
++
+ 	if (elantech_set_input_params(psmouse)) {
+ 		psmouse_err(psmouse, "failed to query touchpad range.\n");
+ 		goto init_fail;
+diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h
+index 6f3afec..f965d15 100644
+--- a/drivers/input/mouse/elantech.h
++++ b/drivers/input/mouse/elantech.h
+@@ -142,6 +142,7 @@ struct elantech_data {
+ 	struct finger_pos mt[ETP_MAX_FINGERS];
+ 	unsigned char parity[256];
+ 	int (*send_cmd)(struct psmouse *psmouse, unsigned char c, unsigned char *param);
++	void (*original_set_rate)(struct psmouse *psmouse, unsigned int rate);
+ };
  
- 	iscsi_target_deregister_configfs();
+ #ifdef CONFIG_MOUSE_PS2_ELANTECH
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 713a962..41473929 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -925,11 +925,10 @@ static int crypt_convert(struct crypt_config *cc,
  
-+	vfree(iscsit_global->ts_bitmap);
- 	kfree(iscsit_global);
- }
+ 		switch (r) {
+ 		/* async */
++		case -EINPROGRESS:
+ 		case -EBUSY:
+ 			wait_for_completion(&ctx->restart);
+ 			reinit_completion(&ctx->restart);
+-			/* fall through*/
+-		case -EINPROGRESS:
+ 			ctx->req = NULL;
+ 			ctx->cc_sector++;
+ 			continue;
+@@ -1346,10 +1345,8 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+ 	struct crypt_config *cc = io->cc;
  
-@@ -3715,17 +3709,16 @@ static int iscsit_send_reject(
+-	if (error == -EINPROGRESS) {
+-		complete(&ctx->restart);
++	if (error == -EINPROGRESS)
+ 		return;
+-	}
  
- void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
- {
--	struct iscsi_thread_set *ts = conn->thread_set;
- 	int ord, cpu;
- 	/*
--	 * thread_id is assigned from iscsit_global->ts_bitmap from
--	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
-+	 * bitmap_id is assigned from iscsit_global->ts_bitmap from
-+	 * within iscsit_start_kthreads()
- 	 *
--	 * Here we use thread_id to determine which CPU that this
--	 * iSCSI connection's iscsi_thread_set will be scheduled to
-+	 * Here we use bitmap_id to determine which CPU that this
-+	 * iSCSI connection's RX/TX threads will be scheduled to
- 	 * execute upon.
- 	 */
--	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
-+	ord = conn->bitmap_id % cpumask_weight(cpu_online_mask);
- 	for_each_online_cpu(cpu) {
- 		if (ord-- == 0) {
- 			cpumask_set_cpu(cpu, conn->conn_cpumask);
-@@ -3914,7 +3907,7 @@ check_rsp_state:
- 	switch (state) {
- 	case ISTATE_SEND_LOGOUTRSP:
- 		if (!iscsit_logout_post_handler(cmd, conn))
--			goto restart;
-+			return -ECONNRESET;
- 		/* fall through */
- 	case ISTATE_SEND_STATUS:
- 	case ISTATE_SEND_ASYNCMSG:
-@@ -3942,8 +3935,6 @@ check_rsp_state:
+ 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
+ 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
+@@ -1360,12 +1357,15 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
  
- err:
- 	return -1;
--restart:
--	return -EAGAIN;
- }
+ 	if (!atomic_dec_and_test(&ctx->cc_pending))
+-		return;
++		goto done;
  
- static int iscsit_handle_response_queue(struct iscsi_conn *conn)
-@@ -3970,21 +3961,13 @@ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
- int iscsi_target_tx_thread(void *arg)
- {
- 	int ret = 0;
--	struct iscsi_conn *conn;
--	struct iscsi_thread_set *ts = arg;
-+	struct iscsi_conn *conn = arg;
- 	/*
- 	 * Allow ourselves to be interrupted by SIGINT so that a
- 	 * connection recovery / failure event can be triggered externally.
- 	 */
- 	allow_signal(SIGINT);
+ 	if (bio_data_dir(io->base_bio) == READ)
+ 		kcryptd_crypt_read_done(io);
+ 	else
+ 		kcryptd_crypt_write_io_submit(io, 1);
++done:
++	if (!completion_done(&ctx->restart))
++		complete(&ctx->restart);
+ }
  
--restart:
--	conn = iscsi_tx_thread_pre_handler(ts);
--	if (!conn)
--		goto out;
--
--	ret = 0;
--
- 	while (!kthread_should_stop()) {
- 		/*
- 		 * Ensure that both TX and RX per connection kthreads
-@@ -3993,11 +3976,9 @@ restart:
- 		iscsit_thread_check_cpumask(conn, current, 1);
+ static void kcryptd_crypt(struct work_struct *work)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 717daad..e617878 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -249,6 +249,7 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	const int rw = bio_data_dir(bio);
+ 	struct mddev *mddev = q->queuedata;
+ 	unsigned int sectors;
++	int cpu;
  
- 		wait_event_interruptible(conn->queues_wq,
--					 !iscsit_conn_all_queues_empty(conn) ||
--					 ts->status == ISCSI_THREAD_SET_RESET);
-+					 !iscsit_conn_all_queues_empty(conn));
+ 	if (mddev == NULL || mddev->pers == NULL
+ 	    || !mddev->ready) {
+@@ -284,7 +285,10 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	sectors = bio_sectors(bio);
+ 	mddev->pers->make_request(mddev, bio);
  
--		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
--		     signal_pending(current))
-+		if (signal_pending(current))
- 			goto transport_err;
+-	generic_start_io_acct(rw, sectors, &mddev->gendisk->part0);
++	cpu = part_stat_lock();
++	part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
++	part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
++	part_stat_unlock();
  
- get_immediate:
-@@ -4008,15 +3989,14 @@ get_immediate:
- 		ret = iscsit_handle_response_queue(conn);
- 		if (ret == 1)
- 			goto get_immediate;
--		else if (ret == -EAGAIN)
--			goto restart;
-+		else if (ret == -ECONNRESET)
-+			goto out;
- 		else if (ret < 0)
- 			goto transport_err;
- 	}
+ 	if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
+ 		wake_up(&mddev->sb_wait);
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 3ed9f42..3b5d7f7 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -313,7 +313,7 @@ static struct strip_zone *find_zone(struct r0conf *conf,
  
- transport_err:
- 	iscsit_take_action_for_connection_exit(conn);
--	goto restart;
- out:
- 	return 0;
- }
-@@ -4111,8 +4091,7 @@ int iscsi_target_rx_thread(void *arg)
- 	int ret;
- 	u8 buffer[ISCSI_HDR_LEN], opcode;
- 	u32 checksum = 0, digest = 0;
--	struct iscsi_conn *conn = NULL;
--	struct iscsi_thread_set *ts = arg;
-+	struct iscsi_conn *conn = arg;
- 	struct kvec iov;
- 	/*
- 	 * Allow ourselves to be interrupted by SIGINT so that a
-@@ -4120,11 +4099,6 @@ int iscsi_target_rx_thread(void *arg)
- 	 */
- 	allow_signal(SIGINT);
+ /*
+  * remaps the bio to the target device. we separate two flows.
+- * power 2 flow and a general flow for the sake of perfromance
++ * power 2 flow and a general flow for the sake of performance
+ */
+ static struct md_rdev *map_sector(struct mddev *mddev, struct strip_zone *zone,
+ 				sector_t sector, sector_t *sector_offset)
+@@ -524,6 +524,7 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			split = bio;
+ 		}
  
--restart:
--	conn = iscsi_rx_thread_pre_handler(ts);
--	if (!conn)
--		goto out;
--
- 	if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
- 		struct completion comp;
- 		int rc;
-@@ -4134,7 +4108,7 @@ restart:
- 		if (rc < 0)
- 			goto transport_err;
++		sector = bio->bi_iter.bi_sector;
+ 		zone = find_zone(mddev->private, &sector);
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+ 		split->bi_bdev = tmp_dev->bdev;
+diff --git a/drivers/media/rc/img-ir/img-ir-core.c b/drivers/media/rc/img-ir/img-ir-core.c
+index 77c78de..7020659 100644
+--- a/drivers/media/rc/img-ir/img-ir-core.c
++++ b/drivers/media/rc/img-ir/img-ir-core.c
+@@ -146,7 +146,7 @@ static int img_ir_remove(struct platform_device *pdev)
+ {
+ 	struct img_ir_priv *priv = platform_get_drvdata(pdev);
  
--		goto out;
-+		goto transport_err;
- 	}
+-	free_irq(priv->irq, img_ir_isr);
++	free_irq(priv->irq, priv);
+ 	img_ir_remove_hw(priv);
+ 	img_ir_remove_raw(priv);
  
- 	while (!kthread_should_stop()) {
-@@ -4210,8 +4184,6 @@ transport_err:
- 	if (!signal_pending(current))
- 		atomic_set(&conn->transport_failed, 1);
- 	iscsit_take_action_for_connection_exit(conn);
--	goto restart;
--out:
- 	return 0;
- }
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 65a326c..749ad56 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -240,6 +240,11 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+ 	if (mutex_lock_interruptible(&dev->v4l_lock))
+ 		return -ERESTARTSYS;
  
-@@ -4273,7 +4245,24 @@ int iscsit_close_connection(
- 	if (conn->conn_transport->transport_type == ISCSI_TCP)
- 		complete(&conn->conn_logout_comp);
++	/*
++	 * Once URBs are cancelled, the URB complete handler
++	 * won't be running. This is required to safely release the
++	 * current buffer (dev->isoc_ctl.buf).
++	 */
+ 	stk1160_cancel_isoc(dev);
  
--	iscsi_release_thread_set(conn);
-+	if (!strcmp(current->comm, ISCSI_RX_THREAD_NAME)) {
-+		if (conn->tx_thread &&
-+		    cmpxchg(&conn->tx_thread_active, true, false)) {
-+			send_sig(SIGINT, conn->tx_thread, 1);
-+			kthread_stop(conn->tx_thread);
-+		}
-+	} else if (!strcmp(current->comm, ISCSI_TX_THREAD_NAME)) {
-+		if (conn->rx_thread &&
-+		    cmpxchg(&conn->rx_thread_active, true, false)) {
-+			send_sig(SIGINT, conn->rx_thread, 1);
-+			kthread_stop(conn->rx_thread);
-+		}
-+	}
+ 	/*
+@@ -620,8 +625,16 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		stk1160_info("buffer [%p/%d] aborted\n",
+ 				buf, buf->vb.v4l2_buf.index);
+ 	}
+-	/* It's important to clear current buffer */
+-	dev->isoc_ctl.buf = NULL;
 +
-+	spin_lock(&iscsit_global->ts_bitmap_lock);
-+	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
-+			      get_order(1));
-+	spin_unlock(&iscsit_global->ts_bitmap_lock);
++	/* It's important to release the current buffer */
++	if (dev->isoc_ctl.buf) {
++		buf = dev->isoc_ctl.buf;
++		dev->isoc_ctl.buf = NULL;
++
++		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
++		stk1160_info("buffer [%p/%d] aborted\n",
++				buf, buf->vb.v4l2_buf.index);
++	}
+ 	spin_unlock_irqrestore(&dev->buf_lock, flags);
+ }
  
- 	iscsit_stop_timers_for_cmds(conn);
- 	iscsit_stop_nopin_response_timer(conn);
-@@ -4551,15 +4540,13 @@ static void iscsit_logout_post_handler_closesession(
- 	struct iscsi_conn *conn)
- {
- 	struct iscsi_session *sess = conn->sess;
--
--	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
--	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
-+	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index fc145d2..922a750 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -758,7 +758,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
  
- 	atomic_set(&conn->conn_logout_remove, 0);
- 	complete(&conn->conn_logout_comp);
+ 		if (error || (card->current_mrq.tpc == MSPRO_CMD_STOP)) {
+ 			if (msb->data_dir == READ) {
+-				for (cnt = 0; cnt < msb->current_seg; cnt++)
++				for (cnt = 0; cnt < msb->current_seg; cnt++) {
+ 					t_len += msb->req_sg[cnt].length
+ 						 / msb->page_size;
  
- 	iscsit_dec_conn_usage_count(conn);
--	iscsit_stop_session(sess, 1, 1);
-+	iscsit_stop_session(sess, sleep, sleep);
- 	iscsit_dec_session_usage_count(sess);
- 	target_put_session(sess->se_sess);
- }
-@@ -4567,13 +4554,12 @@ static void iscsit_logout_post_handler_closesession(
- static void iscsit_logout_post_handler_samecid(
- 	struct iscsi_conn *conn)
- {
--	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
--	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
-+	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+@@ -766,6 +766,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 						t_len += msb->current_page - 1;
  
- 	atomic_set(&conn->conn_logout_remove, 0);
- 	complete(&conn->conn_logout_comp);
+ 					t_len *= msb->page_size;
++				}
+ 			}
+ 		} else
+ 			t_len = blk_rq_bytes(msb->block_req);
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 2a87f69..1aed3b7 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -128,7 +128,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 	int platform_id;
+ 	int r;
  
--	iscsit_cause_connection_reinstatement(conn, 1);
-+	iscsit_cause_connection_reinstatement(conn, sleep);
- 	iscsit_dec_conn_usage_count(conn);
+-	if (id < 0)
++	if (id == PLATFORM_DEVID_AUTO)
+ 		platform_id = id;
+ 	else
+ 		platform_id = id + cell->id;
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index e8a4218..459ed1b 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -930,7 +930,9 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
+ 		return PTR_ERR(host->clk_sample);
+ 	}
+ 
+-	host->reset = devm_reset_control_get(&pdev->dev, "ahb");
++	host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
++	if (PTR_ERR(host->reset) == -EPROBE_DEFER)
++		return PTR_ERR(host->reset);
+ 
+ 	ret = clk_prepare_enable(host->clk_ahb);
+ 	if (ret) {
+diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
+index a31c357..dba7e1c 100644
+--- a/drivers/mmc/host/tmio_mmc_pio.c
++++ b/drivers/mmc/host/tmio_mmc_pio.c
+@@ -1073,8 +1073,6 @@ EXPORT_SYMBOL(tmio_mmc_host_alloc);
+ void tmio_mmc_host_free(struct tmio_mmc_host *host)
+ {
+ 	mmc_free_host(host->mmc);
+-
+-	host->mmc = NULL;
  }
+ EXPORT_SYMBOL(tmio_mmc_host_free);
  
-diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
-index bdd8731..e008ed2 100644
---- a/drivers/target/iscsi/iscsi_target_erl0.c
-+++ b/drivers/target/iscsi/iscsi_target_erl0.c
-@@ -860,7 +860,10 @@ void iscsit_connection_reinstatement_rcfr(struct iscsi_conn *conn)
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index 9d2e16f..b5e1548 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -410,7 +410,7 @@ int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb,
+ 		second_is_newer = !second_is_newer;
+ 	} else {
+ 		dbg_bld("PEB %d CRC is OK", pnum);
+-		bitflips = !!err;
++		bitflips |= !!err;
  	}
- 	spin_unlock_bh(&conn->state_lock);
+ 	mutex_unlock(&ubi->buf_mutex);
  
--	iscsi_thread_set_force_reinstatement(conn);
-+	if (conn->tx_thread && conn->tx_thread_active)
-+		send_sig(SIGINT, conn->tx_thread, 1);
-+	if (conn->rx_thread && conn->rx_thread_active)
-+		send_sig(SIGINT, conn->rx_thread, 1);
+diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
+index d647e50..d16fccf 100644
+--- a/drivers/mtd/ubi/cdev.c
++++ b/drivers/mtd/ubi/cdev.c
+@@ -455,7 +455,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
+ 		/* Validate the request */
+ 		err = -EINVAL;
+ 		if (req.lnum < 0 || req.lnum >= vol->reserved_pebs ||
+-		    req.bytes < 0 || req.lnum >= vol->usable_leb_size)
++		    req.bytes < 0 || req.bytes > vol->usable_leb_size)
+ 			break;
  
- sleep:
- 	wait_for_completion(&conn->conn_wait_rcfr_comp);
-@@ -885,10 +888,10 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
- 		return;
+ 		err = get_exclusive(desc);
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 16e34b3..8c9a710 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -1419,7 +1419,8 @@ int ubi_eba_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 				 * during re-size.
+ 				 */
+ 				ubi_move_aeb_to_list(av, aeb, &ai->erase);
+-			vol->eba_tbl[aeb->lnum] = aeb->pnum;
++			else
++				vol->eba_tbl[aeb->lnum] = aeb->pnum;
+ 		}
  	}
  
--	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
--		spin_unlock_bh(&conn->state_lock);
--		return;
--	}
-+	if (conn->tx_thread && conn->tx_thread_active)
-+		send_sig(SIGINT, conn->tx_thread, 1);
-+	if (conn->rx_thread && conn->rx_thread_active)
-+		send_sig(SIGINT, conn->rx_thread, 1);
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 8f7bde6..0bd92d8 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1002,7 +1002,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 				int shutdown)
+ {
+ 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
+-	int vol_id = -1, uninitialized_var(lnum);
++	int vol_id = -1, lnum = -1;
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ 	int anchor = wrk->anchor;
+ #endif
+diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
+index 81d4153..77bf133 100644
+--- a/drivers/net/ethernet/cadence/macb.c
++++ b/drivers/net/ethernet/cadence/macb.c
+@@ -2165,7 +2165,7 @@ static void macb_configure_caps(struct macb *bp)
+ 		}
+ 	}
  
- 	atomic_set(&conn->connection_reinstatement, 1);
- 	if (!sleep) {
-diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
-index 153fb66..345f073 100644
---- a/drivers/target/iscsi/iscsi_target_login.c
-+++ b/drivers/target/iscsi/iscsi_target_login.c
-@@ -699,6 +699,51 @@ static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
- 		iscsit_start_nopin_timer(conn);
- }
+-	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) == 0x2)
++	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2)
+ 		bp->caps |= MACB_CAPS_MACB_IS_GEM;
  
-+int iscsit_start_kthreads(struct iscsi_conn *conn)
+ 	if (macb_is_gem(bp)) {
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 7f997d3..a71c446 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -144,6 +144,11 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
+ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
+ 				     struct e1000_rx_ring *rx_ring,
+ 				     int *work_done, int work_to_do);
++static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
++					 struct e1000_rx_ring *rx_ring,
++					 int cleaned_count)
 +{
-+	int ret = 0;
-+
-+	spin_lock(&iscsit_global->ts_bitmap_lock);
-+	conn->bitmap_id = bitmap_find_free_region(iscsit_global->ts_bitmap,
-+					ISCSIT_BITMAP_BITS, get_order(1));
-+	spin_unlock(&iscsit_global->ts_bitmap_lock);
-+
-+	if (conn->bitmap_id < 0) {
-+		pr_err("bitmap_find_free_region() failed for"
-+		       " iscsit_start_kthreads()\n");
-+		return -ENOMEM;
-+	}
-+
-+	conn->tx_thread = kthread_run(iscsi_target_tx_thread, conn,
-+				      "%s", ISCSI_TX_THREAD_NAME);
-+	if (IS_ERR(conn->tx_thread)) {
-+		pr_err("Unable to start iscsi_target_tx_thread\n");
-+		ret = PTR_ERR(conn->tx_thread);
-+		goto out_bitmap;
-+	}
-+	conn->tx_thread_active = true;
-+
-+	conn->rx_thread = kthread_run(iscsi_target_rx_thread, conn,
-+				      "%s", ISCSI_RX_THREAD_NAME);
-+	if (IS_ERR(conn->rx_thread)) {
-+		pr_err("Unable to start iscsi_target_rx_thread\n");
-+		ret = PTR_ERR(conn->rx_thread);
-+		goto out_tx;
-+	}
-+	conn->rx_thread_active = true;
-+
-+	return 0;
-+out_tx:
-+	kthread_stop(conn->tx_thread);
-+	conn->tx_thread_active = false;
-+out_bitmap:
-+	spin_lock(&iscsit_global->ts_bitmap_lock);
-+	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
-+			      get_order(1));
-+	spin_unlock(&iscsit_global->ts_bitmap_lock);
-+	return ret;
 +}
-+
- int iscsi_post_login_handler(
- 	struct iscsi_np *np,
- 	struct iscsi_conn *conn,
-@@ -709,7 +754,7 @@ int iscsi_post_login_handler(
- 	struct se_session *se_sess = sess->se_sess;
- 	struct iscsi_portal_group *tpg = sess->tpg;
- 	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
--	struct iscsi_thread_set *ts;
-+	int rc;
- 
- 	iscsit_inc_conn_usage_count(conn);
+ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
+ 				   struct e1000_rx_ring *rx_ring,
+ 				   int cleaned_count);
+@@ -3552,8 +3557,11 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ 		msleep(1);
+ 	/* e1000_down has a dependency on max_frame_size */
+ 	hw->max_frame_size = max_frame;
+-	if (netif_running(netdev))
++	if (netif_running(netdev)) {
++		/* prevent buffers from being reallocated */
++		adapter->alloc_rx_buf = e1000_alloc_dummy_rx_buffers;
+ 		e1000_down(adapter);
++	}
  
-@@ -724,7 +769,6 @@ int iscsi_post_login_handler(
- 	/*
- 	 * SCSI Initiator -> SCSI Target Port Mapping
- 	 */
--	ts = iscsi_get_thread_set();
- 	if (!zero_tsih) {
- 		iscsi_set_session_parameters(sess->sess_ops,
- 				conn->param_list, 0);
-@@ -751,9 +795,11 @@ int iscsi_post_login_handler(
- 			sess->sess_ops->InitiatorName);
- 		spin_unlock_bh(&sess->conn_lock);
+ 	/* NOTE: netdev_alloc_skb reserves 16 bytes, and typically NET_IP_ALIGN
+ 	 * means we reserve 2 more, this pushes us to allocate from the next
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index af829c5..7ace07d 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1508,7 +1508,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 		np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 		if (!np) {
+ 			dev_err(&pdev->dev, "missing phy-handle\n");
+-			return -EINVAL;
++			err = -EINVAL;
++			goto err_netdev;
+ 		}
+ 		of_property_read_u32(np, "reg", &pep->phy_addr);
+ 		pep->phy_intf = of_get_phy_mode(pdev->dev.of_node);
+@@ -1526,7 +1527,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 	pep->smi_bus = mdiobus_alloc();
+ 	if (pep->smi_bus == NULL) {
+ 		err = -ENOMEM;
+-		goto err_base;
++		goto err_netdev;
+ 	}
+ 	pep->smi_bus->priv = pep;
+ 	pep->smi_bus->name = "pxa168_eth smi";
+@@ -1551,13 +1552,10 @@ err_mdiobus:
+ 	mdiobus_unregister(pep->smi_bus);
+ err_free_mdio:
+ 	mdiobus_free(pep->smi_bus);
+-err_base:
+-	iounmap(pep->base);
+ err_netdev:
+ 	free_netdev(dev);
+ err_clk:
+-	clk_disable(clk);
+-	clk_put(clk);
++	clk_disable_unprepare(clk);
+ 	return err;
+ }
  
--		iscsi_post_login_start_timers(conn);
-+		rc = iscsit_start_kthreads(conn);
-+		if (rc)
-+			return rc;
+@@ -1574,13 +1572,9 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	if (pep->phy)
+ 		phy_disconnect(pep->phy);
+ 	if (pep->clk) {
+-		clk_disable(pep->clk);
+-		clk_put(pep->clk);
+-		pep->clk = NULL;
++		clk_disable_unprepare(pep->clk);
+ 	}
  
--		iscsi_activate_thread_set(conn, ts);
-+		iscsi_post_login_start_timers(conn);
- 		/*
- 		 * Determine CPU mask to ensure connection's RX and TX kthreads
- 		 * are scheduled on the same CPU.
-@@ -810,8 +856,11 @@ int iscsi_post_login_handler(
- 		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
- 	spin_unlock_bh(&se_tpg->session_lock);
+-	iounmap(pep->base);
+-	pep->base = NULL;
+ 	mdiobus_unregister(pep->smi_bus);
+ 	mdiobus_free(pep->smi_bus);
+ 	unregister_netdev(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index a7b58ba..3dccf01 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -981,20 +981,21 @@ static int mlx4_en_check_rxfh_func(struct net_device *dev, u8 hfunc)
+ 	struct mlx4_en_priv *priv = netdev_priv(dev);
  
-+	rc = iscsit_start_kthreads(conn);
-+	if (rc)
-+		return rc;
-+
- 	iscsi_post_login_start_timers(conn);
--	iscsi_activate_thread_set(conn, ts);
- 	/*
- 	 * Determine CPU mask to ensure connection's RX and TX kthreads
- 	 * are scheduled on the same CPU.
-diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
-index d3583d3..dd0f3ab 100644
---- a/include/target/iscsi/iscsi_target_core.h
-+++ b/include/target/iscsi/iscsi_target_core.h
-@@ -602,6 +602,11 @@ struct iscsi_conn {
- 	struct iscsi_session	*sess;
- 	/* Pointer to thread_set in use for this conn's threads */
- 	struct iscsi_thread_set	*thread_set;
-+	int			bitmap_id;
-+	int			rx_thread_active;
-+	struct task_struct	*rx_thread;
-+	int			tx_thread_active;
-+	struct task_struct	*tx_thread;
- 	/* list_head for session connection list */
- 	struct list_head	conn_list;
- } ____cacheline_aligned;
-@@ -871,10 +876,12 @@ struct iscsit_global {
- 	/* Unique identifier used for the authentication daemon */
- 	u32			auth_id;
- 	u32			inactive_ts;
-+#define ISCSIT_BITMAP_BITS	262144
- 	/* Thread Set bitmap count */
- 	int			ts_bitmap_count;
- 	/* Thread Set bitmap pointer */
- 	unsigned long		*ts_bitmap;
-+	spinlock_t		ts_bitmap_lock;
- 	/* Used for iSCSI discovery session authentication */
- 	struct iscsi_node_acl	discovery_acl;
- 	struct iscsi_portal_group	*discovery_tpg;
--- 
-2.3.6
-
-
-From ca7767a3f859d6e5487ddcf7a23515e19188b922 Mon Sep 17 00:00:00 2001
-From: Nicholas Bellinger <nab@linux-iscsi.org>
-Date: Tue, 7 Apr 2015 21:53:27 +0000
-Subject: [PATCH 132/219] target: Fix COMPARE_AND_WRITE with SG_TO_MEM_NOALLOC
- handling
-Cc: mpagano@gentoo.org
-
-commit c8e639852ad720499912acedfd6b072325fd2807 upstream.
-
-This patch fixes a bug in COMPARE_AND_WRITE handling for
-fabrics using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
-
-It adds the missing allocation for cmd->t_bidi_data_sg within
-transport_generic_new_cmd() that is used by COMPARE_AND_WRITE
-for the initial READ payload, even if the fabric is already
-providing a pre-allocated buffer for cmd->t_data_sg.
-
-Also, fix zero-length COMPARE_AND_WRITE handling within the
-compare_and_write_callback() and target_complete_ok_work()
-to queue the response, skipping the initial READ.
-
-This fixes COMPARE_AND_WRITE emulation with loopback, vhost,
-and xen-backend fabric drivers using SG_TO_MEM_NOALLOC.
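
In outline, the transport_generic_new_cmd() hunk further down reduces
to the following sketch (names taken from the patch; not a drop-in
replacement):

	/* COMPARE_AND_WRITE with SG_TO_MEM_NOALLOC: the fabric supplies
	 * cmd->t_data_sg, but the initial READ still needs a
	 * kernel-owned bidi buffer. */
	u32 caw_length = cmd->t_task_nolb *
			 cmd->se_dev->dev_attrib.block_size;

	ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
			       &cmd->t_bidi_data_nents,
			       caw_length, zero_flag);
	if (ret < 0)
		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;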
-
-Reported-by: Christoph Hellwig <hch@lst.de>
-Cc: Christoph Hellwig <hch@lst.de>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/target/target_core_sbc.c       | 15 +++++++++-----
- drivers/target/target_core_transport.c | 37 ++++++++++++++++++++++++++++++----
- include/target/target_core_base.h      |  2 +-
- 3 files changed, 44 insertions(+), 10 deletions(-)
-
-diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
-index 3e72974..755bd9b3 100644
---- a/drivers/target/target_core_sbc.c
-+++ b/drivers/target/target_core_sbc.c
-@@ -312,7 +312,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
- 	return 0;
- }
+ 	/* check if requested function is supported by the device */
+-	if ((hfunc == ETH_RSS_HASH_TOP &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) ||
+-	    (hfunc == ETH_RSS_HASH_XOR &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)))
+-		return -EINVAL;
++	if (hfunc == ETH_RSS_HASH_TOP) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP))
++			return -EINVAL;
++		if (!(dev->features & NETIF_F_RXHASH))
++			en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
++		return 0;
++	} else if (hfunc == ETH_RSS_HASH_XOR) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))
++			return -EINVAL;
++		if (dev->features & NETIF_F_RXHASH)
++			en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
++		return 0;
++	}
  
--static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd)
-+static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success)
- {
- 	unsigned char *buf, *addr;
- 	struct scatterlist *sg;
-@@ -376,7 +376,7 @@ sbc_execute_rw(struct se_cmd *cmd)
- 			       cmd->data_direction);
+-	priv->rss_hash_fn = hfunc;
+-	if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
+-	if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
+-	return 0;
++	return -EINVAL;
  }
  
--static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
-+static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success)
- {
- 	struct se_device *dev = cmd->se_dev;
+ static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key,
+@@ -1068,6 +1069,8 @@ static int mlx4_en_set_rxfh(struct net_device *dev, const u32 *ring_index,
+ 		priv->prof->rss_rings = rss_rings;
+ 	if (key)
+ 		memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE);
++	if (hfunc !=  ETH_RSS_HASH_NO_CHANGE)
++		priv->rss_hash_fn = hfunc;
  
-@@ -399,7 +399,7 @@ static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
- 	return TCM_NO_SENSE;
- }
- 
--static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
-+static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success)
+ 	if (port_up) {
+ 		err = mlx4_en_start_port(dev);
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index af034db..9d15566 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1716,6 +1716,7 @@ ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
  {
- 	struct se_device *dev = cmd->se_dev;
- 	struct scatterlist *write_sg = NULL, *sg;
-@@ -414,11 +414,16 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
+ 	/* note: a 0-length skb is used as an error indication */
+ 	if (skb->len > 0) {
++		skb_checksum_complete_unset(skb);
+ #ifdef CONFIG_PPP_MULTILINK
+ 		/* XXX do channel-level decompression here */
+ 		if (PPP_PROTO(skb) == PPP_MP)
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+index 90a714c..23806c2 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+@@ -321,6 +321,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
+ 	{RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
+ 	{RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
++	{RTL_USB_DEVICE(0x0b05, 0x17ba, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
+ 	{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+@@ -377,6 +378,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
+ 	{RTL_USB_DEVICE(0x2001, 0x3309, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
+ 	{RTL_USB_DEVICE(0x2001, 0x330a, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
++	{RTL_USB_DEVICE(0x2001, 0x330d, rtl92cu_hal_cfg)}, /*D-Link DWA-131 */
+ 	{RTL_USB_DEVICE(0x2019, 0xab2b, rtl92cu_hal_cfg)}, /*Planex -Abocom*/
+ 	{RTL_USB_DEVICE(0x20f4, 0x624d, rtl92cu_hal_cfg)}, /*TRENDNet*/
+ 	{RTL_USB_DEVICE(0x2357, 0x0100, rtl92cu_hal_cfg)}, /*TP-Link WN8200ND*/
+diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.c b/drivers/net/wireless/ti/wl18xx/debugfs.c
+index c93fae9..5fbd223 100644
+--- a/drivers/net/wireless/ti/wl18xx/debugfs.c
++++ b/drivers/net/wireless/ti/wl18xx/debugfs.c
+@@ -139,7 +139,7 @@ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, protection_filter, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, accum_arp_pend_requests, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, max_arp_queue_dep, "%u");
  
- 	/*
- 	 * Handle early failure in transport_generic_request_failure(),
--	 * which will not have taken ->caw_mutex yet..
-+	 * which will not have taken ->caw_sem yet..
- 	 */
--	if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
-+	if (!success && (!cmd->t_data_sg || !cmd->t_bidi_data_sg))
- 		return TCM_NO_SENSE;
- 	/*
-+	 * Handle special case for zero-length COMPARE_AND_WRITE
-+	 */
-+	if (!cmd->data_length)
-+		goto out;
-+	/*
- 	 * Immediately exit + release dev->caw_sem if command has already
- 	 * been failed with a non-zero SCSI status.
- 	 */
-diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
-index ac3cbab..f786de0 100644
---- a/drivers/target/target_core_transport.c
-+++ b/drivers/target/target_core_transport.c
-@@ -1615,11 +1615,11 @@ void transport_generic_request_failure(struct se_cmd *cmd,
- 	transport_complete_task_attr(cmd);
- 	/*
- 	 * Handle special case for COMPARE_AND_WRITE failure, where the
--	 * callback is expected to drop the per device ->caw_mutex.
-+	 * callback is expected to drop the per device ->caw_sem.
- 	 */
- 	if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
- 	     cmd->transport_complete_callback)
--		cmd->transport_complete_callback(cmd);
-+		cmd->transport_complete_callback(cmd, false);
+-WL18XX_DEBUGFS_FWSTATS_FILE(rx_rate, rx_frames_per_rates, "%u");
++WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(rx_rate, rx_frames_per_rates, 50);
  
- 	switch (sense_reason) {
- 	case TCM_NON_EXISTENT_LUN:
-@@ -1975,8 +1975,12 @@ static void target_complete_ok_work(struct work_struct *work)
- 	if (cmd->transport_complete_callback) {
- 		sense_reason_t rc;
+ WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, tx_agg_vs_rate,
+ 				  AGGR_STATS_TX_AGG*AGGR_STATS_TX_RATE);
+diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
+index 0f2cfb0..bf14676 100644
+--- a/drivers/net/wireless/ti/wlcore/debugfs.h
++++ b/drivers/net/wireless/ti/wlcore/debugfs.h
+@@ -26,8 +26,8 @@
  
--		rc = cmd->transport_complete_callback(cmd);
-+		rc = cmd->transport_complete_callback(cmd, true);
- 		if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) {
-+			if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
-+			    !cmd->data_length)
-+				goto queue_rsp;
-+
- 			return;
- 		} else if (rc) {
- 			ret = transport_send_check_condition_and_sense(cmd,
-@@ -1990,6 +1994,7 @@ static void target_complete_ok_work(struct work_struct *work)
- 		}
- 	}
+ #include "wlcore.h"
  
-+queue_rsp:
- 	switch (cmd->data_direction) {
- 	case DMA_FROM_DEVICE:
- 		spin_lock(&cmd->se_lun->lun_sep_lock);
-@@ -2094,6 +2099,16 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
- static inline void transport_free_pages(struct se_cmd *cmd)
- {
- 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
-+		/*
-+		 * Release special case READ buffer payload required for
-+		 * SG_TO_MEM_NOALLOC to function with COMPARE_AND_WRITE
-+		 */
-+		if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) {
-+			transport_free_sgl(cmd->t_bidi_data_sg,
-+					   cmd->t_bidi_data_nents);
-+			cmd->t_bidi_data_sg = NULL;
-+			cmd->t_bidi_data_nents = 0;
-+		}
- 		transport_reset_sgl_orig(cmd);
- 		return;
- 	}
-@@ -2246,6 +2261,7 @@ sense_reason_t
- transport_generic_new_cmd(struct se_cmd *cmd)
- {
- 	int ret = 0;
-+	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+-int wl1271_format_buffer(char __user *userbuf, size_t count,
+-			 loff_t *ppos, char *fmt, ...);
++__printf(4, 5) int wl1271_format_buffer(char __user *userbuf, size_t count,
++					loff_t *ppos, char *fmt, ...);
  
- 	/*
- 	 * Determine is the TCM fabric module has already allocated physical
-@@ -2254,7 +2270,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
- 	 */
- 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) &&
- 	    cmd->data_length) {
--		bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+ int wl1271_debugfs_init(struct wl1271 *wl);
+ void wl1271_debugfs_exit(struct wl1271 *wl);
+diff --git a/drivers/nfc/st21nfcb/i2c.c b/drivers/nfc/st21nfcb/i2c.c
+index eb88693..7b53a5c 100644
+--- a/drivers/nfc/st21nfcb/i2c.c
++++ b/drivers/nfc/st21nfcb/i2c.c
+@@ -109,7 +109,7 @@ static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb)
+ 		return phy->ndlc->hard_fault;
  
- 		if ((cmd->se_cmd_flags & SCF_BIDI) ||
- 		    (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)) {
-@@ -2285,6 +2300,20 @@ transport_generic_new_cmd(struct se_cmd *cmd)
- 				       cmd->data_length, zero_flag);
- 		if (ret < 0)
- 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-+	} else if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
-+		    cmd->data_length) {
-+		/*
-+		 * Special case for COMPARE_AND_WRITE with fabrics
-+		 * using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
-+		 */
-+		u32 caw_length = cmd->t_task_nolb *
-+				 cmd->se_dev->dev_attrib.block_size;
-+
-+		ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
-+				       &cmd->t_bidi_data_nents,
-+				       caw_length, zero_flag);
-+		if (ret < 0)
-+			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	r = i2c_master_send(client, skb->data, skb->len);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_send(client, skb->data, skb->len);
  	}
- 	/*
- 	 * If this command is not a write we can execute it right here,
-diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
-index 672150b..985ca4c 100644
---- a/include/target/target_core_base.h
-+++ b/include/target/target_core_base.h
-@@ -524,7 +524,7 @@ struct se_cmd {
- 	sense_reason_t		(*execute_cmd)(struct se_cmd *);
- 	sense_reason_t		(*execute_rw)(struct se_cmd *, struct scatterlist *,
- 					      u32, enum dma_data_direction);
--	sense_reason_t (*transport_complete_callback)(struct se_cmd *);
-+	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool);
+@@ -148,7 +148,7 @@ static int st21nfcb_nci_i2c_read(struct st21nfcb_i2c_phy *phy,
+ 	struct i2c_client *client = phy->i2c_dev;
  
- 	unsigned char		*t_task_cdb;
- 	unsigned char		__t_task_cdb[TCM_MAX_COMMAND_SIZE];
--- 
-2.3.6
-
-
-From 54afccf4a4f42da1ef3eca9b56ed8dd25a8d7f1c Mon Sep 17 00:00:00 2001
-From: Akinobu Mita <akinobu.mita@gmail.com>
-Date: Mon, 13 Apr 2015 23:21:56 +0900
-Subject: [PATCH 133/219] target/file: Fix BUG() when CONFIG_DEBUG_SG=y and DIF
- protection enabled
-Cc: mpagano@gentoo.org
-
-commit 38da0f49e8aa1649af397d53f88e163d0e60c058 upstream.
-
-When CONFIG_DEBUG_SG=y and DIF protection support is enabled, kernel
-BUG()s are triggered due to the following two issues:
-
-1) prot_sg is not initialized by sg_init_table().
-
-When CONFIG_DEBUG_SG=y, scatterlist helpers check that each sg entry
-has a correct magic value.
-
-2) vmalloc'ed buffer is passed to sg_set_buf().
-
-sg_set_buf() uses virt_to_page() to convert a virtual address to a
-struct page, but it doesn't work with vmalloc addresses.
-vmalloc_to_page() should be used instead.  As prot_buf isn't usually
-very large, fix it by allocating prot_buf with kmalloc instead of
-vmalloc.
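
A minimal sketch of the corrected pattern, with names as in the hunks
below:

	/* kzalloc() memory is physically contiguous, so the
	 * virt_to_page() inside sg_set_buf() is valid, and
	 * sg_init_table() sets the magic that CONFIG_DEBUG_SG checks. */
	fd_prot->prot_buf = kzalloc(prot_size, GFP_KERNEL); /* was vzalloc() */
	sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
	for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
		len = min_t(u32, PAGE_SIZE, size);
		sg_set_buf(sg, buf, len);
		size -= len;
		buf += len;
	}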
-
-Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
-Cc: Sagi Grimberg <sagig@mellanox.com>
-Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
-Cc: Christoph Hellwig <hch@lst.de>
-Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/target/target_core_file.c | 15 ++++++++-------
- 1 file changed, 8 insertions(+), 7 deletions(-)
-
-diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
-index 44620fb..8ca1883 100644
---- a/drivers/target/target_core_file.c
-+++ b/drivers/target/target_core_file.c
-@@ -274,7 +274,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
- 		     se_dev->prot_length;
+ 	r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+ 	}
+diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
+index 15c0fab..bceb30b 100644
+--- a/drivers/platform/x86/compal-laptop.c
++++ b/drivers/platform/x86/compal-laptop.c
+@@ -1026,9 +1026,9 @@ static int compal_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
  
- 	if (!is_write) {
--		fd_prot->prot_buf = vzalloc(prot_size);
-+		fd_prot->prot_buf = kzalloc(prot_size, GFP_KERNEL);
- 		if (!fd_prot->prot_buf) {
- 			pr_err("Unable to allocate fd_prot->prot_buf\n");
- 			return -ENOMEM;
-@@ -286,9 +286,10 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
- 					   fd_prot->prot_sg_nents, GFP_KERNEL);
- 		if (!fd_prot->prot_sg) {
- 			pr_err("Unable to allocate fd_prot->prot_sg\n");
--			vfree(fd_prot->prot_buf);
-+			kfree(fd_prot->prot_buf);
- 			return -ENOMEM;
- 		}
-+		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
- 		size = prot_size;
+-	hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
+-						      "compal", data,
+-						      compal_hwmon_groups);
++	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
++							   "compal", data,
++							   compal_hwmon_groups);
+ 	if (IS_ERR(hwmon_dev)) {
+ 		err = PTR_ERR(hwmon_dev);
+ 		goto remove;
+@@ -1036,7 +1036,9 @@ static int compal_probe(struct platform_device *pdev)
  
- 		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
-@@ -318,7 +319,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 	/* Power supply */
+ 	initialize_power_supply_data(data);
+-	power_supply_register(&compal_device->dev, &data->psy);
++	err = power_supply_register(&compal_device->dev, &data->psy);
++	if (err < 0)
++		goto remove;
  
- 	if (is_write || ret < 0) {
- 		kfree(fd_prot->prot_sg);
--		vfree(fd_prot->prot_buf);
-+		kfree(fd_prot->prot_buf);
- 	}
+ 	platform_set_drvdata(pdev, data);
  
- 	return ret;
-@@ -658,11 +659,11 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
- 						 0, fd_prot.prot_sg, 0);
- 			if (rc) {
- 				kfree(fd_prot.prot_sg);
--				vfree(fd_prot.prot_buf);
-+				kfree(fd_prot.prot_buf);
- 				return rc;
- 			}
- 			kfree(fd_prot.prot_sg);
--			vfree(fd_prot.prot_buf);
-+			kfree(fd_prot.prot_buf);
- 		}
- 	} else {
- 		memset(&fd_prot, 0, sizeof(struct fd_prot));
-@@ -678,7 +679,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
- 						  0, fd_prot.prot_sg, 0);
- 			if (rc) {
- 				kfree(fd_prot.prot_sg);
--				vfree(fd_prot.prot_buf);
-+				kfree(fd_prot.prot_buf);
- 				return rc;
- 			}
- 		}
-@@ -714,7 +715,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
- 
- 	if (ret < 0) {
- 		kfree(fd_prot.prot_sg);
--		vfree(fd_prot.prot_buf);
-+		kfree(fd_prot.prot_buf);
- 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- 	}
+diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
+index 9d69460..96b15e0 100644
+--- a/drivers/power/ipaq_micro_battery.c
++++ b/drivers/power/ipaq_micro_battery.c
+@@ -226,6 +226,7 @@ static struct power_supply micro_ac_power = {
+ static int micro_batt_probe(struct platform_device *pdev)
+ {
+ 	struct micro_battery *mb;
++	int ret;
  
--- 
-2.3.6
-
-
-From 1d6b56f309d72a9ce2be3129f41c4a1138693091 Mon Sep 17 00:00:00 2001
-From: Akinobu Mita <akinobu.mita@gmail.com>
-Date: Mon, 13 Apr 2015 23:21:58 +0900
-Subject: [PATCH 134/219] target/file: Fix UNMAP with DIF protection support
-Cc: mpagano@gentoo.org
-
-commit 64d240b721b21e266ffde645ec965c3b6d1c551f upstream.
-
-When an UNMAP command is issued with DIF protection support enabled,
-the protection info for the unmapped region remains unchanged, so a
-READ command for the region causes a data integrity failure.
-
-This fixes it by invalidating the protection info for the unmapped
-region, filling it with a 0xff pattern.  This change also adds the
-helper function fd_do_prot_fill() in order to reduce code duplication
-with the existing fd_format_prot().
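
The new helper boils down to this sketch (PAGE_SIZE granularity, names
as in the patch):

	buf = (void *)__get_free_page(GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	memset(buf, 0xff, PAGE_SIZE);	/* all-ones == invalid PI pattern */
	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
	free_page((unsigned long)buf);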
-
-Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
-Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
-Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
-Cc: Christoph Hellwig <hch@lst.de>
-Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/target/target_core_file.c | 86 +++++++++++++++++++++++++++------------
- 1 file changed, 61 insertions(+), 25 deletions(-)
-
-diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
-index 8ca1883..7e12909 100644
---- a/drivers/target/target_core_file.c
-+++ b/drivers/target/target_core_file.c
-@@ -550,6 +550,56 @@ fd_execute_write_same(struct se_cmd *cmd)
- 	return 0;
- }
+ 	mb = devm_kzalloc(&pdev->dev, sizeof(*mb), GFP_KERNEL);
+ 	if (!mb)
+@@ -233,14 +234,30 @@ static int micro_batt_probe(struct platform_device *pdev)
  
-+static int
-+fd_do_prot_fill(struct se_device *se_dev, sector_t lba, sector_t nolb,
-+		void *buf, size_t bufsize)
-+{
-+	struct fd_dev *fd_dev = FD_DEV(se_dev);
-+	struct file *prot_fd = fd_dev->fd_prot_file;
-+	sector_t prot_length, prot;
-+	loff_t pos = lba * se_dev->prot_length;
-+
-+	if (!prot_fd) {
-+		pr_err("Unable to locate fd_dev->fd_prot_file\n");
-+		return -ENODEV;
-+	}
-+
-+	prot_length = nolb * se_dev->prot_length;
-+
-+	for (prot = 0; prot < prot_length;) {
-+		sector_t len = min_t(sector_t, bufsize, prot_length - prot);
-+		ssize_t ret = kernel_write(prot_fd, buf, len, pos + prot);
-+
-+		if (ret != len) {
-+			pr_err("vfs_write to prot file failed: %zd\n", ret);
-+			return ret < 0 ? ret : -ENODEV;
-+		}
-+		prot += ret;
-+	}
-+
-+	return 0;
-+}
-+
-+static int
-+fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
-+{
-+	void *buf;
-+	int rc;
-+
-+	buf = (void *)__get_free_page(GFP_KERNEL);
-+	if (!buf) {
-+		pr_err("Unable to allocate FILEIO prot buf\n");
+ 	mb->micro = dev_get_drvdata(pdev->dev.parent);
+ 	mb->wq = create_singlethread_workqueue("ipaq-battery-wq");
++	if (!mb->wq)
 +		return -ENOMEM;
-+	}
-+	memset(buf, 0xff, PAGE_SIZE);
 +
-+	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
-+
-+	free_page((unsigned long)buf);
+ 	INIT_DELAYED_WORK(&mb->update, micro_battery_work);
+ 	platform_set_drvdata(pdev, mb);
+ 	queue_delayed_work(mb->wq, &mb->update, 1);
+-	power_supply_register(&pdev->dev, &micro_batt_power);
+-	power_supply_register(&pdev->dev, &micro_ac_power);
 +
-+	return rc;
-+}
++	ret = power_supply_register(&pdev->dev, &micro_batt_power);
++	if (ret < 0)
++		goto batt_err;
 +
- static sense_reason_t
- fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
- {
-@@ -557,6 +607,12 @@ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
- 	struct inode *inode = file->f_mapping->host;
- 	int ret;
++	ret = power_supply_register(&pdev->dev, &micro_ac_power);
++	if (ret < 0)
++		goto ac_err;
  
-+	if (cmd->se_dev->dev_attrib.pi_prot_type) {
-+		ret = fd_do_prot_unmap(cmd, lba, nolb);
-+		if (ret)
-+			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-+	}
+ 	dev_info(&pdev->dev, "iPAQ micro battery driver\n");
+ 	return 0;
 +
- 	if (S_ISBLK(inode->i_mode)) {
- 		/* The backend is block device, use discard */
- 		struct block_device *bdev = inode->i_bdev;
-@@ -879,48 +935,28 @@ static int fd_init_prot(struct se_device *dev)
- 
- static int fd_format_prot(struct se_device *dev)
- {
--	struct fd_dev *fd_dev = FD_DEV(dev);
--	struct file *prot_fd = fd_dev->fd_prot_file;
--	sector_t prot_length, prot;
- 	unsigned char *buf;
--	loff_t pos = 0;
- 	int unit_size = FDBD_FORMAT_UNIT_SIZE * dev->dev_attrib.block_size;
--	int rc, ret = 0, size, len;
-+	int ret;
++ac_err:
++	power_supply_unregister(&micro_ac_power);
++batt_err:
++	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
++	return ret;
+ }
  
- 	if (!dev->dev_attrib.pi_prot_type) {
- 		pr_err("Unable to format_prot while pi_prot_type == 0\n");
- 		return -ENODEV;
- 	}
--	if (!prot_fd) {
--		pr_err("Unable to locate fd_dev->fd_prot_file\n");
--		return -ENODEV;
--	}
+ static int micro_batt_remove(struct platform_device *pdev)
+@@ -251,6 +268,7 @@ static int micro_batt_remove(struct platform_device *pdev)
+ 	power_supply_unregister(&micro_ac_power);
+ 	power_supply_unregister(&micro_batt_power);
+ 	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
  
- 	buf = vzalloc(unit_size);
- 	if (!buf) {
- 		pr_err("Unable to allocate FILEIO prot buf\n");
- 		return -ENOMEM;
- 	}
--	prot_length = (dev->transport->get_blocks(dev) + 1) * dev->prot_length;
--	size = prot_length;
+ 	return 0;
+ }
+diff --git a/drivers/power/lp8788-charger.c b/drivers/power/lp8788-charger.c
+index 21fc233..176dab2 100644
+--- a/drivers/power/lp8788-charger.c
++++ b/drivers/power/lp8788-charger.c
+@@ -417,8 +417,10 @@ static int lp8788_psy_register(struct platform_device *pdev,
+ 	pchg->battery.num_properties = ARRAY_SIZE(lp8788_battery_prop);
+ 	pchg->battery.get_property = lp8788_battery_get_property;
  
- 	pr_debug("Using FILEIO prot_length: %llu\n",
--		 (unsigned long long)prot_length);
-+		 (unsigned long long)(dev->transport->get_blocks(dev) + 1) *
-+					dev->prot_length);
+-	if (power_supply_register(&pdev->dev, &pchg->battery))
++	if (power_supply_register(&pdev->dev, &pchg->battery)) {
++		power_supply_unregister(&pchg->charger);
+ 		return -EPERM;
++	}
  
- 	memset(buf, 0xff, unit_size);
--	for (prot = 0; prot < prot_length; prot += unit_size) {
--		len = min(unit_size, size);
--		rc = kernel_write(prot_fd, buf, len, pos);
--		if (rc != len) {
--			pr_err("vfs_write to prot file failed: %d\n", rc);
--			ret = -ENODEV;
--			goto out;
--		}
--		pos += len;
--		size -= len;
--	}
--
--out:
-+	ret = fd_do_prot_fill(dev, 0, dev->transport->get_blocks(dev) + 1,
-+			      buf, unit_size);
- 	vfree(buf);
- 	return ret;
+ 	return 0;
  }
--- 
-2.3.6
-
-
-From 53e5aa168e3ba918741417ac2177db04a84f77c1 Mon Sep 17 00:00:00 2001
-From: Akinobu Mita <akinobu.mita@gmail.com>
-Date: Mon, 13 Apr 2015 23:21:57 +0900
-Subject: [PATCH 135/219] target/file: Fix SG table for prot_buf initialization
-Cc: mpagano@gentoo.org
-
-commit c836777830428372074d5129ac513e1472c99791 upstream.
-
-fd_do_prot_rw() allocates prot_buf, which is used to copy from
-se_cmd->t_prot_sg by sbc_dif_copy_prot().  The SG table for prot_buf
-is also initialized by allocating 'se_cmd->t_prot_nents' scatterlist
-entries and setting the data length of each entry to at most
-PAGE_SIZE.
-
-However, if se_cmd->t_prot_sg contains a clustered entry (i.e.
-sg->length > PAGE_SIZE), the SG table for prot_buf can't be
-initialized correctly and sbc_dif_copy_prot() can't copy to prot_buf.
-(This actually happened with the TCM loopback fabric module.)
-
-As prot_buf is allocated by kzalloc() and it's physically contiguous,
-we only need a single scatterlist entry.
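
The resulting initialization, in sketch form (prot_size as computed
earlier in fd_do_prot_rw()):

	fd_prot->prot_sg_nents = 1;
	fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist), GFP_KERNEL);
	sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
	/* one entry covers the whole physically contiguous buffer */
	sg_set_buf(fd_prot->prot_sg, fd_prot->prot_buf, prot_size);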
-
-Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
-Cc: Sagi Grimberg <sagig@mellanox.com>
-Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
-Cc: Christoph Hellwig <hch@lst.de>
-Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/target/target_core_file.c | 21 ++++++---------------
- 1 file changed, 6 insertions(+), 15 deletions(-)
-
-diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
-index 7e12909..cbb0cc2 100644
---- a/drivers/target/target_core_file.c
-+++ b/drivers/target/target_core_file.c
-@@ -264,11 +264,10 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
- 	struct se_device *se_dev = cmd->se_dev;
- 	struct fd_dev *dev = FD_DEV(se_dev);
- 	struct file *prot_fd = dev->fd_prot_file;
--	struct scatterlist *sg;
- 	loff_t pos = (cmd->t_task_lba * se_dev->prot_length);
- 	unsigned char *buf;
--	u32 prot_size, len, size;
--	int rc, ret = 1, i;
-+	u32 prot_size;
-+	int rc, ret = 1;
- 
- 	prot_size = (cmd->data_length / se_dev->dev_attrib.block_size) *
- 		     se_dev->prot_length;
-@@ -281,24 +280,16 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
- 		}
- 		buf = fd_prot->prot_buf;
+diff --git a/drivers/power/twl4030_madc_battery.c b/drivers/power/twl4030_madc_battery.c
+index 7ef445a..cf90760 100644
+--- a/drivers/power/twl4030_madc_battery.c
++++ b/drivers/power/twl4030_madc_battery.c
+@@ -192,6 +192,7 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
+ {
+ 	struct twl4030_madc_battery *twl4030_madc_bat;
+ 	struct twl4030_madc_bat_platform_data *pdata = pdev->dev.platform_data;
++	int ret = 0;
  
--		fd_prot->prot_sg_nents = cmd->t_prot_nents;
--		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist) *
--					   fd_prot->prot_sg_nents, GFP_KERNEL);
-+		fd_prot->prot_sg_nents = 1;
-+		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist),
-+					   GFP_KERNEL);
- 		if (!fd_prot->prot_sg) {
- 			pr_err("Unable to allocate fd_prot->prot_sg\n");
- 			kfree(fd_prot->prot_buf);
- 			return -ENOMEM;
- 		}
- 		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
--		size = prot_size;
--
--		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
--
--			len = min_t(u32, PAGE_SIZE, size);
--			sg_set_buf(sg, buf, len);
--			size -= len;
--			buf += len;
--		}
-+		sg_set_buf(fd_prot->prot_sg, buf, prot_size);
- 	}
+ 	twl4030_madc_bat = kzalloc(sizeof(*twl4030_madc_bat), GFP_KERNEL);
+ 	if (!twl4030_madc_bat)
+@@ -216,9 +217,11 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
  
- 	if (is_write) {
--- 
-2.3.6
-
-
-From 6c617001eadca79dc3c26a6e2d2844ad48c1a178 Mon Sep 17 00:00:00 2001
-From: Sagi Grimberg <sagig@mellanox.com>
-Date: Sun, 29 Mar 2015 15:52:03 +0300
-Subject: [PATCH 136/219] iser-target: Fix session hang in case of an rdma read
- DIF error
-Cc: mpagano@gentoo.org
-
-commit 364189f0ada5478e4faf8a552d6071a650d757cd upstream.
-
-This hang was the result of a missing command put when
-a DIF error occurred during an RDMA read (and we sent
-a CHECK_CONDITION error without passing the command to
-the backend).
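
The fix amounts to balancing the command reference before the error
is reported (sketch; se_cmd and ret as in
isert_completion_rdma_read()):

	if (ret) {
		/* the command never reaches the backend, so drop its
		 * reference here before sending CHECK_CONDITION */
		target_put_sess_cmd(se_cmd->se_sess, se_cmd);
		transport_send_check_condition_and_sense(se_cmd,
							 se_cmd->pi_err, 0);
	} else {
		target_execute_cmd(se_cmd);
	}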
-
-Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/ulp/isert/ib_isert.c | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
-index 075b19c..4b8d518 100644
---- a/drivers/infiniband/ulp/isert/ib_isert.c
-+++ b/drivers/infiniband/ulp/isert/ib_isert.c
-@@ -1861,11 +1861,13 @@ isert_completion_rdma_read(struct iser_tx_desc *tx_desc,
- 	cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
- 	spin_unlock_bh(&cmd->istate_lock);
+ 	twl4030_madc_bat->pdata = pdata;
+ 	platform_set_drvdata(pdev, twl4030_madc_bat);
+-	power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	ret = power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	if (ret < 0)
++		kfree(twl4030_madc_bat);
  
--	if (ret)
-+	if (ret) {
-+		target_put_sess_cmd(se_cmd->se_sess, se_cmd);
- 		transport_send_check_condition_and_sense(se_cmd,
- 							 se_cmd->pi_err, 0);
--	else
-+	} else {
- 		target_execute_cmd(se_cmd);
-+	}
+-	return 0;
++	return ret;
  }
  
- static void
--- 
-2.3.6
-
-
-From c1398bc9478760e098fd1a36c9d67eeaf1bc5813 Mon Sep 17 00:00:00 2001
-From: Sagi Grimberg <sagig@mellanox.com>
-Date: Sun, 29 Mar 2015 15:52:04 +0300
-Subject: [PATCH 137/219] iser-target: Fix possible deadlock in RDMA_CM
- connection error
-Cc: mpagano@gentoo.org
-
-commit 4a579da2586bd3b79b025947ea24ede2bbfede62 upstream.
-
-Before the connection reaches the established state we may
-get an error event. In this case the core won't tear down
-this connection (it was never established), so we take care
-of freeing it ourselves.
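
In sketch form, the error handler now hands the cm_id back to the
RDMA CM core (names as in the patch):

	static int
	isert_connect_error(struct rdma_cm_id *cma_id)
	{
		struct isert_conn *isert_conn = cma_id->qp->qp_context;

		/* the connection was never established; returning
		 * non-zero lets the RDMA CM destroy the cm_id itself,
		 * so clear our copy to avoid calling rdma_destroy_id()
		 * on it later */
		isert_conn->conn_cm_id = NULL;
		isert_put_conn(isert_conn);

		return -1;
	}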
-
-Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
-Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/ulp/isert/ib_isert.c | 14 +++++++++-----
- 1 file changed, 9 insertions(+), 5 deletions(-)
-
-diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
-index 4b8d518..147029a 100644
---- a/drivers/infiniband/ulp/isert/ib_isert.c
-+++ b/drivers/infiniband/ulp/isert/ib_isert.c
-@@ -222,7 +222,7 @@ fail:
- static void
- isert_free_rx_descriptors(struct isert_conn *isert_conn)
- {
--	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
-+	struct ib_device *ib_dev = isert_conn->conn_device->ib_device;
- 	struct iser_rx_desc *rx_desc;
- 	int i;
+ static int twl4030_madc_battery_remove(struct platform_device *pdev)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 675b5e7..5a0800d 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -1584,11 +1584,11 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
+ 			fp_possible = io_info.fpOkForIo;
+ 	}
  
-@@ -719,8 +719,8 @@ out:
- static void
- isert_connect_release(struct isert_conn *isert_conn)
- {
--	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
- 	struct isert_device *device = isert_conn->conn_device;
-+	struct ib_device *ib_dev = device->ib_device;
+-	/* Use smp_processor_id() for now until cmd->request->cpu is CPU
++	/* Use raw_smp_processor_id() for now until cmd->request->cpu is CPU
+ 	   id by default, not CPU group id, otherwise all MSI-X queues won't
+ 	   be utilized */
+ 	cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
+-		smp_processor_id() % instance->msix_vectors : 0;
++		raw_smp_processor_id() % instance->msix_vectors : 0;
  
- 	isert_dbg("conn %p\n", isert_conn);
+ 	if (fp_possible) {
+ 		megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
+@@ -1693,7 +1693,10 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
+ 			<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
+ 		cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
+ 		cmd->request_desc->SCSIIO.MSIxIndex =
+-			instance->msix_vectors ? smp_processor_id() % instance->msix_vectors : 0;
++			instance->msix_vectors ?
++				raw_smp_processor_id() %
++					instance->msix_vectors :
++				0;
+ 		os_timeout_value = scmd->request->timeout / HZ;
  
-@@ -728,7 +728,8 @@ isert_connect_release(struct isert_conn *isert_conn)
- 		isert_conn_free_fastreg_pool(isert_conn);
+ 		if (instance->secure_jbod_support &&
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 2d5ab6d..454536c 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -441,14 +441,11 @@ static u32 mvs_get_ncq_tag(struct sas_task *task, u32 *tag)
+ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 			     struct mvs_task_exec_info *tei)
+ {
+-	struct sas_ha_struct *sha = mvi->sas;
+ 	struct sas_task *task = tei->task;
+ 	struct domain_device *dev = task->dev;
+ 	struct mvs_device *mvi_dev = dev->lldd_dev;
+ 	struct mvs_cmd_hdr *hdr = tei->hdr;
+ 	struct asd_sas_port *sas_port = dev->port;
+-	struct sas_phy *sphy = dev->phy;
+-	struct asd_sas_phy *sas_phy = sha->sas_phy[sphy->number];
+ 	struct mvs_slot_info *slot;
+ 	void *buf_prd;
+ 	u32 tag = tei->tag, hdr_tag;
+@@ -468,7 +465,7 @@ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 	slot->tx = mvi->tx_prod;
+ 	del_q = TXQ_MODE_I | tag |
+ 		(TXQ_CMD_STP << TXQ_CMD_SHIFT) |
+-		(MVS_PHY_ID << TXQ_PHY_SHIFT) |
++		((sas_port->phy_mask & TXQ_PHY_MASK) << TXQ_PHY_SHIFT) |
+ 		(mvi_dev->taskfileset << TXQ_SRS_SHIFT);
+ 	mvi->tx[mvi->tx_prod] = cpu_to_le32(del_q);
  
- 	isert_free_rx_descriptors(isert_conn);
--	rdma_destroy_id(isert_conn->conn_cm_id);
-+	if (isert_conn->conn_cm_id)
-+		rdma_destroy_id(isert_conn->conn_cm_id);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 6b78476..3290a3e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3100,6 +3100,7 @@ static void scsi_disk_release(struct device *dev)
+ 	ida_remove(&sd_index_ida, sdkp->index);
+ 	spin_unlock(&sd_index_lock);
  
- 	if (isert_conn->conn_qp) {
- 		struct isert_comp *comp = isert_conn->conn_qp->recv_cq->cq_context;
-@@ -878,12 +879,15 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
- 	return 0;
- }
++	blk_integrity_unregister(disk);
+ 	disk->private_data = NULL;
+ 	put_disk(disk);
+ 	put_device(&sdkp->device->sdev_gendev);
+diff --git a/drivers/scsi/sd_dif.c b/drivers/scsi/sd_dif.c
+index 14c7d42..5c06d29 100644
+--- a/drivers/scsi/sd_dif.c
++++ b/drivers/scsi/sd_dif.c
+@@ -77,7 +77,7 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
  
--static void
-+static int
- isert_connect_error(struct rdma_cm_id *cma_id)
- {
- 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
+ 		disk->integrity->flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
  
-+	isert_conn->conn_cm_id = NULL;
- 	isert_put_conn(isert_conn);
-+
-+	return -1;
- }
+-		if (!sdkp)
++		if (!sdkp->ATO)
+ 			return;
  
- static int
-@@ -912,7 +916,7 @@ isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
- 	case RDMA_CM_EVENT_REJECTED:       /* FALLTHRU */
- 	case RDMA_CM_EVENT_UNREACHABLE:    /* FALLTHRU */
- 	case RDMA_CM_EVENT_CONNECT_ERROR:
--		isert_connect_error(cma_id);
-+		ret = isert_connect_error(cma_id);
- 		break;
- 	default:
- 		isert_err("Unhandled RDMA CMA event: %d\n", event->event);
--- 
-2.3.6
-
-
-From 1ed449ae56cbf5db4f3ea0560a5bfbe95e30e89a Mon Sep 17 00:00:00 2001
-From: Alexander Ploumistos <alex.ploumistos@gmail.com>
-Date: Fri, 13 Feb 2015 21:05:11 +0200
-Subject: [PATCH 138/219] Bluetooth: ath3k: Add support Atheros AR5B195 combo
- Mini PCIe card
-Cc: mpagano@gentoo.org
-
-commit 2eeff0b4317a02f0e281df891d990194f0737aae upstream.
-
-Add 04f2:aff1 to ath3k.c supported devices list and btusb.c blacklist, so
-that the device can load the ath3k firmware and re-enumerate itself as an
-AR3011 device.
-
-T:  Bus=05 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  2 Spd=12   MxCh= 0
-D:  Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs=  1
-P:  Vendor=04f2 ProdID=aff1 Rev= 0.01
-C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
-I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=81(I) Atr=03(Int.) MxPS=  16 Ivl=1ms
-E:  Ad=82(I) Atr=02(Bulk) MxPS=  64 Ivl=0ms
-E:  Ad=02(O) Atr=02(Bulk) MxPS=  64 Ivl=0ms
-I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=   0 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=   0 Ivl=1ms
-I:  If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=   9 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=   9 Ivl=1ms
-I:  If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=  17 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=  17 Ivl=1ms
-I:  If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=  25 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=  25 Ivl=1ms
-I:  If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=  33 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=  33 Ivl=1ms
-I:  If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
-E:  Ad=83(I) Atr=01(Isoc) MxPS=  49 Ivl=1ms
-E:  Ad=03(O) Atr=01(Isoc) MxPS=  49 Ivl=1ms
-
-Signed-off-by: Alexander Ploumistos <alexpl@fedoraproject.org>
-Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/bluetooth/ath3k.c | 1 +
- drivers/bluetooth/btusb.c | 1 +
- 2 files changed, 2 insertions(+)
-
-diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
-index de4c849..288547a 100644
---- a/drivers/bluetooth/ath3k.c
-+++ b/drivers/bluetooth/ath3k.c
-@@ -65,6 +65,7 @@ static const struct usb_device_id ath3k_table[] = {
- 	/* Atheros AR3011 with sflash firmware*/
- 	{ USB_DEVICE(0x0489, 0xE027) },
- 	{ USB_DEVICE(0x0489, 0xE03D) },
-+	{ USB_DEVICE(0x04F2, 0xAFF1) },
- 	{ USB_DEVICE(0x0930, 0x0215) },
- 	{ USB_DEVICE(0x0CF3, 0x3002) },
- 	{ USB_DEVICE(0x0CF3, 0xE019) },
-diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
-index 8bfc4c2..2c527da 100644
---- a/drivers/bluetooth/btusb.c
-+++ b/drivers/bluetooth/btusb.c
-@@ -159,6 +159,7 @@ static const struct usb_device_id blacklist_table[] = {
- 	/* Atheros 3011 with sflash firmware */
- 	{ USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE },
- 	{ USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE },
-+	{ USB_DEVICE(0x04f2, 0xaff1), .driver_info = BTUSB_IGNORE },
- 	{ USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE },
- 	{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
- 	{ USB_DEVICE(0x0cf3, 0xe019), .driver_info = BTUSB_IGNORE },
--- 
-2.3.6
-
-
-From 929315920e42097f53f97bfc88c6da4a41e19f66 Mon Sep 17 00:00:00 2001
-From: Bo Yan <byan@nvidia.com>
-Date: Tue, 31 Mar 2015 21:30:48 +0100
-Subject: [PATCH 139/219] arm64: fix midr range for Cortex-A57 erratum 832075
-Cc: mpagano@gentoo.org
-
-commit 6d1966dfd6e0ad2f8aa4b664ae1a62e33abe1998 upstream.
-
-Register MIDR_EL1 is masked to get variant and revision fields, then
-compared against midr_range_min and midr_range_max when checking
-whether CPU is affected by any particular erratum. However, variant
-and revision fields in MIDR_EL1 are separated by 16 bits, so the min
-and max of midr range should be constructed accordingly, otherwise
-the patch will not be applied when variant field is non-0.
-
-Acked-by: Andre Przywara <andre.przywara@arm.com>
-Reviewed-by: Paul Walmsley <paul@pwsan.com>
-Signed-off-by: Bo Yan <byan@nvidia.com>
-[will: use MIDR_VARIANT_SHIFT to construct upper bound]
-Signed-off-by: Will Deacon <will.deacon@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm64/kernel/cpu_errata.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
-index fa62637..7c48494 100644
---- a/arch/arm64/kernel/cpu_errata.c
-+++ b/arch/arm64/kernel/cpu_errata.c
-@@ -88,7 +88,8 @@ struct arm64_cpu_capabilities arm64_errata[] = {
- 	/* Cortex-A57 r0p0 - r1p2 */
- 		.desc = "ARM erratum 832075",
- 		.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
--		MIDR_RANGE(MIDR_CORTEX_A57, 0x00, 0x12),
-+		MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
-+			   (1 << MIDR_VARIANT_SHIFT) | 2),
- 	},
- #endif
- 	{
--- 
-2.3.6
-
-
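
For context on the erratum-range fix above: MIDR_EL1 keeps the revision in bits [3:0] and the variant in bits [23:20], and the range check compares the register masked down to those two fields against midr_range_min/max. An upper bound of 0x12 therefore only covers variant 0, which is why r1p2 has to be encoded as (1 << MIDR_VARIANT_SHIFT) | 2. A small stand-alone check in plain C; the shift and mask mirror the arm64 headers of this era, and the sample MIDR values are synthetic:

#include <stdio.h>

#define MIDR_VARIANT_SHIFT  20
#define MIDR_VAR_REV_MASK   ((0xfu << MIDR_VARIANT_SHIFT) | 0xfu)

/* Encode a variant/revision pair the way MIDR_EL1 lays them out. */
static unsigned int var_rev(unsigned int variant, unsigned int revision)
{
    return (variant << MIDR_VARIANT_SHIFT) | revision;
}

/* The same comparison the errata code performs after masking. */
static int in_range(unsigned int midr, unsigned int min, unsigned int max)
{
    unsigned int v = midr & MIDR_VAR_REV_MASK;

    return v >= min && v <= max;
}

int main(void)
{
    unsigned int r1p0 = var_rev(1, 0);     /* an affected A57 */
    unsigned int old_max = 0x12;           /* wrong: variant bits unset */
    unsigned int new_max = var_rev(1, 2);  /* r1p2, fields separated */

    printf("r1p0 caught with max=0x12:      %d\n",
           in_range(r1p0, 0, old_max));    /* prints 0: missed */
    printf("r1p0 caught with (1<<20)|2 max: %d\n",
           in_range(r1p0, 0, new_max));    /* prints 1: matched */
    return 0;
}
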
-From 28a75aebb66869d9b48970bc9ad2c50d06ca2368 Mon Sep 17 00:00:00 2001
-From: Mark Rutland <mark.rutland@arm.com>
-Date: Tue, 24 Mar 2015 13:50:27 +0000
-Subject: [PATCH 140/219] arm64: head.S: ensure visibility of page tables
-Cc: mpagano@gentoo.org
-
-commit 91d57155dc5ab4b311624b7ee570339b6af19ad5 upstream.
-
-After writing the page tables, we use __inval_cache_range to invalidate
-any stale cache entries. Strongly Ordered memory accesses are not
-ordered w.r.t. cache maintenance instructions, and hence explicit memory
-barriers are required to provide this ordering. However,
-__inval_cache_range was written to be used on Normal Cacheable memory
-once the MMU and caches are on, and does not have any barriers prior to
-the DC instructions.
-
-This patch adds a DMB between the page tables being written and the
-corresponding cachelines being invalidated, ensuring that the
-invalidation makes the new data visible to subsequent cacheable
-accesses. A barrier is not required before the prior invalidate as we do
-not access the page table memory area prior to this, and earlier
-barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
-ordering w.r.t. any stores performed prior to entering Linux.
-
-Signed-off-by: Mark Rutland <mark.rutland@arm.com>
-Cc: Catalin Marinas <catalin.marinas@arm.com>
-Cc: Will Deacon <will.deacon@arm.com>
-Fixes: c218bca74eeafa2f ("arm64: Relax the kernel cache requirements for boot")
-Signed-off-by: Will Deacon <will.deacon@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm64/kernel/head.S | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
-index 07f9305..c237ffb 100644
---- a/arch/arm64/kernel/head.S
-+++ b/arch/arm64/kernel/head.S
-@@ -426,6 +426,7 @@ __create_page_tables:
- 	 */
- 	mov	x0, x25
- 	add	x1, x26, #SWAPPER_DIR_SIZE
-+	dmb	sy
- 	bl	__inval_cache_range
+ 		if (type == SD_DIF_TYPE3_PROTECTION)
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index efc6e44..bf8c5c1 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -746,21 +746,22 @@ static unsigned int copy_to_bounce_buffer(struct scatterlist *orig_sgl,
+ 			if (bounce_sgl[j].length == PAGE_SIZE) {
+ 				/* full..move to next entry */
+ 				sg_kunmap_atomic(bounce_addr);
++				bounce_addr = 0;
+ 				j++;
++			}
  
- 	mov	lr, x27
--- 
-2.3.6
-
-
-From 3b4f68e9d08a42860dd7491e973a1ba2abcf4ea7 Mon Sep 17 00:00:00 2001
-From: Steve Capper <steve.capper@linaro.org>
-Date: Mon, 16 Mar 2015 09:30:39 +0000
-Subject: [PATCH 141/219] arm64: Adjust EFI libstub object include logic
-Cc: mpagano@gentoo.org
-
-commit ad08fd494bf00c03ae372e0bbd9cefa37bf608d6 upstream.
-
-Commit f4f75ad5 ("efi: efistub: Convert into static library")
-introduced a static library for EFI stub, libstub.
-
-The EFI libstub directory is referenced by the kernel build system via
-a obj subdirectory rule in:
-drivers/firmware/efi/Makefile
-
-Unfortunately, arm64 also references the EFI libstub via:
-libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
-
-If we're unlucky, the kernel build system can enter libstub via two
-simultaneous threads resulting in build failures such as:
-
-fixdep: error opening depfile: drivers/firmware/efi/libstub/.efi-stub-helper.o.d: No such file or directory
-scripts/Makefile.build:257: recipe for target 'drivers/firmware/efi/libstub/efi-stub-helper.o' failed
-make[1]: *** [drivers/firmware/efi/libstub/efi-stub-helper.o] Error 2
-Makefile:939: recipe for target 'drivers/firmware/efi/libstub' failed
-make: *** [drivers/firmware/efi/libstub] Error 2
-make: *** Waiting for unfinished jobs....
-
-This patch adjusts the arm64 Makefile to reference the compiled library
-explicitly (as is currently done in x86), rather than the directory.
-
-Fixes: f4f75ad5 efi: efistub: Convert into static library
-Signed-off-by: Steve Capper <steve.capper@linaro.org>
-Signed-off-by: Will Deacon <will.deacon@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm64/Makefile | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
-index 69ceedc..4d2a925 100644
---- a/arch/arm64/Makefile
-+++ b/arch/arm64/Makefile
-@@ -48,7 +48,7 @@ core-$(CONFIG_KVM) += arch/arm64/kvm/
- core-$(CONFIG_XEN) += arch/arm64/xen/
- core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
- libs-y		:= arch/arm64/lib/ $(libs-y)
--libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
-+core-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+-				/* if we need to use another bounce buffer */
+-				if (srclen || i != orig_sgl_count - 1)
+-					bounce_addr = sg_kmap_atomic(bounce_sgl,j);
++			/* if we need to use another bounce buffer */
++			if (srclen && bounce_addr == 0)
++				bounce_addr = sg_kmap_atomic(bounce_sgl, j);
  
- # Default target when executing plain make
- KBUILD_IMAGE	:= Image.gz
--- 
-2.3.6
-
-
-From f5fc6d70222ede94eb601c8f2697df1a9bcd9535 Mon Sep 17 00:00:00 2001
-From: Mark Rutland <mark.rutland@arm.com>
-Date: Fri, 13 Mar 2015 16:14:34 +0000
-Subject: [PATCH 142/219] arm64: apply alternatives for !SMP kernels
-Cc: mpagano@gentoo.org
-
-commit 137650aad96c9594683445e41afa8ac5a2097520 upstream.
-
-Currently we only perform alternative patching for kernels built with
-CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only
-built for CONFIG_SMP. Thus !SMP kernels may not have the necessary
-alternatives patched in.
-
-This patch ensures that we call apply_alternatives_all() once all CPUs
-are booted, even for !SMP kernels, by having the smp_init_cpus() stub
-call this for !SMP kernels via up_late_init. A new wrapper,
-do_post_cpus_up_work, is added so we can hook other calls here later
-(e.g. boot mode logging).
-
-Cc: Andre Przywara <andre.przywara@arm.com>
-Cc: Catalin Marinas <catalin.marinas@arm.com>
-Fixes: e039ee4ee3fcf174 ("arm64: add alternative runtime patching")
-Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-Signed-off-by: Mark Rutland <mark.rutland@arm.com>
-Signed-off-by: Will Deacon <will.deacon@arm.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm64/Kconfig                |  4 ++++
- arch/arm64/include/asm/smp_plat.h |  2 ++
- arch/arm64/kernel/setup.c         | 12 ++++++++++++
- arch/arm64/kernel/smp.c           |  2 +-
- 4 files changed, 19 insertions(+), 1 deletion(-)
-
-diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
-index 1b8e973..0d46deb 100644
---- a/arch/arm64/Kconfig
-+++ b/arch/arm64/Kconfig
-@@ -470,6 +470,10 @@ config HOTPLUG_CPU
+-			} else if (srclen == 0 && i == orig_sgl_count - 1) {
+-				/* unmap the last bounce that is < PAGE_SIZE */
+-				sg_kunmap_atomic(bounce_addr);
+-			}
+ 		}
  
- source kernel/Kconfig.preempt
+ 		sg_kunmap_atomic(src_addr - orig_sgl[i].offset);
+ 	}
  
-+config UP_LATE_INIT
-+       def_bool y
-+       depends on !SMP
++	if (bounce_addr)
++		sg_kunmap_atomic(bounce_addr);
 +
- config HZ
- 	int
- 	default 100
-diff --git a/arch/arm64/include/asm/smp_plat.h b/arch/arm64/include/asm/smp_plat.h
-index 59e2823..8dcd61e 100644
---- a/arch/arm64/include/asm/smp_plat.h
-+++ b/arch/arm64/include/asm/smp_plat.h
-@@ -40,4 +40,6 @@ static inline u32 mpidr_hash_size(void)
- extern u64 __cpu_logical_map[NR_CPUS];
- #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]
+ 	local_irq_restore(flags);
  
-+void __init do_post_cpus_up_work(void);
-+
- #endif /* __ASM_SMP_PLAT_H */
-diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
-index e8420f6..781f469 100644
---- a/arch/arm64/kernel/setup.c
-+++ b/arch/arm64/kernel/setup.c
-@@ -207,6 +207,18 @@ static void __init smp_build_mpidr_hash(void)
- }
- #endif
+ 	return total_copied;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6fea4af..aea3a67 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -370,8 +370,6 @@ static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx,
+ 	if (spi_imx->dma_is_inited) {
+ 		dma = readl(spi_imx->base + MX51_ECSPI_DMA);
  
-+void __init do_post_cpus_up_work(void)
-+{
-+	apply_alternatives_all();
-+}
-+
-+#ifdef CONFIG_UP_LATE_INIT
-+void __init up_late_init(void)
-+{
-+	do_post_cpus_up_work();
-+}
-+#endif /* CONFIG_UP_LATE_INIT */
-+
- static void __init setup_processor(void)
- {
- 	struct cpu_info *cpu_info;
-diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
-index 328b8ce..4257369 100644
---- a/arch/arm64/kernel/smp.c
-+++ b/arch/arm64/kernel/smp.c
-@@ -309,7 +309,7 @@ void cpu_die(void)
- void __init smp_cpus_done(unsigned int max_cpus)
- {
- 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
--	apply_alternatives_all();
-+	do_post_cpus_up_work();
- }
- 
- void __init smp_prepare_boot_cpu(void)
--- 
-2.3.6
-
-
-From d56f1962494430ce86e221537a2116a8ff0dca7e Mon Sep 17 00:00:00 2001
-From: Will Deacon <will.deacon@arm.com>
-Date: Mon, 23 Mar 2015 19:07:02 +0000
-Subject: [PATCH 143/219] arm64: errata: add workaround for cortex-a53 erratum
- #845719
-Cc: mpagano@gentoo.org
-
-commit 905e8c5dcaa147163672b06fe9dcb5abaacbc711 upstream.
-
-When running a compat (AArch32) userspace on Cortex-A53, a load at EL0
-from a virtual address that matches the bottom 32 bits of the virtual
-address used by a recent load at (AArch64) EL1 might return incorrect
-data.
-
-This patch works around the issue by writing to the contextidr_el1
-register on the exception return path when returning to a 32-bit task.
-This workaround is patched in at runtime based on the MIDR value of the
-processor.
-
-Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-Tested-by: Mark Rutland <mark.rutland@arm.com>
-Signed-off-by: Will Deacon <will.deacon@arm.com>
-Signed-off-by: Kevin Hilman <khilman@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/arm64/Kconfig                  | 21 +++++++++++++++++++++
- arch/arm64/include/asm/cpufeature.h |  3 ++-
- arch/arm64/kernel/cpu_errata.c      |  8 ++++++++
- arch/arm64/kernel/entry.S           | 20 ++++++++++++++++++++
- 4 files changed, 51 insertions(+), 1 deletion(-)
-
-diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
-index 0d46deb..a6186c2 100644
---- a/arch/arm64/Kconfig
-+++ b/arch/arm64/Kconfig
-@@ -361,6 +361,27 @@ config ARM64_ERRATUM_832075
+-		spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+-		spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		spi_imx->rxt_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		rx_wml_cfg = spi_imx->rx_wml << MX51_ECSPI_DMA_RX_WML_OFFSET;
+ 		tx_wml_cfg = spi_imx->tx_wml << MX51_ECSPI_DMA_TX_WML_OFFSET;
+@@ -868,6 +866,8 @@ static int spi_imx_sdma_init(struct device *dev, struct spi_imx_data *spi_imx,
+ 	master->max_dma_len = MAX_SDMA_BD_BYTES;
+ 	spi_imx->bitbang.master->flags = SPI_MASTER_MUST_RX |
+ 					 SPI_MASTER_MUST_TX;
++	spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
++	spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 	spi_imx->dma_is_inited = 1;
  
- 	  If unsure, say Y.
+ 	return 0;
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 4eb7a98..7bf5186 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -245,7 +245,10 @@ static int spidev_message(struct spidev_data *spidev,
+ 		k_tmp->len = u_tmp->len;
  
-+config ARM64_ERRATUM_845719
-+	bool "Cortex-A53: 845719: a load might read incorrect data"
-+	depends on COMPAT
-+	default y
-+	help
-+	  This option adds an alternative code sequence to work around ARM
-+	  erratum 845719 on Cortex-A53 parts up to r0p4.
-+
-+	  When running a compat (AArch32) userspace on an affected Cortex-A53
-+	  part, a load at EL0 from a virtual address that matches the bottom 32
-+	  bits of the virtual address used by a recent load at (AArch64) EL1
-+	  might return incorrect data.
-+
-+	  The workaround is to write the contextidr_el1 register on exception
-+	  return to a 32-bit task.
-+	  Please note that this does not necessarily enable the workaround,
-+	  as it depends on the alternative framework, which will only patch
-+	  the kernel if an affected CPU is detected.
-+
-+	  If unsure, say Y.
-+
- endmenu
+ 		total += k_tmp->len;
+-		if (total > bufsiz) {
++		/* Check total length of transfers.  Also check each
++		 * transfer length to avoid arithmetic overflow.
++		 */
++		if (total > bufsiz || k_tmp->len > bufsiz) {
+ 			status = -EMSGSIZE;
+ 			goto done;
+ 		}
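
The spidev hunk above is an integer-overflow fix, not just defensive style: total is a u32, so a single transfer whose length is close to UINT_MAX can wrap the running sum back below bufsiz and slip past the old check, while the new per-transfer test rejects it outright. A stand-alone demonstration of the wrap in plain C; the buffer and length values are made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t bufsiz = 4096;               /* the transfer buffer size */
    uint32_t total = 2048;                /* bytes already counted */
    uint32_t huge = UINT32_MAX - 1024;    /* one oversized length */

    total += huge;                        /* wraps modulo 2^32 -> 1023 */
    printf("total after wrap: %u, total > bufsiz: %s\n",
           total, total > bufsiz ? "true" : "false (check bypassed)");

    /* The added per-transfer test catches it regardless of the sum: */
    printf("len > bufsiz: %s\n",
           huge > bufsiz ? "true (rejected)" : "false");
    return 0;
}
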
+diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
+index 7bdb62b..f83e00c 100644
+--- a/drivers/staging/android/sync.c
++++ b/drivers/staging/android/sync.c
+@@ -114,7 +114,7 @@ void sync_timeline_signal(struct sync_timeline *obj)
+ 	list_for_each_entry_safe(pt, next, &obj->active_list_head,
+ 				 active_list) {
+ 		if (fence_is_signaled_locked(&pt->base))
+-			list_del(&pt->active_list);
++			list_del_init(&pt->active_list);
+ 	}
  
+ 	spin_unlock_irqrestore(&obj->child_list_lock, flags);
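
The sync.c one-liner matters because list_del() poisons the entry's next/prev pointers, so any later list operation on the same node dereferences poison values; list_del_init() leaves the node self-linked and therefore safe to delete again, which the timeline teardown path can do. A minimal re-implementation of the two helpers showing the difference; this mirrors the include/linux/list.h semantics but is a sketch, not the kernel code:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x200)

static void list_add(struct list_head *entry, struct list_head *head)
{
    entry->next = head->next;
    entry->prev = head;
    head->next->prev = entry;
    head->next = entry;
}

static void __list_del(struct list_head *prev, struct list_head *next)
{
    next->prev = prev;
    prev->next = next;
}

/* list_del(): unlink and poison; the node must not be touched again. */
static void list_del(struct list_head *entry)
{
    __list_del(entry->prev, entry->next);
    entry->next = LIST_POISON1;
    entry->prev = LIST_POISON2;
}

/* list_del_init(): unlink but re-init; deleting again is harmless. */
static void list_del_init(struct list_head *entry)
{
    __list_del(entry->prev, entry->next);
    entry->next = entry->prev = entry;
}

int main(void)
{
    struct list_head head = { &head, &head }, a, b;

    list_add(&a, &head);
    list_add(&b, &head);

    list_del_init(&a);
    list_del_init(&a);          /* safe: a links to itself */
    printf("a: next=%p prev=%p self=%p\n",
           (void *)a.next, (void *)a.prev, (void *)&a);

    list_del(&b);
    printf("b: next=%p (poison; a second list_del would crash)\n",
           (void *)b.next);
    return 0;
}
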
+diff --git a/drivers/staging/panel/panel.c b/drivers/staging/panel/panel.c
+index 6ed35b6..04fc217 100644
+--- a/drivers/staging/panel/panel.c
++++ b/drivers/staging/panel/panel.c
+@@ -335,11 +335,11 @@ static unsigned char lcd_bits[LCD_PORTS][LCD_BITS][BIT_STATES];
+  * LCD types
+  */
+ #define LCD_TYPE_NONE		0
+-#define LCD_TYPE_OLD		1
+-#define LCD_TYPE_KS0074		2
+-#define LCD_TYPE_HANTRONIX	3
+-#define LCD_TYPE_NEXCOM		4
+-#define LCD_TYPE_CUSTOM		5
++#define LCD_TYPE_CUSTOM		1
++#define LCD_TYPE_OLD		2
++#define LCD_TYPE_KS0074		3
++#define LCD_TYPE_HANTRONIX	4
++#define LCD_TYPE_NEXCOM		5
  
-diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
-index b6c16d5..3f0c53c 100644
---- a/arch/arm64/include/asm/cpufeature.h
-+++ b/arch/arm64/include/asm/cpufeature.h
-@@ -23,8 +23,9 @@
+ /*
+  * keypad types
+@@ -502,7 +502,7 @@ MODULE_PARM_DESC(keypad_type,
+ static int lcd_type = NOT_SET;
+ module_param(lcd_type, int, 0000);
+ MODULE_PARM_DESC(lcd_type,
+-		 "LCD type: 0=none, 1=old //, 2=serial ks0074, 3=hantronix //, 4=nexcom //, 5=compiled-in");
++		 "LCD type: 0=none, 1=compiled-in, 2=old, 3=serial ks0074, 4=hantronix, 5=nexcom");
  
- #define ARM64_WORKAROUND_CLEAN_CACHE		0
- #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
-+#define ARM64_WORKAROUND_845719			2
+ static int lcd_height = NOT_SET;
+ module_param(lcd_height, int, 0000);
+diff --git a/drivers/staging/vt6655/rxtx.c b/drivers/staging/vt6655/rxtx.c
+index 07ce3fd..fdf5c56 100644
+--- a/drivers/staging/vt6655/rxtx.c
++++ b/drivers/staging/vt6655/rxtx.c
+@@ -1308,10 +1308,18 @@ int vnt_generate_fifo_header(struct vnt_private *priv, u32 dma_idx,
+ 			    priv->hw->conf.chandef.chan->hw_value);
+ 	}
  
--#define ARM64_NCAPS				2
-+#define ARM64_NCAPS				3
+-	if (current_rate > RATE_11M)
+-		pkt_type = (u8)priv->byPacketType;
+-	else
++	if (current_rate > RATE_11M) {
++		if (info->band == IEEE80211_BAND_5GHZ) {
++			pkt_type = PK_TYPE_11A;
++		} else {
++			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
++				pkt_type = PK_TYPE_11GB;
++			else
++				pkt_type = PK_TYPE_11GA;
++		}
++	} else {
+ 		pkt_type = PK_TYPE_11B;
++	}
  
- #ifndef __ASSEMBLY__
+ 	/*Set fifo controls */
+ 	if (pkt_type == PK_TYPE_11A)
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 77d6425..5e35612 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -537,7 +537,7 @@ static struct iscsit_transport iscsi_target_transport = {
  
-diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
-index 7c48494..ad6d523 100644
---- a/arch/arm64/kernel/cpu_errata.c
-+++ b/arch/arm64/kernel/cpu_errata.c
-@@ -92,6 +92,14 @@ struct arm64_cpu_capabilities arm64_errata[] = {
- 			   (1 << MIDR_VARIANT_SHIFT) | 2),
- 	},
- #endif
-+#ifdef CONFIG_ARM64_ERRATUM_845719
-+	{
-+	/* Cortex-A53 r0p[01234] */
-+		.desc = "ARM erratum 845719",
-+		.capability = ARM64_WORKAROUND_845719,
-+		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
-+	},
-+#endif
- 	{
- 	}
- };
-diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
-index cf21bb3..959fe87 100644
---- a/arch/arm64/kernel/entry.S
-+++ b/arch/arm64/kernel/entry.S
-@@ -21,8 +21,10 @@
- #include <linux/init.h>
- #include <linux/linkage.h>
+ static int __init iscsi_target_init_module(void)
+ {
+-	int ret = 0;
++	int ret = 0, size;
  
-+#include <asm/alternative-asm.h>
- #include <asm/assembler.h>
- #include <asm/asm-offsets.h>
-+#include <asm/cpufeature.h>
- #include <asm/errno.h>
- #include <asm/esr.h>
- #include <asm/thread_info.h>
-@@ -120,6 +122,24 @@
- 	ct_user_enter
- 	ldr	x23, [sp, #S_SP]		// load return stack pointer
- 	msr	sp_el0, x23
-+
-+#ifdef CONFIG_ARM64_ERRATUM_845719
-+	alternative_insn						\
-+	"nop",								\
-+	"tbz x22, #4, 1f",						\
-+	ARM64_WORKAROUND_845719
-+#ifdef CONFIG_PID_IN_CONTEXTIDR
-+	alternative_insn						\
-+	"nop; nop",							\
-+	"mrs x29, contextidr_el1; msr contextidr_el1, x29; 1:",		\
-+	ARM64_WORKAROUND_845719
-+#else
-+	alternative_insn						\
-+	"nop",								\
-+	"msr contextidr_el1, xzr; 1:",					\
-+	ARM64_WORKAROUND_845719
-+#endif
-+#endif
- 	.endif
- 	msr	elr_el1, x21			// set up the return data
- 	msr	spsr_el1, x22
--- 
-2.3.6
-
-
-From aa54f8fb00ef9c739f564672048ec0fcc08a61dc Mon Sep 17 00:00:00 2001
-From: Gavin Shan <gwshan@linux.vnet.ibm.com>
-Date: Fri, 27 Mar 2015 11:29:00 +1100
-Subject: [PATCH 144/219] powerpc/powernv: Don't map M64 segments using M32DT
-Cc: mpagano@gentoo.org
-
-commit 027fa02f84e851e21daffdf8900d6117071890f8 upstream.
-
-If M64 has been supported, the prefetchable 64-bits memory resources
-shouldn't be mapped to the corresponding PE# via M32DT. Unfortunately,
-we're doing that in pnv_ioda_setup_pe_seg() wrongly. The issue was
-introduced by commit 262af55 ("powerpc/powernv: Enable M64 aperatus
-for PHB3"). The patch fixes the issue by simply skipping M64 resources
-when updating to M32DT.
-
-Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
-Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/platforms/powernv/pci-ioda.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
-index 6c9ff2b..1d9369e 100644
---- a/arch/powerpc/platforms/powernv/pci-ioda.c
-+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
-@@ -1777,7 +1777,8 @@ static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
- 				region.start += phb->ioda.io_segsize;
- 				index++;
- 			}
--		} else if (res->flags & IORESOURCE_MEM) {
-+		} else if ((res->flags & IORESOURCE_MEM) &&
-+			   !pnv_pci_is_mem_pref_64(res->flags)) {
- 			region.start = res->start -
- 				       hose->mem_offset[0] -
- 				       phb->ioda.m32_pci_base;
--- 
-2.3.6
-
-
-From 7ef1951eca49005fdbb4768574b7076cae1eeb4c Mon Sep 17 00:00:00 2001
-From: Dave Olson <olson@cumulusnetworks.com>
-Date: Thu, 2 Apr 2015 21:28:45 -0700
-Subject: [PATCH 145/219] powerpc: Fix missing L2 cache size in
- /sys/devices/system/cpu
-Cc: mpagano@gentoo.org
-
-commit f7e9e358362557c3aa2c1ec47490f29fe880a09e upstream.
-
-This problem appears to have been introduced in 2.6.29 by commit
-93197a36a9c1 "Rewrite sysfs processor cache info code".
-
-This caused lscpu to error out on at least e500v2 devices, eg:
-
-  error: cannot open /sys/devices/system/cpu/cpu0/cache/index2/size: No such file or directory
-
-Some embedded powerpc systems use cache-size in DTS for the unified L2
-cache size, not d-cache-size, so we need to allow for both DTS names.
-Added a new CACHE_TYPE_UNIFIED_D cache_type_info structure to handle
-this.
-
-Fixes: 93197a36a9c1 ("powerpc: Rewrite sysfs processor cache info code")
-Signed-off-by: Dave Olson <olson@cumulusnetworks.com>
-Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/kernel/cacheinfo.c | 44 +++++++++++++++++++++++++++++++----------
- 1 file changed, 34 insertions(+), 10 deletions(-)
-
-diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
-index ae77b7e..c641983 100644
---- a/arch/powerpc/kernel/cacheinfo.c
-+++ b/arch/powerpc/kernel/cacheinfo.c
-@@ -61,12 +61,22 @@ struct cache_type_info {
- };
+ 	pr_debug("iSCSI-Target "ISCSIT_VERSION"\n");
  
- /* These are used to index the cache_type_info array. */
--#define CACHE_TYPE_UNIFIED     0
--#define CACHE_TYPE_INSTRUCTION 1
--#define CACHE_TYPE_DATA        2
-+#define CACHE_TYPE_UNIFIED     0 /* cache-size, cache-block-size, etc. */
-+#define CACHE_TYPE_UNIFIED_D   1 /* d-cache-size, d-cache-block-size, etc */
-+#define CACHE_TYPE_INSTRUCTION 2
-+#define CACHE_TYPE_DATA        3
+@@ -546,6 +546,7 @@ static int __init iscsi_target_init_module(void)
+ 		pr_err("Unable to allocate memory for iscsit_global\n");
+ 		return -1;
+ 	}
++	spin_lock_init(&iscsit_global->ts_bitmap_lock);
+ 	mutex_init(&auth_id_lock);
+ 	spin_lock_init(&sess_idr_lock);
+ 	idr_init(&tiqn_idr);
+@@ -555,15 +556,11 @@ static int __init iscsi_target_init_module(void)
+ 	if (ret < 0)
+ 		goto out;
  
- static const struct cache_type_info cache_type_info[] = {
- 	{
-+		/* Embedded systems that use cache-size, cache-block-size,
-+		 * etc. for the Unified (typically L2) cache. */
-+		.name            = "Unified",
-+		.size_prop       = "cache-size",
-+		.line_size_props = { "cache-line-size",
-+				     "cache-block-size", },
-+		.nr_sets_prop    = "cache-sets",
-+	},
-+	{
- 		/* PowerPC Processor binding says the [di]-cache-*
- 		 * must be equal on unified caches, so just use
- 		 * d-cache properties. */
-@@ -293,7 +303,8 @@ static struct cache *cache_find_first_sibling(struct cache *cache)
- {
- 	struct cache *iter;
+-	ret = iscsi_thread_set_init();
+-	if (ret < 0)
++	size = BITS_TO_LONGS(ISCSIT_BITMAP_BITS) * sizeof(long);
++	iscsit_global->ts_bitmap = vzalloc(size);
++	if (!iscsit_global->ts_bitmap) {
++		pr_err("Unable to allocate iscsit_global->ts_bitmap\n");
+ 		goto configfs_out;
+-
+-	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
+-			TARGET_THREAD_SET_COUNT) {
+-		pr_err("iscsi_allocate_thread_sets() returned"
+-			" unexpected value!\n");
+-		goto ts_out1;
+ 	}
  
--	if (cache->type == CACHE_TYPE_UNIFIED)
-+	if (cache->type == CACHE_TYPE_UNIFIED ||
-+	    cache->type == CACHE_TYPE_UNIFIED_D)
- 		return cache;
+ 	lio_qr_cache = kmem_cache_create("lio_qr_cache",
+@@ -572,7 +569,7 @@ static int __init iscsi_target_init_module(void)
+ 	if (!lio_qr_cache) {
+ 		pr_err("nable to kmem_cache_create() for"
+ 				" lio_qr_cache\n");
+-		goto ts_out2;
++		goto bitmap_out;
+ 	}
  
- 	list_for_each_entry(iter, &cache_list, list)
-@@ -324,16 +335,29 @@ static bool cache_node_is_unified(const struct device_node *np)
- 	return of_get_property(np, "cache-unified", NULL);
- }
+ 	lio_dr_cache = kmem_cache_create("lio_dr_cache",
+@@ -617,10 +614,8 @@ dr_out:
+ 	kmem_cache_destroy(lio_dr_cache);
+ qr_out:
+ 	kmem_cache_destroy(lio_qr_cache);
+-ts_out2:
+-	iscsi_deallocate_thread_sets();
+-ts_out1:
+-	iscsi_thread_set_free();
++bitmap_out:
++	vfree(iscsit_global->ts_bitmap);
+ configfs_out:
+ 	iscsi_target_deregister_configfs();
+ out:
+@@ -630,8 +625,6 @@ out:
  
--static struct cache *cache_do_one_devnode_unified(struct device_node *node,
--						  int level)
-+/*
-+ * Unified caches can have two different sets of tags.  Most embedded
-+ * use cache-size, etc. for the unified cache size, but open firmware systems
-+ * use d-cache-size, etc.   Check on initialization for which type we have, and
-+ * return the appropriate structure type.  Assume it's embedded if it isn't
-+ * open firmware.  If it's yet a 3rd type, then there will be missing entries
-+ * in /sys/devices/system/cpu/cpu0/cache/index2/, and this code will need
-+ * to be extended further.
-+ */
-+static int cache_is_unified_d(const struct device_node *np)
+ static void __exit iscsi_target_cleanup_module(void)
  {
--	struct cache *cache;
-+	return of_get_property(np,
-+		cache_type_info[CACHE_TYPE_UNIFIED_D].size_prop, NULL) ?
-+		CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
-+}
+-	iscsi_deallocate_thread_sets();
+-	iscsi_thread_set_free();
+ 	iscsit_release_discovery_tpg();
+ 	iscsit_unregister_transport(&iscsi_target_transport);
+ 	kmem_cache_destroy(lio_qr_cache);
+@@ -641,6 +634,7 @@ static void __exit iscsi_target_cleanup_module(void)
  
-+/*
-+ */
-+static struct cache *cache_do_one_devnode_unified(struct device_node *node, int level)
-+{
- 	pr_debug("creating L%d ucache for %s\n", level, node->full_name);
+ 	iscsi_target_deregister_configfs();
  
--	cache = new_cache(CACHE_TYPE_UNIFIED, level, node);
--
--	return cache;
-+	return new_cache(cache_is_unified_d(node), level, node);
++	vfree(iscsit_global->ts_bitmap);
+ 	kfree(iscsit_global);
  }
  
- static struct cache *cache_do_one_devnode_split(struct device_node *node,
--- 
-2.3.6
-
-
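
The cacheinfo fix above boils down to choosing the property-name table by probing which device-tree property actually exists: embedded trees describe the unified L2 with plain "cache-size", while Open Firmware trees use "d-cache-size". A toy version of the cache_is_unified_d() decision in plain C; has_property() is a stand-in for of_get_property(), and the node contents are invented:

#include <stdio.h>
#include <string.h>

enum { CACHE_TYPE_UNIFIED, CACHE_TYPE_UNIFIED_D };

struct node { const char *props[4]; };  /* toy device-tree node */

/* Stand-in for of_get_property(): does the node carry this property? */
static int has_property(const struct node *np, const char *name)
{
    int i;

    for (i = 0; i < 4 && np->props[i]; i++)
        if (strcmp(np->props[i], name) == 0)
            return 1;
    return 0;
}

/* Open Firmware trees carry d-cache-size even for unified caches;
 * embedded trees carry plain cache-size.  Pick the table accordingly. */
static int cache_is_unified_d(const struct node *np)
{
    return has_property(np, "d-cache-size") ?
        CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
}

int main(void)
{
    struct node embedded = { { "cache-size", "cache-sets", NULL } };
    struct node openfw = { { "d-cache-size", "d-cache-sets", NULL } };

    printf("embedded: type %d, open firmware: type %d\n",
           cache_is_unified_d(&embedded), cache_is_unified_d(&openfw));
    return 0;
}
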
-From 9fb1018337f9767398e0d62e5dce8499fd0f2bf0 Mon Sep 17 00:00:00 2001
-From: Michael Ellerman <mpe@ellerman.id.au>
-Date: Fri, 3 Apr 2015 14:11:53 +1100
-Subject: [PATCH 146/219] powerpc/cell: Fix crash in iic_setup_cpu() after
- per_cpu changes
-Cc: mpagano@gentoo.org
-
-commit b0dd00addc5035f87ec9c5820dacc1ebc7fcb3e6 upstream.
-
-The conversion from __get_cpu_var() to this_cpu_ptr() in iic_setup_cpu()
-is wrong. It causes an oops at boot.
-
-We need the per-cpu address of struct cpu_iic, not cpu_iic.regs->prio.
-
-Sparse noticed this, because we pass a non-iomem pointer to out_be64(),
-but we obviously don't check the sparse results often enough.
-
-Fixes: 69111bac42f5 ("powerpc: Replace __get_cpu_var uses")
-Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/platforms/cell/interrupt.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
-index 4c11421..3af8324 100644
---- a/arch/powerpc/platforms/cell/interrupt.c
-+++ b/arch/powerpc/platforms/cell/interrupt.c
-@@ -163,7 +163,7 @@ static unsigned int iic_get_irq(void)
+@@ -3715,17 +3709,16 @@ static int iscsit_send_reject(
  
- void iic_setup_cpu(void)
+ void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
  {
--	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
-+	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
+-	struct iscsi_thread_set *ts = conn->thread_set;
+ 	int ord, cpu;
+ 	/*
+-	 * thread_id is assigned from iscsit_global->ts_bitmap from
+-	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
++	 * bitmap_id is assigned from iscsit_global->ts_bitmap from
++	 * within iscsit_start_kthreads()
+ 	 *
+-	 * Here we use thread_id to determine which CPU that this
+-	 * iSCSI connection's iscsi_thread_set will be scheduled to
++	 * Here we use bitmap_id to determine which CPU that this
++	 * iSCSI connection's RX/TX threads will be scheduled to
+ 	 * execute upon.
+ 	 */
+-	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
++	ord = conn->bitmap_id % cpumask_weight(cpu_online_mask);
+ 	for_each_online_cpu(cpu) {
+ 		if (ord-- == 0) {
+ 			cpumask_set_cpu(cpu, conn->conn_cpumask);
+@@ -3914,7 +3907,7 @@ check_rsp_state:
+ 	switch (state) {
+ 	case ISTATE_SEND_LOGOUTRSP:
+ 		if (!iscsit_logout_post_handler(cmd, conn))
+-			goto restart;
++			return -ECONNRESET;
+ 		/* fall through */
+ 	case ISTATE_SEND_STATUS:
+ 	case ISTATE_SEND_ASYNCMSG:
+@@ -3942,8 +3935,6 @@ check_rsp_state:
+ 
+ err:
+ 	return -1;
+-restart:
+-	return -EAGAIN;
  }
  
- u8 iic_get_target_id(int cpu)
--- 
-2.3.6
-
-
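
The one-liner above is an operator-ordering fix: this_cpu_ptr(&cpu_iic.regs->prio) adds the per-CPU offset to an address inside the iomem block that regs points to, while &this_cpu_ptr(&cpu_iic)->regs->prio first resolves this CPU's copy of cpu_iic and only then follows its regs pointer. A user-space model of the difference; the per-CPU mechanism is simulated with a fixed byte stride, and all names and sizes are illustrative:

#include <stdint.h>
#include <stdio.h>

struct iic_regs { unsigned long ctrl, prio; };
struct iic { struct iic_regs *regs; };

#define NR_CPUS 2
#define PCPU_STRIDE 4096    /* byte distance between per-CPU copies */

struct pcpu_slot {
    struct iic iic;
    char pad[PCPU_STRIDE - sizeof(struct iic)];
};

static struct pcpu_slot pcpu[NR_CPUS];  /* simulated per-CPU area */
static struct iic_regs mmio[NR_CPUS];   /* "iomem", outside that area */
static int this_cpu = 1;                /* pretend we run on CPU 1 */

/* simulated this_cpu_ptr(): add this CPU's byte offset to the base copy */
#define this_cpu_ptr(ptr) \
    ((void *)((char *)(ptr) + this_cpu * PCPU_STRIDE))

int main(void)
{
    int i;

    for (i = 0; i < NR_CPUS; i++)
        pcpu[i].iic.regs = &mmio[i];

    /* Broken: applies the per-CPU offset to an MMIO address. */
    uintptr_t bad = (uintptr_t)&pcpu[0].iic.regs->prio +
                    this_cpu * PCPU_STRIDE;

    /* Fixed: resolve this CPU's struct first, then follow regs. */
    struct iic *mine = this_cpu_ptr(&pcpu[0].iic);
    unsigned long *good = &mine->regs->prio;

    printf("broken %#lx\nfixed  %p (expected %p)\n",
           (unsigned long)bad, (void *)good, (void *)&mmio[1].prio);
    return 0;
}
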
-From 94a5f3b014e7d81936ae02cc095cdf895f94fb19 Mon Sep 17 00:00:00 2001
-From: Michael Ellerman <mpe@ellerman.id.au>
-Date: Fri, 3 Apr 2015 14:11:54 +1100
-Subject: [PATCH 147/219] powerpc/cell: Fix cell iommu after it_page_shift
- changes
-Cc: mpagano@gentoo.org
-
-commit 7261b956b276aa97fbf60d00f1d7717d2ea6ee78 upstream.
-
-The patch to add it_page_shift incorrectly changed the increment of
-uaddr to use it_page_shift, rather then (1 << it_page_shift).
-
-This broke booting on at least some Cell blades, as the iommu was
-basically non-functional.
-
-Fixes: 3a553170d35d ("powerpc/iommu: Add it_page_shift field to determine iommu page size")
-Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
-Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/powerpc/platforms/cell/iommu.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
-index c7c8720..63db1b0 100644
---- a/arch/powerpc/platforms/cell/iommu.c
-+++ b/arch/powerpc/platforms/cell/iommu.c
-@@ -197,7 +197,7 @@ static int tce_build_cell(struct iommu_table *tbl, long index, long npages,
+ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+@@ -3970,21 +3961,13 @@ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+ int iscsi_target_tx_thread(void *arg)
+ {
+ 	int ret = 0;
+-	struct iscsi_conn *conn;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+ 	 * connection recovery / failure event can be triggered externally.
+ 	 */
+ 	allow_signal(SIGINT);
  
- 	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);
+-restart:
+-	conn = iscsi_tx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+-	ret = 0;
+-
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3993,11 +3976,9 @@ restart:
+ 		iscsit_thread_check_cpumask(conn, current, 1);
  
--	for (i = 0; i < npages; i++, uaddr += tbl->it_page_shift)
-+	for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift))
- 		io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask);
+ 		wait_event_interruptible(conn->queues_wq,
+-					 !iscsit_conn_all_queues_empty(conn) ||
+-					 ts->status == ISCSI_THREAD_SET_RESET);
++					 !iscsit_conn_all_queues_empty(conn));
  
- 	mb();
--- 
-2.3.6
-
-
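
The iommu fix above is easy to see with concrete numbers: with 4 KiB IOMMU pages, it_page_shift is 12, so stepping uaddr by the shift advances it 12 bytes per IOPTE instead of 4096, and every entry after the first maps garbage. A tiny demonstration in plain C with a synthetic start address:

#include <stdio.h>

int main(void)
{
    unsigned long page_shift = 12;  /* 4 KiB IOMMU pages */
    unsigned long bad = 0x10000, good = 0x10000;
    int i;

    for (i = 0; i < 3; i++) {
        printf("IOPTE %d: bad uaddr %#lx, good uaddr %#lx\n",
               i, bad, good);
        bad += page_shift;          /* the bug: advances 12 bytes */
        good += 1UL << page_shift;  /* the fix: advances 4096 bytes */
    }
    return 0;
}
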
-From 755b29de0d793e3915b35f35c716705d9910109f Mon Sep 17 00:00:00 2001
-From: Pascal Huerst <pascal.huerst@gmail.com>
-Date: Thu, 2 Apr 2015 10:17:40 +0200
-Subject: [PATCH 148/219] ASoC: cs4271: Increase delay time after reset
-Cc: mpagano@gentoo.org
-
-commit 74ff960222d90999508b4ba0d3449f796695b6d5 upstream.
-
-The delay time after a reset in the codec probe callback was too short,
-and did not work on certain hw because the codec needs more time to
-power on. This increases the delay time from 1us to 1ms.
-
-Signed-off-by: Pascal Huerst <pascal.huerst@gmail.com>
-Acked-by: Brian Austin <brian.austin@cirrus.com>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/soc/codecs/cs4271.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/sound/soc/codecs/cs4271.c b/sound/soc/codecs/cs4271.c
-index 7d3a6ac..e770ee6 100644
---- a/sound/soc/codecs/cs4271.c
-+++ b/sound/soc/codecs/cs4271.c
-@@ -561,10 +561,10 @@ static int cs4271_codec_probe(struct snd_soc_codec *codec)
- 	if (gpio_is_valid(cs4271->gpio_nreset)) {
- 		/* Reset codec */
- 		gpio_direction_output(cs4271->gpio_nreset, 0);
--		udelay(1);
-+		mdelay(1);
- 		gpio_set_value(cs4271->gpio_nreset, 1);
- 		/* Give the codec time to wake up */
--		udelay(1);
-+		mdelay(1);
- 	}
+-		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
+-		     signal_pending(current))
++		if (signal_pending(current))
+ 			goto transport_err;
  
- 	ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2,
--- 
-2.3.6
-
-
-From d9493a0723e5a23b0250f43ea5e6d8ed66e1a343 Mon Sep 17 00:00:00 2001
-From: Sergej Sawazki <ce3a@gmx.de>
-Date: Tue, 24 Mar 2015 21:13:22 +0100
-Subject: [PATCH 149/219] ASoC: wm8741: Fix rates constraints values
-Cc: mpagano@gentoo.org
-
-commit 8787041d9bb832b9449b1eb878cedcebce42c61a upstream.
-
-The WM8741 DAC supports the following typical audio sampling rates:
-  44.1kHz, 88.2kHz, 176.4kHz (eg: with a master clock of 22.5792MHz)
-  32kHz, 48kHz, 96kHz, 192kHz (eg: with a master clock of 24.576MHz)
-
-For the rates lists, we should use 88200 instead of 88235, 176400
-instead of 1764000 and 192000 instead of 19200 (seems to be a typo).
-
-Signed-off-by: Sergej Sawazki <ce3a@gmx.de>
-Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/soc/codecs/wm8741.c | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/sound/soc/codecs/wm8741.c b/sound/soc/codecs/wm8741.c
-index 31bb480..9e71c76 100644
---- a/sound/soc/codecs/wm8741.c
-+++ b/sound/soc/codecs/wm8741.c
-@@ -123,7 +123,7 @@ static struct {
- };
+ get_immediate:
+@@ -4008,15 +3989,14 @@ get_immediate:
+ 		ret = iscsit_handle_response_queue(conn);
+ 		if (ret == 1)
+ 			goto get_immediate;
+-		else if (ret == -EAGAIN)
+-			goto restart;
++		else if (ret == -ECONNRESET)
++			goto out;
+ 		else if (ret < 0)
+ 			goto transport_err;
+ 	}
  
- static const unsigned int rates_11289[] = {
--	44100, 88235,
-+	44100, 88200,
- };
+ transport_err:
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+ out:
+ 	return 0;
+ }
+@@ -4111,8 +4091,7 @@ int iscsi_target_rx_thread(void *arg)
+ 	int ret;
+ 	u8 buffer[ISCSI_HDR_LEN], opcode;
+ 	u32 checksum = 0, digest = 0;
+-	struct iscsi_conn *conn = NULL;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	struct kvec iov;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+@@ -4120,11 +4099,6 @@ int iscsi_target_rx_thread(void *arg)
+ 	 */
+ 	allow_signal(SIGINT);
  
- static const struct snd_pcm_hw_constraint_list constraints_11289 = {
-@@ -150,7 +150,7 @@ static const struct snd_pcm_hw_constraint_list constraints_16384 = {
- };
+-restart:
+-	conn = iscsi_rx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+ 	if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
+ 		struct completion comp;
+ 		int rc;
+@@ -4134,7 +4108,7 @@ restart:
+ 		if (rc < 0)
+ 			goto transport_err;
  
- static const unsigned int rates_16934[] = {
--	44100, 88235,
-+	44100, 88200,
- };
+-		goto out;
++		goto transport_err;
+ 	}
  
- static const struct snd_pcm_hw_constraint_list constraints_16934 = {
-@@ -168,7 +168,7 @@ static const struct snd_pcm_hw_constraint_list constraints_18432 = {
- };
+ 	while (!kthread_should_stop()) {
+@@ -4210,8 +4184,6 @@ transport_err:
+ 	if (!signal_pending(current))
+ 		atomic_set(&conn->transport_failed, 1);
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+-out:
+ 	return 0;
+ }
  
- static const unsigned int rates_22579[] = {
--	44100, 88235, 1764000
-+	44100, 88200, 176400
- };
+@@ -4273,7 +4245,24 @@ int iscsit_close_connection(
+ 	if (conn->conn_transport->transport_type == ISCSI_TCP)
+ 		complete(&conn->conn_logout_comp);
  
- static const struct snd_pcm_hw_constraint_list constraints_22579 = {
-@@ -186,7 +186,7 @@ static const struct snd_pcm_hw_constraint_list constraints_24576 = {
- };
+-	iscsi_release_thread_set(conn);
++	if (!strcmp(current->comm, ISCSI_RX_THREAD_NAME)) {
++		if (conn->tx_thread &&
++		    cmpxchg(&conn->tx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->tx_thread, 1);
++			kthread_stop(conn->tx_thread);
++		}
++	} else if (!strcmp(current->comm, ISCSI_TX_THREAD_NAME)) {
++		if (conn->rx_thread &&
++		    cmpxchg(&conn->rx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->rx_thread, 1);
++			kthread_stop(conn->rx_thread);
++		}
++	}
++
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
  
- static const unsigned int rates_36864[] = {
--	48000, 96000, 19200
-+	48000, 96000, 192000
- };
+ 	iscsit_stop_timers_for_cmds(conn);
+ 	iscsit_stop_nopin_response_timer(conn);
+@@ -4551,15 +4540,13 @@ static void iscsit_logout_post_handler_closesession(
+ 	struct iscsi_conn *conn)
+ {
+ 	struct iscsi_session *sess = conn->sess;
+-
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
  
- static const struct snd_pcm_hw_constraint_list constraints_36864 = {
--- 
-2.3.6
-
-
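
The corrected wm8741 rate lists can be cross-checked arithmetically: each legitimate rate divides one of the two quoted master clocks at a conventional fs ratio, while 88235, 1764000 and 19200 do not. A quick stand-alone check in plain C; the 128 to 768 fs ratio set is illustrative rather than taken from the datasheet:

#include <stdio.h>

int main(void)
{
    unsigned long mclk[] = { 22579200, 24576000 };  /* 22.5792/24.576 MHz */
    unsigned long ratio[] = { 128, 192, 256, 384, 512, 768 };
    unsigned long rates[] = { 44100, 88200, 176400, 32000, 48000,
                              96000, 192000, 88235, 1764000, 19200 };
    unsigned int i, j, k;

    for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
        int ok = 0;

        for (j = 0; j < 2; j++)
            for (k = 0; k < 6; k++)
                if (rates[i] * ratio[k] == mclk[j])
                    ok = 1;
        /* last three entries are the typo values and print as bogus */
        printf("%7lu Hz: %s\n", rates[i],
               ok ? "derivable" : "no clean fs ratio");
    }
    return 0;
}
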
-From f7a469cdb54b146db35083f167e9f844ffc31f0c Mon Sep 17 00:00:00 2001
-From: Manish Badarkhe <manishvb@ti.com>
-Date: Thu, 26 Mar 2015 15:38:25 +0200
-Subject: [PATCH 150/219] ASoC: davinci-evm: drop un-necessary remove function
-Cc: mpagano@gentoo.org
-
-commit a57069e33fbc6625f39e1b09c88ea44629a35206 upstream.
-
-As the davinci card gets registered using the 'devm_' API,
-there is no need to unregister the card in the 'remove'
-function.
-Hence drop the 'remove' function.
-
-Fixes: ee2f615d6e59c (ASoC: davinci-evm: Add device tree binding)
-Signed-off-by: Manish Badarkhe <manishvb@ti.com>
-Signed-off-by: Jyri Sarha <jsarha@ti.com>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/soc/davinci/davinci-evm.c | 10 ----------
- 1 file changed, 10 deletions(-)
-
-diff --git a/sound/soc/davinci/davinci-evm.c b/sound/soc/davinci/davinci-evm.c
-index b6bb594..8c2b9be 100644
---- a/sound/soc/davinci/davinci-evm.c
-+++ b/sound/soc/davinci/davinci-evm.c
-@@ -425,18 +425,8 @@ static int davinci_evm_probe(struct platform_device *pdev)
- 	return ret;
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
+ 
+ 	iscsit_dec_conn_usage_count(conn);
+-	iscsit_stop_session(sess, 1, 1);
++	iscsit_stop_session(sess, sleep, sleep);
+ 	iscsit_dec_session_usage_count(sess);
+ 	target_put_session(sess->se_sess);
  }
+@@ -4567,13 +4554,12 @@ static void iscsit_logout_post_handler_closesession(
+ static void iscsit_logout_post_handler_samecid(
+ 	struct iscsi_conn *conn)
+ {
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
  
--static int davinci_evm_remove(struct platform_device *pdev)
--{
--	struct snd_soc_card *card = platform_get_drvdata(pdev);
--
--	snd_soc_unregister_card(card);
--
--	return 0;
--}
--
- static struct platform_driver davinci_evm_driver = {
- 	.probe		= davinci_evm_probe,
--	.remove		= davinci_evm_remove,
- 	.driver		= {
- 		.name	= "davinci_evm",
- 		.pm	= &snd_soc_pm_ops,
--- 
-2.3.6
-
-
-From f646e040a619bcea31a6cab378ccaccb6f4cb659 Mon Sep 17 00:00:00 2001
-From: Howard Mitchell <hm@hmbedded.co.uk>
-Date: Thu, 19 Mar 2015 12:08:30 +0000
-Subject: [PATCH 151/219] ASoC: pcm512x: Add 'Analogue' prefix to analogue
- volume controls
-Cc: mpagano@gentoo.org
-
-commit 4d9b13c7cc803fbde59d7e998f7de2b9a2101c7e upstream.
-
-This is to ensure that 'alsactl restore' does not apply default
-initialisation as the chip reset defaults are preferred.
-
-Signed-off-by: Howard Mitchell <hm@hmbedded.co.uk>
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/soc/codecs/pcm512x.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
-index 474cae8..b48624c 100644
---- a/sound/soc/codecs/pcm512x.c
-+++ b/sound/soc/codecs/pcm512x.c
-@@ -304,9 +304,9 @@ static const struct soc_enum pcm512x_veds =
- static const struct snd_kcontrol_new pcm512x_controls[] = {
- SOC_DOUBLE_R_TLV("Digital Playback Volume", PCM512x_DIGITAL_VOLUME_2,
- 		 PCM512x_DIGITAL_VOLUME_3, 0, 255, 1, digital_tlv),
--SOC_DOUBLE_TLV("Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
-+SOC_DOUBLE_TLV("Analogue Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
- 	       PCM512x_LAGN_SHIFT, PCM512x_RAGN_SHIFT, 1, 1, analog_tlv),
--SOC_DOUBLE_TLV("Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
-+SOC_DOUBLE_TLV("Analogue Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
- 	       PCM512x_AGBL_SHIFT, PCM512x_AGBR_SHIFT, 1, 0, boost_tlv),
- SOC_DOUBLE("Digital Playback Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
- 	   PCM512x_RQMR_SHIFT, 1, 1),
--- 
-2.3.6
-
-
-From 43ebd1a85ee86416c2d45a3834e7425c396890e9 Mon Sep 17 00:00:00 2001
-From: Howard Mitchell <hm@hmbedded.co.uk>
-Date: Fri, 20 Mar 2015 21:13:45 +0000
-Subject: [PATCH 152/219] ASoC: pcm512x: Fix divide by zero issue
-Cc: mpagano@gentoo.org
-
-commit f073faa73626f41db7050a69edd5074c53ce6d6c upstream.
-
-If den=1 and pllin_rate>20MHz then den and num are adjusted to 0
-causing a divide by zero error a few lines further on. Therefore
-this patch correctly scales num and den such that
-pllin_rate/den < 20MHz as required in the device data sheet.
-
-Signed-off-by: Howard Mitchell <hm@hmbedded.co.uk>
-Signed-off-by: Mark Brown <broonie@sirena.org.uk>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- sound/soc/codecs/pcm512x.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
-index b48624c..8c09e3f 100644
---- a/sound/soc/codecs/pcm512x.c
-+++ b/sound/soc/codecs/pcm512x.c
-@@ -576,8 +576,8 @@ static int pcm512x_find_pll_coeff(struct snd_soc_dai *dai,
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
  
- 	/* pllin_rate / P (or here, den) cannot be greater than 20 MHz */
- 	if (pllin_rate / den > 20000000 && num < 8) {
--		num *= 20000000 / (pllin_rate / den);
--		den *= 20000000 / (pllin_rate / den);
-+		num *= DIV_ROUND_UP(pllin_rate / den, 20000000);
-+		den *= DIV_ROUND_UP(pllin_rate / den, 20000000);
+-	iscsit_cause_connection_reinstatement(conn, 1);
++	iscsit_cause_connection_reinstatement(conn, sleep);
+ 	iscsit_dec_conn_usage_count(conn);
+ }
+ 
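
The iscsi_target.c changes above replace the old thread-set machinery with per-connection RX/TX kthreads plus a bitmap of connection IDs: one bit is taken under ts_bitmap_lock via bitmap_find_free_region(..., get_order(1)) in iscsit_start_kthreads() (added in the iscsi_target_login.c hunk further down) and given back in iscsit_close_connection() above. A user-space sketch of that allocate/release pattern, with a pthread mutex in place of the spinlock and a hand-rolled bit scan in place of the kernel bitmap API; the sizes are arbitrary:

#include <limits.h>
#include <pthread.h>
#include <stdio.h>

#define NR_IDS 128
#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

static unsigned long bitmap[NR_IDS / BITS_PER_LONG];
static pthread_mutex_t bitmap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Roughly bitmap_find_free_region(.., order 0): claim the first clear bit. */
static int id_alloc(void)
{
    int id = -1, i;

    pthread_mutex_lock(&bitmap_lock);
    for (i = 0; i < NR_IDS; i++) {
        if (!(bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))) {
            bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
            id = i;
            break;
        }
    }
    pthread_mutex_unlock(&bitmap_lock);
    return id;  /* -1 on exhaustion, like the < 0 check above */
}

/* Roughly bitmap_release_region(): return the bit to the pool. */
static void id_release(int id)
{
    pthread_mutex_lock(&bitmap_lock);
    bitmap[id / BITS_PER_LONG] &= ~(1UL << (id % BITS_PER_LONG));
    pthread_mutex_unlock(&bitmap_lock);
}

int main(void)
{
    int a = id_alloc(), b = id_alloc();

    printf("bitmap ids: %d %d\n", a, b);  /* 0 1 */
    id_release(a);
    printf("reused: %d\n", id_alloc());   /* 0 again */
    id_release(b);
    return 0;
}
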
+diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
+index bdd8731..e008ed2 100644
+--- a/drivers/target/iscsi/iscsi_target_erl0.c
++++ b/drivers/target/iscsi/iscsi_target_erl0.c
+@@ -860,7 +860,10 @@ void iscsit_connection_reinstatement_rcfr(struct iscsi_conn *conn)
  	}
- 	dev_dbg(dev, "num / den = %lu / %lu\n", num, den);
+ 	spin_unlock_bh(&conn->state_lock);
  
--- 
-2.3.6
-
-
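
The pcm512x change is worth a worked example: with den = 1 and a 24.576 MHz PLL input, the old expression 20000000 / (pllin_rate / den) truncates to 0, zeroing num and den and dividing by zero a few lines later, whereas DIV_ROUND_UP(pllin_rate / den, 20000000) yields a multiplier of at least 2 and brings pllin_rate / den under the 20 MHz limit. Stand-alone, in plain C, using the kernel's DIV_ROUND_UP definition:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    unsigned long pllin_rate = 24576000, den = 1;
    unsigned long old_mul = 20000000 / (pllin_rate / den);  /* 0 */
    unsigned long new_mul = DIV_ROUND_UP(pllin_rate / den, 20000000);

    printf("old multiplier: %lu (num and den become 0)\n", old_mul);
    printf("new multiplier: %lu, pllin_rate/den now %lu Hz (< 20 MHz)\n",
           new_mul, pllin_rate / (den * new_mul));
    return 0;
}
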
-From 650a628d5725e7eb8ed5f979fee058795cb06355 Mon Sep 17 00:00:00 2001
-From: Lv Zheng <lv.zheng@intel.com>
-Date: Mon, 13 Apr 2015 11:48:58 +0800
-Subject: [PATCH 153/219] ACPICA: Utilities: split IO address types from data
- type models.
-Cc: mpagano@gentoo.org
-
-commit 2b8760100e1de69b6ff004c986328a82947db4ad upstream.
-
-ACPICA commit aacf863cfffd46338e268b7415f7435cae93b451
-
-It is reported that on a physically 64-bit addressed machine, a 32-bit kernel
-can trigger crashes when accessing memory regions that are beyond the
-32-bit boundary. The region field's start address should still be 32-bit
-compliant, but after a calculation (adding some offsets), it may exceed the
-32-bit boundary. This case is rare and buggy, but there are real BIOSes
-leaked with such issues (see References below).
-
-This patch fixes this gap by always defining IO addresses as 64-bit, and
-allows OSPMs to optimize it for a real 32-bit machine to reduce the size of
-the internal objects.
-
-Internal acpi_physical_address usages in the structures that can be fixed
-by this change include:
- 1. struct acpi_object_region:
-    acpi_physical_address		address;
- 2. struct acpi_address_range:
-    acpi_physical_address		start_address;
-    acpi_physical_address		end_address;
- 3. struct acpi_mem_space_context;
-    acpi_physical_address		address;
- 4. struct acpi_table_desc
-    acpi_physical_address		address;
-See known issues 1 for other usages.
-
-Note that acpi_io_address which is used for ACPI_PROCESSOR may also suffer
-from same problem, so this patch changes it accordingly.
-
-For iasl, it will enforce acpi_physical_address as 32-bit to generate
-32-bit OSPM compatible tables on 32-bit platforms; we need to define
-ACPI_32BIT_PHYSICAL_ADDRESS for it in acenv.h.
-
-Known issues:
- 1. Cleanup of mapped virtual address
-   In struct acpi_mem_space_context, acpi_physical_address is used as a virtual
-   address:
-    acpi_physical_address                   mapped_physical_address;
-   It is better to introduce acpi_virtual_address or use acpi_size instead.
-   This patch doesn't make such a change. Because this should be done along
-   with a change to acpi_os_map_memory()/acpi_os_unmap_memory().
-   There should be no functional problem to leave this unchanged except
-   that only this structure is enlarged unexpectedly.
-
-Link: https://github.com/acpica/acpica/commit/aacf863c
-Reference: https://bugzilla.kernel.org/show_bug.cgi?id=87971
-Reference: https://bugzilla.kernel.org/show_bug.cgi?id=79501
-Reported-and-tested-by: Paul Menzel <paulepanter@users.sourceforge.net>
-Reported-and-tested-by: Sial Nije <sialnije@gmail.com>
-Signed-off-by: Lv Zheng <lv.zheng@intel.com>
-Signed-off-by: Bob Moore <robert.moore@intel.com>
-Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- include/acpi/actypes.h        | 20 ++++++++++++++++++++
- include/acpi/platform/acenv.h |  1 +
- 2 files changed, 21 insertions(+)
-
-diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
-index b034f10..658c42e 100644
---- a/include/acpi/actypes.h
-+++ b/include/acpi/actypes.h
-@@ -199,9 +199,29 @@ typedef int s32;
- typedef s32 acpi_native_int;
+-	iscsi_thread_set_force_reinstatement(conn);
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
  
- typedef u32 acpi_size;
+ sleep:
+ 	wait_for_completion(&conn->conn_wait_rcfr_comp);
+@@ -885,10 +888,10 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
+ 		return;
+ 	}
+ 
+-	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
+-		spin_unlock_bh(&conn->state_lock);
+-		return;
+-	}
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
+ 
+ 	atomic_set(&conn->connection_reinstatement, 1);
+ 	if (!sleep) {
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 153fb66..345f073 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -699,6 +699,51 @@ static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
+ 		iscsit_start_nopin_timer(conn);
+ }
+ 
++int iscsit_start_kthreads(struct iscsi_conn *conn)
++{
++	int ret = 0;
 +
-+#ifdef ACPI_32BIT_PHYSICAL_ADDRESS
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	conn->bitmap_id = bitmap_find_free_region(iscsit_global->ts_bitmap,
++					ISCSIT_BITMAP_BITS, get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
 +
-+/*
-+ * OSPMs can define this to shrink the size of the structures for 32-bit
-+ * none PAE environment. ASL compiler may always define this to generate
-+ * 32-bit OSPM compliant tables.
-+ */
- typedef u32 acpi_io_address;
- typedef u32 acpi_physical_address;
- 
-+#else				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++	if (conn->bitmap_id < 0) {
++		pr_err("bitmap_find_free_region() failed for"
++		       " iscsit_start_kthreads()\n");
++		return -ENOMEM;
++	}
 +
-+/*
-+ * It is reported that, after some calculations, the physical addresses can
-+ * wrap over the 32-bit boundary on 32-bit PAE environment.
-+ * https://bugzilla.kernel.org/show_bug.cgi?id=87971
-+ */
-+typedef u64 acpi_io_address;
-+typedef u64 acpi_physical_address;
++	conn->tx_thread = kthread_run(iscsi_target_tx_thread, conn,
++				      "%s", ISCSI_TX_THREAD_NAME);
++	if (IS_ERR(conn->tx_thread)) {
++		pr_err("Unable to start iscsi_target_tx_thread\n");
++		ret = PTR_ERR(conn->tx_thread);
++		goto out_bitmap;
++	}
++	conn->tx_thread_active = true;
 +
-+#endif				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++	conn->rx_thread = kthread_run(iscsi_target_rx_thread, conn,
++				      "%s", ISCSI_RX_THREAD_NAME);
++	if (IS_ERR(conn->rx_thread)) {
++		pr_err("Unable to start iscsi_target_rx_thread\n");
++		ret = PTR_ERR(conn->rx_thread);
++		goto out_tx;
++	}
++	conn->rx_thread_active = true;
 +
- #define ACPI_MAX_PTR                    ACPI_UINT32_MAX
- #define ACPI_SIZE_MAX                   ACPI_UINT32_MAX
++	return 0;
++out_tx:
++	kthread_stop(conn->tx_thread);
++	conn->tx_thread_active = false;
++out_bitmap:
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
++	return ret;
++}
++
+ int iscsi_post_login_handler(
+ 	struct iscsi_np *np,
+ 	struct iscsi_conn *conn,
+@@ -709,7 +754,7 @@ int iscsi_post_login_handler(
+ 	struct se_session *se_sess = sess->se_sess;
+ 	struct iscsi_portal_group *tpg = sess->tpg;
+ 	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+-	struct iscsi_thread_set *ts;
++	int rc;
  
-diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h
-index ad74dc5..ecdf940 100644
---- a/include/acpi/platform/acenv.h
-+++ b/include/acpi/platform/acenv.h
-@@ -76,6 +76,7 @@
- #define ACPI_LARGE_NAMESPACE_NODE
- #define ACPI_DATA_TABLE_DISASSEMBLY
- #define ACPI_SINGLE_THREADED
-+#define ACPI_32BIT_PHYSICAL_ADDRESS
- #endif
+ 	iscsit_inc_conn_usage_count(conn);
  
- /* acpi_exec configuration. Multithreaded with full AML debugger */
--- 
-2.3.6
-
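A note on the pattern in the hunks above: the address width is chosen
once, at compile time, by a single configuration define, so every
structure built from the typedef shrinks or grows together. A minimal
standalone sketch of the same idiom, with illustrative names rather
than the ACPICA ones:

    #include <stdint.h>
    #include <stdio.h>

    /* Define NARROW_PHYS_ADDR to shrink structures on 32-bit non-PAE
     * hosts; the 64-bit default stays safe for PAE, where physical
     * addresses can exceed the 32-bit boundary. */
    #ifdef NARROW_PHYS_ADDR
    typedef uint32_t phys_addr_example_t;
    #else
    typedef uint64_t phys_addr_example_t;
    #endif

    int main(void)
    {
        printf("physical address width: %zu bits\n",
               sizeof(phys_addr_example_t) * 8);
        return 0;
    }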
-
-From 5980bf8bc5dbb8e5338a3db6e311539eeb6242da Mon Sep 17 00:00:00 2001
-From: Octavian Purdila <octavian.purdila@intel.com>
-Date: Mon, 13 Apr 2015 11:49:05 +0800
-Subject: [PATCH 154/219] ACPICA: Tables: Don't release ACPI_MTX_TABLES in
- acpi_tb_install_standard_table().
-Cc: mpagano@gentoo.org
-
-commit 77ddc2fe08329e375505bc36a3df3233fe57317b upstream.
-
-ACPICA commit c70434d4da13e65b6163c79a5aa16b40193631c7
-
-ACPI_MTX_TABLES is acquired and released by the callers of
-acpi_tb_install_standard_table(), so releasing it in the function itself
-causes the following error in the Linux kernel if the table is reloaded:
-
-ACPI Error: Mutex [0x2] is not acquired, cannot release (20141107/utmutex-321)
-Call Trace:
-  [<ffffffff81b0bd48>] dump_stack+0x4f/0x7b
-  [<ffffffff81546bf5>] acpi_ut_release_mutex+0x47/0x67
-  [<ffffffff81544357>] acpi_load_table+0x73/0xcb
-
-Link: https://github.com/acpica/acpica/commit/c70434d4
-Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
-Signed-off-by: Lv Zheng <lv.zheng@intel.com>
-Signed-off-by: Bob Moore <robert.moore@intel.com>
-Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/acpi/acpica/tbinstal.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
-index 9bad45e..7fbc2b9 100644
---- a/drivers/acpi/acpica/tbinstal.c
-+++ b/drivers/acpi/acpica/tbinstal.c
-@@ -346,7 +346,6 @@ acpi_tb_install_standard_table(acpi_physical_address address,
- 				 */
- 				acpi_tb_uninstall_table(&new_table_desc);
- 				*table_index = i;
--				(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
- 				return_ACPI_STATUS(AE_OK);
- 			}
- 		}
--- 
-2.3.6
-
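The rule this fix restores is a plain ownership contract: the function
that acquires a lock releases it, and helpers called while it is held
must not drop it on any path. A minimal sketch of that contract, using
illustrative names and pthreads rather than the ACPICA mutex layer:

    #include <pthread.h>

    static pthread_mutex_t table_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Called with table_mutex held; it must still be held on return,
     * including on every early-exit path. */
    static int install_table_locked(int already_loaded)
    {
        if (already_loaded)
            return 0;   /* early return -- lock deliberately kept held */
        /* ... install the table ... */
        return 1;
    }

    int install_table(int already_loaded)
    {
        int ret;

        pthread_mutex_lock(&table_mutex);
        ret = install_table_locked(already_loaded);
        pthread_mutex_unlock(&table_mutex);   /* single release point */
        return ret;
    }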
-
-From afaed716d9f945416e6f0967384714ee3b066020 Mon Sep 17 00:00:00 2001
-From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
-Date: Wed, 15 Apr 2015 04:00:27 +0200
-Subject: [PATCH 155/219] ACPICA: Store GPE register enable masks upfront
-Cc: mpagano@gentoo.org
-
-commit 0ee0d34985ceffe4036319e1e46df8bff591b9e3 upstream.
-
-It is reported that ACPI interrupts do not work any more on
-Dell Latitude D600 after commit c50f13c672df (ACPICA: Save
-current masks of enabled GPEs after enable register writes).
-The problem turns out to be related to the fact that the
-enable_mask and enable_for_run GPE bit masks are not in
-sync (in the absence of any system suspend/resume events)
-for at least one GPE register on that machine.
-
-Address this problem by writing the enable_for_run mask into
-enable_mask as soon as enable_for_run is updated instead of
-doing that only after the subsequent register write has
-succeeded.  For consistency, update acpi_hw_gpe_enable_write()
-to store the bit mask to be written into the GPE register
-in enable_mask unconditionally before the write.
-
-Since the ACPI_GPE_SAVE_MASK flag is not necessary any more after
-that, drop it along with the symbols depending on it.
-
-Reported-and-tested-by: Jim Bos <jim876@xs4all.nl>
-Fixes: c50f13c672df (ACPICA: Save current masks of enabled GPEs after enable register writes)
-Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/acpi/acpica/evgpe.c |  5 +++--
- drivers/acpi/acpica/hwgpe.c | 11 ++++-------
- include/acpi/actypes.h      |  4 ----
- 3 files changed, 7 insertions(+), 13 deletions(-)
-
-diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
-index 5ed064e..ccf7932 100644
---- a/drivers/acpi/acpica/evgpe.c
-+++ b/drivers/acpi/acpica/evgpe.c
-@@ -92,6 +92,7 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
- 		ACPI_SET_BIT(gpe_register_info->enable_for_run,
- 			     (u8)register_bit);
- 	}
-+	gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
- 
- 	return_ACPI_STATUS(AE_OK);
- }
-@@ -123,7 +124,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
- 
- 	/* Enable the requested GPE */
- 
--	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE_SAVE);
-+	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
- 	return_ACPI_STATUS(status);
- }
- 
-@@ -202,7 +203,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
- 		if (ACPI_SUCCESS(status)) {
- 			status =
- 			    acpi_hw_low_set_gpe(gpe_event_info,
--						ACPI_GPE_DISABLE_SAVE);
-+						ACPI_GPE_DISABLE);
- 		}
- 
- 		if (ACPI_FAILURE(status)) {
-diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
-index 84bc550..af6514e 100644
---- a/drivers/acpi/acpica/hwgpe.c
-+++ b/drivers/acpi/acpica/hwgpe.c
-@@ -89,6 +89,8 @@ u32 acpi_hw_get_gpe_register_bit(struct acpi_gpe_event_info *gpe_event_info)
-  * RETURN:	Status
-  *
-  * DESCRIPTION: Enable or disable a single GPE in the parent enable register.
-+ *              The enable_mask field of the involved GPE register must be
-+ *              updated by the caller if necessary.
-  *
-  ******************************************************************************/
- 
-@@ -119,7 +121,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
- 	/* Set or clear just the bit that corresponds to this GPE */
+@@ -724,7 +769,6 @@ int iscsi_post_login_handler(
+ 	/*
+ 	 * SCSI Initiator -> SCSI Target Port Mapping
+ 	 */
+-	ts = iscsi_get_thread_set();
+ 	if (!zero_tsih) {
+ 		iscsi_set_session_parameters(sess->sess_ops,
+ 				conn->param_list, 0);
+@@ -751,9 +795,11 @@ int iscsi_post_login_handler(
+ 			sess->sess_ops->InitiatorName);
+ 		spin_unlock_bh(&sess->conn_lock);
  
- 	register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
--	switch (action & ~ACPI_GPE_SAVE_MASK) {
-+	switch (action) {
- 	case ACPI_GPE_CONDITIONAL_ENABLE:
+-		iscsi_post_login_start_timers(conn);
++		rc = iscsit_start_kthreads(conn);
++		if (rc)
++			return rc;
  
- 		/* Only enable if the corresponding enable_mask bit is set */
-@@ -149,9 +151,6 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
- 	/* Write the updated enable mask */
+-		iscsi_activate_thread_set(conn, ts);
++		iscsi_post_login_start_timers(conn);
+ 		/*
+ 		 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 		 * are scheduled on the same CPU.
+@@ -810,8 +856,11 @@ int iscsi_post_login_handler(
+ 		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
+ 	spin_unlock_bh(&se_tpg->session_lock);
  
- 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
--	if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
--		gpe_register_info->enable_mask = (u8)enable_mask;
--	}
- 	return (status);
- }
++	rc = iscsit_start_kthreads(conn);
++	if (rc)
++		return rc;
++
+ 	iscsi_post_login_start_timers(conn);
+-	iscsi_activate_thread_set(conn, ts);
+ 	/*
+ 	 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 	 * are scheduled on the same CPU.
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 44620fb..cbb0cc2 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -264,40 +264,32 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 	struct se_device *se_dev = cmd->se_dev;
+ 	struct fd_dev *dev = FD_DEV(se_dev);
+ 	struct file *prot_fd = dev->fd_prot_file;
+-	struct scatterlist *sg;
+ 	loff_t pos = (cmd->t_task_lba * se_dev->prot_length);
+ 	unsigned char *buf;
+-	u32 prot_size, len, size;
+-	int rc, ret = 1, i;
++	u32 prot_size;
++	int rc, ret = 1;
  
-@@ -286,10 +285,8 @@ acpi_hw_gpe_enable_write(u8 enable_mask,
- {
- 	acpi_status status;
+ 	prot_size = (cmd->data_length / se_dev->dev_attrib.block_size) *
+ 		     se_dev->prot_length;
  
-+	gpe_register_info->enable_mask = enable_mask;
- 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
--	if (ACPI_SUCCESS(status)) {
--		gpe_register_info->enable_mask = enable_mask;
--	}
- 	return (status);
- }
+ 	if (!is_write) {
+-		fd_prot->prot_buf = vzalloc(prot_size);
++		fd_prot->prot_buf = kzalloc(prot_size, GFP_KERNEL);
+ 		if (!fd_prot->prot_buf) {
+ 			pr_err("Unable to allocate fd_prot->prot_buf\n");
+ 			return -ENOMEM;
+ 		}
+ 		buf = fd_prot->prot_buf;
  
-diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
-index 658c42e..0d58525 100644
---- a/include/acpi/actypes.h
-+++ b/include/acpi/actypes.h
-@@ -756,10 +756,6 @@ typedef u32 acpi_event_status;
- #define ACPI_GPE_ENABLE                 0
- #define ACPI_GPE_DISABLE                1
- #define ACPI_GPE_CONDITIONAL_ENABLE     2
--#define ACPI_GPE_SAVE_MASK              4
+-		fd_prot->prot_sg_nents = cmd->t_prot_nents;
+-		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist) *
+-					   fd_prot->prot_sg_nents, GFP_KERNEL);
++		fd_prot->prot_sg_nents = 1;
++		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist),
++					   GFP_KERNEL);
+ 		if (!fd_prot->prot_sg) {
+ 			pr_err("Unable to allocate fd_prot->prot_sg\n");
+-			vfree(fd_prot->prot_buf);
++			kfree(fd_prot->prot_buf);
+ 			return -ENOMEM;
+ 		}
+-		size = prot_size;
 -
--#define ACPI_GPE_ENABLE_SAVE            (ACPI_GPE_ENABLE | ACPI_GPE_SAVE_MASK)
--#define ACPI_GPE_DISABLE_SAVE           (ACPI_GPE_DISABLE | ACPI_GPE_SAVE_MASK)
- 
- /*
-  * GPE info flags - Per GPE
--- 
-2.3.6
-
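The shape of the fix is easy to miss in the hunks: the cached copy of
the enable mask is now updated unconditionally before the hardware
write, so the cache and the intended register contents can no longer
drift apart. A rough standalone sketch of that ordering, with
illustrative names rather than the ACPICA ones:

    #include <stdint.h>

    struct gpe_reg_example {
        volatile uint8_t *enable_address;  /* mapped enable register */
        uint8_t enable_mask;               /* cached intended value */
    };

    static int hw_write(volatile uint8_t *addr, uint8_t val)
    {
        *addr = val;
        return 0;   /* even if a real write failed, the cache above
                     * already reflects what we intended to program */
    }

    int gpe_enable_write(struct gpe_reg_example *reg, uint8_t mask)
    {
        reg->enable_mask = mask;    /* cache first ... */
        return hw_write(reg->enable_address, mask);  /* ... then write */
    }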
-
-From 7b2f4da529f27b81d06a9c5d49803dc4b1d5eea3 Mon Sep 17 00:00:00 2001
-From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
-Date: Sat, 18 Apr 2015 01:25:46 +0200
-Subject: [PATCH 156/219] ACPI / scan: Annotate physical_node_lock in
- acpi_scan_is_offline()
-Cc: mpagano@gentoo.org
-
-commit 4c533c801d1c9b5c38458a0e7516e0cf50643782 upstream.
-
-acpi_scan_is_offline() may be called under the physical_node_lock
-lock of the given device object's parent, so prevent lockdep from
-complaining about that by annotating that instance with
-SINGLE_DEPTH_NESTING.
-
-Fixes: caa73ea158de (ACPI / hotplug / driver core: Handle containers in a special way)
-Reported-and-tested-by: Xie XiuQi <xiexiuqi@huawei.com>
-Reviewed-by: Toshi Kani <toshi.kani@hp.com>
-Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/acpi/scan.c | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
-index bbca783..349f4fd 100644
---- a/drivers/acpi/scan.c
-+++ b/drivers/acpi/scan.c
-@@ -298,7 +298,11 @@ bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent)
- 	struct acpi_device_physical_node *pn;
- 	bool offline = true;
+-		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
+-
+-			len = min_t(u32, PAGE_SIZE, size);
+-			sg_set_buf(sg, buf, len);
+-			size -= len;
+-			buf += len;
+-		}
++		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
++		sg_set_buf(fd_prot->prot_sg, buf, prot_size);
+ 	}
  
--	mutex_lock(&adev->physical_node_lock);
-+	/*
-+	 * acpi_container_offline() calls this for all of the container's
-+	 * children under the container's physical_node_lock lock.
-+	 */
-+	mutex_lock_nested(&adev->physical_node_lock, SINGLE_DEPTH_NESTING);
+ 	if (is_write) {
+@@ -318,7 +310,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
  
- 	list_for_each_entry(pn, &adev->physical_node_list, node)
- 		if (device_supports_offline(pn->dev) && !pn->dev->offline) {
--- 
-2.3.6
-
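For readers unfamiliar with the annotation: lockdep groups every lock
initialised at the same site into one class, so taking a child's
physical_node_lock while the parent's is held looks like recursive
locking. mutex_lock_nested() declares this single level of parent/child
nesting as intentional. A kernel-style sketch of the idiom, with an
illustrative structure rather than the ACPI one:

    #include <linux/mutex.h>

    struct node_example {
        struct mutex lock;
        struct node_example *parent;
    };

    static void inspect_child(struct node_example *child)
    {
        /* The parent's lock, of the same lock class, may already be
         * held; annotate this as one deliberate level of nesting so
         * lockdep does not flag it as recursion. */
        mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);
        /* ... examine child state ... */
        mutex_unlock(&child->lock);
    }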
-
-From 042741ecc3287d365daab83a5fd287aee607ea32 Mon Sep 17 00:00:00 2001
-From: Max Filippov <jcmvbkbc@gmail.com>
-Date: Fri, 27 Feb 2015 06:28:00 +0300
-Subject: [PATCH 157/219] xtensa: xtfpga: fix hardware lockup caused by LCD
- driver
-Cc: mpagano@gentoo.org
-
-commit 4949009eb8d40a441dcddcd96e101e77d31cf1b2 upstream.
-
-The LCD driver is always built for the XTFPGA platform, but its base
-address is not configurable and is wrong for ML605/KC705. Its
-initialization locks up the KC705 board hardware.
-
-Make the whole driver optional, and its base address and bus width
-configurable. Implement the 4-bit bus access method.
-
-Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/xtensa/Kconfig                                | 30 ++++++++++++
- arch/xtensa/platforms/xtfpga/Makefile              |  3 +-
- .../platforms/xtfpga/include/platform/hardware.h   |  3 --
- .../xtensa/platforms/xtfpga/include/platform/lcd.h | 15 ++++++
- arch/xtensa/platforms/xtfpga/lcd.c                 | 55 +++++++++++++---------
- 5 files changed, 81 insertions(+), 25 deletions(-)
-
-diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
-index e31d494..87be10e 100644
---- a/arch/xtensa/Kconfig
-+++ b/arch/xtensa/Kconfig
-@@ -428,6 +428,36 @@ config DEFAULT_MEM_SIZE
+ 	if (is_write || ret < 0) {
+ 		kfree(fd_prot->prot_sg);
+-		vfree(fd_prot->prot_buf);
++		kfree(fd_prot->prot_buf);
+ 	}
  
- 	  If unsure, leave the default value here.
+ 	return ret;
+@@ -549,6 +541,56 @@ fd_execute_write_same(struct se_cmd *cmd)
+ 	return 0;
+ }
  
-+config XTFPGA_LCD
-+	bool "Enable XTFPGA LCD driver"
-+	depends on XTENSA_PLATFORM_XTFPGA
-+	default n
-+	help
-+	  There's a 2x16 LCD on most of XTFPGA boards, kernel may output
-+	  progress messages there during bootup/shutdown. It may be useful
-+	  during board bringup.
++static int
++fd_do_prot_fill(struct se_device *se_dev, sector_t lba, sector_t nolb,
++		void *buf, size_t bufsize)
++{
++	struct fd_dev *fd_dev = FD_DEV(se_dev);
++	struct file *prot_fd = fd_dev->fd_prot_file;
++	sector_t prot_length, prot;
++	loff_t pos = lba * se_dev->prot_length;
 +
-+	  If unsure, say N.
++	if (!prot_fd) {
++		pr_err("Unable to locate fd_dev->fd_prot_file\n");
++		return -ENODEV;
++	}
 +
-+config XTFPGA_LCD_BASE_ADDR
-+	hex "XTFPGA LCD base address"
-+	depends on XTFPGA_LCD
-+	default "0x0d0c0000"
-+	help
-+	  Base address of the LCD controller inside KIO region.
-+	  Different boards from XTFPGA family have LCD controller at different
-+	  addresses. Please consult prototyping user guide for your board for
-+	  the correct address. Wrong address here may lead to hardware lockup.
++	prot_length = nolb * se_dev->prot_length;
 +
-+config XTFPGA_LCD_8BIT_ACCESS
-+	bool "Use 8-bit access to XTFPGA LCD"
-+	depends on XTFPGA_LCD
-+	default n
-+	help
-+	  LCD may be connected with 4- or 8-bit interface, 8-bit access may
-+	  only be used with 8-bit interface. Please consult prototyping user
-+	  guide for your board for the correct interface width.
++	for (prot = 0; prot < prot_length;) {
++		sector_t len = min_t(sector_t, bufsize, prot_length - prot);
++		ssize_t ret = kernel_write(prot_fd, buf, len, pos + prot);
 +
- endmenu
- 
- menu "Executable file formats"
-diff --git a/arch/xtensa/platforms/xtfpga/Makefile b/arch/xtensa/platforms/xtfpga/Makefile
-index b9ae206..7839d38 100644
---- a/arch/xtensa/platforms/xtfpga/Makefile
-+++ b/arch/xtensa/platforms/xtfpga/Makefile
-@@ -6,4 +6,5 @@
- #
- # Note 2! The CFLAGS definitions are in the main makefile...
- 
--obj-y			= setup.o lcd.o
-+obj-y			+= setup.o
-+obj-$(CONFIG_XTFPGA_LCD) += lcd.o
-diff --git a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
-index 6edd20b..4e0af26 100644
---- a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
-+++ b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
-@@ -40,9 +40,6 @@
- 
- /* UART */
- #define DUART16552_PADDR	(XCHAL_KIO_PADDR + 0x0D050020)
--/* LCD instruction and data addresses. */
--#define LCD_INSTR_ADDR		((char *)IOADDR(0x0D040000))
--#define LCD_DATA_ADDR		((char *)IOADDR(0x0D040004))
- 
- /* Misc. */
- #define XTFPGA_FPGAREGS_VADDR	IOADDR(0x0D020000)
-diff --git a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
-index 0e43564..4c8541e 100644
---- a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
-+++ b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
-@@ -11,10 +11,25 @@
- #ifndef __XTENSA_XTAVNET_LCD_H
- #define __XTENSA_XTAVNET_LCD_H
- 
-+#ifdef CONFIG_XTFPGA_LCD
- /* Display string STR at position POS on the LCD. */
- void lcd_disp_at_pos(char *str, unsigned char pos);
- 
- /* Shift the contents of the LCD display left or right. */
- void lcd_shiftleft(void);
- void lcd_shiftright(void);
-+#else
-+static inline void lcd_disp_at_pos(char *str, unsigned char pos)
-+{
-+}
++		if (ret != len) {
++			pr_err("vfs_write to prot file failed: %zd\n", ret);
++			return ret < 0 ? ret : -ENODEV;
++		}
++		prot += ret;
++	}
 +
-+static inline void lcd_shiftleft(void)
-+{
++	return 0;
 +}
 +
-+static inline void lcd_shiftright(void)
++static int
++fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 +{
++	void *buf;
++	int rc;
++
++	buf = (void *)__get_free_page(GFP_KERNEL);
++	if (!buf) {
++		pr_err("Unable to allocate FILEIO prot buf\n");
++		return -ENOMEM;
++	}
++	memset(buf, 0xff, PAGE_SIZE);
++
++	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
++
++	free_page((unsigned long)buf);
++
++	return rc;
 +}
-+#endif
 +
- #endif
-diff --git a/arch/xtensa/platforms/xtfpga/lcd.c b/arch/xtensa/platforms/xtfpga/lcd.c
-index 2872301..4dc0c1b 100644
---- a/arch/xtensa/platforms/xtfpga/lcd.c
-+++ b/arch/xtensa/platforms/xtfpga/lcd.c
-@@ -1,50 +1,63 @@
- /*
-- * Driver for the LCD display on the Tensilica LX60 Board.
-+ * Driver for the LCD display on the Tensilica XTFPGA board family.
-+ * http://www.mytechcorp.com/cfdata/productFile/File1/MOC-16216B-B-A0A04.pdf
-  *
-  * This file is subject to the terms and conditions of the GNU General Public
-  * License.  See the file "COPYING" in the main directory of this archive
-  * for more details.
-  *
-  * Copyright (C) 2001, 2006 Tensilica Inc.
-+ * Copyright (C) 2015 Cadence Design Systems Inc.
-  */
- 
--/*
-- *
-- * FIXME: this code is from the examples from the LX60 user guide.
-- *
-- * The lcd_pause function does busy waiting, which is probably not
-- * great. Maybe the code could be changed to use kernel timers, or
-- * change the hardware to not need to wait.
-- */
--
-+#include <linux/delay.h>
- #include <linux/init.h>
- #include <linux/io.h>
- 
- #include <platform/hardware.h>
- #include <platform/lcd.h>
--#include <linux/delay.h>
+ static sense_reason_t
+ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ {
+@@ -556,6 +598,12 @@ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ 	struct inode *inode = file->f_mapping->host;
+ 	int ret;
  
--#define LCD_PAUSE_ITERATIONS	4000
-+/* LCD instruction and data addresses. */
-+#define LCD_INSTR_ADDR		((char *)IOADDR(CONFIG_XTFPGA_LCD_BASE_ADDR))
-+#define LCD_DATA_ADDR		(LCD_INSTR_ADDR + 4)
++	if (cmd->se_dev->dev_attrib.pi_prot_type) {
++		ret = fd_do_prot_unmap(cmd, lba, nolb);
++		if (ret)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	}
 +
- #define LCD_CLEAR		0x1
- #define LCD_DISPLAY_ON		0xc
+ 	if (S_ISBLK(inode->i_mode)) {
+ 		/* The backend is block device, use discard */
+ 		struct block_device *bdev = inode->i_bdev;
+@@ -658,11 +706,11 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						 0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 			kfree(fd_prot.prot_sg);
+-			vfree(fd_prot.prot_buf);
++			kfree(fd_prot.prot_buf);
+ 		}
+ 	} else {
+ 		memset(&fd_prot, 0, sizeof(struct fd_prot));
+@@ -678,7 +726,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						  0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 		}
+@@ -714,7 +762,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
  
- /* 8bit and 2 lines display */
- #define LCD_DISPLAY_MODE8BIT	0x38
-+#define LCD_DISPLAY_MODE4BIT	0x28
- #define LCD_DISPLAY_POS		0x80
- #define LCD_SHIFT_LEFT		0x18
- #define LCD_SHIFT_RIGHT		0x1c
+ 	if (ret < 0) {
+ 		kfree(fd_prot.prot_sg);
+-		vfree(fd_prot.prot_buf);
++		kfree(fd_prot.prot_buf);
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
  
-+static void lcd_put_byte(u8 *addr, u8 data)
-+{
-+#ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS
-+	ACCESS_ONCE(*addr) = data;
-+#else
-+	ACCESS_ONCE(*addr) = data & 0xf0;
-+	ACCESS_ONCE(*addr) = (data << 4) & 0xf0;
-+#endif
-+}
-+
- static int __init lcd_init(void)
- {
--	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
-+	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
- 	mdelay(5);
--	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
-+	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
- 	udelay(200);
--	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
-+	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
-+	udelay(50);
-+#ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS
-+	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE4BIT;
-+	udelay(50);
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
- 	udelay(50);
--	*LCD_INSTR_ADDR = LCD_DISPLAY_ON;
-+#endif
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_ON);
- 	udelay(50);
--	*LCD_INSTR_ADDR = LCD_CLEAR;
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_CLEAR);
- 	mdelay(10);
- 	lcd_disp_at_pos("XTENSA LINUX", 0);
- 	return 0;
-@@ -52,10 +65,10 @@ static int __init lcd_init(void)
+@@ -878,48 +926,28 @@ static int fd_init_prot(struct se_device *dev)
  
- void lcd_disp_at_pos(char *str, unsigned char pos)
+ static int fd_format_prot(struct se_device *dev)
  {
--	*LCD_INSTR_ADDR = LCD_DISPLAY_POS | pos;
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_POS | pos);
- 	udelay(100);
- 	while (*str != 0) {
--		*LCD_DATA_ADDR = *str;
-+		lcd_put_byte(LCD_DATA_ADDR, *str);
- 		udelay(200);
- 		str++;
+-	struct fd_dev *fd_dev = FD_DEV(dev);
+-	struct file *prot_fd = fd_dev->fd_prot_file;
+-	sector_t prot_length, prot;
+ 	unsigned char *buf;
+-	loff_t pos = 0;
+ 	int unit_size = FDBD_FORMAT_UNIT_SIZE * dev->dev_attrib.block_size;
+-	int rc, ret = 0, size, len;
++	int ret;
+ 
+ 	if (!dev->dev_attrib.pi_prot_type) {
+ 		pr_err("Unable to format_prot while pi_prot_type == 0\n");
+ 		return -ENODEV;
  	}
-@@ -63,13 +76,13 @@ void lcd_disp_at_pos(char *str, unsigned char pos)
+-	if (!prot_fd) {
+-		pr_err("Unable to locate fd_dev->fd_prot_file\n");
+-		return -ENODEV;
+-	}
  
- void lcd_shiftleft(void)
- {
--	*LCD_INSTR_ADDR = LCD_SHIFT_LEFT;
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_LEFT);
- 	udelay(50);
+ 	buf = vzalloc(unit_size);
+ 	if (!buf) {
+ 		pr_err("Unable to allocate FILEIO prot buf\n");
+ 		return -ENOMEM;
+ 	}
+-	prot_length = (dev->transport->get_blocks(dev) + 1) * dev->prot_length;
+-	size = prot_length;
+ 
+ 	pr_debug("Using FILEIO prot_length: %llu\n",
+-		 (unsigned long long)prot_length);
++		 (unsigned long long)(dev->transport->get_blocks(dev) + 1) *
++					dev->prot_length);
+ 
+ 	memset(buf, 0xff, unit_size);
+-	for (prot = 0; prot < prot_length; prot += unit_size) {
+-		len = min(unit_size, size);
+-		rc = kernel_write(prot_fd, buf, len, pos);
+-		if (rc != len) {
+-			pr_err("vfs_write to prot file failed: %d\n", rc);
+-			ret = -ENODEV;
+-			goto out;
+-		}
+-		pos += len;
+-		size -= len;
+-	}
+-
+-out:
++	ret = fd_do_prot_fill(dev, 0, dev->transport->get_blocks(dev) + 1,
++			      buf, unit_size);
+ 	vfree(buf);
+ 	return ret;
+ }
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index 3e72974..755bd9b3 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -312,7 +312,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	return 0;
  }
  
- void lcd_shiftright(void)
+-static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd)
++static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success)
  {
--	*LCD_INSTR_ADDR = LCD_SHIFT_RIGHT;
-+	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_RIGHT);
- 	udelay(50);
+ 	unsigned char *buf, *addr;
+ 	struct scatterlist *sg;
+@@ -376,7 +376,7 @@ sbc_execute_rw(struct se_cmd *cmd)
+ 			       cmd->data_direction);
  }
  
--- 
-2.3.6
-
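The 4-bit access method added above follows the usual convention for
character LCD controllers of this type: with only the upper four data
lines wired, each byte is sent as two strobes, high nibble first, both
presented on bits 7..4. A standalone sketch of the split, with an
illustrative register write:

    #include <stdint.h>

    static void bus_strobe(volatile uint8_t *reg, uint8_t val)
    {
        *reg = val;   /* one write cycle on the 4-bit bus */
    }

    void lcd_write_byte_4bit(volatile uint8_t *reg, uint8_t data)
    {
        bus_strobe(reg, data & 0xf0);          /* high nibble on D7..D4 */
        bus_strobe(reg, (data << 4) & 0xf0);   /* low nibble, shifted up */
    }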
-
-From 3d421b4703e664742e5f8b80c8f61d64d6435fa2 Mon Sep 17 00:00:00 2001
-From: Max Filippov <jcmvbkbc@gmail.com>
-Date: Fri, 27 Feb 2015 11:02:38 +0300
-Subject: [PATCH 158/219] xtensa: provide __NR_sync_file_range2 instead of
- __NR_sync_file_range
-Cc: mpagano@gentoo.org
-
-commit 01e84c70fe40c8111f960987bcf7f931842e6d07 upstream.
-
-xtensa actually uses the sync_file_range2 implementation, so it should
-define __NR_sync_file_range2 as other architectures that use that
-function do. That fixes the userspace interface (which apparently never
-worked) and avoids special-casing xtensa in libc implementations.
-See the thread ending at
-http://lists.busybox.net/pipermail/uclibc/2015-February/048833.html
-for more details.
-
-Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/xtensa/include/uapi/asm/unistd.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/xtensa/include/uapi/asm/unistd.h b/arch/xtensa/include/uapi/asm/unistd.h
-index db5bb72..62d8465 100644
---- a/arch/xtensa/include/uapi/asm/unistd.h
-+++ b/arch/xtensa/include/uapi/asm/unistd.h
-@@ -715,7 +715,7 @@ __SYSCALL(323, sys_process_vm_writev, 6)
- __SYSCALL(324, sys_name_to_handle_at, 5)
- #define __NR_open_by_handle_at			325
- __SYSCALL(325, sys_open_by_handle_at, 3)
--#define __NR_sync_file_range			326
-+#define __NR_sync_file_range2			326
- __SYSCALL(326, sys_sync_file_range2, 6)
- #define __NR_perf_event_open			327
- __SYSCALL(327, sys_perf_event_open, 5)
--- 
-2.3.6
-
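From userspace the rename matters because libc wrappers and raw
syscall(2) users key off which __NR_* macro the kernel headers define.
A hedged sketch of a portability shim that picks whichever entry point
the architecture provides (error handling elided):

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* sync_file_range2 takes flags as the second argument so that the
     * two 64-bit values sit in aligned register pairs on some 32-bit
     * ABIs; plain sync_file_range passes flags last. */
    static long do_sync_range(int fd, unsigned int flags,
                              int64_t off, int64_t nbytes)
    {
    #if defined(__NR_sync_file_range2)
        return syscall(__NR_sync_file_range2, fd, flags, off, nbytes);
    #elif defined(__NR_sync_file_range)
        return syscall(__NR_sync_file_range, fd, off, nbytes, flags);
    #else
        return -1;   /* not available on this architecture */
    #endif
    }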
-
-From 63c94a9787fee217938e65b3e11bed2b7179481f Mon Sep 17 00:00:00 2001
-From: Max Filippov <jcmvbkbc@gmail.com>
-Date: Fri, 3 Apr 2015 09:56:21 +0300
-Subject: [PATCH 159/219] xtensa: ISS: fix locking in TAP network adapter
-Cc: mpagano@gentoo.org
-
-commit 24e94454c8cb6a13634f5a2f5a01da53a546a58d upstream.
-
-- don't lock lp->lock in the iss_net_timer for the call of iss_net_poll,
-  it will lock it itself;
-- invert order of lp->lock and opened_lock acquisition in the
-  iss_net_open to make it consistent with iss_net_poll;
-- replace spin_lock with spin_lock_bh when acquiring locks used in
-  iss_net_timer from non-atomic context;
-- replace spin_lock_irqsave with spin_lock_bh in the iss_net_start_xmit
-  as the driver doesn't use lp->lock in the hard IRQ context;
-- replace __SPIN_LOCK_UNLOCKED(lp.lock) with spin_lock_init, otherwise
-  lockdep is unhappy about using a non-static key.
-
-Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/xtensa/platforms/iss/network.c | 29 +++++++++++++++--------------
- 1 file changed, 15 insertions(+), 14 deletions(-)
-
-diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
-index d05f8fe..17b1ef3 100644
---- a/arch/xtensa/platforms/iss/network.c
-+++ b/arch/xtensa/platforms/iss/network.c
-@@ -349,8 +349,8 @@ static void iss_net_timer(unsigned long priv)
+-static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success)
  {
- 	struct iss_net_private *lp = (struct iss_net_private *)priv;
- 
--	spin_lock(&lp->lock);
- 	iss_net_poll();
-+	spin_lock(&lp->lock);
- 	mod_timer(&lp->timer, jiffies + lp->timer_val);
- 	spin_unlock(&lp->lock);
- }
-@@ -361,7 +361,7 @@ static int iss_net_open(struct net_device *dev)
- 	struct iss_net_private *lp = netdev_priv(dev);
- 	int err;
- 
--	spin_lock(&lp->lock);
-+	spin_lock_bh(&lp->lock);
- 
- 	err = lp->tp.open(lp);
- 	if (err < 0)
-@@ -376,9 +376,11 @@ static int iss_net_open(struct net_device *dev)
- 	while ((err = iss_net_rx(dev)) > 0)
- 		;
- 
--	spin_lock(&opened_lock);
-+	spin_unlock_bh(&lp->lock);
-+	spin_lock_bh(&opened_lock);
- 	list_add(&lp->opened_list, &opened);
--	spin_unlock(&opened_lock);
-+	spin_unlock_bh(&opened_lock);
-+	spin_lock_bh(&lp->lock);
- 
- 	init_timer(&lp->timer);
- 	lp->timer_val = ISS_NET_TIMER_VALUE;
-@@ -387,7 +389,7 @@ static int iss_net_open(struct net_device *dev)
- 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 	struct se_device *dev = cmd->se_dev;
  
- out:
--	spin_unlock(&lp->lock);
-+	spin_unlock_bh(&lp->lock);
- 	return err;
+@@ -399,7 +399,7 @@ static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
+ 	return TCM_NO_SENSE;
  }
  
-@@ -395,7 +397,7 @@ static int iss_net_close(struct net_device *dev)
+-static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success)
  {
- 	struct iss_net_private *lp = netdev_priv(dev);
- 	netif_stop_queue(dev);
--	spin_lock(&lp->lock);
-+	spin_lock_bh(&lp->lock);
+ 	struct se_device *dev = cmd->se_dev;
+ 	struct scatterlist *write_sg = NULL, *sg;
+@@ -414,11 +414,16 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
  
- 	spin_lock(&opened_lock);
- 	list_del(&opened);
-@@ -405,18 +407,17 @@ static int iss_net_close(struct net_device *dev)
+ 	/*
+ 	 * Handle early failure in transport_generic_request_failure(),
+-	 * which will not have taken ->caw_mutex yet..
++	 * which will not have taken ->caw_sem yet..
+ 	 */
+-	if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
++	if (!success && (!cmd->t_data_sg || !cmd->t_bidi_data_sg))
+ 		return TCM_NO_SENSE;
+ 	/*
++	 * Handle special case for zero-length COMPARE_AND_WRITE
++	 */
++	if (!cmd->data_length)
++		goto out;
++	/*
+ 	 * Immediately exit + release dev->caw_sem if command has already
+ 	 * been failed with a non-zero SCSI status.
+ 	 */
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ac3cbab..f786de0 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -1615,11 +1615,11 @@ void transport_generic_request_failure(struct se_cmd *cmd,
+ 	transport_complete_task_attr(cmd);
+ 	/*
+ 	 * Handle special case for COMPARE_AND_WRITE failure, where the
+-	 * callback is expected to drop the per device ->caw_mutex.
++	 * callback is expected to drop the per device ->caw_sem.
+ 	 */
+ 	if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
+ 	     cmd->transport_complete_callback)
+-		cmd->transport_complete_callback(cmd);
++		cmd->transport_complete_callback(cmd, false);
  
- 	lp->tp.close(lp);
+ 	switch (sense_reason) {
+ 	case TCM_NON_EXISTENT_LUN:
+@@ -1975,8 +1975,12 @@ static void target_complete_ok_work(struct work_struct *work)
+ 	if (cmd->transport_complete_callback) {
+ 		sense_reason_t rc;
  
--	spin_unlock(&lp->lock);
-+	spin_unlock_bh(&lp->lock);
- 	return 0;
- }
+-		rc = cmd->transport_complete_callback(cmd);
++		rc = cmd->transport_complete_callback(cmd, true);
+ 		if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) {
++			if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++			    !cmd->data_length)
++				goto queue_rsp;
++
+ 			return;
+ 		} else if (rc) {
+ 			ret = transport_send_check_condition_and_sense(cmd,
+@@ -1990,6 +1994,7 @@ static void target_complete_ok_work(struct work_struct *work)
+ 		}
+ 	}
  
- static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
++queue_rsp:
+ 	switch (cmd->data_direction) {
+ 	case DMA_FROM_DEVICE:
+ 		spin_lock(&cmd->se_lun->lun_sep_lock);
+@@ -2094,6 +2099,16 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
+ static inline void transport_free_pages(struct se_cmd *cmd)
  {
- 	struct iss_net_private *lp = netdev_priv(dev);
--	unsigned long flags;
- 	int len;
+ 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
++		/*
++		 * Release special case READ buffer payload required for
++		 * SG_TO_MEM_NOALLOC to function with COMPARE_AND_WRITE
++		 */
++		if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) {
++			transport_free_sgl(cmd->t_bidi_data_sg,
++					   cmd->t_bidi_data_nents);
++			cmd->t_bidi_data_sg = NULL;
++			cmd->t_bidi_data_nents = 0;
++		}
+ 		transport_reset_sgl_orig(cmd);
+ 		return;
+ 	}
+@@ -2246,6 +2261,7 @@ sense_reason_t
+ transport_generic_new_cmd(struct se_cmd *cmd)
+ {
+ 	int ret = 0;
++	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
  
- 	netif_stop_queue(dev);
--	spin_lock_irqsave(&lp->lock, flags);
-+	spin_lock_bh(&lp->lock);
+ 	/*
+ 	 * Determine is the TCM fabric module has already allocated physical
+@@ -2254,7 +2270,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 	 */
+ 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) &&
+ 	    cmd->data_length) {
+-		bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
  
- 	len = lp->tp.write(lp, &skb);
+ 		if ((cmd->se_cmd_flags & SCF_BIDI) ||
+ 		    (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)) {
+@@ -2285,6 +2300,20 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 				       cmd->data_length, zero_flag);
+ 		if (ret < 0)
+ 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	} else if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++		    cmd->data_length) {
++		/*
++		 * Special case for COMPARE_AND_WRITE with fabrics
++		 * using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
++		 */
++		u32 caw_length = cmd->t_task_nolb *
++				 cmd->se_dev->dev_attrib.block_size;
++
++		ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
++				       &cmd->t_bidi_data_nents,
++				       caw_length, zero_flag);
++		if (ret < 0)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
+ 	/*
+ 	 * If this command is not a write we can execute it right here,
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index deae122..d465ace 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -3444,7 +3444,8 @@ void serial8250_suspend_port(int line)
+ 	    port->type != PORT_8250) {
+ 		unsigned char canary = 0xa5;
+ 		serial_out(up, UART_SCR, canary);
+-		up->canary = canary;
++		if (serial_in(up, UART_SCR) == canary)
++			up->canary = canary;
+ 	}
  
-@@ -438,7 +439,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
- 		pr_err("%s: %s failed(%d)\n", dev->name, __func__, len);
+ 	uart_suspend_port(&serial8250_reg, port);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 6ae5b85..7a80250 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -629,6 +629,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "80860F0A", 0 },
+ 	{ "8086228A", 0 },
+ 	{ "APMC0D08", 0},
++	{ "AMD0020", 0 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 0eb29b1..2306191 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -818,7 +818,7 @@ static irqreturn_t imx_int(int irq, void *dev_id)
+ 	if (sts2 & USR2_ORE) {
+ 		dev_err(sport->port.dev, "Rx FIFO overrun\n");
+ 		sport->port.icount.overrun++;
+-		writel(sts2 | USR2_ORE, sport->port.membase + USR2);
++		writel(USR2_ORE, sport->port.membase + USR2);
  	}
  
--	spin_unlock_irqrestore(&lp->lock, flags);
-+	spin_unlock_bh(&lp->lock);
+ 	return IRQ_HANDLED;
+@@ -1181,10 +1181,12 @@ static int imx_startup(struct uart_port *port)
+ 		imx_uart_dma_init(sport);
  
- 	dev_kfree_skb(skb);
- 	return NETDEV_TX_OK;
-@@ -466,9 +467,9 @@ static int iss_net_set_mac(struct net_device *dev, void *addr)
+ 	spin_lock_irqsave(&sport->port.lock, flags);
++
+ 	/*
+ 	 * Finally, clear and enable interrupts
+ 	 */
+ 	writel(USR1_RTSD, sport->port.membase + USR1);
++	writel(USR2_ORE, sport->port.membase + USR2);
  
- 	if (!is_valid_ether_addr(hwaddr->sa_data))
- 		return -EADDRNOTAVAIL;
--	spin_lock(&lp->lock);
-+	spin_lock_bh(&lp->lock);
- 	memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
--	spin_unlock(&lp->lock);
-+	spin_unlock_bh(&lp->lock);
- 	return 0;
- }
- 
-@@ -520,11 +521,11 @@ static int iss_net_configure(int index, char *init)
- 	*lp = (struct iss_net_private) {
- 		.device_list		= LIST_HEAD_INIT(lp->device_list),
- 		.opened_list		= LIST_HEAD_INIT(lp->opened_list),
--		.lock			= __SPIN_LOCK_UNLOCKED(lp.lock),
- 		.dev			= dev,
- 		.index			= index,
--		};
-+	};
- 
-+	spin_lock_init(&lp->lock);
- 	/*
- 	 * If this name ends up conflicting with an existing registered
- 	 * netdevice, that is OK, register_netdev{,ice}() will notice this
--- 
-2.3.6
-
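The rule of thumb behind these conversions: when a lock is also taken
from a timer callback (softirq context), every process-context
acquisition must disable bottom halves, or the timer can preempt a
holder on the same CPU and deadlock; the _irqsave variants are only
needed if hard-IRQ handlers take the lock as well, which this driver's
do not. A kernel-style sketch of the pairing, with illustrative names:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    /* Runs in softirq context, so a plain spin_lock is enough here. */
    static void example_timer_fn(unsigned long data)
    {
        spin_lock(&example_lock);
        /* ... poll the device, rearm the timer ... */
        spin_unlock(&example_lock);
    }

    /* Runs in process context; block the timer on this CPU while the
     * lock is held by disabling bottom halves. */
    static void example_open_path(void)
    {
        spin_lock_bh(&example_lock);
        /* ... setup that races with the timer ... */
        spin_unlock_bh(&example_lock);
    }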
-
-From 6d4724e609d9640755996c9dc8f3f4ee79790957 Mon Sep 17 00:00:00 2001
-From: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Date: Thu, 2 Apr 2015 17:11:11 +0200
-Subject: [PATCH 160/219] gpio: mvebu: Fix mask/unmask management per irq chip
- type
-Cc: mpagano@gentoo.org
-
-commit 61819549f572edd7fce53f228c0d8420cdc85f71 upstream.
-
-Level IRQ handlers and edge IRQ handlers are managed by two different
-sets of registers. But currently the driver uses the same mask for
-both registers. It leads to issues with the following scenario:
-
-First, an IRQ is requested on a GPIO to be triggered on edge. After
-this, another IRQ is requested for a GPIO of the same bank but
-triggered on level. Then the first one will also be set up to be
-triggered on level. It leads to an interrupt storm.
-
-The different kinds of handlers are already associated with two
-different irq chip types. With this patch the driver uses a private
-mask for each one, which solves this issue.
-
-It has been tested on an Armada XP based board and on an Armada 375
-board. For both boards, with this patch applied, there is no such
-interrupt storm when running the previous scenario.
-
-This bug was already fixed but in a different way in the legacy
-version of this driver by Evgeniy Dushistov:
-9ece8839b1277fb9128ff6833411614ab6c88d68 "ARM: orion: Fix for certain
-sequence of request_irq can cause irq storm". The fact that the new
-version of the gpio driver could be affected had been discussed there:
-http://thread.gmane.org/gmane.linux.ports.arm.kernel/344670/focus=364012
-
-Reported-by: Evgeniy A. Dushistov <dushistov@mail.ru>
-Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpio/gpio-mvebu.c | 24 ++++++++++++++++--------
- 1 file changed, 16 insertions(+), 8 deletions(-)
-
-diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
-index d0bc123..1a54205 100644
---- a/drivers/gpio/gpio-mvebu.c
-+++ b/drivers/gpio/gpio-mvebu.c
-@@ -320,11 +320,13 @@ static void mvebu_gpio_edge_irq_mask(struct irq_data *d)
- {
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct mvebu_gpio_chip *mvchip = gc->private;
-+	struct irq_chip_type *ct = irq_data_get_chip_type(d);
- 	u32 mask = 1 << (d->irq - gc->irq_base);
- 
- 	irq_gc_lock(gc);
--	gc->mask_cache &= ~mask;
--	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
-+	ct->mask_cache_priv &= ~mask;
-+
-+	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
- 	irq_gc_unlock(gc);
- }
- 
-@@ -332,11 +334,13 @@ static void mvebu_gpio_edge_irq_unmask(struct irq_data *d)
- {
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct mvebu_gpio_chip *mvchip = gc->private;
-+	struct irq_chip_type *ct = irq_data_get_chip_type(d);
-+
- 	u32 mask = 1 << (d->irq - gc->irq_base);
- 
- 	irq_gc_lock(gc);
--	gc->mask_cache |= mask;
--	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
-+	ct->mask_cache_priv |= mask;
-+	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
- 	irq_gc_unlock(gc);
- }
- 
-@@ -344,11 +348,13 @@ static void mvebu_gpio_level_irq_mask(struct irq_data *d)
- {
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct mvebu_gpio_chip *mvchip = gc->private;
-+	struct irq_chip_type *ct = irq_data_get_chip_type(d);
-+
- 	u32 mask = 1 << (d->irq - gc->irq_base);
- 
- 	irq_gc_lock(gc);
--	gc->mask_cache &= ~mask;
--	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
-+	ct->mask_cache_priv &= ~mask;
-+	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
- 	irq_gc_unlock(gc);
- }
- 
-@@ -356,11 +362,13 @@ static void mvebu_gpio_level_irq_unmask(struct irq_data *d)
- {
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct mvebu_gpio_chip *mvchip = gc->private;
-+	struct irq_chip_type *ct = irq_data_get_chip_type(d);
-+
- 	u32 mask = 1 << (d->irq - gc->irq_base);
- 
- 	irq_gc_lock(gc);
--	gc->mask_cache |= mask;
--	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
-+	ct->mask_cache_priv |= mask;
-+	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
- 	irq_gc_unlock(gc);
- }
- 
--- 
-2.3.6
-
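The general pattern: a generic irq chip keeps one shared gc->mask_cache,
which is only correct while every chip type funnels into the same mask
register. With separate edge and level mask registers, each
irq_chip_type needs its own cached copy, which is what mask_cache_priv
provides. A condensed kernel-style sketch of the fixed shape, with the
register write reduced to a comment:

    #include <linux/irq.h>

    static void example_level_irq_mask(struct irq_data *d)
    {
        struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
        struct irq_chip_type *ct = irq_data_get_chip_type(d);
        u32 mask = 1 << (d->irq - gc->irq_base);

        irq_gc_lock(gc);
        ct->mask_cache_priv &= ~mask;   /* per-type cache, not gc's */
        /* ... write ct->mask_cache_priv to this type's own register ... */
        irq_gc_unlock(gc);
    }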
-
-From fb8e85723598714f519a827184910324690e2896 Mon Sep 17 00:00:00 2001
-From: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
-Date: Fri, 27 Mar 2015 17:27:10 +0100
-Subject: [PATCH 161/219] clk: samsung: exynos4: Disable ARMCLK down feature on
- Exynos4210 SoC
-Cc: mpagano@gentoo.org
-
-commit 3a9e9cb65be84d6c64fbe9c69a73c15d59f29454 upstream.
-
-Commit 42773b28e71d ("clk: samsung: exynos4: Enable ARMCLK
-down feature") enabled ARMCLK down feature on all Exynos4
-SoCs.  Unfortunately on Exynos4210 SoC ARMCLK down feature
-causes a lockup when ondemand cpufreq governor is used.
-Fix it by limiting ARMCLK down feature to Exynos4x12 SoCs.
-
-This patch was tested on:
-- Exynos4210 SoC based Trats board
-- Exynos4210 SoC based Origen board
-- Exynos4412 SoC based Trats2 board
-- Exynos4412 SoC based Odroid-U3 board
-
-Cc: Daniel Drake <drake@endlessm.com>
-Cc: Tomasz Figa <t.figa@samsung.com>
-Cc: Kukjin Kim <kgene@kernel.org>
-Fixes: 42773b28e71d ("clk: samsung: exynos4: Enable ARMCLK down feature")
-Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
-Signed-off-by: Michael Turquette <mturquette@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/samsung/clk-exynos4.c | 11 +++++------
- 1 file changed, 5 insertions(+), 6 deletions(-)
-
-diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
-index 51462e8..714d6ba 100644
---- a/drivers/clk/samsung/clk-exynos4.c
-+++ b/drivers/clk/samsung/clk-exynos4.c
-@@ -1354,7 +1354,7 @@ static struct samsung_pll_clock exynos4x12_plls[nr_plls] __initdata = {
- 			VPLL_LOCK, VPLL_CON0, NULL),
- };
- 
--static void __init exynos4_core_down_clock(enum exynos4_soc soc)
-+static void __init exynos4x12_core_down_clock(void)
- {
- 	unsigned int tmp;
- 
-@@ -1373,11 +1373,9 @@ static void __init exynos4_core_down_clock(enum exynos4_soc soc)
- 	__raw_writel(tmp, reg_base + PWR_CTRL1);
- 
- 	/*
--	 * Disable the clock up feature on Exynos4x12, in case it was
--	 * enabled by bootloader.
-+	 * Disable the clock up feature in case it was enabled by bootloader.
- 	 */
--	if (exynos4_soc == EXYNOS4X12)
--		__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
-+	__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
- }
- 
- /* register exynos4 clocks */
-@@ -1474,7 +1472,8 @@ static void __init exynos4_clk_init(struct device_node *np,
- 	samsung_clk_register_alias(ctx, exynos4_aliases,
- 			ARRAY_SIZE(exynos4_aliases));
- 
--	exynos4_core_down_clock(soc);
-+	if (soc == EXYNOS4X12)
-+		exynos4x12_core_down_clock();
- 	exynos4_clk_sleep_init();
+ 	if (sport->dma_is_inited && !sport->dma_is_enabled)
+ 		imx_enable_dma(sport);
+@@ -1199,10 +1201,6 @@ static int imx_startup(struct uart_port *port)
  
- 	samsung_clk_of_add_provider(np, ctx);
--- 
-2.3.6
-
-
-From 41761ed1e3b457699c416c4e5eea1c86aa2d307c Mon Sep 17 00:00:00 2001
-From: Thierry Reding <treding@nvidia.com>
-Date: Mon, 23 Mar 2015 10:57:46 +0100
-Subject: [PATCH 162/219] clk: tegra: Register the proper number of resets
-Cc: mpagano@gentoo.org
-
-commit 5e43e259171e1eee8bc074d9c44be434e685087b upstream.
-
-The number of reset controls is 32 times the number of peripheral
-register banks rather than 32 times the number of clocks. This reduces
-(drastically) the number of reset controls registered from 10080 (315
-clocks * 32) to 224 (6 peripheral register banks * 32).
-
-This also fixes a potential crash because trying to use any of the
-excess reset controls (224-10079) would have caused accesses beyond
-the array bounds of the peripheral register banks definition array.
-
-Cc: Peter De Schrijver <pdeschrijver@nvidia.com>
-Cc: Prashant Gaikwad <pgaikwad@nvidia.com>
-Fixes: 6d5b988e7dc5 ("clk: tegra: implement a reset driver")
-Signed-off-by: Thierry Reding <treding@nvidia.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/tegra/clk.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
-index 9ddb754..7a1df61 100644
---- a/drivers/clk/tegra/clk.c
-+++ b/drivers/clk/tegra/clk.c
-@@ -272,7 +272,7 @@ void __init tegra_add_of_provider(struct device_node *np)
- 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+ 	writel(temp, sport->port.membase + UCR1);
  
- 	rst_ctlr.of_node = np;
--	rst_ctlr.nr_resets = clk_num * 32;
-+	rst_ctlr.nr_resets = periph_banks * 32;
- 	reset_controller_register(&rst_ctlr);
- }
+-	/* Clear any pending ORE flag before enabling interrupt */
+-	temp = readl(sport->port.membase + USR2);
+-	writel(temp | USR2_ORE, sport->port.membase + USR2);
+-
+ 	temp = readl(sport->port.membase + UCR4);
+ 	temp |= UCR4_OREN;
+ 	writel(temp, sport->port.membase + UCR4);
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a051a7a..a81f9dd 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -245,7 +245,7 @@ static void wdm_int_callback(struct urb *urb)
+ 	case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
+ 		dev_dbg(&desc->intf->dev,
+ 			"NOTIFY_RESPONSE_AVAILABLE received: index %d len %d",
+-			dr->wIndex, dr->wLength);
++			le16_to_cpu(dr->wIndex), le16_to_cpu(dr->wLength));
+ 		break;
  
--- 
-2.3.6
-
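The sizing rule is easiest to see from the id-to-register mapping: each
peripheral register bank exposes 32 reset bits, so valid reset ids run
from 0 to banks * 32 - 1, and an id splits into a bank index and a bit
position. A small standalone sketch of the decode:

    #include <assert.h>
    #include <stdio.h>

    #define PERIPH_BANKS 6   /* six peripheral register banks here */

    static void reset_id_decode(unsigned int id,
                                unsigned int *bank, unsigned int *bit)
    {
        /* ids at or past banks * 32 would index beyond the bank array,
         * which is exactly the crash the fix above prevents */
        assert(id < PERIPH_BANKS * 32);
        *bank = id / 32;
        *bit  = id % 32;
    }

    int main(void)
    {
        unsigned int bank, bit;

        reset_id_decode(100, &bank, &bit);
        printf("reset 100 -> bank %u, bit %u\n", bank, bit);
        return 0;
    }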
-
-From 7c646709786798cd41b4e2feb7f9136214169c92 Mon Sep 17 00:00:00 2001
-From: Thierry Reding <treding@nvidia.com>
-Date: Thu, 26 Mar 2015 17:53:01 +0100
-Subject: [PATCH 163/219] clk: tegra: Use the proper parent for plld_dsi
-Cc: mpagano@gentoo.org
-
-commit c1d676cec572544616273d5853cb7cc38fbaa62b upstream.
-
-The current parent, plld_out0, does not exist. The proper name is
-pll_d_out0. While at it, rename the plld_dsi clock to pll_d_dsi_out to
-be more consistent with other clock names.
-
-Fixes: b270491eb9a0 ("clk: tegra: Define PLLD_DSI and remove dsia(b)_mux")
-Signed-off-by: Thierry Reding <treding@nvidia.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/tegra/clk-tegra124.c                | 14 ++++++++------
- include/dt-bindings/clock/tegra124-car-common.h |  2 +-
- 2 files changed, 9 insertions(+), 7 deletions(-)
-
-diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
-index 9a893f2..23ce0af 100644
---- a/drivers/clk/tegra/clk-tegra124.c
-+++ b/drivers/clk/tegra/clk-tegra124.c
-@@ -1110,16 +1110,18 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base,
- 					1, 2);
- 	clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
+ 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
+@@ -262,7 +262,9 @@ static void wdm_int_callback(struct urb *urb)
+ 		clear_bit(WDM_POLL_RUNNING, &desc->flags);
+ 		dev_err(&desc->intf->dev,
+ 			"unknown notification %d received: index %d len %d\n",
+-			dr->bNotificationType, dr->wIndex, dr->wLength);
++			dr->bNotificationType,
++			le16_to_cpu(dr->wIndex),
++			le16_to_cpu(dr->wLength));
+ 		goto exit;
+ 	}
  
--	clk = clk_register_gate(NULL, "plld_dsi", "plld_out0", 0,
-+	clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
- 				clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
--	clks[TEGRA124_CLK_PLLD_DSI] = clk;
-+	clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
+@@ -408,7 +410,7 @@ static ssize_t wdm_write
+ 			     USB_RECIP_INTERFACE);
+ 	req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ 	req->wValue = 0;
+-	req->wIndex = desc->inum;
++	req->wIndex = desc->inum; /* already converted */
+ 	req->wLength = cpu_to_le16(count);
+ 	set_bit(WDM_IN_USE, &desc->flags);
+ 	desc->outbuf = buf;
+@@ -422,7 +424,7 @@ static ssize_t wdm_write
+ 		rv = usb_translate_errors(rv);
+ 	} else {
+ 		dev_dbg(&desc->intf->dev, "Tx URB has been submitted index=%d",
+-			req->wIndex);
++			le16_to_cpu(req->wIndex));
+ 	}
+ out:
+ 	usb_autopm_put_interface(desc->intf);
+@@ -820,7 +822,7 @@ static int wdm_create(struct usb_interface *intf, struct usb_endpoint_descriptor
+ 	desc->irq->bRequestType = (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ 	desc->irq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
+ 	desc->irq->wValue = 0;
+-	desc->irq->wIndex = desc->inum;
++	desc->irq->wIndex = desc->inum; /* already converted */
+ 	desc->irq->wLength = cpu_to_le16(desc->wMaxCommand);
  
--	clk = tegra_clk_register_periph_gate("dsia", "plld_dsi", 0, clk_base,
--					     0, 48, periph_clk_enb_refcnt);
-+	clk = tegra_clk_register_periph_gate("dsia", "pll_d_dsi_out", 0,
-+					     clk_base, 0, 48,
-+					     periph_clk_enb_refcnt);
- 	clks[TEGRA124_CLK_DSIA] = clk;
+ 	usb_fill_control_urb(
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index d7c3d5a..3b71516 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3406,10 +3406,10 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 	if (status) {
+ 		dev_dbg(&port_dev->dev, "can't resume, status %d\n", status);
+ 	} else {
+-		/* drive resume for at least 20 msec */
++		/* drive resume for USB_RESUME_TIMEOUT msec */
+ 		dev_dbg(&udev->dev, "usb %sresume\n",
+ 				(PMSG_IS_AUTO(msg) ? "auto-" : ""));
+-		msleep(25);
++		msleep(USB_RESUME_TIMEOUT);
  
--	clk = tegra_clk_register_periph_gate("dsib", "plld_dsi", 0, clk_base,
--					     0, 82, periph_clk_enb_refcnt);
-+	clk = tegra_clk_register_periph_gate("dsib", "pll_d_dsi_out", 0,
-+					     clk_base, 0, 82,
-+					     periph_clk_enb_refcnt);
- 	clks[TEGRA124_CLK_DSIB] = clk;
+ 		/* Virtual root hubs can trigger on GET_PORT_STATUS to
+ 		 * stop resume signaling.  Then finish the resume
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index c78c874..758b7e0 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -1521,7 +1521,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ 			dev_dbg(hsotg->dev,
+ 				"ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
+ 			writel(0, hsotg->regs + PCGCTL);
+-			usleep_range(20000, 40000);
++			msleep(USB_RESUME_TIMEOUT);
  
- 	/* emc mux */
-diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h
-index ae2eb17..a215609 100644
---- a/include/dt-bindings/clock/tegra124-car-common.h
-+++ b/include/dt-bindings/clock/tegra124-car-common.h
-@@ -297,7 +297,7 @@
- #define TEGRA124_CLK_PLL_C4 270
- #define TEGRA124_CLK_PLL_DP 271
- #define TEGRA124_CLK_PLL_E_MUX 272
--#define TEGRA124_CLK_PLLD_DSI 273
-+#define TEGRA124_CLK_PLL_D_DSI_OUT 273
- /* 274 */
- /* 275 */
- /* 276 */
--- 
-2.3.6
-
-
-From 1d77b1031e7230917ed6c8fd1ac82f18a9c33c9d Mon Sep 17 00:00:00 2001
-From: Stephen Boyd <sboyd@codeaurora.org>
-Date: Mon, 23 Feb 2015 13:30:28 -0800
-Subject: [PATCH 164/219] clk: qcom: Fix i2c frequency table
-Cc: mpagano@gentoo.org
-
-commit 0bf0ff82c34da02ee5795101b328225a2d519594 upstream.
-
-PXO is 25MHz, not 27MHz. Fix the table.
-
-Fixes: 24d8fba44af3 "clk: qcom: Add support for IPQ8064's global
-clock controller (GCC)"
-
-Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
-Reviewed-by: Andy Gross <agross@codeaurora.org>
-Tested-by: Andy Gross <agross@codeaurora.org>
-Signed-off-by: Michael Turquette <mturquette@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/qcom/gcc-ipq806x.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
-index cbdc31d..a015bb0 100644
---- a/drivers/clk/qcom/gcc-ipq806x.c
-+++ b/drivers/clk/qcom/gcc-ipq806x.c
-@@ -525,8 +525,8 @@ static struct freq_tbl clk_tbl_gsbi_qup[] = {
- 	{ 10800000, P_PXO,  1, 2,  5 },
- 	{ 15060000, P_PLL8, 1, 2, 51 },
- 	{ 24000000, P_PLL8, 4, 1,  4 },
-+	{ 25000000, P_PXO,  1, 0,  0 },
- 	{ 25600000, P_PLL8, 1, 1, 15 },
--	{ 27000000, P_PXO,  1, 0,  0 },
- 	{ 48000000, P_PLL8, 4, 1,  2 },
- 	{ 51200000, P_PLL8, 1, 2, 15 },
- 	{ }
--- 
-2.3.6
-
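Each row of these tables reads as: target rate, parent source,
pre-divider, then M and N for the fractional counter, where N == 0
means the M/N stage is bypassed. The corrected row therefore takes the
25 MHz PXO crystal straight through, while a row such as
{ 24000000, P_PLL8, 4, 1, 4 } derives 24 MHz from PLL8 (384 MHz on
these SoCs) as 384 / 4 * 1 / 4. A standalone sketch of the arithmetic,
with illustrative types:

    #include <stdio.h>

    struct freq_row {
        unsigned long rate, parent_rate;
        unsigned int pre_div, m, n;
    };

    static unsigned long row_rate(const struct freq_row *r)
    {
        unsigned long rate = r->parent_rate / r->pre_div;

        if (r->n)                    /* n == 0: M/N counter bypassed */
            rate = rate / r->n * r->m;
        return rate;
    }

    int main(void)
    {
        struct freq_row pxo  = { 25000000,  25000000, 1, 0, 0 };
        struct freq_row pll8 = { 24000000, 384000000, 4, 1, 4 };

        /* prints 25000000 and 24000000 */
        printf("%lu %lu\n", row_rate(&pxo), row_rate(&pll8));
        return 0;
    }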
-
-From 6761ec536ade4be25c5b846e71f96c8ecdc08347 Mon Sep 17 00:00:00 2001
-From: Stephen Boyd <sboyd@codeaurora.org>
-Date: Fri, 6 Mar 2015 15:41:53 -0800
-Subject: [PATCH 165/219] clk: qcom: Properly change rates for ahbix clock
-Cc: mpagano@gentoo.org
-
-commit 9d3745d44a7faa7d24db7facb1949a1378162f3e upstream.
-
-The ahbix clock can never be turned off in practice. To change the
-rates we need to switch the mux off the M/N counter to an always on
-source (XO), reprogram the M/N counter to get the rate we want and
-finally switch back to the M/N counter. Add a new ops structure
-for this type of clock so that we can set the rate properly.
-
-Fixes: c99e515a92e9 "clk: qcom: Add IPQ806X LPASS clock controller (LCC) driver"
-Tested-by: Kenneth Westfield <kwestfie@codeaurora.org>
-Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/qcom/clk-rcg.c     | 62 ++++++++++++++++++++++++++++++++++++++++++
- drivers/clk/qcom/clk-rcg.h     |  1 +
- drivers/clk/qcom/lcc-ipq806x.c |  5 ++--
- 3 files changed, 65 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
-index 0039bd7..466f30c 100644
---- a/drivers/clk/qcom/clk-rcg.c
-+++ b/drivers/clk/qcom/clk-rcg.c
-@@ -495,6 +495,57 @@ static int clk_rcg_bypass_set_rate(struct clk_hw *hw, unsigned long rate,
- 	return __clk_rcg_set_rate(rcg, rcg->freq_tbl);
+ 			hprt0 = dwc2_read_hprt0(hsotg);
+ 			hprt0 |= HPRT0_RES;
+diff --git a/drivers/usb/gadget/legacy/printer.c b/drivers/usb/gadget/legacy/printer.c
+index 9054598..6385c19 100644
+--- a/drivers/usb/gadget/legacy/printer.c
++++ b/drivers/usb/gadget/legacy/printer.c
+@@ -1031,6 +1031,15 @@ unknown:
+ 		break;
+ 	}
+ 	/* host either stalls (value < 0) or reports success */
++	if (value >= 0) {
++		req->length = value;
++		req->zero = value < wLength;
++		value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
++		if (value < 0) {
++			ERROR(dev, "%s:%d Error!\n", __func__, __LINE__);
++			req->status = 0;
++		}
++	}
+ 	return value;
  }
  
-+/*
-+ * This type of clock has a glitch-free mux that switches between the output of
-+ * the M/N counter and an always on clock source (XO). When clk_set_rate() is
-+ * called we need to make sure that we don't switch to the M/N counter if it
-+ * isn't clocking because the mux will get stuck and the clock will stop
-+ * outputting a clock. This can happen if the framework isn't aware that this
-+ * clock is on and so clk_set_rate() doesn't turn on the new parent. To fix
-+ * this we switch the mux in the enable/disable ops and reprogram the M/N
-+ * counter in the set_rate op. We also make sure to switch away from the M/N
-+ * counter in set_rate if software thinks the clock is off.
-+ */
-+static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,
-+				unsigned long parent_rate)
-+{
-+	struct clk_rcg *rcg = to_clk_rcg(hw);
-+	const struct freq_tbl *f;
-+	int ret;
-+	u32 gfm = BIT(10);
-+
-+	f = qcom_find_freq(rcg->freq_tbl, rate);
-+	if (!f)
-+		return -EINVAL;
-+
-+	/* Switch to XO to avoid glitches */
-+	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
-+	ret = __clk_rcg_set_rate(rcg, f);
-+	/* Switch back to M/N if it's clocking */
-+	if (__clk_is_enabled(hw->clk))
-+		regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
-+
-+	return ret;
-+}
-+
-+static int clk_rcg_lcc_enable(struct clk_hw *hw)
-+{
-+	struct clk_rcg *rcg = to_clk_rcg(hw);
-+	u32 gfm = BIT(10);
-+
-+	/* Use M/N */
-+	return regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
-+}
-+
-+static void clk_rcg_lcc_disable(struct clk_hw *hw)
-+{
-+	struct clk_rcg *rcg = to_clk_rcg(hw);
-+	u32 gfm = BIT(10);
-+
-+	/* Use XO */
-+	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
-+}
-+
- static int __clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate)
- {
- 	struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
-@@ -543,6 +594,17 @@ const struct clk_ops clk_rcg_bypass_ops = {
- };
- EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);
- 
-+const struct clk_ops clk_rcg_lcc_ops = {
-+	.enable = clk_rcg_lcc_enable,
-+	.disable = clk_rcg_lcc_disable,
-+	.get_parent = clk_rcg_get_parent,
-+	.set_parent = clk_rcg_set_parent,
-+	.recalc_rate = clk_rcg_recalc_rate,
-+	.determine_rate = clk_rcg_determine_rate,
-+	.set_rate = clk_rcg_lcc_set_rate,
-+};
-+EXPORT_SYMBOL_GPL(clk_rcg_lcc_ops);
-+
- const struct clk_ops clk_dyn_rcg_ops = {
- 	.enable = clk_enable_regmap,
- 	.is_enabled = clk_is_enabled_regmap,
-diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
-index 687e41f..d09d06b 100644
---- a/drivers/clk/qcom/clk-rcg.h
-+++ b/drivers/clk/qcom/clk-rcg.h
-@@ -96,6 +96,7 @@ struct clk_rcg {
- 
- extern const struct clk_ops clk_rcg_ops;
- extern const struct clk_ops clk_rcg_bypass_ops;
-+extern const struct clk_ops clk_rcg_lcc_ops;
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 85e56d1..f4d88df 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -792,12 +792,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 					ehci->reset_done[i] == 0))
+ 				continue;
  
- #define to_clk_rcg(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg, clkr)
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
+-			 * stop that signaling.  Use 5 ms extra for safety,
+-			 * like usb_port_resume() does.
++			/* start USB_RESUME_TIMEOUT msec resume signaling from
++			 * this port, and make hub_wq collect
++			 * PORT_STAT_C_SUSPEND to stop that signaling.
+ 			 */
+-			ehci->reset_done[i] = jiffies + msecs_to_jiffies(25);
++			ehci->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(i, &ehci->resuming_ports);
+ 			ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
+ 			usb_hcd_start_port_resume(&hcd->self, i);
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index 87cf86f..7354d01 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -471,10 +471,13 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ 		ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
+ 	}
  
-diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
-index c9ff27b..19378b0 100644
---- a/drivers/clk/qcom/lcc-ipq806x.c
-+++ b/drivers/clk/qcom/lcc-ipq806x.c
-@@ -386,13 +386,12 @@ static struct clk_rcg ahbix_clk = {
- 	.freq_tbl = clk_tbl_ahbix,
- 	.clkr = {
- 		.enable_reg = 0x38,
--		.enable_mask = BIT(10), /* toggle the gfmux to select mn/pxo */
-+		.enable_mask = BIT(11),
- 		.hw.init = &(struct clk_init_data){
- 			.name = "ahbix",
- 			.parent_names = lcc_pxo_pll4,
- 			.num_parents = 2,
--			.ops = &clk_rcg_ops,
--			.flags = CLK_SET_RATE_GATE,
-+			.ops = &clk_rcg_lcc_ops,
- 		},
- 	},
- };
--- 
-2.3.6
-
-
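For readers following the qcom clock code, the three-step sequence the new ops implement can be condensed as below. This is a sketch of the logic only, not the driver's exact code; clk_is_on is an illustrative parameter standing in for the framework's enabled state.

  /* Glitch-free rate change for a clock whose mux sits between an
   * always-on source (XO) and the M/N counter being reprogrammed. */
  static int lcc_rate_change_sketch(struct clk_rcg *rcg,
                                    const struct freq_tbl *f, bool clk_is_on)
  {
          u32 gfm = BIT(10);              /* glitch-free mux select bit */
          int ret;

          /* 1. Park the mux on XO so the M/N counter may stop safely. */
          regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
          /* 2. Reprogram the M/N counter for the requested rate. */
          ret = __clk_rcg_set_rate(rcg, f);
          /* 3. Switch back only if the clock is logically enabled;
           *    otherwise the mux would latch onto a dead source. */
          if (clk_is_on)
                  regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
          return ret;
  }
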
-From 0602addf5fe488d8ced792e6a8f7da073516d33b Mon Sep 17 00:00:00 2001
-From: Archit Taneja <architt@codeaurora.org>
-Date: Wed, 4 Mar 2015 15:19:35 +0530
-Subject: [PATCH 166/219] clk: qcom: fix RCG M/N counter configuration
-Cc: mpagano@gentoo.org
-
-commit 0b21503dbbfa669dbd847b33578d4041513cddb2 upstream.
-
-Currently, an RCG's M/N counter (used for fractional division) is
-set to either 'bypass' (counter disabled) or 'dual edge' (counter
-enabled) based on whether the corresponding rcg struct has an mnd
-field specified and a non-zero N.
-
-In the case where M and N are the same value, the M/N counter is
-still enabled by the code even though no division takes place.
-Leaving the RCG in such a state can result in improper behavior.
-This was observed with the DSI pixel clock RCG when M and N were
-both set to 1.
-
-Add an additional check (M != N) to enable the M/N counter only
-when it's needed for fraction division.
-
-Signed-off-by: Archit Taneja <architt@codeaurora.org>
-Fixes: bcd61c0f535a (clk: qcom: Add support for root clock
-generators (RCGs))
-Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/qcom/clk-rcg2.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
-index 742acfa..381f274 100644
---- a/drivers/clk/qcom/clk-rcg2.c
-+++ b/drivers/clk/qcom/clk-rcg2.c
-@@ -243,7 +243,7 @@ static int clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
- 	mask |= CFG_SRC_SEL_MASK | CFG_MODE_MASK;
- 	cfg = f->pre_div << CFG_SRC_DIV_SHIFT;
- 	cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT;
--	if (rcg->mnd_width && f->n)
-+	if (rcg->mnd_width && f->n && (f->m != f->n))
- 		cfg |= CFG_MODE_DUAL_EDGE;
- 	ret = regmap_update_bits(rcg->clkr.regmap,
- 			rcg->cmd_rcgr + CFG_REG, mask, cfg);
--- 
-2.3.6
-
-
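To see why the (f->m != f->n) check matters, note that the M/N counter implements rate = parent * M / N, so M == N is a divide-by-one that the counter should simply bypass. A runnable toy model of the fixed condition:

  #include <stdio.h>
  #include <stdbool.h>

  /* Userspace model of the fixed test: only enable dual-edge mode
   * when the counter actually divides. */
  static bool use_dual_edge(unsigned mnd_width, unsigned m, unsigned n)
  {
          return mnd_width && n && (m != n);
  }

  int main(void)
  {
          /* The DSI pixel clock case from the message: M = N = 1 */
          printf("M=1,   N=1     -> dual edge: %d\n", use_dual_edge(8, 1, 1));
          /* A genuinely fractional entry keeps the counter enabled */
          printf("M=147, N=20480 -> dual edge: %d\n",
                 use_dual_edge(8, 147, 20480));
          return 0;
  }
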
-From ea8ae530984cacf55cebc6a12bc43061f1dd41ed Mon Sep 17 00:00:00 2001
-From: Stephen Boyd <sboyd@codeaurora.org>
-Date: Thu, 26 Feb 2015 19:34:35 -0800
-Subject: [PATCH 167/219] clk: qcom: Fix ipq806x LCC frequency tables
-Cc: mpagano@gentoo.org
-
-commit b3261d768bcdd4b368179ed85becf38c95461848 upstream.
-
-These frequency tables list the wrong rates. Either they don't
-have the correct frequency at all, or they're specified in kHz
-instead of Hz. Fix it.
-
-Fixes: c99e515a92e9 "clk: qcom: Add IPQ806X LPASS clock controller (LCC) driver"
-Tested-by: Kenneth Westfield <kwestfie@codeaurora.org>
-Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/clk/qcom/lcc-ipq806x.c | 18 +++++++++---------
- 1 file changed, 9 insertions(+), 9 deletions(-)
-
-diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
-index 19378b0..a6d3a67 100644
---- a/drivers/clk/qcom/lcc-ipq806x.c
-+++ b/drivers/clk/qcom/lcc-ipq806x.c
-@@ -294,14 +294,14 @@ static struct clk_regmap_mux pcm_clk = {
- };
+-	/* msleep for 20ms only if code is trying to resume port */
++	/*
++	 * msleep for USB_RESUME_TIMEOUT ms only if code is trying to resume
++	 * port
++	 */
+ 	if (resume_needed) {
+ 		spin_unlock_irq(&ehci->lock);
+-		msleep(20);
++		msleep(USB_RESUME_TIMEOUT);
+ 		spin_lock_irq(&ehci->lock);
+ 		if (ehci->shutdown)
+ 			goto shutdown;
+@@ -942,7 +945,7 @@ int ehci_hub_control(
+ 			temp &= ~PORT_WAKE_BITS;
+ 			ehci_writel(ehci, temp | PORT_RESUME, status_reg);
+ 			ehci->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(wIndex, &ehci->resuming_ports);
+ 			usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			break;
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 475b21f..7a6681f 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -1595,7 +1595,7 @@ static int fotg210_hub_control(
+ 			/* resume signaling for 20 msec */
+ 			fotg210_writel(fotg210, temp | PORT_RESUME, status_reg);
+ 			fotg210->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fotg210->port_c_suspend);
+diff --git a/drivers/usb/host/fusbh200-hcd.c b/drivers/usb/host/fusbh200-hcd.c
+index a83eefe..ba77e2e 100644
+--- a/drivers/usb/host/fusbh200-hcd.c
++++ b/drivers/usb/host/fusbh200-hcd.c
+@@ -1550,10 +1550,9 @@ static int fusbh200_hub_control (
+ 			if ((temp & PORT_PE) == 0)
+ 				goto error;
  
- static struct freq_tbl clk_tbl_aif_osr[] = {
--	{  22050, P_PLL4, 1, 147, 20480 },
--	{  32000, P_PLL4, 1,   1,    96 },
--	{  44100, P_PLL4, 1, 147, 10240 },
--	{  48000, P_PLL4, 1,   1,    64 },
--	{  88200, P_PLL4, 1, 147,  5120 },
--	{  96000, P_PLL4, 1,   1,    32 },
--	{ 176400, P_PLL4, 1, 147,  2560 },
--	{ 192000, P_PLL4, 1,   1,    16 },
-+	{  2822400, P_PLL4, 1, 147, 20480 },
-+	{  4096000, P_PLL4, 1,   1,    96 },
-+	{  5644800, P_PLL4, 1, 147, 10240 },
-+	{  6144000, P_PLL4, 1,   1,    64 },
-+	{ 11289600, P_PLL4, 1, 147,  5120 },
-+	{ 12288000, P_PLL4, 1,   1,    32 },
-+	{ 22579200, P_PLL4, 1, 147,  2560 },
-+	{ 24576000, P_PLL4, 1,   1,    16 },
- 	{ },
- };
+-			/* resume signaling for 20 msec */
+ 			fusbh200_writel(fusbh200, temp | PORT_RESUME, status_reg);
+ 			fusbh200->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fusbh200->port_c_suspend);
+diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
+index 113d0cc..9ef5644 100644
+--- a/drivers/usb/host/isp116x-hcd.c
++++ b/drivers/usb/host/isp116x-hcd.c
+@@ -1490,7 +1490,7 @@ static int isp116x_bus_resume(struct usb_hcd *hcd)
+ 	spin_unlock_irq(&isp116x->lock);
  
-@@ -360,7 +360,7 @@ static struct clk_branch spdif_clk = {
- };
+ 	hcd->state = HC_STATE_RESUMING;
+-	msleep(20);
++	msleep(USB_RESUME_TIMEOUT);
  
- static struct freq_tbl clk_tbl_ahbix[] = {
--	{ 131072, P_PLL4, 1, 1, 3 },
-+	{ 131072000, P_PLL4, 1, 1, 3 },
- 	{ },
- };
+ 	/* Go operational */
+ 	spin_lock_irq(&isp116x->lock);
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index ef7efb2..28a2866 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -2500,11 +2500,12 @@ static irqreturn_t oxu210_hcd_irq(struct usb_hcd *hcd)
+ 					|| oxu->reset_done[i] != 0)
+ 				continue;
  
--- 
-2.3.6
-
-
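A quick arithmetic check makes the Hz-vs-kHz fix easy to verify. Assuming PLL4 on ipq806x runs at 393216000 Hz (an assumption here, not stated in the patch itself), every corrected row satisfies freq = parent / pre_div / n * m exactly; this small userspace program confirms a few rows:

  #include <stdio.h>

  int main(void)
  {
          const unsigned long pll4 = 393216000UL;   /* assumed PLL4 rate */
          const struct { unsigned long freq, pre_div, m, n; } tbl[] = {
                  {   2822400, 1, 147, 20480 },
                  {   4096000, 1,   1,    96 },
                  {  24576000, 1,   1,    16 },
                  { 131072000, 1,   1,     3 },     /* the ahbix row */
          };
          for (unsigned i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
                  unsigned long got =
                          pll4 / tbl[i].pre_div / tbl[i].n * tbl[i].m;
                  printf("table %9lu Hz, computed %9lu Hz\n",
                         tbl[i].freq, got);
          }
          return 0;
  }
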
-From b1c9b99dda6dfe49023214a772ff59debfaa6824 Mon Sep 17 00:00:00 2001
-From: Ben Collins <ben.c@servergy.com>
-Date: Fri, 3 Apr 2015 16:09:46 +0000
-Subject: [PATCH 168/219] dm crypt: fix deadlock when async crypto algorithm
- returns -EBUSY
-Cc: mpagano@gentoo.org
-
-commit 0618764cb25f6fa9fb31152995de42a8a0496475 upstream.
-
-I suspect this doesn't show up for most people because software
-algorithms typically don't have a sense of being too busy.  However,
-when working with the Freescale CAAM driver it will return -EBUSY on
-occasion under heavy load -- which resulted in a dm-crypt deadlock.
-
-After checking the logic in some other drivers, the scheme for
-crypt_convert() and its callback, kcryptd_async_done(), was not
-correctly laid out to properly handle -EBUSY or -EINPROGRESS.
-
-Fix this by using the completion for both -EBUSY and -EINPROGRESS.  Now
-crypt_convert()'s use of completion is comparable to
-af_alg_wait_for_completion().  Similarly, kcryptd_async_done() follows
-the pattern used in af_alg_complete().
-
-Before this fix dm-crypt would lock up within 1-2 minutes running with
-the CAAM driver.  Fix was regression tested against software algorithms
-on PPC32 and x86_64, and things seem perfectly happy there as well.
-
-Signed-off-by: Ben Collins <ben.c@servergy.com>
-Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/md/dm-crypt.c | 12 ++++++------
- 1 file changed, 6 insertions(+), 6 deletions(-)
-
-diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
-index 713a962..41473929 100644
---- a/drivers/md/dm-crypt.c
-+++ b/drivers/md/dm-crypt.c
-@@ -925,11 +925,10 @@ static int crypt_convert(struct crypt_config *cc,
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
++			/* start USB_RESUME_TIMEOUT resume signaling from this
++			 * port, and make hub_wq collect PORT_STAT_C_SUSPEND to
+ 			 * stop that signaling.
+ 			 */
+-			oxu->reset_done[i] = jiffies + msecs_to_jiffies(20);
++			oxu->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			oxu_dbg(oxu, "port %d remote wakeup\n", i + 1);
+ 			mod_timer(&hcd->rh_timer, oxu->reset_done[i]);
+ 		}
+diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c
+index bdc82fe..54a4170 100644
+--- a/drivers/usb/host/r8a66597-hcd.c
++++ b/drivers/usb/host/r8a66597-hcd.c
+@@ -2301,7 +2301,7 @@ static int r8a66597_bus_resume(struct usb_hcd *hcd)
+ 		rh->port &= ~USB_PORT_STAT_SUSPEND;
+ 		rh->port |= USB_PORT_STAT_C_SUSPEND << 16;
+ 		r8a66597_mdfy(r8a66597, RESUME, RESUME | UACT, dvstctr_reg);
+-		msleep(50);
++		msleep(USB_RESUME_TIMEOUT);
+ 		r8a66597_mdfy(r8a66597, UACT, RESUME | UACT, dvstctr_reg);
+ 	}
  
- 		switch (r) {
- 		/* async */
-+		case -EINPROGRESS:
- 		case -EBUSY:
- 			wait_for_completion(&ctx->restart);
- 			reinit_completion(&ctx->restart);
--			/* fall through*/
--		case -EINPROGRESS:
- 			ctx->req = NULL;
- 			ctx->cc_sector++;
- 			continue;
-@@ -1346,10 +1345,8 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
- 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
- 	struct crypt_config *cc = io->cc;
+diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
+index 4f4ba1e..9118cd8 100644
+--- a/drivers/usb/host/sl811-hcd.c
++++ b/drivers/usb/host/sl811-hcd.c
+@@ -1259,7 +1259,7 @@ sl811h_hub_control(
+ 			sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1);
  
--	if (error == -EINPROGRESS) {
--		complete(&ctx->restart);
-+	if (error == -EINPROGRESS)
- 		return;
--	}
+ 			mod_timer(&sl811->timer, jiffies
+-					+ msecs_to_jiffies(20));
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			port_power(sl811, 0);
+diff --git a/drivers/usb/host/uhci-hub.c b/drivers/usb/host/uhci-hub.c
+index 19ba5ea..7b3d1af 100644
+--- a/drivers/usb/host/uhci-hub.c
++++ b/drivers/usb/host/uhci-hub.c
+@@ -166,7 +166,7 @@ static void uhci_check_ports(struct uhci_hcd *uhci)
+ 				/* Port received a wakeup request */
+ 				set_bit(port, &uhci->resuming_ports);
+ 				uhci->ports_timeout = jiffies +
+-						msecs_to_jiffies(25);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				usb_hcd_start_port_resume(
+ 						&uhci_to_hcd(uhci)->self, port);
  
- 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
- 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
-@@ -1360,12 +1357,15 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
- 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
+@@ -338,7 +338,8 @@ static int uhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			uhci_finish_suspend(uhci, port, port_addr);
  
- 	if (!atomic_dec_and_test(&ctx->cc_pending))
--		return;
-+		goto done;
+ 			/* USB v2.0 7.1.7.5 */
+-			uhci->ports_timeout = jiffies + msecs_to_jiffies(50);
++			uhci->ports_timeout = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			/* UHCI has no power switching */
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 73485fa..eeedde8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1574,7 +1574,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		} else {
+ 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+ 			bus_state->resume_done[faked_port_index] = jiffies +
+-				msecs_to_jiffies(20);
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(faked_port_index, &bus_state->resuming_ports);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[faked_port_index]);
+diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
+index 3cb98b1..7911b6b 100644
+--- a/drivers/usb/isp1760/isp1760-hcd.c
++++ b/drivers/usb/isp1760/isp1760-hcd.c
+@@ -1869,7 +1869,7 @@ static int isp1760_hub_control(struct usb_hcd *hcd, u16 typeReq,
+ 				reg_write32(hcd->regs, HC_PORTSC1,
+ 							temp | PORT_RESUME);
+ 				priv->reset_done = jiffies +
+-					msecs_to_jiffies(20);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			}
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 067920f..ec0ee3b 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -99,6 +99,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
+ #include <linux/dma-mapping.h>
++#include <linux/usb.h>
  
- 	if (bio_data_dir(io->base_bio) == READ)
- 		kcryptd_crypt_read_done(io);
- 	else
- 		kcryptd_crypt_write_io_submit(io, 1);
-+done:
-+	if (!completion_done(&ctx->restart))
-+		complete(&ctx->restart);
- }
+ #include "musb_core.h"
  
- static void kcryptd_crypt(struct work_struct *work)
--- 
-2.3.6
-
-
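The pattern the fix converges on is the one used by af_alg: treat -EBUSY exactly like -EINPROGRESS and block on a completion that the callback signals once the request really finishes. A condensed sketch with illustrative names (crypto_op stands in for the actual skcipher call; this is not the driver's exact code):

  /* Submission side: both "queued" return codes wait for the callback. */
  switch (crypto_op(req)) {
  case 0:
          break;                          /* completed synchronously */
  case -EINPROGRESS:                      /* accepted into the queue */
  case -EBUSY:                            /* queued via the backlog  */
          wait_for_completion(&ctx->restart);
          reinit_completion(&ctx->restart);
          break;
  default:
          return -EIO;                    /* real error */
  }

  /* Completion side: ignore the -EINPROGRESS notification, and only
   * wake the waiter when the request has actually finished. */
  static void op_done_sketch(struct crypto_async_request *req, int error)
  {
          struct ctx *ctx = req->data;

          if (error == -EINPROGRESS)
                  return;
          if (!completion_done(&ctx->restart))
                  complete(&ctx->restart);
  }
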
-From 39b991a4765e2f7bd2faa383c66df5237117a8bb Mon Sep 17 00:00:00 2001
-From: Ken Xue <Ken.Xue@amd.com>
-Date: Mon, 9 Mar 2015 17:10:13 +0800
-Subject: [PATCH 169/219] serial: 8250_dw: add support for AMD SOC Carrizo
-Cc: mpagano@gentoo.org
-
-commit 5ef86b74209db33c133b5f18738dd8f3189b63a1 upstream.
-
-Add ACPI identifier for UART on AMD SOC Carrizo.
-
-Signed-off-by: Ken Xue <Ken.Xue@amd.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/tty/serial/8250/8250_dw.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
-index 6ae5b85..7a80250 100644
---- a/drivers/tty/serial/8250/8250_dw.c
-+++ b/drivers/tty/serial/8250/8250_dw.c
-@@ -629,6 +629,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
- 	{ "80860F0A", 0 },
- 	{ "8086228A", 0 },
- 	{ "APMC0D08", 0},
-+	{ "AMD0020", 0 },
- 	{ },
- };
- MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
--- 
-2.3.6
-
-
-From 8067aec1b07ce3f80c8209eb3589abdf38753ac1 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig?= <u.kleine-koenig@pengutronix.de>
-Date: Tue, 24 Feb 2015 11:17:05 +0100
-Subject: [PATCH 170/219] serial: imx: Fix clearing of receiver overrun flag
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 91555ce9012557b2d621d7b0b6ec694218a2a9bc upstream.
-
-The writeable bits in the USR2 register are all "write 1 to
-clear", so only write the bits that should actually be cleared.
-
-Fixes: f1f836e4209e ("serial: imx: Add Rx Fifo overrun error message")
-Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/tty/serial/imx.c | 8 +++-----
- 1 file changed, 3 insertions(+), 5 deletions(-)
-
-diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
-index 0eb29b1..2306191 100644
---- a/drivers/tty/serial/imx.c
-+++ b/drivers/tty/serial/imx.c
-@@ -818,7 +818,7 @@ static irqreturn_t imx_int(int irq, void *dev_id)
- 	if (sts2 & USR2_ORE) {
- 		dev_err(sport->port.dev, "Rx FIFO overrun\n");
- 		sport->port.icount.overrun++;
--		writel(sts2 | USR2_ORE, sport->port.membase + USR2);
-+		writel(USR2_ORE, sport->port.membase + USR2);
+@@ -562,7 +563,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,
+ 						(USB_PORT_STAT_C_SUSPEND << 16)
+ 						| MUSB_PORT_STAT_RESUME;
+ 				musb->rh_timer = jiffies
+-						 + msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				musb->need_finish_resume = 1;
+ 
+ 				musb->xceiv->otg->state = OTG_STATE_A_HOST;
+@@ -1597,16 +1598,30 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 		is_host_active(musb) ? "host" : "peripheral",
+ 		musb->int_usb, musb->int_tx, musb->int_rx);
+ 
+-	/* the core can interrupt us for multiple reasons; docs have
+-	 * a generic interrupt flowchart to follow
++	/**
++	 * According to Mentor Graphics' documentation, flowchart on page 98,
++	 * IRQ should be handled as follows:
++	 *
++	 * . Resume IRQ
++	 * . Session Request IRQ
++	 * . VBUS Error IRQ
++	 * . Suspend IRQ
++	 * . Connect IRQ
++	 * . Disconnect IRQ
++	 * . Reset/Babble IRQ
++	 * . SOF IRQ (we're not using this one)
++	 * . Endpoint 0 IRQ
++	 * . TX Endpoints
++	 * . RX Endpoints
++	 *
++	 * We will be following that flowchart in order to avoid any problems
++	 * that might arise with internal Finite State Machine.
+ 	 */
++
+ 	if (musb->int_usb)
+ 		retval |= musb_stage0_irq(musb, musb->int_usb,
+ 				devctl);
+ 
+-	/* "stage 1" is handling endpoint irqs */
+-
+-	/* handle endpoint 0 first */
+ 	if (musb->int_tx & 1) {
+ 		if (is_host_active(musb))
+ 			retval |= musb_h_ep0_irq(musb);
+@@ -1614,37 +1629,31 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 			retval |= musb_g_ep0_irq(musb);
+ 	}
+ 
+-	/* RX on endpoints 1-15 */
+-	reg = musb->int_rx >> 1;
++	reg = musb->int_tx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval = ep->rx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_rx(musb, ep_num);
++				musb_host_tx(musb, ep_num);
+ 			else
+-				musb_g_rx(musb, ep_num);
++				musb_g_tx(musb, ep_num);
+ 		}
+-
+ 		reg >>= 1;
+ 		ep_num++;
  	}
  
- 	return IRQ_HANDLED;
-@@ -1181,10 +1181,12 @@ static int imx_startup(struct uart_port *port)
- 		imx_uart_dma_init(sport);
- 
- 	spin_lock_irqsave(&sport->port.lock, flags);
+-	/* TX on endpoints 1-15 */
+-	reg = musb->int_tx >> 1;
++	reg = musb->int_rx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval |= ep->tx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_tx(musb, ep_num);
++				musb_host_rx(musb, ep_num);
+ 			else
+-				musb_g_tx(musb, ep_num);
++				musb_g_rx(musb, ep_num);
+ 		}
 +
- 	/*
- 	 * Finally, clear and enable interrupts
- 	 */
- 	writel(USR1_RTSD, sport->port.membase + USR1);
-+	writel(USR2_ORE, sport->port.membase + USR2);
- 
- 	if (sport->dma_is_inited && !sport->dma_is_enabled)
- 		imx_enable_dma(sport);
-@@ -1199,10 +1201,6 @@ static int imx_startup(struct uart_port *port)
+ 		reg >>= 1;
+ 		ep_num++;
+ 	}
+@@ -2463,7 +2472,7 @@ static int musb_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
  
- 	writel(temp, sport->port.membase + UCR1);
+ 	/*
+@@ -2506,7 +2515,7 @@ static int musb_runtime_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				msecs_to_jiffies(20));
++				msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
  
--	/* Clear any pending ORE flag before enabling interrupt */
--	temp = readl(sport->port.membase + USR2);
--	writel(temp | USR2_ORE, sport->port.membase + USR2);
--
- 	temp = readl(sport->port.membase + UCR4);
- 	temp |= UCR4_OREN;
- 	writel(temp, sport->port.membase + UCR4);
--- 
-2.3.6
-
-
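The hazard with "write 1 to clear" status registers is easy to model in userspace: writing back the full status value clears every bit that happened to be set, not just the one you handled. A runnable toy model (bit names are illustrative):

  #include <stdio.h>

  #define ORE   (1u << 1)    /* the overrun bit we just handled   */
  #define OTHER (1u << 2)    /* some other, still-pending event   */

  static unsigned reg;

  /* "write 1 to clear": any bit written as 1 is cleared in hardware */
  static void w1c_write(unsigned val) { reg &= ~val; }

  int main(void)
  {
          reg = ORE | OTHER;
          w1c_write(reg | ORE);            /* buggy read-modify-write */
          printf("buggy: reg=%#x (OTHER lost)\n", reg);

          reg = ORE | OTHER;
          w1c_write(ORE);                  /* fixed: only our bit */
          printf("fixed: reg=%#x (OTHER kept)\n", reg);
          return 0;
  }
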
-From cc1064fc8f1d71f9c3429e6bdd8129629fc39784 Mon Sep 17 00:00:00 2001
-From: Peter Hurley <peter@hurleysoftware.com>
-Date: Mon, 9 Mar 2015 14:05:01 -0400
-Subject: [PATCH 171/219] serial: 8250: Check UART_SCR is writable
-Cc: mpagano@gentoo.org
-
-commit f01a0bd8921b9d6668d41fae3198970e6318f532 upstream.
-
-Au1x00/RT2800+ doesn't implement the 8250 scratch register (and
-this may be true of other h/w currently supported by the 8250 driver);
-read back the canary value written to the scratch register, and only
-enable the console h/w restart after resume from system suspend if
-the value survives.
-
-Fixes: 4516d50aabedb ("serial: 8250: Use canary to restart console ...")
-Reported-by: Mason <slash.tmp@free.fr>
-Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/tty/serial/8250/8250_core.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
-index deae122..d465ace 100644
---- a/drivers/tty/serial/8250/8250_core.c
-+++ b/drivers/tty/serial/8250/8250_core.c
-@@ -3444,7 +3444,8 @@ void serial8250_suspend_port(int line)
- 	    port->type != PORT_8250) {
- 		unsigned char canary = 0xa5;
- 		serial_out(up, UART_SCR, canary);
--		up->canary = canary;
-+		if (serial_in(up, UART_SCR) == canary)
-+			up->canary = canary;
+ 	return 0;
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index 294e159..5428ed1 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -136,7 +136,7 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
+ 		/* later, GetPortStatus will stop RESUME signaling */
+ 		musb->port1_status |= MUSB_PORT_STAT_RESUME;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
  	}
+ }
  
- 	uart_suspend_port(&serial8250_reg, port);
--- 
-2.3.6
-
-
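The read-back test is a generic probe for optional registers: 0xa5 (binary 10100101) is a common canary because it exercises alternating bits. The essence of the check, condensed from the patch:

  unsigned char canary = 0xa5;

  serial_out(up, UART_SCR, canary);
  if (serial_in(up, UART_SCR) == canary)
          up->canary = canary;    /* scratch register is real; use it */
  /* otherwise up->canary stays 0 and the resume path won't trust SCR */
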
-From 5cd06dd45f7cc5c15517266a61f8051ec16912ff Mon Sep 17 00:00:00 2001
-From: "Martin K. Petersen" <martin.petersen@oracle.com>
-Date: Tue, 14 Apr 2015 16:56:23 -0400
-Subject: [PATCH 172/219] sd: Unregister integrity profile
-Cc: mpagano@gentoo.org
-
-commit e727c42bd55794765c460b7ac2b6cc969f2a9698 upstream.
-
-The new integrity code did not correctly unregister the profile for SD
-disks. Call blk_integrity_unregister() when we release a disk.
-
-Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-Reported-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
-Tested-by: Sagi Grimberg <sagig@mellanox.com>
-Signed-off-by: James Bottomley <JBottomley@Odin.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/scsi/sd.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
-index 6b78476..3290a3e 100644
---- a/drivers/scsi/sd.c
-+++ b/drivers/scsi/sd.c
-@@ -3100,6 +3100,7 @@ static void scsi_disk_release(struct device *dev)
- 	ida_remove(&sd_index_ida, sdkp->index);
- 	spin_unlock(&sd_index_lock);
+diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c
+index 2f9735b..d1cd6b5 100644
+--- a/drivers/usb/phy/phy.c
++++ b/drivers/usb/phy/phy.c
+@@ -81,7 +81,9 @@ static void devm_usb_phy_release(struct device *dev, void *res)
  
-+	blk_integrity_unregister(disk);
- 	disk->private_data = NULL;
- 	put_disk(disk);
- 	put_device(&sdkp->device->sdev_gendev);
--- 
-2.3.6
-
-
-From 5c87838eadeb1a63546e36f76917241d8fa6ea52 Mon Sep 17 00:00:00 2001
-From: "Martin K. Petersen" <martin.petersen@oracle.com>
-Date: Tue, 14 Apr 2015 17:11:03 -0400
-Subject: [PATCH 173/219] sd: Fix missing ATO tag check
-Cc: mpagano@gentoo.org
-
-commit e557990e358934fb168d30371c9c0f63e314c6b8 upstream.
-
-3aec2f41a8bae introduced a merge error where we would end up checking
-for sdkp instead of sdkp->ATO. Fix this so we register the app tag
-capability correctly.
-
-Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
-Signed-off-by: James Bottomley <JBottomley@Odin.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/scsi/sd_dif.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/scsi/sd_dif.c b/drivers/scsi/sd_dif.c
-index 14c7d42..5c06d29 100644
---- a/drivers/scsi/sd_dif.c
-+++ b/drivers/scsi/sd_dif.c
-@@ -77,7 +77,7 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
+ static int devm_usb_phy_match(struct device *dev, void *res, void *match_data)
+ {
+-	return res == match_data;
++	struct usb_phy **phy = res;
++
++	return *phy == match_data;
+ }
  
- 		disk->integrity->flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ /**
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 995986b..d925f55 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -862,6 +862,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 	    i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+ 		int elf_prot = 0, elf_flags;
+ 		unsigned long k, vaddr;
++		unsigned long total_size = 0;
  
--		if (!sdkp)
-+		if (!sdkp->ATO)
- 			return;
+ 		if (elf_ppnt->p_type != PT_LOAD)
+ 			continue;
+@@ -924,10 +925,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ #else
+ 			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
+ #endif
++			total_size = total_mapping_size(elf_phdata,
++							loc->elf_ex.e_phnum);
++			if (!total_size) {
++				error = -EINVAL;
++				goto out_free_dentry;
++			}
+ 		}
  
- 		if (type == SD_DIF_TYPE3_PROTECTION)
--- 
-2.3.6
-
-
-From b9b4320c38bf2fadfd9299c36165c46f131200e0 Mon Sep 17 00:00:00 2001
-From: "K. Y. Srinivasan" <kys@microsoft.com>
-Date: Fri, 27 Feb 2015 11:26:04 -0800
-Subject: [PATCH 174/219] Drivers: hv: vmbus: Fix a bug in the error path in
- vmbus_open()
-Cc: mpagano@gentoo.org
-
-commit 40384e4bbeb9f2651fe9bffc0062d9f31ef625bf upstream.
-
-Correctly roll back state if the failure occurs after we have handed over
-the ownership of the buffer to the host.
-
-Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/hv/channel.c | 7 +++++--
- 1 file changed, 5 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
-index 2978f5e..00bc30e 100644
---- a/drivers/hv/channel.c
-+++ b/drivers/hv/channel.c
-@@ -135,7 +135,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
- 			   GFP_KERNEL);
- 	if (!open_info) {
- 		err = -ENOMEM;
--		goto error0;
-+		goto error_gpadl;
+ 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
+-				elf_prot, elf_flags, 0);
++				elf_prot, elf_flags, total_size);
+ 		if (BAD_ADDR(error)) {
+ 			retval = IS_ERR((void *)error) ?
+ 				PTR_ERR((void*)error) : -EINVAL;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b353ad..0a795c9 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6956,12 +6956,11 @@ static int __btrfs_free_reserved_extent(struct btrfs_root *root,
+ 		return -ENOSPC;
  	}
  
- 	init_completion(&open_info->waitevent);
-@@ -151,7 +151,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
- 
- 	if (userdatalen > MAX_USER_DEFINED_BYTES) {
- 		err = -EINVAL;
--		goto error0;
-+		goto error_gpadl;
+-	if (btrfs_test_opt(root, DISCARD))
+-		ret = btrfs_discard_extent(root, start, len, NULL);
+-
+ 	if (pin)
+ 		pin_down_extent(root, cache, start, len, 1);
+ 	else {
++		if (btrfs_test_opt(root, DISCARD))
++			ret = btrfs_discard_extent(root, start, len, NULL);
+ 		btrfs_add_free_space(cache, start, len);
+ 		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
  	}
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 74609b9..f23d4be 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2897,6 +2897,9 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 len,
+ 	if (src == dst)
+ 		return -EINVAL;
  
- 	if (userdatalen)
-@@ -195,6 +195,9 @@ error1:
- 	list_del(&open_info->msglistentry);
- 	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
- 
-+error_gpadl:
-+	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
++	if (len == 0)
++		return 0;
 +
- error0:
- 	free_pages((unsigned long)out,
- 		get_order(send_ringbuffer_size + recv_ringbuffer_size));
--- 
-2.3.6
-
-
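The bug class here is a mis-ordered unwind: once a resource has been handed to the host, the error path must include the matching teardown before freeing local state. The usual kernel idiom is stacked labels, sketched below with hypothetical helper names modeled on the vmbus flow:

  int open_channel_sketch(void)
  {
          int err;

          err = alloc_ring_buffer();        /* step 1: local resource   */
          if (err)
                  return err;
          err = establish_gpadl();          /* step 2: host now owns it */
          if (err)
                  goto err_free;
          err = send_open_message();        /* step 3: may still fail   */
          if (err)
                  goto err_gpadl;
          return 0;

  err_gpadl:
          teardown_gpadl();                 /* undo step 2 first        */
  err_free:
          free_ring_buffer();               /* then undo step 1         */
          return err;
  }
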
-From 1f77a24829ac6dbe9a942752ee15054d403653d9 Mon Sep 17 00:00:00 2001
-From: James Bottomley <JBottomley@Odin.com>
-Date: Wed, 15 Apr 2015 22:16:01 -0700
-Subject: [PATCH 175/219] mvsas: fix panic on expander attached SATA devices
-Cc: mpagano@gentoo.org
-
-commit 56cbd0ccc1b508de19561211d7ab9e1c77e6b384 upstream.
-
-mvsas is giving a General protection fault when it encounters an expander
-attached ATA device.  Analysis of mvs_task_prep_ata() shows that the driver is
-assuming all ATA devices are locally attached and obtaining the phy mask by
-indexing the local phy table (in the HBA structure) with the phy id.  Since
-expanders have many more phys than the HBA, this causes the index into the
-HBA phy table to overflow and return rubbish as the pointer.
-
-mvs_task_prep_ssp() instead derives the phy mask from the port properties.
-Mirror this in mvs_task_prep_ata() to fix the panic.
-
-Reported-by: Adam Talbot <ajtalbot1@gmail.com>
-Tested-by: Adam Talbot <ajtalbot1@gmail.com>
-Signed-off-by: James Bottomley <JBottomley@Odin.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/scsi/mvsas/mv_sas.c | 5 +----
- 1 file changed, 1 insertion(+), 4 deletions(-)
-
-diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
-index 2d5ab6d..454536c 100644
---- a/drivers/scsi/mvsas/mv_sas.c
-+++ b/drivers/scsi/mvsas/mv_sas.c
-@@ -441,14 +441,11 @@ static u32 mvs_get_ncq_tag(struct sas_task *task, u32 *tag)
- static int mvs_task_prep_ata(struct mvs_info *mvi,
- 			     struct mvs_task_exec_info *tei)
- {
--	struct sas_ha_struct *sha = mvi->sas;
- 	struct sas_task *task = tei->task;
- 	struct domain_device *dev = task->dev;
- 	struct mvs_device *mvi_dev = dev->lldd_dev;
- 	struct mvs_cmd_hdr *hdr = tei->hdr;
- 	struct asd_sas_port *sas_port = dev->port;
--	struct sas_phy *sphy = dev->phy;
--	struct asd_sas_phy *sas_phy = sha->sas_phy[sphy->number];
- 	struct mvs_slot_info *slot;
- 	void *buf_prd;
- 	u32 tag = tei->tag, hdr_tag;
-@@ -468,7 +465,7 @@ static int mvs_task_prep_ata(struct mvs_info *mvi,
- 	slot->tx = mvi->tx_prod;
- 	del_q = TXQ_MODE_I | tag |
- 		(TXQ_CMD_STP << TXQ_CMD_SHIFT) |
--		(MVS_PHY_ID << TXQ_PHY_SHIFT) |
-+		((sas_port->phy_mask & TXQ_PHY_MASK) << TXQ_PHY_SHIFT) |
- 		(mvi_dev->taskfileset << TXQ_SRS_SHIFT);
- 	mvi->tx[mvi->tx_prod] = cpu_to_le32(del_q);
- 
--- 
-2.3.6
-
-
-From 287189f739322ef2f2b7698e613c85e7be8c9b9c Mon Sep 17 00:00:00 2001
-From: Sifan Naeem <sifan.naeem@imgtec.com>
-Date: Tue, 10 Feb 2015 07:41:56 -0300
-Subject: [PATCH 176/219] rc: img-ir: fix error in parameters passed to
- free_irq()
-Cc: mpagano@gentoo.org
-
-commit 80ccf4ad06dc9d2f06a8347b2d309cdc959f72b3 upstream.
-
-img_ir_remove() passes a pointer to the ISR function as the 2nd
-parameter to free_irq() instead of a pointer to the device data
-structure.
-This issue causes unloading the img-ir module to fail with the warning
-below after building and loading img-ir as a module.
-
-WARNING: CPU: 2 PID: 155 at ../kernel/irq/manage.c:1278
-__free_irq+0xb4/0x214() Trying to free already-free IRQ 58
-Modules linked in: img_ir(-)
-CPU: 2 PID: 155 Comm: rmmod Not tainted 3.14.0 #55 ...
-Call Trace:
-...
-[<8048d420>] __free_irq+0xb4/0x214
-[<8048d6b4>] free_irq+0xac/0xf4
-[<c009b130>] img_ir_remove+0x54/0xd4 [img_ir] [<8073ded0>]
-platform_drv_remove+0x30/0x54 ...
-
-Fixes: 160a8f8aec4d ("[media] rc: img-ir: add base driver")
-
-Signed-off-by: Sifan Naeem <sifan.naeem@imgtec.com>
-Acked-by: James Hogan <james.hogan@imgtec.com>
-Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/media/rc/img-ir/img-ir-core.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/media/rc/img-ir/img-ir-core.c b/drivers/media/rc/img-ir/img-ir-core.c
-index 77c78de..7020659 100644
---- a/drivers/media/rc/img-ir/img-ir-core.c
-+++ b/drivers/media/rc/img-ir/img-ir-core.c
-@@ -146,7 +146,7 @@ static int img_ir_remove(struct platform_device *pdev)
- {
- 	struct img_ir_priv *priv = platform_get_drvdata(pdev);
- 
--	free_irq(priv->irq, img_ir_isr);
-+	free_irq(priv->irq, priv);
- 	img_ir_remove_hw(priv);
- 	img_ir_remove_raw(priv);
+ 	btrfs_double_lock(src, loff, dst, dst_loff, len);
  
--- 
-2.3.6
-
-
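The invariant being restored: free_irq()'s second argument is the dev_id cookie, and it must be the exact pointer that was given to request_irq(), since that cookie identifies which handler to detach on a shared line. In miniature (the flags shown are illustrative, not necessarily what the driver uses):

  /* probe */
  ret = request_irq(priv->irq, img_ir_isr, IRQF_SHARED, "img-ir", priv);

  /* remove: pass the same cookie, never the ISR function pointer */
  free_irq(priv->irq, priv);
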
-From ecfdbe6a56ddd74036337f651bb2bd933341faa7 Mon Sep 17 00:00:00 2001
-From: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
-Date: Tue, 10 Mar 2015 11:37:14 -0300
-Subject: [PATCH 177/219] stk1160: Make sure current buffer is released
-Cc: mpagano@gentoo.org
-
-commit aeff09276748b66072f2db2e668cec955cf41959 upstream.
-
-The available (i.e. not yet used) buffers are returned by
-stk1160_clear_queue() on the stop_streaming() path. However, this is
-insufficient: the current buffer must be released as well. Fix it.
-
-Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
-Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
-Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/media/usb/stk1160/stk1160-v4l.c | 17 +++++++++++++++--
- 1 file changed, 15 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
-index 65a326c..749ad56 100644
---- a/drivers/media/usb/stk1160/stk1160-v4l.c
-+++ b/drivers/media/usb/stk1160/stk1160-v4l.c
-@@ -240,6 +240,11 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
- 	if (mutex_lock_interruptible(&dev->v4l_lock))
- 		return -ERESTARTSYS;
+ 	ret = extent_same_check_offsets(src, loff, len);
+@@ -3626,6 +3629,11 @@ static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd,
+ 	if (off + len == src->i_size)
+ 		len = ALIGN(src->i_size, bs) - off;
  
++	if (len == 0) {
++		ret = 0;
++		goto out_unlock;
++	}
++
+ 	/* verify the end result is block aligned */
+ 	if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) ||
+ 	    !IS_ALIGNED(destoff, bs))
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 883b936..45ea704 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -364,22 +364,42 @@ const struct xattr_handler *btrfs_xattr_handlers[] = {
+ /*
+  * Check if the attribute is in a supported namespace.
+  *
+- * This applied after the check for the synthetic attributes in the system
++ * This is applied after the check for the synthetic attributes in the system
+  * namespace.
+  */
+-static bool btrfs_is_valid_xattr(const char *name)
++static int btrfs_is_valid_xattr(const char *name)
+ {
+-	return !strncmp(name, XATTR_SECURITY_PREFIX,
+-			XATTR_SECURITY_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) ||
+-		!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN);
++	int len = strlen(name);
++	int prefixlen = 0;
++
++	if (!strncmp(name, XATTR_SECURITY_PREFIX,
++			XATTR_SECURITY_PREFIX_LEN))
++		prefixlen = XATTR_SECURITY_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
++		prefixlen = XATTR_SYSTEM_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
++		prefixlen = XATTR_TRUSTED_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
++		prefixlen = XATTR_USER_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
++		prefixlen = XATTR_BTRFS_PREFIX_LEN;
++	else
++		return -EOPNOTSUPP;
++
 +	/*
-+	 * Once URBs are cancelled, the URB complete handler
-+	 * won't be running. This is required to safely release the
-+	 * current buffer (dev->isoc_ctl.buf).
++	 * The name cannot consist of just prefix
 +	 */
- 	stk1160_cancel_isoc(dev);
- 
- 	/*
-@@ -620,8 +625,16 @@ void stk1160_clear_queue(struct stk1160 *dev)
- 		stk1160_info("buffer [%p/%d] aborted\n",
- 				buf, buf->vb.v4l2_buf.index);
- 	}
--	/* It's important to clear current buffer */
--	dev->isoc_ctl.buf = NULL;
-+
-+	/* It's important to release the current buffer */
-+	if (dev->isoc_ctl.buf) {
-+		buf = dev->isoc_ctl.buf;
-+		dev->isoc_ctl.buf = NULL;
++	if (len <= prefixlen)
++		return -EINVAL;
 +
-+		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
-+		stk1160_info("buffer [%p/%d] aborted\n",
-+				buf, buf->vb.v4l2_buf.index);
-+	}
- 	spin_unlock_irqrestore(&dev->buf_lock, flags);
++	return 0;
  }
  
--- 
-2.3.6
-
-
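The ordering here is the whole fix: the URB completion handler is the other writer of dev->isoc_ctl.buf, so it must be stopped before the in-flight buffer can be returned. Condensed from the patch (not a complete function):

  stk1160_cancel_isoc(dev);       /* after this, no URB callbacks run */

  spin_lock_irqsave(&dev->buf_lock, flags);
  if (dev->isoc_ctl.buf) {        /* release the in-flight buffer too */
          buf = dev->isoc_ctl.buf;
          dev->isoc_ctl.buf = NULL;
          vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
  }
  spin_unlock_irqrestore(&dev->buf_lock, flags);
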
-From d9bc10f7ccda1d662f3cd98f0949a03fe27b69e4 Mon Sep 17 00:00:00 2001
-From: Yann Droneaud <ydroneaud@opteya.com>
-Date: Mon, 13 Apr 2015 14:56:22 +0200
-Subject: [PATCH 178/219] IB/core: disallow registering 0-sized memory region
-Cc: mpagano@gentoo.org
-
-commit 8abaae62f3fdead8f4ce0ab46b4ab93dee39bab2 upstream.
-
-If ib_umem_get() is called with a size equal to 0 and a
-non-page-aligned address, one page will be pinned and a
-0-sized umem will be returned to the caller.
-
-This should not be allowed: it's not expected for a memory
-region to have a size equal to 0.
-
-This patch adds a check to explicitly refuse to register
-a 0-sized region.
-
-Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
-Cc: Shachar Raindel <raindel@mellanox.com>
-Cc: Jack Morgenstein <jackm@mellanox.com>
-Cc: Or Gerlitz <ogerlitz@mellanox.com>
-Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
-Signed-off-by: Doug Ledford <dledford@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/core/umem.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
-index 8c014b5..9ac4068 100644
---- a/drivers/infiniband/core/umem.c
-+++ b/drivers/infiniband/core/umem.c
-@@ -99,6 +99,9 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
- 	if (dmasync)
- 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
- 
-+	if (!size)
-+		return ERR_PTR(-EINVAL);
+ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 		       void *buffer, size_t size)
+ {
++	int ret;
 +
  	/*
- 	 * If the combination of the addr and size requested for this memory
- 	 * region causes an integer overflow, return error.
--- 
-2.3.6
-
-
-From d0ddb13fc24a64a940e8050ea076e59bb04597f4 Mon Sep 17 00:00:00 2001
-From: Yann Droneaud <ydroneaud@opteya.com>
-Date: Mon, 13 Apr 2015 14:56:23 +0200
-Subject: [PATCH 179/219] IB/core: don't disallow registering region starting
- at 0x0
-Cc: mpagano@gentoo.org
-
-commit 66578b0b2f69659f00b6169e6fe7377c4b100d18 upstream.
-
-In a call to ib_umem_get(), if the address is 0x0 and the size is
-already page-aligned, the check added in commit 8494057ab5e4
-("IB/uverbs: Prevent integer overflow in ib_umem_get address
-arithmetic") will refuse to register a memory region that
-could otherwise be valid (provided vm.mmap_min_addr sysctl
-and mmap_low_allowed SELinux knobs allow userspace to map
-something at address 0x0).
-
-This patch allows such registration again: ib_umem_get()
-should not care about the base address, provided the memory
-can be pinned with get_user_pages().
-
-There are two possible overflows, in (addr + size) and in
-PAGE_ALIGN(addr + size); this patch keeps ensuring that neither
-happens, while allowing memory at address 0x0 to be pinned.
-The case of a size equal to 0 no longer needs (partial) handling
-here, as 0-length memory regions are disallowed by an earlier
-check.
-
-Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
-Cc: Shachar Raindel <raindel@mellanox.com>
-Cc: Jack Morgenstein <jackm@mellanox.com>
-Cc: Or Gerlitz <ogerlitz@mellanox.com>
-Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
-Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
-Reviewed-by: Haggai Eran <haggaie@mellanox.com>
-Signed-off-by: Doug Ledford <dledford@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/core/umem.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
-index 9ac4068..38acb3c 100644
---- a/drivers/infiniband/core/umem.c
-+++ b/drivers/infiniband/core/umem.c
-@@ -106,8 +106,8 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
- 	 * If the combination of the addr and size requested for this memory
- 	 * region causes an integer overflow, return error.
- 	 */
--	if ((PAGE_ALIGN(addr + size) <= size) ||
--	    (PAGE_ALIGN(addr + size) <= addr))
-+	if (((addr + size) < addr) ||
-+	    PAGE_ALIGN(addr + size) < (addr + size))
- 		return ERR_PTR(-EINVAL);
- 
- 	if (!can_do_mlock())
--- 
-2.3.6
-
-
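The new pair of checks is easy to exercise in userspace with a few boundary values; both wraparounds are caught while the addr = 0x0 case now passes:

  #include <stdio.h>
  #include <limits.h>

  #define PAGE_SIZE  4096UL
  #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

  static int rejected(unsigned long addr, unsigned long size)
  {
          return ((addr + size) < addr) ||
                 (PAGE_ALIGN(addr + size) < (addr + size));
  }

  int main(void)
  {
          /* addr 0x0 with a page-aligned size is now accepted */
          printf("addr=0, size=2 pages : %s\n",
                 rejected(0, 2 * PAGE_SIZE) ? "rejected" : "ok");
          /* addr + size wraps around -> rejected */
          printf("addr+size overflows  : %s\n",
                 rejected(ULONG_MAX - 100, 200) ? "rejected" : "ok");
          /* PAGE_ALIGN(addr + size) wraps around -> rejected */
          printf("PAGE_ALIGN overflows : %s\n",
                 rejected(ULONG_MAX - PAGE_SIZE, PAGE_SIZE) ? "rejected" : "ok");
          return 0;
  }
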
-From 7fc80a4ea6d5b307470a6bb165b293e334b22c20 Mon Sep 17 00:00:00 2001
-From: Erez Shitrit <erezsh@mellanox.com>
-Date: Thu, 2 Apr 2015 13:39:05 +0300
-Subject: [PATCH 180/219] IB/mlx4: Fix WQE LSO segment calculation
-Cc: mpagano@gentoo.org
-
-commit ca9b590caa17bcbbea119594992666e96cde9c2f upstream.
-
-The current code subtracts the size of the packet headers from the
-mss (which is the gso_size from the kernel skb).
-
-It shouldn't do that, because the mss that comes from the stack
-(e.g. IPoIB) includes only the TCP payload, without the headers.
-
-The result is an indication to the HW that each packet it sends is
-smaller than it could be, and too many packets will be sent for big
-messages.
-
-An easy way to demonstrate one more aspect of the problem is to
-configure the ipoib mtu to be less than 2*hlen (2*56) and then run an
-app sending big TCP messages. This tells the HW to send packets with
-a giant length (a negative value which under unsigned arithmetic
-becomes a huge positive one) and the QP moves to the SQE state.
-
-Fixes: b832be1e4007 ('IB/mlx4: Add IPoIB LSO support')
-Reported-by: Matthew Finlay <matt@mellanox.com>
-Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
-Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
-Signed-off-by: Doug Ledford <dledford@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/hw/mlx4/qp.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
-diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
-index ed2bd67..fbde33a 100644
---- a/drivers/infiniband/hw/mlx4/qp.c
-+++ b/drivers/infiniband/hw/mlx4/qp.c
-@@ -2605,8 +2605,7 @@ static int build_lso_seg(struct mlx4_wqe_lso_seg *wqe, struct ib_send_wr *wr,
- 
- 	memcpy(wqe->header, wr->wr.ud.header, wr->wr.ud.hlen);
+ 	 * If this is a request for a synthetic attribute in the system.*
+ 	 * namespace use the generic infrastructure to resolve a handler
+@@ -388,8 +408,9 @@ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_getxattr(dentry, name, buffer, size);
  
--	*lso_hdr_sz  = cpu_to_be32((wr->wr.ud.mss - wr->wr.ud.hlen) << 16 |
--				   wr->wr.ud.hlen);
-+	*lso_hdr_sz  = cpu_to_be32(wr->wr.ud.mss << 16 | wr->wr.ud.hlen);
- 	*lso_seg_len = halign;
- 	return 0;
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 	return __btrfs_getxattr(dentry->d_inode, name, buffer, size);
  }
--- 
-2.3.6
-
-
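The unsigned underflow the message describes is plain C arithmetic, shown below with illustrative numbers (the real field is a 32-bit word in the WQE; mss = 48 models an ipoib mtu below 2*hlen):

  #include <stdio.h>

  int main(void)
  {
          unsigned int mss = 48, hlen = 56;   /* mss < hlen case */

          unsigned int bad  = ((mss - hlen) << 16) | hlen;  /* old code  */
          unsigned int good = (mss << 16) | hlen;           /* fixed code */

          printf("buggy word 0x%08x -> HW sees mss %u\n", bad,  bad  >> 16);
          printf("fixed word 0x%08x -> HW sees mss %u\n", good, good >> 16);
          return 0;
  }
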
-From 6fb5785d6c07d834567ccf3f3ba2df9c3803b28b Mon Sep 17 00:00:00 2001
-From: Sagi Grimberg <sagig@mellanox.com>
-Date: Tue, 14 Apr 2015 18:08:13 +0300
-Subject: [PATCH 181/219] IB/iser: Fix wrong calculation of protection buffer
- length
-Cc: mpagano@gentoo.org
-
-commit a065fe6aa25ba6ba93c02dc13486131bb3c64d5f upstream.
-
-This length miscalculation may cause silent data corruption
-in the DIX case and cause the device to reference an unmapped area.
-
-Fixes: d77e65350f2d ('libiscsi, iser: Adjust data_length to include protection information')
-Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
-Signed-off-by: Doug Ledford <dledford@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/infiniband/ulp/iser/iser_initiator.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
-index 20e859a..76eb57b 100644
---- a/drivers/infiniband/ulp/iser/iser_initiator.c
-+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
-@@ -409,8 +409,8 @@ int iser_send_command(struct iscsi_conn *conn,
- 	if (scsi_prot_sg_count(sc)) {
- 		prot_buf->buf  = scsi_prot_sglist(sc);
- 		prot_buf->size = scsi_prot_sg_count(sc);
--		prot_buf->data_len = data_buf->data_len >>
--				     ilog2(sc->device->sector_size) * 8;
-+		prot_buf->data_len = (data_buf->data_len >>
-+				     ilog2(sc->device->sector_size)) * 8;
- 	}
  
- 	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
--- 
-2.3.6
-
-
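The bug is pure C operator precedence: '*' binds tighter than '>>', so the old expression shifted by ilog2(sector_size) * 8 bits instead of multiplying the sector count by 8 (the size of one protection-information tuple). A runnable demonstration with 512-byte sectors:

  #include <stdio.h>

  static unsigned ilog2u(unsigned v)
  {
          unsigned r = 0;
          while (v >>= 1)
                  r++;
          return r;
  }

  int main(void)
  {
          unsigned long long data_len = 1048576;   /* 1 MiB of data   */
          unsigned sector = 512;                   /* 8 bytes PI each */

          /* Old parse: data_len >> (ilog2(sector) * 8) - a 72-bit
           * shift, which is undefined behaviour on a 64-bit type. */
          printf("old code shifts by %u bits\n", ilog2u(sector) * 8);

          /* Fixed parse: (sector count) * 8 bytes of protection info. */
          printf("fixed PI length: %llu bytes\n",
                 (data_len >> ilog2u(sector)) * 8);
          return 0;
  }
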
-From c62b024af945d20e01c3e8c416b9e00d137e6f02 Mon Sep 17 00:00:00 2001
-From: Rabin Vincent <rabin@rab.in>
-Date: Mon, 13 Apr 2015 22:30:12 +0200
-Subject: [PATCH 182/219] tracing: Handle ftrace_dump() atomic context in
- graph_trace_open()
-Cc: mpagano@gentoo.org
-
-commit ef99b88b16bee753fa51207abdc58ae660453ec6 upstream.
-
-graph_trace_open() can be called in atomic context from ftrace_dump().
-Use GFP_ATOMIC for the memory allocations when that's the case, in order
-to avoid the following splat.
-
- BUG: sleeping function called from invalid context at mm/slab.c:2849
- in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0
- Backtrace:
- ..
- [<8004dc94>] (__might_sleep) from [<801371f4>] (kmem_cache_alloc_trace+0x160/0x238)
-  r7:87800040 r6:000080d0 r5:810d16e8 r4:000080d0
- [<80137094>] (kmem_cache_alloc_trace) from [<800cbd60>] (graph_trace_open+0x30/0xd0)
-  r10:00000100 r9:809171a8 r8:00008e28 r7:810d16f0 r6:00000001 r5:810d16e8
-  r4:810d16f0
- [<800cbd30>] (graph_trace_open) from [<800c79c4>] (trace_init_global_iter+0x50/0x9c)
-  r8:00008e28 r7:808c853c r6:00000001 r5:810d16e8 r4:810d16f0 r3:800cbd30
- [<800c7974>] (trace_init_global_iter) from [<800c7aa0>] (ftrace_dump+0x90/0x2ec)
-  r4:810d2580 r3:00000000
- [<800c7a10>] (ftrace_dump) from [<80414b2c>] (sysrq_ftrace_dump+0x1c/0x20)
-  r10:00000100 r9:809171a8 r8:808f6e7c r7:00000001 r6:00000007 r5:0000007a
-  r4:808d5394
- [<80414b10>] (sysrq_ftrace_dump) from [<800169b8>] (return_to_handler+0x0/0x18)
- [<80415498>] (__handle_sysrq) from [<800169b8>] (return_to_handler+0x0/0x18)
-  r8:808c8100 r7:808c8444 r6:00000101 r5:00000010 r4:84eb3210
- [<80415668>] (handle_sysrq) from [<800169b8>] (return_to_handler+0x0/0x18)
- [<8042a760>] (pl011_int) from [<800169b8>] (return_to_handler+0x0/0x18)
-  r10:809171bc r9:809171a8 r8:00000001 r7:00000026 r6:808c6000 r5:84f01e60
-  r4:8454fe00
- [<8007782c>] (handle_irq_event_percpu) from [<80077b44>] (handle_irq_event+0x4c/0x6c)
-  r10:808c7ef0 r9:87283e00 r8:00000001 r7:00000000 r6:8454fe00 r5:84f01e60
-  r4:84f01e00
- [<80077af8>] (handle_irq_event) from [<8007aa28>] (handle_fasteoi_irq+0xf0/0x1ac)
-  r6:808f52a4 r5:84f01e60 r4:84f01e00 r3:00000000
- [<8007a938>] (handle_fasteoi_irq) from [<80076dc0>] (generic_handle_irq+0x3c/0x4c)
-  r6:00000026 r5:00000000 r4:00000026 r3:8007a938
- [<80076d84>] (generic_handle_irq) from [<80077128>] (__handle_domain_irq+0x8c/0xfc)
-  r4:808c1e38 r3:0000002e
- [<8007709c>] (__handle_domain_irq) from [<800087b8>] (gic_handle_irq+0x34/0x6c)
-  r10:80917748 r9:00000001 r8:88802100 r7:808c7ef0 r6:808c8fb0 r5:00000015
-  r4:8880210c r3:808c7ef0
- [<80008784>] (gic_handle_irq) from [<80014044>] (__irq_svc+0x44/0x7c)
-
-Link: http://lkml.kernel.org/r/1428953721-31349-1-git-send-email-rabin@rab.in
-Link: http://lkml.kernel.org/r/1428957012-2319-1-git-send-email-rabin@rab.in
-
-Signed-off-by: Rabin Vincent <rabin@rab.in>
-Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- kernel/trace/trace_functions_graph.c | 8 ++++++--
- 1 file changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
-index 2d25ad1..b6fce36 100644
---- a/kernel/trace/trace_functions_graph.c
-+++ b/kernel/trace/trace_functions_graph.c
-@@ -1309,15 +1309,19 @@ void graph_trace_open(struct trace_iterator *iter)
+@@ -397,6 +418,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 		   size_t size, int flags)
  {
- 	/* pid and depth on the last trace processed */
- 	struct fgraph_data *data;
-+	gfp_t gfpflags;
- 	int cpu;
- 
- 	iter->private = NULL;
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
++	int ret;
  
--	data = kzalloc(sizeof(*data), GFP_KERNEL);
-+	/* We can be called in atomic context via ftrace_dump() */
-+	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
-+
-+	data = kzalloc(sizeof(*data), gfpflags);
- 	if (!data)
- 		goto out_err;
+ 	/*
+ 	 * The permission on security.* and system.* is not checked
+@@ -413,8 +435,9 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_setxattr(dentry, name, value, size, flags);
  
--	data->cpu_data = alloc_percpu(struct fgraph_cpu_data);
-+	data->cpu_data = alloc_percpu_gfp(struct fgraph_cpu_data, gfpflags);
- 	if (!data->cpu_data)
- 		goto out_err_free;
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
  
--- 
-2.3.6
-
-
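The rule being applied: allocations that may sleep (GFP_KERNEL) are illegal in atomic context, so context-sensitive code picks its gfp flags at runtime. The heart of the fix, condensed:

  /* ftrace_dump() can reach this in IRQ-off/atomic context, where
   * sleeping allocations are forbidden. */
  gfp_t gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC
                                                    : GFP_KERNEL;

  data = kzalloc(sizeof(*data), gfpflags);
  data->cpu_data = alloc_percpu_gfp(struct fgraph_cpu_data, gfpflags);
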
-From aaeb6f4d936e550fef1f068d2e883a23f757d5f5 Mon Sep 17 00:00:00 2001
-From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
-Date: Thu, 16 Apr 2015 13:44:44 +0900
-Subject: [PATCH 183/219] tracing: Fix incorrect enabling of trace events by
- boot cmdline
-Cc: mpagano@gentoo.org
-
-commit 84fce9db4d7eaebd6cb2ee30c15da6d4e4daf846 upstream.
-
-There is a problem where trace events are not properly enabled from
-the boot cmdline. If we pass "trace_event=kmem:mm_page_alloc" on the
-boot cmdline, it enables all kmem trace events, and not just the
-page_alloc event.
-
-This is caused by the parsing mechanism. When we parse the cmdline,
-the buffer contents are modified by tokenization. And, if we use this
-buffer again, we will get the wrong result.
-
-Unfortunately, this buffer is accessed three times to set trace events
-properly at boot time. So, we need to handle this situation.
-
-There is already code handling ",", but we need another for ":".
-This patch adds it.
-
-Link: http://lkml.kernel.org/r/1429159484-22977-1-git-send-email-iamjoonsoo.kim@lge.com
-
-Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
-[ added missing return ret; ]
-Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- kernel/trace/trace_events.c | 9 ++++++++-
- 1 file changed, 8 insertions(+), 1 deletion(-)
-
-diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
-index db54dda..a9c10a3 100644
---- a/kernel/trace/trace_events.c
-+++ b/kernel/trace/trace_events.c
-@@ -565,6 +565,7 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
- static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
+@@ -430,6 +453,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ int btrfs_removexattr(struct dentry *dentry, const char *name)
  {
- 	char *event = NULL, *sub = NULL, *match;
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
 +	int ret;
  
  	/*
- 	 * The buf format can be <subsystem>:<event-name>
-@@ -590,7 +591,13 @@ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
- 			event = NULL;
- 	}
- 
--	return __ftrace_set_clr_event(tr, match, sub, event, set);
-+	ret = __ftrace_set_clr_event(tr, match, sub, event, set);
-+
-+	/* Put back the colon to allow this to be called again */
-+	if (buf)
-+		*(buf - 1) = ':';
-+
-+	return ret;
- }
- 
- /**
--- 
-2.3.6
-
-
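The underlying issue is that this style of tokenization writes a NUL over the separator, so a buffer that must be parsed more than once has to be repaired after each pass. A runnable userspace model of the fix:

  #include <stdio.h>
  #include <string.h>

  /* One parsing pass over "system:event": tokenizing clobbers the ':',
   * so the separator must be restored for the next pass. */
  static void parse_once(char *buf)
  {
          char *event = strchr(buf, ':');

          if (event)
                  *event++ = '\0';            /* clobbers the ':' */
          printf("system=%s event=%s\n", buf, event ? event : "(all)");
          if (event)
                  *(event - 1) = ':';         /* put the ':' back */
  }

  int main(void)
  {
          char buf[] = "kmem:mm_page_alloc";

          parse_once(buf);    /* first of the three boot-time passes */
          parse_once(buf);    /* later passes still see the ':'      */
          return 0;
  }
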
-From c5bc4117a935b13fdc40db4753b9d32307d2e304 Mon Sep 17 00:00:00 2001
-From: Wolfram Sang <wsa+renesas@sang-engineering.com>
-Date: Thu, 23 Apr 2015 10:29:09 +0200
-Subject: [PATCH 184/219] i2c: mux: use proper dev when removing "channel-X"
- symlinks
-Cc: mpagano@gentoo.org
-
-commit 133778482ec6c8fde69406be380333963627c17a upstream.
-
-Those symlinks are created for the mux_dev, so we need to remove them
-from there. Currently, removal breaks for muxes where the mux_dev is
-not the device of the parent adapter, like this:
-
-[   78.234644] WARNING: CPU: 0 PID: 365 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x5c/0x78()
-[   78.242438] sysfs: cannot create duplicate filename '/devices/platform/i2cbus@8/channel-0'
-
-Remove confusing comments while we are here.
-
-Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
-Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-Fixes: c9449affad2ae0
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/i2c/i2c-mux.c | 8 +++++---
- 1 file changed, 5 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
-index 593f7ca..06cc1ff 100644
---- a/drivers/i2c/i2c-mux.c
-+++ b/drivers/i2c/i2c-mux.c
-@@ -32,8 +32,9 @@ struct i2c_mux_priv {
- 	struct i2c_algorithm algo;
- 
- 	struct i2c_adapter *parent;
--	void *mux_priv;	/* the mux chip/device */
--	u32  chan_id;	/* the channel id */
-+	struct device *mux_dev;
-+	void *mux_priv;
-+	u32 chan_id;
+ 	 * The permission on security.* and system.* is not checked
+@@ -446,8 +470,9 @@ int btrfs_removexattr(struct dentry *dentry, const char *name)
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_removexattr(dentry, name);
  
- 	int (*select)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
- 	int (*deselect)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
-@@ -119,6 +120,7 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
  
- 	/* Set up private adapter data */
- 	priv->parent = parent;
-+	priv->mux_dev = mux_dev;
- 	priv->mux_priv = mux_priv;
- 	priv->chan_id = chan_id;
- 	priv->select = select;
-@@ -203,7 +205,7 @@ void i2c_del_mux_adapter(struct i2c_adapter *adap)
- 	char symlink_name[20];
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 28fe71a..aae7011 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1865,7 +1865,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			  struct inode *inode)
+ {
+ 	struct inode *dir = dentry->d_parent->d_inode;
+-	struct buffer_head *bh;
++	struct buffer_head *bh = NULL;
+ 	struct ext4_dir_entry_2 *de;
+ 	struct ext4_dir_entry_tail *t;
+ 	struct super_block *sb;
+@@ -1889,14 +1889,14 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return retval;
+ 		if (retval == 1) {
+ 			retval = 0;
+-			return retval;
++			goto out;
+ 		}
+ 	}
  
- 	snprintf(symlink_name, sizeof(symlink_name), "channel-%u", priv->chan_id);
--	sysfs_remove_link(&adap->dev.parent->kobj, symlink_name);
-+	sysfs_remove_link(&priv->mux_dev->kobj, symlink_name);
+ 	if (is_dx(dir)) {
+ 		retval = ext4_dx_add_entry(handle, dentry, inode);
+ 		if (!retval || (retval != ERR_BAD_DX_DIR))
+-			return retval;
++			goto out;
+ 		ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ 		dx_fallback++;
+ 		ext4_mark_inode_dirty(handle, dir);
+@@ -1908,14 +1908,15 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return PTR_ERR(bh);
  
- 	sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
- 	i2c_del_adapter(adap);
--- 
-2.3.6
-
-
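
The i2c-mux fix above is easiest to feel in userspace terms: a link must be
removed relative to the directory it was created under. Below is a small
analogy with symlinkat()/unlinkat(), using made-up directories in place of
the two kobjects; it is a sketch of the idea, not the driver code.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Stand-ins for the parent adapter's kobject and the mux_dev. */
            int parent = open("/tmp", O_DIRECTORY | O_RDONLY);
            int mux_dev = open("/var/tmp", O_DIRECTORY | O_RDONLY);

            if (parent < 0 || mux_dev < 0)
                    return 1;

            /* The "channel-0" link is created under mux_dev... */
            symlinkat("/etc/hostname", mux_dev, "channel-0");

            /* ...so trying to remove it via the parent fails; in sysfs
             * the stale link then caused the duplicate-filename warning. */
            if (unlinkat(parent, "channel-0", 0) < 0)
                    perror("unlinkat via parent");

            /* Removing it from where it was created succeeds. */
            if (unlinkat(mux_dev, "channel-0", 0) == 0)
                    puts("removed from mux_dev");

            close(parent);
            close(mux_dev);
            return 0;
    }
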
-From 7a86d818f4f71fdd0e1d16c07026e2b9a52be2d6 Mon Sep 17 00:00:00 2001
-From: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Date: Mon, 20 Apr 2015 15:14:47 -0700
-Subject: [PATCH 185/219] i2c: rk3x: report number of messages transmitted
-Cc: mpagano@gentoo.org
-
-commit c6cbfb91b878224e78408a2e15901c79de77115a upstream.
-
-The master_xfer() method should return the number of i2c messages
-transferred, but on Rockchip we were usually returning just 1, which
-caused trouble for users that actually check the number of transferred
-messages rather than just checking for negative error codes.
-
-Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/i2c/busses/i2c-rk3x.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
-index 5f96b1b..019d542 100644
---- a/drivers/i2c/busses/i2c-rk3x.c
-+++ b/drivers/i2c/busses/i2c-rk3x.c
-@@ -833,7 +833,7 @@ static int rk3x_i2c_xfer(struct i2c_adapter *adap,
- 	clk_disable(i2c->clk);
- 	spin_unlock_irqrestore(&i2c->lock, flags);
+ 		retval = add_dirent_to_buf(handle, dentry, inode, NULL, bh);
+-		if (retval != -ENOSPC) {
+-			brelse(bh);
+-			return retval;
+-		}
++		if (retval != -ENOSPC)
++			goto out;
  
--	return ret;
-+	return ret < 0 ? ret : num;
- }
+ 		if (blocks == 1 && !dx_fallback &&
+-		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX))
+-			return make_indexed_dir(handle, dentry, inode, bh);
++		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX)) {
++			retval = make_indexed_dir(handle, dentry, inode, bh);
++			bh = NULL; /* make_indexed_dir releases bh */
++			goto out;
++		}
+ 		brelse(bh);
+ 	}
+ 	bh = ext4_append(handle, dir, &block);
+@@ -1931,6 +1932,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 	}
  
- static u32 rk3x_i2c_func(struct i2c_adapter *adap)
--- 
-2.3.6
-
-
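
The rk3x change above is a one-line instance of a convention worth spelling
out: an i2c master_xfer() implementation reports the number of messages
completed, while negative values stay errors. A freestanding sketch follows;
fake_xfer() and its arguments are invented for illustration.

    #include <stdio.h>

    /* Returns num (messages transferred) on success, a negative errno on
     * failure -- never a bare 0/1 success flag. */
    static int fake_xfer(int num, int hw_status)
    {
            int ret = hw_status;  /* result of driving the hardware */

            /* The pre-fix driver reportedly returned 1 here, so a caller
             * sending two messages saw a short transfer. */
            return ret < 0 ? ret : num;
    }

    int main(void)
    {
            printf("two messages, success: %d\n", fake_xfer(2, 0));
            printf("two messages, failure: %d\n", fake_xfer(2, -5));
            return 0;
    }
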
-From 184848b540e3c7df18a22b983319fa4f64acec15 Mon Sep 17 00:00:00 2001
-From: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
-Date: Thu, 16 Apr 2015 13:05:19 +0100
-Subject: [PATCH 186/219] i2c: Mark adapter devices with
- pm_runtime_no_callbacks
-Cc: mpagano@gentoo.org
-
-commit 6ada5c1e1b077ab98fc144d7ac132b4dcc0148ec upstream.
-
-Commit 523c5b89640e ("i2c: Remove support for legacy PM") removed the PM
-ops from the bus type, which causes the pm operations on the s3c2410
-adapter device to fail (-ENOSUPP in rpm_callback). The adapter device
-doesn't get bound to a driver and as such can't have its own pm_runtime
-callbacks. Previously this was fine as the bus callbacks would have been
-used, but now this can cause devices which use PM runtime and are
-attached over I2C to fail to resume.
-
-This commit fixes this issue by marking all adapter devices with
-pm_runtime_no_callbacks, since they can't have any.
-
-Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
-Acked-by: Beata Michalska <b.michalska@samsung.com>
-Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-Fixes: 523c5b89640e
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/i2c/i2c-core.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
-index edf274c..526c5a5 100644
---- a/drivers/i2c/i2c-core.c
-+++ b/drivers/i2c/i2c-core.c
-@@ -1410,6 +1410,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ 	retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
++out:
+ 	brelse(bh);
+ 	if (retval == 0)
+ 		ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 665ef5a..a563ddb 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -31,7 +31,7 @@
+ static struct hlist_head	nlm_files[FILE_NRHASH];
+ static DEFINE_MUTEX(nlm_file_mutex);
  
- 	dev_dbg(&adap->dev, "adapter [%s] registered\n", adap->name);
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ {
+ 	u32 *fhp = (u32*)f->data;
+diff --git a/fs/namei.c b/fs/namei.c
+index c83145a..caa38a2 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1591,7 +1591,8 @@ static inline int walk_component(struct nameidata *nd, struct path *path,
  
-+	pm_runtime_no_callbacks(&adap->dev);
-+
- #ifdef CONFIG_I2C_COMPAT
- 	res = class_compat_create_link(i2c_adapter_compat_class, &adap->dev,
- 				       adap->dev.parent);
--- 
-2.3.6
-
-
-From 00b2c92fe1b560e1a984edf0671f0feb7886a7ed Mon Sep 17 00:00:00 2001
-From: Mark Brown <broonie@kernel.org>
-Date: Wed, 15 Apr 2015 19:18:39 +0100
-Subject: [PATCH 187/219] i2c: core: Export bus recovery functions
-Cc: mpagano@gentoo.org
-
-commit c1c21f4e60ed4523292f1a89ff45a208bddd3849 upstream.
-
-Current -next fails to link an ARM allmodconfig because drivers that use
-the core recovery functions can be built as modules but those functions
-are not exported:
-
-ERROR: "i2c_generic_gpio_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
-ERROR: "i2c_generic_scl_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
-ERROR: "i2c_recover_bus" [drivers/i2c/busses/i2c-davinci.ko] undefined!
-
-Add exports to fix this.
-
-Fixes: 5f9296ba21b3c (i2c: Add bus recovery infrastructure)
-Signed-off-by: Mark Brown <broonie@kernel.org>
-Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/i2c/i2c-core.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
-index 526c5a5..8143162 100644
---- a/drivers/i2c/i2c-core.c
-+++ b/drivers/i2c/i2c-core.c
-@@ -596,6 +596,7 @@ int i2c_generic_scl_recovery(struct i2c_adapter *adap)
- 	adap->bus_recovery_info->set_scl(adap, 1);
- 	return i2c_generic_recovery(adap);
- }
-+EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
+ 	if (should_follow_link(path->dentry, follow)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				err = -ECHILD;
+ 				goto out_err;
+ 			}
+@@ -3047,7 +3048,8 @@ finish_lookup:
  
- int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ 	if (should_follow_link(path->dentry, !symlink_ok)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				error = -ECHILD;
+ 				goto out;
+ 			}
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 82ef140..4622ee3 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -632,14 +632,17 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
+  */
+ struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
  {
-@@ -610,6 +611,7 @@ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+-	struct mount *p, *res;
+-	res = p = __lookup_mnt(mnt, dentry);
++	struct mount *p, *res = NULL;
++	p = __lookup_mnt(mnt, dentry);
+ 	if (!p)
+ 		goto out;
++	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++		res = p;
+ 	hlist_for_each_entry_continue(p, mnt_hash) {
+ 		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+ 			break;
+-		res = p;
++		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++			res = p;
+ 	}
+ out:
+ 	return res;
+@@ -795,10 +798,8 @@ static void __touch_mnt_namespace(struct mnt_namespace *ns)
+ /*
+  * vfsmount lock must be held for write
+  */
+-static void detach_mnt(struct mount *mnt, struct path *old_path)
++static void unhash_mnt(struct mount *mnt)
+ {
+-	old_path->dentry = mnt->mnt_mountpoint;
+-	old_path->mnt = &mnt->mnt_parent->mnt;
+ 	mnt->mnt_parent = mnt;
+ 	mnt->mnt_mountpoint = mnt->mnt.mnt_root;
+ 	list_del_init(&mnt->mnt_child);
+@@ -811,6 +812,26 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
+ /*
+  * vfsmount lock must be held for write
+  */
++static void detach_mnt(struct mount *mnt, struct path *old_path)
++{
++	old_path->dentry = mnt->mnt_mountpoint;
++	old_path->mnt = &mnt->mnt_parent->mnt;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
++static void umount_mnt(struct mount *mnt)
++{
++	/* old mountpoint will be dropped when we can do that */
++	mnt->mnt_ex_mountpoint = mnt->mnt_mountpoint;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
+ void mnt_set_mountpoint(struct mount *mnt,
+ 			struct mountpoint *mp,
+ 			struct mount *child_mnt)
+@@ -1078,6 +1099,13 @@ static void mntput_no_expire(struct mount *mnt)
+ 	rcu_read_unlock();
  
- 	return ret;
- }
-+EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
+ 	list_del(&mnt->mnt_instance);
++
++	if (unlikely(!list_empty(&mnt->mnt_mounts))) {
++		struct mount *p, *tmp;
++		list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++			umount_mnt(p);
++		}
++	}
+ 	unlock_mount_hash();
  
- int i2c_recover_bus(struct i2c_adapter *adap)
- {
-@@ -619,6 +621,7 @@ int i2c_recover_bus(struct i2c_adapter *adap)
- 	dev_dbg(&adap->dev, "Trying i2c bus recovery\n");
- 	return adap->bus_recovery_info->recover_bus(adap);
+ 	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
+@@ -1319,49 +1347,63 @@ static inline void namespace_lock(void)
+ 	down_write(&namespace_sem);
  }
-+EXPORT_SYMBOL_GPL(i2c_recover_bus);
  
- static int i2c_device_probe(struct device *dev)
++enum umount_tree_flags {
++	UMOUNT_SYNC = 1,
++	UMOUNT_PROPAGATE = 2,
++	UMOUNT_CONNECTED = 4,
++};
+ /*
+  * mount_lock must be held
+  * namespace_sem must be held for write
+- * how = 0 => just this tree, don't propagate
+- * how = 1 => propagate; we know that nobody else has reference to any victims
+- * how = 2 => lazy umount
+  */
+-void umount_tree(struct mount *mnt, int how)
++static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
  {
--- 
-2.3.6
-
-
-From 87479d71ffe1c2b63f7621fefbdc1cedd95dd49d Mon Sep 17 00:00:00 2001
-From: Alex Deucher <alexander.deucher@amd.com>
-Date: Tue, 24 Feb 2015 11:29:21 -0500
-Subject: [PATCH 188/219] drm/radeon: fix doublescan modes (v2)
-Cc: mpagano@gentoo.org
-
-commit fd99a0943ffaa0320ea4f69d09ed188f950c0432 upstream.
-
-Use the correct flags for atom.
-
-v2: handle DRM_MODE_FLAG_DBLCLK
-
-Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/radeon/atombios_crtc.c | 8 ++++++--
- 1 file changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
-index 86807ee..9bd5611 100644
---- a/drivers/gpu/drm/radeon/atombios_crtc.c
-+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
-@@ -330,8 +330,10 @@ atombios_set_crtc_dtd_timing(struct drm_crtc *crtc,
- 		misc |= ATOM_COMPOSITESYNC;
- 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
- 		misc |= ATOM_INTERLACE;
--	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
-+	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
- 		misc |= ATOM_DOUBLE_CLOCK_MODE;
-+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
-+		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+-	HLIST_HEAD(tmp_list);
++	LIST_HEAD(tmp_list);
+ 	struct mount *p;
  
- 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
- 	args.ucCRTC = radeon_crtc->crtc_id;
-@@ -374,8 +376,10 @@ static void atombios_crtc_set_timing(struct drm_crtc *crtc,
- 		misc |= ATOM_COMPOSITESYNC;
- 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
- 		misc |= ATOM_INTERLACE;
--	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
-+	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
- 		misc |= ATOM_DOUBLE_CLOCK_MODE;
-+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
-+		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
++	if (how & UMOUNT_PROPAGATE)
++		propagate_mount_unlock(mnt);
++
++	/* Gather the mounts to umount */
+ 	for (p = mnt; p; p = next_mnt(p, mnt)) {
+-		hlist_del_init_rcu(&p->mnt_hash);
+-		hlist_add_head(&p->mnt_hash, &tmp_list);
++		p->mnt.mnt_flags |= MNT_UMOUNT;
++		list_move(&p->mnt_list, &tmp_list);
+ 	}
  
- 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
- 	args.ucCRTC = radeon_crtc->crtc_id;
--- 
-2.3.6
-
-
-From 7b645d942ed7101136f35bad5f6cb225c6e2adaa Mon Sep 17 00:00:00 2001
-From: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Date: Tue, 7 Apr 2015 22:28:50 +0900
-Subject: [PATCH 189/219] drm/exynos: Enable DP clock to fix display on
- Exynos5250 and other
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 1c363c7cccf64128087002b0779986ad16aff6dc upstream.
-
-After adding display power domain for Exynos5250 in commit
-2d2c9a8d0a4f ("ARM: dts: add display power domain for exynos5250") the
-display on Chromebook Snow and others stopped working after boot.
-
-Andrzej Hajda suggested the reason for this: the DP clock was disabled.
-This clock is required by Display Port and is enabled by the bootloader.
-However, when FIMD driver probing was deferred, the display power domain
-was turned off. This effectively reset the value of the DP clock enable
-register.
-
-When exynos-dp is later probed, the clock is not enabled and display is
-not properly configured:
-
-exynos-dp 145b0000.dp-controller: Timeout of video streamclk ok
-exynos-dp 145b0000.dp-controller: unable to config video
-
-Fixes: 2d2c9a8d0a4f ("ARM: dts: add display power domain for exynos5250")
-
-Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
-Reported-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
-Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
-Tested-by: Andreas Färber <afaerber@suse.de>
-Signed-off-by: Inki Dae <inki.dae@samsung.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/exynos/exynos_dp_core.c  | 10 ++++++++++
- drivers/gpu/drm/exynos/exynos_drm_fimd.c | 19 +++++++++++++++++++
- drivers/gpu/drm/exynos/exynos_drm_fimd.h | 15 +++++++++++++++
- include/video/samsung_fimd.h             |  6 ++++++
- 4 files changed, 50 insertions(+)
- create mode 100644 drivers/gpu/drm/exynos/exynos_drm_fimd.h
-
-diff --git a/drivers/gpu/drm/exynos/exynos_dp_core.c b/drivers/gpu/drm/exynos/exynos_dp_core.c
-index bf17a60..1dbfba5 100644
---- a/drivers/gpu/drm/exynos/exynos_dp_core.c
-+++ b/drivers/gpu/drm/exynos/exynos_dp_core.c
-@@ -32,10 +32,16 @@
- #include <drm/bridge/ptn3460.h>
+-	hlist_for_each_entry(p, &tmp_list, mnt_hash)
++	/* Hide the mounts from mnt_mounts */
++	list_for_each_entry(p, &tmp_list, mnt_list) {
+ 		list_del_init(&p->mnt_child);
++	}
  
- #include "exynos_dp_core.h"
-+#include "exynos_drm_fimd.h"
+-	if (how)
++	/* Add propagated mounts to the tmp_list */
++	if (how & UMOUNT_PROPAGATE)
+ 		propagate_umount(&tmp_list);
  
- #define ctx_from_connector(c)	container_of(c, struct exynos_dp_device, \
- 					connector)
+-	while (!hlist_empty(&tmp_list)) {
+-		p = hlist_entry(tmp_list.first, struct mount, mnt_hash);
+-		hlist_del_init_rcu(&p->mnt_hash);
++	while (!list_empty(&tmp_list)) {
++		bool disconnect;
++		p = list_first_entry(&tmp_list, struct mount, mnt_list);
+ 		list_del_init(&p->mnt_expire);
+ 		list_del_init(&p->mnt_list);
+ 		__touch_mnt_namespace(p->mnt_ns);
+ 		p->mnt_ns = NULL;
+-		if (how < 2)
++		if (how & UMOUNT_SYNC)
+ 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
  
-+static inline struct exynos_drm_crtc *dp_to_crtc(struct exynos_dp_device *dp)
-+{
-+	return to_exynos_crtc(dp->encoder->crtc);
-+}
+-		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
++		disconnect = !(((how & UMOUNT_CONNECTED) &&
++				mnt_has_parent(p) &&
++				(p->mnt_parent->mnt.mnt_flags & MNT_UMOUNT)) ||
++			       IS_MNT_LOCKED_AND_LAZY(p));
 +
- static inline struct exynos_dp_device *
- display_to_dp(struct exynos_drm_display *d)
- {
-@@ -1070,6 +1076,8 @@ static void exynos_dp_poweron(struct exynos_dp_device *dp)
++		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
++				 disconnect ? &unmounted : NULL);
+ 		if (mnt_has_parent(p)) {
+-			hlist_del_init(&p->mnt_mp_list);
+-			put_mountpoint(p->mnt_mp);
+ 			mnt_add_count(p->mnt_parent, -1);
+-			/* old mountpoint will be dropped when we can do that */
+-			p->mnt_ex_mountpoint = p->mnt_mountpoint;
+-			p->mnt_mountpoint = p->mnt.mnt_root;
+-			p->mnt_parent = p;
+-			p->mnt_mp = NULL;
++			if (!disconnect) {
++				/* Don't forget about p */
++				list_add_tail(&p->mnt_child, &p->mnt_parent->mnt_mounts);
++			} else {
++				umount_mnt(p);
++			}
  		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
  	}
+@@ -1447,14 +1489,14 @@ static int do_umount(struct mount *mnt, int flags)
  
-+	fimd_dp_clock_enable(dp_to_crtc(dp), true);
-+
- 	clk_prepare_enable(dp->clock);
- 	exynos_dp_phy_init(dp);
- 	exynos_dp_init_dp(dp);
-@@ -1094,6 +1102,8 @@ static void exynos_dp_poweroff(struct exynos_dp_device *dp)
- 	exynos_dp_phy_exit(dp);
- 	clk_disable_unprepare(dp->clock);
+ 	if (flags & MNT_DETACH) {
+ 		if (!list_empty(&mnt->mnt_list))
+-			umount_tree(mnt, 2);
++			umount_tree(mnt, UMOUNT_PROPAGATE);
+ 		retval = 0;
+ 	} else {
+ 		shrink_submounts(mnt);
+ 		retval = -EBUSY;
+ 		if (!propagate_mount_busy(mnt, 2)) {
+ 			if (!list_empty(&mnt->mnt_list))
+-				umount_tree(mnt, 1);
++				umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 			retval = 0;
+ 		}
+ 	}
+@@ -1480,13 +1522,20 @@ void __detach_mounts(struct dentry *dentry)
  
-+	fimd_dp_clock_enable(dp_to_crtc(dp), false);
-+
- 	if (dp->panel) {
- 		if (drm_panel_unprepare(dp->panel))
- 			DRM_ERROR("failed to turnoff the panel\n");
-diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
-index 33a10ce..5d58f6c 100644
---- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
-+++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
-@@ -32,6 +32,7 @@
- #include "exynos_drm_fbdev.h"
- #include "exynos_drm_crtc.h"
- #include "exynos_drm_iommu.h"
-+#include "exynos_drm_fimd.h"
+ 	namespace_lock();
+ 	mp = lookup_mountpoint(dentry);
+-	if (!mp)
++	if (IS_ERR_OR_NULL(mp))
+ 		goto out_unlock;
  
- /*
-  * FIMD stands for Fully Interactive Mobile Display and
-@@ -1233,6 +1234,24 @@ static int fimd_remove(struct platform_device *pdev)
- 	return 0;
+ 	lock_mount_hash();
+ 	while (!hlist_empty(&mp->m_list)) {
+ 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
+-		umount_tree(mnt, 2);
++		if (mnt->mnt.mnt_flags & MNT_UMOUNT) {
++			struct mount *p, *tmp;
++			list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++				hlist_add_head(&p->mnt_umount.s_list, &unmounted);
++				umount_mnt(p);
++			}
++		}
++		else umount_tree(mnt, UMOUNT_CONNECTED);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+@@ -1648,7 +1697,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ out:
+ 	if (res) {
+ 		lock_mount_hash();
+-		umount_tree(res, 0);
++		umount_tree(res, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ 	return q;
+@@ -1672,7 +1721,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ {
+ 	namespace_lock();
+ 	lock_mount_hash();
+-	umount_tree(real_mount(mnt), 0);
++	umount_tree(real_mount(mnt), UMOUNT_SYNC);
+ 	unlock_mount_hash();
+ 	namespace_unlock();
  }
+@@ -1855,7 +1904,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+  out_cleanup_ids:
+ 	while (!hlist_empty(&tree_list)) {
+ 		child = hlist_entry(tree_list.first, struct mount, mnt_hash);
+-		umount_tree(child, 0);
++		umount_tree(child, UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	cleanup_group_ids(source_mnt, NULL);
+@@ -2035,7 +2084,7 @@ static int do_loopback(struct path *path, const char *old_name,
+ 	err = graft_tree(mnt, parent, mp);
+ 	if (err) {
+ 		lock_mount_hash();
+-		umount_tree(mnt, 0);
++		umount_tree(mnt, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ out2:
+@@ -2406,7 +2455,7 @@ void mark_mounts_for_expiry(struct list_head *mounts)
+ 	while (!list_empty(&graveyard)) {
+ 		mnt = list_first_entry(&graveyard, struct mount, mnt_expire);
+ 		touch_mnt_namespace(mnt->mnt_ns);
+-		umount_tree(mnt, 1);
++		umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+@@ -2477,7 +2526,7 @@ static void shrink_submounts(struct mount *mnt)
+ 			m = list_first_entry(&graveyard, struct mount,
+ 						mnt_expire);
+ 			touch_mnt_namespace(m->mnt_ns);
+-			umount_tree(m, 1);
++			umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 		}
+ 	}
+ }
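
The umount_tree() rework above trades magic how = 0/1/2 arguments for
or-able flags, which callers combine as at the do_umount() call sites. The
pattern in isolation, with placeholder printouts standing in for the real
work:

    #include <stdio.h>

    enum umount_tree_flags {
            UMOUNT_SYNC      = 1,
            UMOUNT_PROPAGATE = 2,
            UMOUNT_CONNECTED = 4,
    };

    static void do_umount_tree(enum umount_tree_flags how)
    {
            if (how & UMOUNT_PROPAGATE)
                    puts("propagate the unmount to peer mounts");
            if (how & UMOUNT_SYNC)
                    puts("mark victims MNT_SYNC_UMOUNT");
            if (how & UMOUNT_CONNECTED)
                    puts("keep connected submounts attached");
    }

    int main(void)
    {
            /* The old "how = 1" call site becomes self-describing: */
            do_umount_tree(UMOUNT_PROPAGATE | UMOUNT_SYNC);
            return 0;
    }
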
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 351be920..8d129bb 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -128,7 +128,7 @@ nfs41_callback_svc(void *vrqstp)
+ 		if (try_to_freeze())
+ 			continue;
  
-+void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
-+{
-+	struct fimd_context *ctx = crtc->ctx;
-+	u32 val;
-+
-+	/*
-+	 * Only Exynos 5250, 5260, 5410 and 542x requires enabling DP/MIE
-+	 * Only Exynos 5250, 5260, 5410 and 542x require enabling the DP/MIE
-+	 * clock. On these SoCs the bootloader may enable it, but any
-+	 * power domain off/on will reset it to the disabled state.
-+	if (ctx->driver_data != &exynos5_fimd_driver_data)
-+		return;
-+
-+	val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
-+	writel(DP_MIE_CLK_DP_ENABLE, ctx->regs + DP_MIE_CLKCON);
-+	writel(val, ctx->regs + DP_MIE_CLKCON);
-+EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
-+
- struct platform_driver fimd_driver = {
- 	.probe		= fimd_probe,
- 	.remove		= fimd_remove,
-diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.h b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
-new file mode 100644
-index 0000000..b4fcaa5
---- /dev/null
-+++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
-@@ -0,0 +1,15 @@
-+/*
-+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
-+ *
-+ * This program is free software; you can redistribute  it and/or modify it
-+ * under  the terms of  the GNU General  Public License as published by the
-+ * Free Software Foundation;  either version 2 of the  License, or (at your
-+ * option) any later version.
-+ */
-+
-+#ifndef _EXYNOS_DRM_FIMD_H_
-+#define _EXYNOS_DRM_FIMD_H_
-+
-+extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
-+
-+#endif /* _EXYNOS_DRM_FIMD_H_ */
-diff --git a/include/video/samsung_fimd.h b/include/video/samsung_fimd.h
-index a20e4a3..847a0a2 100644
---- a/include/video/samsung_fimd.h
-+++ b/include/video/samsung_fimd.h
-@@ -436,6 +436,12 @@
- #define BLENDCON_NEW_8BIT_ALPHA_VALUE		(1 << 0)
- #define BLENDCON_NEW_4BIT_ALPHA_VALUE		(0 << 0)
- 
-+/* Display port clock control */
-+#define DP_MIE_CLKCON				0x27c
-+#define DP_MIE_CLK_DISABLE			0x0
-+#define DP_MIE_CLK_DP_ENABLE			0x2
-+#define DP_MIE_CLK_MIE_ENABLE			0x3
-+
- /* Notes on per-window bpp settings
-  *
-  * Value	Win0	 Win1	  Win2	   Win3	    Win 4
--- 
-2.3.6
-
-
-From 9dc473bad145b361c179c4f115ea781b8b73448d Mon Sep 17 00:00:00 2001
-From: Daniel Vetter <daniel.vetter@ffwll.ch>
-Date: Wed, 1 Apr 2015 13:43:46 +0200
-Subject: [PATCH 190/219] drm/i915: Dont enable CS_PARSER_ERROR interrupts at
- all
-Cc: mpagano@gentoo.org
-
-commit 37ef01ab5d24d1d520dc79f6a98099d451c2a901 upstream.
-
-We stopped handling them in
-
-commit aaecdf611a05cac26a94713bad25297e60225c29
-Author: Daniel Vetter <daniel.vetter@ffwll.ch>
-Date:   Tue Nov 4 15:52:22 2014 +0100
-
-    drm/i915: Stop gathering error states for CS error interrupts
-
-but just clearing is apparently not enough: A sufficiently dead gpu
-left behind by firmware (*cough* coreboot *cough*) can keep the gpu in
-an endless loop of such interrupts, eventually leading to the nmi
-firing. And definitely to what looks like a machine hang.
-
-Since we don't even enable these interrupts on gen5+ let's do the same
-on earlier platforms.
-
-Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=93171
-Tested-by: Mono <mono-for-kernel-org@donderklumpen.de>
-Tested-by: info@gluglug.org.uk
-Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
-Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/i915/i915_irq.c | 8 ++------
- 1 file changed, 2 insertions(+), 6 deletions(-)
-
-diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
-index ede5bbb..07320cb 100644
---- a/drivers/gpu/drm/i915/i915_irq.c
-+++ b/drivers/gpu/drm/i915/i915_irq.c
-@@ -3718,14 +3718,12 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
- 		~(I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
- 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
- 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
--		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
--		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
-+		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
- 	I915_WRITE16(IMR, dev_priv->irq_mask);
+-		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_UNINTERRUPTIBLE);
++		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
+ 		spin_lock_bh(&serv->sv_cb_lock);
+ 		if (!list_empty(&serv->sv_cb_list)) {
+ 			req = list_first_entry(&serv->sv_cb_list,
+@@ -142,10 +142,10 @@ nfs41_callback_svc(void *vrqstp)
+ 				error);
+ 		} else {
+ 			spin_unlock_bh(&serv->sv_cb_lock);
+-			/* schedule_timeout to game the hung task watchdog */
+-			schedule_timeout(60 * HZ);
++			schedule();
+ 			finish_wait(&serv->sv_cb_waitq, &wq);
+ 		}
++		flush_signals(current);
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index e907c8c..ab21ef1 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -129,22 +129,25 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
+ 	int i;
+ 	ssize_t count;
  
- 	I915_WRITE16(IER,
- 		     I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
- 		     I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
--		     I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
- 		     I915_USER_INTERRUPT);
- 	POSTING_READ16(IER);
+-	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
+-
+-	count = dreq->mirrors[hdr->pgio_mirror_idx].count;
+-	if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
+-		count = hdr->io_start + hdr->good_bytes - dreq->io_start;
+-		dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+-	}
+-
+-	/* update the dreq->count by finding the minimum agreed count from all
+-	 * mirrors */
+-	count = dreq->mirrors[0].count;
++	if (dreq->mirror_count == 1) {
++		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
++		dreq->count += hdr->good_bytes;
++	} else {
++		/* mirrored writes */
++		count = dreq->mirrors[hdr->pgio_mirror_idx].count;
++		if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
++			count = hdr->io_start + hdr->good_bytes - dreq->io_start;
++			dreq->mirrors[hdr->pgio_mirror_idx].count = count;
++		}
++		/* update the dreq->count by finding the minimum agreed count from all
++		 * mirrors */
++		count = dreq->mirrors[0].count;
  
-@@ -3887,14 +3885,12 @@ static int i915_irq_postinstall(struct drm_device *dev)
- 		  I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
- 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
- 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
--		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
--		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
-+		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+-	for (i = 1; i < dreq->mirror_count; i++)
+-		count = min(count, dreq->mirrors[i].count);
++		for (i = 1; i < dreq->mirror_count; i++)
++			count = min(count, dreq->mirrors[i].count);
  
- 	enable_mask =
- 		I915_ASLE_INTERRUPT |
- 		I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
- 		I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
--		I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
- 		I915_USER_INTERRUPT;
+-	dreq->count = count;
++		dreq->count = count;
++	}
+ }
  
- 	if (I915_HAS_HOTPLUG(dev)) {
--- 
-2.3.6
-
-
-From 244f81177e5bc0ecb2f5507ef4371dc4752fea94 Mon Sep 17 00:00:00 2001
-From: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
-Date: Wed, 18 Feb 2015 15:19:33 +0200
-Subject: [PATCH 191/219] drm: adv7511: Fix DDC error interrupt handling
-Cc: mpagano@gentoo.org
-
-commit 2e96206c4f952295e11c311fbb2a7aa2105024af upstream.
-
-The DDC error interrupt bit is located in REG_INT1, not REG_INT0. Update
-both the interrupt wait code and the interrupt sources reset code
-accordingly.
-
-Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/i2c/adv7511.c | 14 ++++++++++----
- 1 file changed, 10 insertions(+), 4 deletions(-)
-
-diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
-index fa140e0..5109c21 100644
---- a/drivers/gpu/drm/i2c/adv7511.c
-+++ b/drivers/gpu/drm/i2c/adv7511.c
-@@ -467,14 +467,16 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
- 				     block);
- 			ret = adv7511_wait_for_interrupt(adv7511,
- 					ADV7511_INT0_EDID_READY |
--					ADV7511_INT1_DDC_ERROR, 200);
-+					(ADV7511_INT1_DDC_ERROR << 8), 200);
+ /*
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 5c399ec..d494ea2 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -7365,6 +7365,11 @@ nfs4_stat_to_errno(int stat)
+ 	.p_name   = #proc,					\
+ }
  
- 			if (!(ret & ADV7511_INT0_EDID_READY))
- 				return -EIO;
- 		}
++#define STUB(proc)		\
++[NFSPROC4_CLNT_##proc] = {	\
++	.p_name = #proc,	\
++}
++
+ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(READ,		enc_read,		dec_read),
+ 	PROC(WRITE,		enc_write,		dec_write),
+@@ -7417,6 +7422,7 @@ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(SECINFO_NO_NAME,	enc_secinfo_no_name,	dec_secinfo_no_name),
+ 	PROC(TEST_STATEID,	enc_test_stateid,	dec_test_stateid),
+ 	PROC(FREE_STATEID,	enc_free_stateid,	dec_free_stateid),
++	STUB(GETDEVICELIST),
+ 	PROC(BIND_CONN_TO_SESSION,
+ 			enc_bind_conn_to_session, dec_bind_conn_to_session),
+ 	PROC(DESTROY_CLIENTID,	enc_destroy_clientid,	dec_destroy_clientid),
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 568ecf0..848d8b1 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -284,7 +284,7 @@ int nfs_readpage(struct file *file, struct page *page)
+ 	dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
+ 		page, PAGE_CACHE_SIZE, page_file_index(page));
+ 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
+-	nfs_inc_stats(inode, NFSIOS_READPAGES);
++	nfs_add_stats(inode, NFSIOS_READPAGES, 1);
  
- 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
--			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
-+			     ADV7511_INT0_EDID_READY);
-+		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
-+			     ADV7511_INT1_DDC_ERROR);
+ 	/*
+ 	 * Try to flush any pending writes to the file..
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 849ed78..41b3f1096 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -580,7 +580,7 @@ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc, st
+ 	int ret;
  
- 		/* Break this apart, hopefully more I2C controllers will
- 		 * support 64 byte transfers than 256 byte transfers
-@@ -528,7 +530,9 @@ static int adv7511_get_modes(struct drm_encoder *encoder,
- 	/* Reading the EDID only works if the device is powered */
- 	if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
- 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
--			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
-+			     ADV7511_INT0_EDID_READY);
-+		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
-+			     ADV7511_INT1_DDC_ERROR);
- 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- 				   ADV7511_POWER_POWER_DOWN, 0);
- 		adv7511->current_edid_segment = -1;
-@@ -563,7 +567,9 @@ static void adv7511_encoder_dpms(struct drm_encoder *encoder, int mode)
- 		adv7511->current_edid_segment = -1;
+ 	nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
+-	nfs_inc_stats(inode, NFSIOS_WRITEPAGES);
++	nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1);
  
- 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
--			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
-+			     ADV7511_INT0_EDID_READY);
-+		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
-+			     ADV7511_INT1_DDC_ERROR);
- 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- 				   ADV7511_POWER_POWER_DOWN, 0);
- 		/*
--- 
-2.3.6
-
-
-From 74ed38596ea50609c61bd10f048f97d6161e73b4 Mon Sep 17 00:00:00 2001
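
The adv7511 fix above hinges on INT0 and INT1 being separate 8-bit registers
that the driver folds into one 16-bit pending word, so an INT1 bit is only
meaningful shifted up by 8. A self-contained sketch; the bit positions here
are illustrative, not taken from the datasheet:

    #include <stdio.h>

    #define ADV7511_INT0_EDID_READY  (1 << 2)  /* lives in REG_INT(0) */
    #define ADV7511_INT1_DDC_ERROR   (1 << 7)  /* lives in REG_INT(1) */

    int main(void)
    {
            unsigned int irq0 = ADV7511_INT0_EDID_READY; /* pretend INT0 */
            unsigned int irq1 = ADV7511_INT1_DDC_ERROR;  /* pretend INT1 */
            unsigned int pending = (irq1 << 8) | irq0;

            if (pending & ADV7511_INT0_EDID_READY)
                    puts("EDID ready");

            /* Testing the INT1 bit unshifted can never match, which is
             * what the wait and the register writeback got wrong. */
            if (pending & (ADV7511_INT1_DDC_ERROR << 8))
                    puts("DDC error");
            return 0;
    }
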
-From: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
-Date: Wed, 18 Feb 2015 15:19:33 +0200
-Subject: [PATCH 192/219] drm: adv7511: Fix nested sleep when reading EDID
-Cc: mpagano@gentoo.org
-
-commit a5241289c4139f0521b89e34a70f5f998463ae15 upstream.
-
-The EDID read code waits for the read completion interrupt to occur
-using wait_event_interruptible(). The condition passed to the macro
-reads I2C registers. This results in sleeping with the task state set
-to TASK_INTERRUPTIBLE, triggering a WARN_ON() introduced in commit
-8eb23b9f35aae ("sched: Debug nested sleeps").
-
-Fix this by reworking the EDID read code. Instead of checking whether
-the read is complete through I2C reads, handle the interrupt registers
-in the interrupt handler and update a new edid_read flag accordingly. As
-a side effect both the IRQ and polling code paths now process the
-interrupt sources through the same code path, simplifying the code.
-
-Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/i2c/adv7511.c | 96 +++++++++++++++++++++----------------------
- 1 file changed, 46 insertions(+), 50 deletions(-)
-
-diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
-index 5109c21..60ab1f7 100644
---- a/drivers/gpu/drm/i2c/adv7511.c
-+++ b/drivers/gpu/drm/i2c/adv7511.c
-@@ -33,6 +33,7 @@ struct adv7511 {
+ 	nfs_pageio_cond_complete(pgio, page_file_index(page));
+ 	ret = nfs_page_async_flush(pgio, page, wbc->sync_mode == WB_SYNC_NONE);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 92b9d97..5416968 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1030,6 +1030,8 @@ nfsd4_fallocate(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_fallocate: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
  
- 	unsigned int current_edid_segment;
- 	uint8_t edid_buf[256];
-+	bool edid_read;
+ 	status = nfsd4_vfs_fallocate(rqstp, &cstate->current_fh, file,
+ 				     fallocate->falloc_offset,
+@@ -1069,6 +1071,8 @@ nfsd4_seek(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_seek: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
  
- 	wait_queue_head_t wq;
- 	struct drm_encoder *encoder;
-@@ -379,69 +380,71 @@ static bool adv7511_hpd(struct adv7511 *adv7511)
- 	return false;
+ 	switch (seek->seek_whence) {
+ 	case NFS4_CONTENT_DATA:
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 8ba1d88..ee1cccd 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1139,7 +1139,7 @@ hash_sessionid(struct nfs4_sessionid *sessionid)
+ 	return sid->sequence % SESSION_HASH_SIZE;
  }
  
--static irqreturn_t adv7511_irq_handler(int irq, void *devid)
--{
--	struct adv7511 *adv7511 = devid;
--
--	if (adv7511_hpd(adv7511))
--		drm_helper_hpd_irq_event(adv7511->encoder->dev);
--
--	wake_up_all(&adv7511->wq);
--
--	return IRQ_HANDLED;
--}
--
--static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
--						 unsigned int irq)
-+static int adv7511_irq_process(struct adv7511 *adv7511)
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void
+ dump_sessionid(const char *fn, struct nfs4_sessionid *sessionid)
  {
- 	unsigned int irq0, irq1;
--	unsigned int pending;
- 	int ret;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 5fb7e78..5b33ce1 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3422,6 +3422,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	unsigned long maxcount;
+ 	struct xdr_stream *xdr = &resp->xdr;
+ 	struct file *file = read->rd_filp;
++	struct svc_fh *fhp = read->rd_fhp;
+ 	int starting_len = xdr->buf->len;
+ 	struct raparms *ra;
+ 	__be32 *p;
+@@ -3445,12 +3446,15 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	maxcount = min_t(unsigned long, maxcount, (xdr->buf->buflen - xdr->buf->len));
+ 	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 
+-	if (!read->rd_filp) {
++	if (read->rd_filp)
++		err = nfsd_permission(resp->rqstp, fhp->fh_export,
++				fhp->fh_dentry,
++				NFSD_MAY_READ|NFSD_MAY_OWNER_OVERRIDE);
++	else
+ 		err = nfsd_get_tmp_read_open(resp->rqstp, read->rd_fhp,
+ 						&file, &ra);
+-		if (err)
+-			goto err_truncate;
+-	}
++	if (err)
++		goto err_truncate;
+ 
+ 	if (file->f_op->splice_read && test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
+ 		err = nfsd4_encode_splice_read(resp, read, file, maxcount);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index aa47d75..9690cb4 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1250,15 +1250,15 @@ static int __init init_nfsd(void)
+ 	int retval;
+ 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+-	retval = register_cld_notifier();
+-	if (retval)
+-		return retval;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
+-		goto out_unregister_notifier;
+-	retval = nfsd4_init_slabs();
++		return retval;
++	retval = register_cld_notifier();
+ 	if (retval)
+ 		goto out_unregister_pernet;
++	retval = nfsd4_init_slabs();
++	if (retval)
++		goto out_unregister_notifier;
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+@@ -1290,10 +1290,10 @@ out_exit_pnfs:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+ 	nfsd4_free_slabs();
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ out_unregister_notifier:
+ 	unregister_cld_notifier();
++out_unregister_pernet:
++	unregister_pernet_subsys(&nfsd_net_ops);
+ 	return retval;
+ }
  
- 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
- 	if (ret < 0)
--		return 0;
-+		return ret;
-+
- 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
- 	if (ret < 0)
--		return 0;
-+		return ret;
-+
-+	regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
-+	regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
-+
-+	if (irq0 & ADV7511_INT0_HDP)
-+		drm_helper_hpd_irq_event(adv7511->encoder->dev);
-+
-+	if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
-+		adv7511->edid_read = true;
+@@ -1308,8 +1308,8 @@ static void __exit exit_nfsd(void)
+ 	nfsd4_exit_pnfs();
+ 	nfsd_fault_inject_cleanup();
+ 	unregister_filesystem(&nfsd_fs_type);
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ 	unregister_cld_notifier();
++	unregister_pernet_subsys(&nfsd_net_ops);
+ }
  
--	pending = (irq1 << 8) | irq0;
-+		if (adv7511->i2c_main->irq)
-+			wake_up_all(&adv7511->wq);
-+	}
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 565c4da..cf98052 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -24,7 +24,7 @@
+ #include "export.h"
  
--	return pending & irq;
-+	return 0;
+ #undef ifdebug
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ # define ifdebug(flag)		if (nfsd_debug & NFSDDBG_##flag)
+ #else
+ # define ifdebug(flag)		if (0)
+diff --git a/fs/open.c b/fs/open.c
+index 33f9cbf..44a3be1 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -570,6 +570,7 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	uid = make_kuid(current_user_ns(), user);
+ 	gid = make_kgid(current_user_ns(), group);
+ 
++retry_deleg:
+ 	newattrs.ia_valid =  ATTR_CTIME;
+ 	if (user != (uid_t) -1) {
+ 		if (!uid_valid(uid))
+@@ -586,7 +587,6 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	if (!S_ISDIR(inode->i_mode))
+ 		newattrs.ia_valid |=
+ 			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
+-retry_deleg:
+ 	mutex_lock(&inode->i_mutex);
+ 	error = security_path_chown(path, uid, gid);
+ 	if (!error)
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 260ac8f..6367e1e 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -362,6 +362,46 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
  }
  
--static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
--				      int timeout)
-+static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+ /*
++ * Clear MNT_LOCKED when it can be shown to be safe.
++ *
++ * mount_lock lock must be held for write
++ */
++void propagate_mount_unlock(struct mount *mnt)
 +{
-+	struct adv7511 *adv7511 = devid;
-+	int ret;
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m, *child;
 +
-+	ret = adv7511_irq_process(adv7511);
-+	return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
++	BUG_ON(parent == mnt);
++
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
++		if (child)
++			child->mnt.mnt_flags &= ~MNT_LOCKED;
++	}
 +}
 +
-+/* -----------------------------------------------------------------------------
-+ * EDID retrieval
++/*
++ * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
 + */
++static void mark_umount_candidates(struct mount *mnt)
++{
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m;
 +
-+static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
- {
--	unsigned int pending;
- 	int ret;
- 
- 	if (adv7511->i2c_main->irq) {
- 		ret = wait_event_interruptible_timeout(adv7511->wq,
--				adv7511_is_interrupt_pending(adv7511, irq),
--				msecs_to_jiffies(timeout));
--		if (ret <= 0)
--			return 0;
--		pending = adv7511_is_interrupt_pending(adv7511, irq);
-+				adv7511->edid_read, msecs_to_jiffies(timeout));
- 	} else {
--		if (timeout < 25)
--			timeout = 25;
--		do {
--			pending = adv7511_is_interrupt_pending(adv7511, irq);
--			if (pending)
-+		for (; timeout > 0; timeout -= 25) {
-+			ret = adv7511_irq_process(adv7511);
-+			if (ret < 0)
-+				break;
-+
-+			if (adv7511->edid_read)
- 				break;
++	BUG_ON(parent == mnt);
 +
- 			msleep(25);
--			timeout -= 25;
--		} while (timeout >= 25);
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		struct mount *child = __lookup_mnt_last(&m->mnt,
++						mnt->mnt_mountpoint);
++		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
++			SET_MNT_MARK(child);
 +		}
++	}
++}
++
++/*
+  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
+  * parent propagates to.
+  */
+@@ -378,13 +418,16 @@ static void __propagate_umount(struct mount *mnt)
+ 		struct mount *child = __lookup_mnt_last(&m->mnt,
+ 						mnt->mnt_mountpoint);
+ 		/*
+-		 * umount the child only if the child has no
+-		 * other children
++		 * umount the child only if the child has no children
++		 * and the child is marked safe to unmount.
+ 		 */
+-		if (child && list_empty(&child->mnt_mounts)) {
++		if (!child || !IS_MNT_MARKED(child))
++			continue;
++		CLEAR_MNT_MARK(child);
++		if (list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+-			hlist_del_init_rcu(&child->mnt_hash);
+-			hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
++			child->mnt.mnt_flags |= MNT_UMOUNT;
++			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ 		}
  	}
+ }
+@@ -396,11 +439,14 @@ static void __propagate_umount(struct mount *mnt)
+  *
+  * vfsmount lock must be held for write
+  */
+-int propagate_umount(struct hlist_head *list)
++int propagate_umount(struct list_head *list)
+ {
+ 	struct mount *mnt;
  
--	return pending;
-+	return adv7511->edid_read ? 0 : -EIO;
+-	hlist_for_each_entry(mnt, list, mnt_hash)
++	list_for_each_entry_reverse(mnt, list, mnt_list)
++		mark_umount_candidates(mnt);
++
++	list_for_each_entry(mnt, list, mnt_list)
+ 		__propagate_umount(mnt);
+ 	return 0;
  }
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 4a24635..7114ce6 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -19,6 +19,9 @@
+ #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
+ #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
+ #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
++#define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
++#define IS_MNT_LOCKED_AND_LAZY(m) \
++	(((m)->mnt.mnt_flags & (MNT_LOCKED|MNT_SYNC_UMOUNT)) == MNT_LOCKED)
  
--/* -----------------------------------------------------------------------------
-- * EDID retrieval
-- */
--
- static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
- 				  size_t len)
- {
-@@ -463,21 +466,14 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
- 			return ret;
+ #define CL_EXPIRE    		0x01
+ #define CL_SLAVE     		0x02
+@@ -40,14 +43,14 @@ static inline void set_mnt_shared(struct mount *mnt)
+ void change_mnt_propagation(struct mount *, int);
+ int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
+ 		struct hlist_head *);
+-int propagate_umount(struct hlist_head *);
++int propagate_umount(struct list_head *);
+ int propagate_mount_busy(struct mount *, int);
++void propagate_mount_unlock(struct mount *);
+ void mnt_release_group_id(struct mount *);
+ int get_dominating_id(struct mount *mnt, const struct path *root);
+ unsigned int mnt_get_count(struct mount *mnt);
+ void mnt_set_mountpoint(struct mount *, struct mountpoint *,
+ 			struct mount *);
+-void umount_tree(struct mount *, int);
+ struct mount *copy_tree(struct mount *, struct dentry *, int);
+ bool is_path_reachable(struct mount *, struct dentry *,
+ 			 const struct path *root);
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index b034f10..0d58525 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -199,9 +199,29 @@ typedef int s32;
+ typedef s32 acpi_native_int;
  
- 		if (status != 2) {
-+			adv7511->edid_read = false;
- 			regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
- 				     block);
--			ret = adv7511_wait_for_interrupt(adv7511,
--					ADV7511_INT0_EDID_READY |
--					(ADV7511_INT1_DDC_ERROR << 8), 200);
--
--			if (!(ret & ADV7511_INT0_EDID_READY))
--				return -EIO;
-+			ret = adv7511_wait_for_edid(adv7511, 200);
-+			if (ret < 0)
-+				return ret;
- 		}
+ typedef u32 acpi_size;
++
++#ifdef ACPI_32BIT_PHYSICAL_ADDRESS
++
++/*
++ * OSPMs can define this to shrink the size of the structures for 32-bit
++ * non-PAE environments. The ASL compiler may always define this to
++ * generate 32-bit OSPM-compliant tables.
++ */
+ typedef u32 acpi_io_address;
+ typedef u32 acpi_physical_address;
+ 
++#else				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
++/*
++ * It is reported that, after some calculations, the physical addresses can
++ * wrap over the 32-bit boundary on 32-bit PAE environments.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=87971
++ */
++typedef u64 acpi_io_address;
++typedef u64 acpi_physical_address;
++
++#endif				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
+ #define ACPI_MAX_PTR                    ACPI_UINT32_MAX
+ #define ACPI_SIZE_MAX                   ACPI_UINT32_MAX
  
--		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
--			     ADV7511_INT0_EDID_READY);
--		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
--			     ADV7511_INT1_DDC_ERROR);
+@@ -736,10 +756,6 @@ typedef u32 acpi_event_status;
+ #define ACPI_GPE_ENABLE                 0
+ #define ACPI_GPE_DISABLE                1
+ #define ACPI_GPE_CONDITIONAL_ENABLE     2
+-#define ACPI_GPE_SAVE_MASK              4
 -
- 		/* Break this apart, hopefully more I2C controllers will
- 		 * support 64 byte transfers than 256 byte transfers
- 		 */
--- 
-2.3.6
-
-
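
The nested-sleep rework above follows a pattern worth isolating: the wait
condition must be a cheap flag set by the interrupt handler, never something
that itself sleeps (like an I2C register read). A userspace analogy with
pthreads, all names invented; compile with -lpthread:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool edid_read;  /* cheap flag, like adv7511->edid_read */

    /* Stands in for adv7511_irq_process(): it does the slow status reads
     * itself, then records the outcome and wakes the waiter. */
    static void *irq_thread(void *arg)
    {
            (void)arg;
            sleep(1);  /* pretend the EDID block arrived */
            pthread_mutex_lock(&lock);
            edid_read = true;
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, irq_thread, NULL);

            /* The condition tested while waiting is a plain boolean --
             * nothing here can sleep, unlike the old I2C-reading check. */
            pthread_mutex_lock(&lock);
            while (!edid_read)
                    pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);

            puts("EDID ready");
            pthread_join(t, NULL);
            return 0;
    }
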
-From 959905cf28ee80f8830b717f4e1ac28a61732974 Mon Sep 17 00:00:00 2001
-From: Imre Deak <imre.deak@intel.com>
-Date: Wed, 15 Apr 2015 16:52:30 -0700
-Subject: [PATCH 193/219] drm/i915: vlv: fix save/restore of GFX_MAX_REQ_COUNT
- reg
-Cc: mpagano@gentoo.org
-
-commit b5f1c97f944482e98e6e39208af356630389d1ea upstream.
-
-Due to this typo we don't save/restore the GFX_MAX_REQ_COUNT register across
-suspend/resume, so fix this.
-
-This was introduced in
-
-commit ddeea5b0c36f3665446518c609be91f9336ef674
-Author: Imre Deak <imre.deak@intel.com>
-Date:   Mon May 5 15:19:56 2014 +0300
-
-    drm/i915: vlv: add runtime PM support
-
-I noticed this only by reading the code. To my knowledge it shouldn't
-cause any real problems at the moment, since the power well backing this
-register remains on across a runtime s/r. This may change once
-system-wide s0ix functionality is enabled in the kernel.
-
-v2:
-- resend after a missing git add -u :/
-
-Signed-off-by: Imre Deak <imre.deak@intel.com>
-Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
-Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
-Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/i915/i915_drv.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
-index 5c66b56..ec4d932 100644
---- a/drivers/gpu/drm/i915/i915_drv.c
-+++ b/drivers/gpu/drm/i915/i915_drv.c
-@@ -1042,7 +1042,7 @@ static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
- 		s->lra_limits[i] = I915_READ(GEN7_LRA_LIMITS_BASE + i * 4);
+-#define ACPI_GPE_ENABLE_SAVE            (ACPI_GPE_ENABLE | ACPI_GPE_SAVE_MASK)
+-#define ACPI_GPE_DISABLE_SAVE           (ACPI_GPE_DISABLE | ACPI_GPE_SAVE_MASK)
  
- 	s->media_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
--	s->gfx_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
-+	s->gfx_max_req_count	= I915_READ(GEN7_GFX_MAX_REQ_COUNT);
+ /*
+  * GPE info flags - Per GPE
+diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h
+index ad74dc5..ecdf940 100644
+--- a/include/acpi/platform/acenv.h
++++ b/include/acpi/platform/acenv.h
+@@ -76,6 +76,7 @@
+ #define ACPI_LARGE_NAMESPACE_NODE
+ #define ACPI_DATA_TABLE_DISASSEMBLY
+ #define ACPI_SINGLE_THREADED
++#define ACPI_32BIT_PHYSICAL_ADDRESS
+ #endif
  
- 	s->render_hwsp		= I915_READ(RENDER_HWS_PGA_GEN7);
- 	s->ecochk		= I915_READ(GAM_ECOCHK);
-@@ -1124,7 +1124,7 @@ static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
- 		I915_WRITE(GEN7_LRA_LIMITS_BASE + i * 4, s->lra_limits[i]);
+ /* acpi_exec configuration. Multithreaded with full AML debugger */
+diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h
+index ae2eb17..a215609 100644
+--- a/include/dt-bindings/clock/tegra124-car-common.h
++++ b/include/dt-bindings/clock/tegra124-car-common.h
+@@ -297,7 +297,7 @@
+ #define TEGRA124_CLK_PLL_C4 270
+ #define TEGRA124_CLK_PLL_DP 271
+ #define TEGRA124_CLK_PLL_E_MUX 272
+-#define TEGRA124_CLK_PLLD_DSI 273
++#define TEGRA124_CLK_PLL_D_DSI_OUT 273
+ /* 274 */
+ /* 275 */
+ /* 276 */
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bbfceb7..33b52fb 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -48,7 +48,7 @@ struct bpf_map *bpf_map_get(struct fd f);
  
- 	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->media_max_req_count);
--	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->gfx_max_req_count);
-+	I915_WRITE(GEN7_GFX_MAX_REQ_COUNT, s->gfx_max_req_count);
+ /* function argument constraints */
+ enum bpf_arg_type {
+-	ARG_ANYTHING = 0,	/* any argument is ok */
++	ARG_DONTCARE = 0,	/* unused argument in helper function */
  
- 	I915_WRITE(RENDER_HWS_PGA_GEN7,	s->render_hwsp);
- 	I915_WRITE(GAM_ECOCHK,		s->ecochk);
--- 
-2.3.6
-
-
-From 0f14e0aa4e606b77387e807b89a0ee8faf10accb Mon Sep 17 00:00:00 2001
-From: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Date: Tue, 21 Apr 2015 09:49:11 -0700
-Subject: [PATCH 194/219] drm/i915: cope with large i2c transfers
-Cc: mpagano@gentoo.org
-
-commit 9535c4757b881e06fae72a857485ad57c422b8d2 upstream.
-
-The hardware, according to the specs, is limited to 256-byte transfers,
-and the current driver has no protection in case users attempt larger
-transfers. The code will just stomp over the status register and mayhem
-ensues.
-
-Let's split larger transfers into digestible chunks. Doing this allows
-the Atmel MXT driver on Pixel 1 to function properly (it hasn't since commit
-9d8dc3e529a19e427fd379118acd132520935c5d "Input: atmel_mxt_ts -
-implement T44 message handling" which tries to consume multiple
-touchscreen/touchpad reports in a single transaction).
-
-Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/gpu/drm/i915/i915_reg.h  |  1 +
- drivers/gpu/drm/i915/intel_i2c.c | 66 ++++++++++++++++++++++++++++++++++------
- 2 files changed, 57 insertions(+), 10 deletions(-)
-
-diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
-index 33b3d0a2..f536ff2 100644
---- a/drivers/gpu/drm/i915/i915_reg.h
-+++ b/drivers/gpu/drm/i915/i915_reg.h
-@@ -1740,6 +1740,7 @@ enum punit_power_well {
- #define   GMBUS_CYCLE_INDEX	(2<<25)
- #define   GMBUS_CYCLE_STOP	(4<<25)
- #define   GMBUS_BYTE_COUNT_SHIFT 16
-+#define   GMBUS_BYTE_COUNT_MAX   256U
- #define   GMBUS_SLAVE_INDEX_SHIFT 8
- #define   GMBUS_SLAVE_ADDR_SHIFT 1
- #define   GMBUS_SLAVE_READ	(1<<0)
-diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
-index b31088a..56e437e 100644
---- a/drivers/gpu/drm/i915/intel_i2c.c
-+++ b/drivers/gpu/drm/i915/intel_i2c.c
-@@ -270,18 +270,17 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv)
- }
+ 	/* the following constraints used to prototype
+ 	 * bpf_map_lookup/update/delete_elem() functions
+@@ -62,6 +62,8 @@ enum bpf_arg_type {
+ 	 */
+ 	ARG_PTR_TO_STACK,	/* any pointer to eBPF program stack */
+ 	ARG_CONST_STACK_SIZE,	/* number of bytes accessed from stack */
++
++	ARG_ANYTHING,		/* any (initialized) argument is ok */
+ };
  
- static int
--gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
--		u32 gmbus1_index)
-+gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
-+		      unsigned short addr, u8 *buf, unsigned int len,
-+		      u32 gmbus1_index)
- {
- 	int reg_offset = dev_priv->gpio_mmio_base;
--	u16 len = msg->len;
--	u8 *buf = msg->buf;
+ /* type of values returned from helper functions */
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c2c561d..564beee 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -61,6 +61,7 @@ struct mnt_namespace;
+ #define MNT_DOOMED		0x1000000
+ #define MNT_SYNC_UMOUNT		0x2000000
+ #define MNT_MARKED		0x4000000
++#define MNT_UMOUNT		0x8000000
  
- 	I915_WRITE(GMBUS1 + reg_offset,
- 		   gmbus1_index |
- 		   GMBUS_CYCLE_WAIT |
- 		   (len << GMBUS_BYTE_COUNT_SHIFT) |
--		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
-+		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
- 		   GMBUS_SLAVE_READ | GMBUS_SW_RDY);
- 	while (len) {
- 		int ret;
-@@ -303,11 +302,35 @@ gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
- }
+ struct vfsmount {
+ 	struct dentry *mnt_root;	/* root of the mounted tree */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a419b65..51348f7 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -176,6 +176,14 @@ extern void get_iowait_load(unsigned long *nr_waiters, unsigned long *load);
+ extern void calc_global_load(unsigned long ticks);
+ extern void update_cpu_load_nohz(void);
  
- static int
--gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
-+gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
-+		u32 gmbus1_index)
- {
--	int reg_offset = dev_priv->gpio_mmio_base;
--	u16 len = msg->len;
- 	u8 *buf = msg->buf;
-+	unsigned int rx_size = msg->len;
-+	unsigned int len;
-+	int ret;
-+
-+	do {
-+		len = min(rx_size, GMBUS_BYTE_COUNT_MAX);
-+
-+		ret = gmbus_xfer_read_chunk(dev_priv, msg->addr,
-+					    buf, len, gmbus1_index);
-+		if (ret)
-+			return ret;
-+
-+		rx_size -= len;
-+		buf += len;
-+	} while (rx_size != 0);
-+
-+	return 0;
-+}
++/* Notifier for when a task gets migrated to a new CPU */
++struct task_migration_notifier {
++	struct task_struct *task;
++	int from_cpu;
++	int to_cpu;
++};
++extern void register_task_migration_notifier(struct notifier_block *n);
 +
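
A consumer of the task_migration_notifier hook added above would use an
ordinary notifier block; a minimal sketch, where only the struct fields and
register_task_migration_notifier() come from the patch and the callback name
and body are illustrative:

	/* Hypothetical consumer; fires atomically on every cross-CPU move. */
	static int on_task_migrate(struct notifier_block *nb,
				   unsigned long action, void *data)
	{
		struct task_migration_notifier *tmn = data;

		/* e.g. invalidate per-cpu state cached for tmn->from_cpu */
		pr_debug("task %d moved %d -> %d\n",
			 task_pid_nr(tmn->task), tmn->from_cpu, tmn->to_cpu);
		return NOTIFY_OK;
	}

	static struct notifier_block migrate_nb = {
		.notifier_call = on_task_migrate,
	};

	/* during init: register_task_migration_notifier(&migrate_nb); */
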
-+static int
-+gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
-+		       unsigned short addr, u8 *buf, unsigned int len)
-+{
-+	int reg_offset = dev_priv->gpio_mmio_base;
-+	unsigned int chunk_size = len;
- 	u32 val, loop;
+ extern unsigned long get_parent_ip(unsigned long addr);
  
- 	val = loop = 0;
-@@ -319,8 +342,8 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
- 	I915_WRITE(GMBUS3 + reg_offset, val);
- 	I915_WRITE(GMBUS1 + reg_offset,
- 		   GMBUS_CYCLE_WAIT |
--		   (msg->len << GMBUS_BYTE_COUNT_SHIFT) |
--		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
-+		   (chunk_size << GMBUS_BYTE_COUNT_SHIFT) |
-+		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
- 		   GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
- 	while (len) {
- 		int ret;
-@@ -337,6 +360,29 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
- 		if (ret)
- 			return ret;
- 	}
-+
-+	return 0;
-+}
-+
-+static int
-+gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ extern void dump_cpu_task(int cpu);
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index f54d665..bdccc4b 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -769,6 +769,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+ 
+ struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags,
+ 			    int node);
++struct sk_buff *__build_skb(void *data, unsigned int frag_size);
+ struct sk_buff *build_skb(void *data, unsigned int frag_size);
+ static inline struct sk_buff *alloc_skb(unsigned int size,
+ 					gfp_t priority)
+@@ -3013,6 +3014,18 @@ static inline bool __skb_checksum_validate_needed(struct sk_buff *skb,
+  */
+ #define CHECKSUM_BREAK 76
+ 
++/* Unset checksum-complete
++ *
++ * Unset checksum complete can be done when packet is being modified
++ * (uncompressed for instance) and checksum-complete value is
++ * invalidated.
++ */
++static inline void skb_checksum_complete_unset(struct sk_buff *skb)
 +{
-+	u8 *buf = msg->buf;
-+	unsigned int tx_size = msg->len;
-+	unsigned int len;
-+	int ret;
-+
-+	do {
-+		len = min(tx_size, GMBUS_BYTE_COUNT_MAX);
-+
-+		ret = gmbus_xfer_write_chunk(dev_priv, msg->addr, buf, len);
-+		if (ret)
-+			return ret;
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		skb->ip_summed = CHECKSUM_NONE;
++}
 +
-+		buf += len;
-+		tx_size -= len;
-+	} while (tx_size != 0);
+ /* Validate (init) checksum based on checksum complete.
+  *
+  * Return values:
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 7ee1b5c..447fe29 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -205,6 +205,32 @@ void usb_put_intf(struct usb_interface *intf);
+ #define USB_MAXINTERFACES	32
+ #define USB_MAXIADS		(USB_MAXINTERFACES/2)
+ 
++/*
++ * USB Resume Timer: Every Host controller driver should drive the resume
++ * signalling on the bus for the amount of time defined by this macro.
++ *
++ * That way we will have a 'stable' behavior among all HCDs supported by Linux.
++ *
++ * Note that the USB Specification states we should drive resume for *at least*
++ * 20 ms, but it doesn't give an upper bound. This creates two possible
++ * situations which we want to avoid:
++ *
++ * (a) sometimes an msleep(20) might expire slightly before 20 ms, which causes
++ * us to fail USB Electrical Tests, thus failing Certification
++ *
++ * (b) Some (many) devices actually need more than 20 ms of resume signalling,
++ * and while we can argue that's against the USB Specification, we don't have
++ * control over which devices a certification laboratory will be using for
++ * certification. If CertLab uses a device which was tested against Windows and
++ * that happens to have relaxed resume signalling rules, we might fall into
++ * situations where we fail interoperability and electrical tests.
++ *
++ * In order to avoid both conditions, we're using a 40 ms resume timeout, which
++ * should cope with both LPJ calibration errors and devices not following every
++ * detail of the USB Specification.
++ */
++#define USB_RESUME_TIMEOUT	40 /* ms */
 +
- 	return 0;
- }
+ /**
+  * struct usb_interface_cache - long-term representation of a device interface
+  * @num_altsetting: number of altsettings defined.
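
The intended use of the new timeout in a host controller's resume path looks
roughly as follows; the signalling helpers and port type are placeholders,
and only USB_RESUME_TIMEOUT itself comes from the patch:

	/* Sketch of an HCD resume sequence under the new macro. */
	static void hcd_resume_port_sketch(void *port)
	{
		start_resume_signalling(port);	/* placeholder HCD hook */
		msleep(USB_RESUME_TIMEOUT);	/* 40 ms: >= the 20 ms spec
						 * minimum, with margin for
						 * slow devices and msleep()
						 * underrun */
		stop_resume_signalling(port);	/* placeholder HCD hook */
	}
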
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index d3583d3..dd0f3ab 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -602,6 +602,11 @@ struct iscsi_conn {
+ 	struct iscsi_session	*sess;
+ 	/* Pointer to thread_set in use for this conn's threads */
+ 	struct iscsi_thread_set	*thread_set;
++	int			bitmap_id;
++	int			rx_thread_active;
++	struct task_struct	*rx_thread;
++	int			tx_thread_active;
++	struct task_struct	*tx_thread;
+ 	/* list_head for session connection list */
+ 	struct list_head	conn_list;
+ } ____cacheline_aligned;
+@@ -871,10 +876,12 @@ struct iscsit_global {
+ 	/* Unique identifier used for the authentication daemon */
+ 	u32			auth_id;
+ 	u32			inactive_ts;
++#define ISCSIT_BITMAP_BITS	262144
+ 	/* Thread Set bitmap count */
+ 	int			ts_bitmap_count;
+ 	/* Thread Set bitmap pointer */
+ 	unsigned long		*ts_bitmap;
++	spinlock_t		ts_bitmap_lock;
+ 	/* Used for iSCSI discovery session authentication */
+ 	struct iscsi_node_acl	discovery_acl;
+ 	struct iscsi_portal_group	*discovery_tpg;
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 672150b..985ca4c 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -524,7 +524,7 @@ struct se_cmd {
+ 	sense_reason_t		(*execute_cmd)(struct se_cmd *);
+ 	sense_reason_t		(*execute_rw)(struct se_cmd *, struct scatterlist *,
+ 					      u32, enum dma_data_direction);
+-	sense_reason_t (*transport_complete_callback)(struct se_cmd *);
++	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool);
  
--- 
-2.3.6
-
-
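
The fix above follows the standard bounded-transfer pattern: cap each
hardware transaction at GMBUS_BYTE_COUNT_MAX and walk the buffer until it is
drained. Reduced to its skeleton, with xfer_chunk() standing in for the
per-chunk register programming, it is a sketch only:

	/* Skeleton of the chunking loop shared by the read and write paths. */
	static int xfer_chunked(u8 *buf, unsigned int remaining,
				int (*xfer_chunk)(u8 *buf, unsigned int len))
	{
		unsigned int len;
		int ret;

		do {
			len = min(remaining, 256U); /* GMBUS_BYTE_COUNT_MAX */

			ret = xfer_chunk(buf, len); /* one hw transaction */
			if (ret)
				return ret;

			buf += len;
			remaining -= len;
		} while (remaining != 0);

		return 0;
	}
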
-From f5e360ea796b5833aa7ddf281ed49d72f9eba1e3 Mon Sep 17 00:00:00 2001
-From: Al Viro <viro@zeniv.linux.org.uk>
-Date: Fri, 24 Apr 2015 15:47:07 -0400
-Subject: [PATCH 195/219] RCU pathwalk breakage when running into a symlink
- overmounting something
-Cc: mpagano@gentoo.org
-
-commit 3cab989afd8d8d1bc3d99fef0e7ed87c31e7b647 upstream.
-
-Calling unlazy_walk() in walk_component() and do_last() when we find
-a symlink that needs to be followed doesn't acquire a reference to vfsmount.
-That's fine when the symlink is on the same vfsmount as the parent directory
-(which is almost always the case), but it's not always true - one _can_
-manage to bind a symlink on top of something.  And in such cases we end up
-with excessive mntput().
-
-Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/namei.c | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/fs/namei.c b/fs/namei.c
-index c83145a..caa38a2 100644
---- a/fs/namei.c
-+++ b/fs/namei.c
-@@ -1591,7 +1591,8 @@ static inline int walk_component(struct nameidata *nd, struct path *path,
+ 	unsigned char		*t_task_cdb;
+ 	unsigned char		__t_task_cdb[TCM_MAX_COMMAND_SIZE];
+diff --git a/include/uapi/linux/nfsd/debug.h b/include/uapi/linux/nfsd/debug.h
+index 0bf130a..28ec6c9 100644
+--- a/include/uapi/linux/nfsd/debug.h
++++ b/include/uapi/linux/nfsd/debug.h
+@@ -12,14 +12,6 @@
+ #include <linux/sunrpc/debug.h>
  
- 	if (should_follow_link(path->dentry, follow)) {
- 		if (nd->flags & LOOKUP_RCU) {
--			if (unlikely(unlazy_walk(nd, path->dentry))) {
-+			if (unlikely(nd->path.mnt != path->mnt ||
-+				     unlazy_walk(nd, path->dentry))) {
- 				err = -ECHILD;
- 				goto out_err;
- 			}
-@@ -3047,7 +3048,8 @@ finish_lookup:
+ /*
+- * Enable debugging for nfsd.
+- * Requires RPC_DEBUG.
+- */
+-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+-# define NFSD_DEBUG		1
+-#endif
+-
+-/*
+  * knfsd debug flags
+  */
+ #define NFSDDBG_SOCK		0x0001
+diff --git a/include/video/samsung_fimd.h b/include/video/samsung_fimd.h
+index a20e4a3..847a0a2 100644
+--- a/include/video/samsung_fimd.h
++++ b/include/video/samsung_fimd.h
+@@ -436,6 +436,12 @@
+ #define BLENDCON_NEW_8BIT_ALPHA_VALUE		(1 << 0)
+ #define BLENDCON_NEW_4BIT_ALPHA_VALUE		(0 << 0)
  
- 	if (should_follow_link(path->dentry, !symlink_ok)) {
- 		if (nd->flags & LOOKUP_RCU) {
--			if (unlikely(unlazy_walk(nd, path->dentry))) {
-+			if (unlikely(nd->path.mnt != path->mnt ||
-+				     unlazy_walk(nd, path->dentry))) {
- 				error = -ECHILD;
- 				goto out;
- 			}
--- 
-2.3.6
-
-
-From 04dcce2b2b45c99fdaebd0baa19640674ea388f4 Mon Sep 17 00:00:00 2001
-From: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
-Date: Thu, 16 Apr 2015 18:48:39 +0800
-Subject: [PATCH 196/219] Revert "nfs: replace nfs_add_stats with nfs_inc_stats
- when add one"
-Cc: mpagano@gentoo.org
-
-commit 3708f842e107b9b79d54a75d152e666b693649e8 upstream.
-
-This reverts commit 5a254d08b086d80cbead2ebcee6d2a4b3a15587a.
-
-Since commit 5a254d08b086 ("nfs: replace nfs_add_stats with
-nfs_inc_stats when add one"), nfs_readpage and nfs_do_writepage use
-nfs_inc_stats to increment NFSIOS_READPAGES and NFSIOS_WRITEPAGES
-instead of nfs_add_stats.
-
-However nfs_inc_stats does not do the same thing as nfs_add_stats with
-value 1 because these functions work on distinct stats:
-nfs_inc_stats increments stats from "enum nfs_stat_eventcounters" (in
-server->io_stats->events) and nfs_add_stats those from "enum
-nfs_stat_bytecounters" (in server->io_stats->bytes).
-
-Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
-Fixes: 5a254d08b086 ("nfs: replace nfs_add_stats with nfs_inc_stats...")
-Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfs/read.c  | 2 +-
- fs/nfs/write.c | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/fs/nfs/read.c b/fs/nfs/read.c
-index 568ecf0..848d8b1 100644
---- a/fs/nfs/read.c
-+++ b/fs/nfs/read.c
-@@ -284,7 +284,7 @@ int nfs_readpage(struct file *file, struct page *page)
- 	dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
- 		page, PAGE_CACHE_SIZE, page_file_index(page));
- 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
--	nfs_inc_stats(inode, NFSIOS_READPAGES);
-+	nfs_add_stats(inode, NFSIOS_READPAGES, 1);
++/* Display port clock control */
++#define DP_MIE_CLKCON				0x27c
++#define DP_MIE_CLK_DISABLE			0x0
++#define DP_MIE_CLK_DP_ENABLE			0x2
++#define DP_MIE_CLK_MIE_ENABLE			0x3
++
+ /* Notes on per-window bpp settings
+  *
+  * Value	Win0	 Win1	  Win2	   Win3	    Win 4
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 36508e6..5d8ea3d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -755,7 +755,7 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 	enum bpf_reg_type expected_type;
+ 	int err = 0;
  
- 	/*
- 	 * Try to flush any pending writes to the file..
-diff --git a/fs/nfs/write.c b/fs/nfs/write.c
-index 849ed78..41b3f1096 100644
---- a/fs/nfs/write.c
-+++ b/fs/nfs/write.c
-@@ -580,7 +580,7 @@ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc, st
- 	int ret;
+-	if (arg_type == ARG_ANYTHING)
++	if (arg_type == ARG_DONTCARE)
+ 		return 0;
  
- 	nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
--	nfs_inc_stats(inode, NFSIOS_WRITEPAGES);
-+	nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1);
+ 	if (reg->type == NOT_INIT) {
+@@ -763,6 +763,9 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 		return -EACCES;
+ 	}
  
- 	nfs_pageio_cond_complete(pgio, page_file_index(page));
- 	ret = nfs_page_async_flush(pgio, page, wbc->sync_mode == WB_SYNC_NONE);
--- 
-2.3.6
-
-
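
The distinction matters because the two helpers index disjoint counter
arrays, so incrementing an event counter is never equivalent to adding 1 to a
byte counter. Schematically, with array sizes and field names simplified:

	/* Simplified sketch of why nfs_inc_stats(x) != nfs_add_stats(x, 1). */
	struct nfs_iostats_sketch {
		unsigned long      events[32]; /* enum nfs_stat_eventcounters */
		unsigned long long bytes[16];  /* enum nfs_stat_bytecounters  */
	};

	static void inc_stats(struct nfs_iostats_sketch *s, int stat)
	{
		s->events[stat]++;		/* touches the event array only */
	}

	static void add_stats(struct nfs_iostats_sketch *s, int stat,
			      unsigned long addend)
	{
		s->bytes[stat] += addend;	/* touches the byte array only */
	}
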
-From 2556cb4a63a559a09112aba49d0112bd7dc4d2d6 Mon Sep 17 00:00:00 2001
-From: "J. Bruce Fields" <bfields@redhat.com>
-Date: Fri, 3 Apr 2015 16:24:27 -0400
-Subject: [PATCH 197/219] nfsd4: disallow ALLOCATE with special stateids
-Cc: mpagano@gentoo.org
-
-commit 5ba4a25ab7b13be528b23f85182f4d09cf7f71ad upstream.
-
-vfs_fallocate will hit a NULL dereference if the client tries an
-ALLOCATE or DEALLOCATE with a special stateid.  Fix that.  (We also
-depend on the open to have broken any conflicting leases or delegations
-for us.)
-
-(If it turns out we need to allow special stateids then we could do a
-temporary open here in the special-stateid case, as we do for read and
-write.  For now I'm assuming it's not necessary.)
-
-Fixes: 95d871f03cae "nfsd: Add ALLOCATE support"
-Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
-Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfsd/nfs4proc.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
-index 92b9d97..5912967 100644
---- a/fs/nfsd/nfs4proc.c
-+++ b/fs/nfsd/nfs4proc.c
-@@ -1030,6 +1030,8 @@ nfsd4_fallocate(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
- 		dprintk("NFSD: nfsd4_fallocate: couldn't process stateid!\n");
- 		return status;
++	if (arg_type == ARG_ANYTHING)
++		return 0;
++
+ 	if (arg_type == ARG_PTR_TO_STACK || arg_type == ARG_PTR_TO_MAP_KEY ||
+ 	    arg_type == ARG_PTR_TO_MAP_VALUE) {
+ 		expected_type = PTR_TO_STACK;
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 227fec3..9a34bd8 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -697,6 +697,8 @@ static int ptrace_peek_siginfo(struct task_struct *child,
+ static int ptrace_resume(struct task_struct *child, long request,
+ 			 unsigned long data)
+ {
++	bool need_siglock;
++
+ 	if (!valid_signal(data))
+ 		return -EIO;
+ 
+@@ -724,8 +726,26 @@ static int ptrace_resume(struct task_struct *child, long request,
+ 		user_disable_single_step(child);
  	}
-+	if (!file)
-+		return nfserr_bad_stateid;
  
- 	status = nfsd4_vfs_fallocate(rqstp, &cstate->current_fh, file,
- 				     fallocate->falloc_offset,
--- 
-2.3.6
-
-
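
The guard is the generic defence for operations that take a stateid: a
special (anonymous) stateid produces no struct file, so the check must come
before anything is dereferenced. The same two-line pattern is applied to SEEK
in patch 199 below. In isolation:

	/* Pattern shared by ALLOCATE/DEALLOCATE and SEEK: reject special
	 * stateids before handing a possibly-NULL file to the VFS.
	 */
	if (!file)
		return nfserr_bad_stateid; /* no open file behind the stateid */
	/* ... vfs_fallocate()/vfs_llseek() may now safely use file ... */
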
-From e2efc21fbad9a8d055586716fad4d4baaf210b56 Mon Sep 17 00:00:00 2001
-From: "J. Bruce Fields" <bfields@redhat.com>
-Date: Fri, 3 Apr 2015 17:19:41 -0400
-Subject: [PATCH 198/219] nfsd4: fix READ permission checking
-Cc: mpagano@gentoo.org
-
-commit 6e4891dc289cd191d46ab7ba1dcb29646644f9ca upstream.
-
-In the case we already have a struct file (derived from a stateid), we
-still need to do permission-checking; otherwise an unauthorized user
-could gain access to a file by sniffing or guessing somebody else's
-stateid.
-
-Fixes: dc97618ddda9 "nfsd4: separate splice and readv cases"
-Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfsd/nfs4xdr.c | 12 ++++++++----
- 1 file changed, 8 insertions(+), 4 deletions(-)
-
-diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
-index 5fb7e78..5b33ce1 100644
---- a/fs/nfsd/nfs4xdr.c
-+++ b/fs/nfsd/nfs4xdr.c
-@@ -3422,6 +3422,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
- 	unsigned long maxcount;
- 	struct xdr_stream *xdr = &resp->xdr;
- 	struct file *file = read->rd_filp;
-+	struct svc_fh *fhp = read->rd_fhp;
- 	int starting_len = xdr->buf->len;
- 	struct raparms *ra;
- 	__be32 *p;
-@@ -3445,12 +3446,15 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
- 	maxcount = min_t(unsigned long, maxcount, (xdr->buf->buflen - xdr->buf->len));
- 	maxcount = min_t(unsigned long, maxcount, read->rd_length);
++	/*
++	 * Change ->exit_code and ->state under siglock to avoid the race
++	 * with wait_task_stopped() in between; a non-zero ->exit_code will
++	 * wrongly look like another report from tracee.
++	 *
++	 * Note that we need siglock even if ->exit_code == data and/or this
++	 * status was not reported yet, the new status must not be cleared by
++	 * wait_task_stopped() after resume.
++	 *
++	 * If data == 0 we do not care if wait_task_stopped() reports the old
++	 * status and clears the code too; this can't race with the tracee, it
++	 * takes siglock after resume.
++	 */
++	need_siglock = data && !thread_group_empty(current);
++	if (need_siglock)
++		spin_lock_irq(&child->sighand->siglock);
+ 	child->exit_code = data;
+ 	wake_up_state(child, __TASK_TRACED);
++	if (need_siglock)
++		spin_unlock_irq(&child->sighand->siglock);
  
--	if (!read->rd_filp) {
-+	if (read->rd_filp)
-+		err = nfsd_permission(resp->rqstp, fhp->fh_export,
-+				fhp->fh_dentry,
-+				NFSD_MAY_READ|NFSD_MAY_OWNER_OVERRIDE);
-+	else
- 		err = nfsd_get_tmp_read_open(resp->rqstp, read->rd_fhp,
- 						&file, &ra);
--		if (err)
--			goto err_truncate;
--	}
-+	if (err)
-+		goto err_truncate;
+ 	return 0;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 62671f5..3d5f6f6 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -996,6 +996,13 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
+ 		rq_clock_skip_update(rq, true);
+ }
+ 
++static ATOMIC_NOTIFIER_HEAD(task_migration_notifier);
++
++void register_task_migration_notifier(struct notifier_block *n)
++{
++	atomic_notifier_chain_register(&task_migration_notifier, n);
++}
++
+ #ifdef CONFIG_SMP
+ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ {
+@@ -1026,10 +1033,18 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ 	trace_sched_migrate_task(p, new_cpu);
  
- 	if (file->f_op->splice_read && test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
- 		err = nfsd4_encode_splice_read(resp, read, file, maxcount);
--- 
-2.3.6
-
-
-From 6fd154a83b18bc81aa3f1071e74c36d9076ff4b9 Mon Sep 17 00:00:00 2001
-From: "J. Bruce Fields" <bfields@redhat.com>
-Date: Tue, 21 Apr 2015 15:25:39 -0400
-Subject: [PATCH 199/219] nfsd4: disallow SEEK with special stateids
-Cc: mpagano@gentoo.org
-
-commit 980608fb50aea34993ba956b71cd4602aa42b14b upstream.
-
-If the client uses a special stateid then we'll pass a NULL file to
-vfs_llseek.
-
-Fixes: 24bab491220f " NFSD: Implement SEEK"
-Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
-Reported-by: Christoph Hellwig <hch@infradead.org>
-Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfsd/nfs4proc.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
-index 5912967..5416968 100644
---- a/fs/nfsd/nfs4proc.c
-+++ b/fs/nfsd/nfs4proc.c
-@@ -1071,6 +1071,8 @@ nfsd4_seek(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
- 		dprintk("NFSD: nfsd4_seek: couldn't process stateid!\n");
- 		return status;
+ 	if (task_cpu(p) != new_cpu) {
++		struct task_migration_notifier tmn;
++
+ 		if (p->sched_class->migrate_task_rq)
+ 			p->sched_class->migrate_task_rq(p, new_cpu);
+ 		p->se.nr_migrations++;
+ 		perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
++
++		tmn.task = p;
++		tmn.from_cpu = task_cpu(p);
++		tmn.to_cpu = new_cpu;
++
++		atomic_notifier_call_chain(&task_migration_notifier, 0, &tmn);
  	}
-+	if (!file)
-+		return nfserr_bad_stateid;
  
- 	switch (seek->seek_whence) {
- 	case NFS4_CONTENT_DATA:
--- 
-2.3.6
-
-
-From 1f8303c597803d7d7c6943708dff333dbbc009a1 Mon Sep 17 00:00:00 2001
-From: Mark Salter <msalter@redhat.com>
-Date: Mon, 6 Apr 2015 09:46:00 -0400
-Subject: [PATCH 200/219] nfsd: eliminate NFSD_DEBUG
-Cc: mpagano@gentoo.org
-
-commit 135dd002c23054aaa056ea3162c1e0356905c195 upstream.
-
-Commit f895b252d4edf ("sunrpc: eliminate RPC_DEBUG") introduced
-use of IS_ENABLED() in a uapi header which leads to a build
-failure for userspace apps trying to use <linux/nfsd/debug.h>:
-
-   linux/nfsd/debug.h:18:15: error: missing binary operator before token "("
-  #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
-                ^
-
-Since this was only used to define NFSD_DEBUG if CONFIG_SUNRPC_DEBUG
-is enabled, replace instances of NFSD_DEBUG with CONFIG_SUNRPC_DEBUG.
-
-Fixes: f895b252d4edf "sunrpc: eliminate RPC_DEBUG"
-Signed-off-by: Mark Salter <msalter@redhat.com>
-Reviewed-by: Jeff Layton <jlayton@primarydata.com>
-Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/lockd/svcsubs.c              | 2 +-
- fs/nfsd/nfs4state.c             | 2 +-
- fs/nfsd/nfsd.h                  | 2 +-
- include/uapi/linux/nfsd/debug.h | 8 --------
- 4 files changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
-index 665ef5a..a563ddb 100644
---- a/fs/lockd/svcsubs.c
-+++ b/fs/lockd/svcsubs.c
-@@ -31,7 +31,7 @@
- static struct hlist_head	nlm_files[FILE_NRHASH];
- static DEFINE_MUTEX(nlm_file_mutex);
+ 	__set_task_cpu(p, new_cpu);
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 3fa8fa6..f670cbb 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 	unsigned long flags;
+ 	struct rq *rq;
  
--#ifdef NFSD_DEBUG
-+#ifdef CONFIG_SUNRPC_DEBUG
- static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+-	rq = task_rq_lock(current, &flags);
++	rq = task_rq_lock(p, &flags);
+ 
+ 	/*
+ 	 * We need to take care of several possible races here:
+@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 		push_dl_task(rq);
+ #endif
+ unlock:
+-	task_rq_unlock(rq, current, &flags);
++	task_rq_unlock(rq, p, &flags);
+ 
+ 	return HRTIMER_NORESTART;
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5040d44..922048a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2679,7 +2679,7 @@ static DEFINE_PER_CPU(unsigned int, current_context);
+ 
+ static __always_inline int trace_recursive_lock(void)
  {
- 	u32 *fhp = (u32*)f->data;
-diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
-index 8ba1d88..ee1cccd 100644
---- a/fs/nfsd/nfs4state.c
-+++ b/fs/nfsd/nfs4state.c
-@@ -1139,7 +1139,7 @@ hash_sessionid(struct nfs4_sessionid *sessionid)
- 	return sid->sequence % SESSION_HASH_SIZE;
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 	int bit;
+ 
+ 	if (in_interrupt()) {
+@@ -2696,18 +2696,17 @@ static __always_inline int trace_recursive_lock(void)
+ 		return 1;
+ 
+ 	val |= (1 << bit);
+-	this_cpu_write(current_context, val);
++	__this_cpu_write(current_context, val);
+ 
+ 	return 0;
  }
  
--#ifdef NFSD_DEBUG
-+#ifdef CONFIG_SUNRPC_DEBUG
- static inline void
- dump_sessionid(const char *fn, struct nfs4_sessionid *sessionid)
+ static __always_inline void trace_recursive_unlock(void)
  {
-diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
-index 565c4da..cf98052 100644
---- a/fs/nfsd/nfsd.h
-+++ b/fs/nfsd/nfsd.h
-@@ -24,7 +24,7 @@
- #include "export.h"
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 
+-	val--;
+-	val &= this_cpu_read(current_context);
+-	this_cpu_write(current_context, val);
++	val &= val & (val - 1);
++	__this_cpu_write(current_context, val);
+ }
  
- #undef ifdebug
--#ifdef NFSD_DEBUG
-+#ifdef CONFIG_SUNRPC_DEBUG
- # define ifdebug(flag)		if (nfsd_debug & NFSDDBG_##flag)
  #else
- # define ifdebug(flag)		if (0)
-diff --git a/include/uapi/linux/nfsd/debug.h b/include/uapi/linux/nfsd/debug.h
-index 0bf130a..28ec6c9 100644
---- a/include/uapi/linux/nfsd/debug.h
-+++ b/include/uapi/linux/nfsd/debug.h
-@@ -12,14 +12,6 @@
- #include <linux/sunrpc/debug.h>
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index db54dda..a9c10a3 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -565,6 +565,7 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
+ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ {
+ 	char *event = NULL, *sub = NULL, *match;
++	int ret;
  
- /*
-- * Enable debugging for nfsd.
-- * Requires RPC_DEBUG.
-- */
--#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
--# define NFSD_DEBUG		1
--#endif
--
--/*
-  * knfsd debug flags
-  */
- #define NFSDDBG_SOCK		0x0001
--- 
-2.3.6
-
-
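
The underlying rule: IS_ENABLED() expands via helper macros defined in the
kernel-only <linux/kconfig.h>, so in a userspace compile the token survives
unexpanded and "#if IS_ENABLED(...)" is a preprocessor syntax error, while a
plain "#ifdef CONFIG_..." simply evaluates false. The surviving
kernel-internal usage, taken from the nfsd.h hunk above, therefore reads:

	/* Kernel-internal header: a bare #ifdef degrades gracefully when
	 * CONFIG_SUNRPC_DEBUG is undefined, which is also what any userspace
	 * inclusion would see.
	 */
	#ifdef CONFIG_SUNRPC_DEBUG
	# define ifdebug(flag)	if (nfsd_debug & NFSDDBG_##flag)
	#else
	# define ifdebug(flag)	if (0)
	#endif
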
-From d5d30089c2a59d079a074eb37c8c223b81664ceb Mon Sep 17 00:00:00 2001
-From: Giuseppe Cantavenera <giuseppe.cantavenera.ext@nokia.com>
-Date: Mon, 20 Apr 2015 18:00:08 +0200
-Subject: [PATCH 201/219] nfsd: fix nfsd startup race triggering BUG_ON
-Cc: mpagano@gentoo.org
-
-commit bb7ffbf29e76b89a86ca4c3ee0d4690641f2f772 upstream.
-
-nfsd triggered a BUG_ON in net_generic(...) when rpc_pipefs_event(...)
-in fs/nfsd/nfs4recover.c was called before assigning nfsd_net_id.
-The following was observed on a MIPS 32-core processor:
-kernel: Call Trace:
-kernel: [<ffffffffc00bc5e4>] rpc_pipefs_event+0x7c/0x158 [nfsd]
-kernel: [<ffffffff8017a2a0>] notifier_call_chain+0x70/0xb8
-kernel: [<ffffffff8017a4e4>] __blocking_notifier_call_chain+0x4c/0x70
-kernel: [<ffffffff8053aff8>] rpc_fill_super+0xf8/0x1a0
-kernel: [<ffffffff8022204c>] mount_ns+0xb4/0xf0
-kernel: [<ffffffff80222b48>] mount_fs+0x50/0x1f8
-kernel: [<ffffffff8023dc00>] vfs_kern_mount+0x58/0xf0
-kernel: [<ffffffff802404ac>] do_mount+0x27c/0xa28
-kernel: [<ffffffff80240cf0>] SyS_mount+0x98/0xe8
-kernel: [<ffffffff80135d24>] handle_sys64+0x44/0x68
-kernel:
-kernel:
-        Code: 0040f809  00000000  2e020001 <00020336> 3c12c00d
-                3c02801a  de100000 6442eb98  0040f809
-kernel: ---[ end trace 7471374335809536 ]---
-
-Fixed this behaviour by calling register_pernet_subsys(&nfsd_net_ops) before
-registering rpc_pipefs_event(...) with the notifier chain.
-
-Signed-off-by: Giuseppe Cantavenera <giuseppe.cantavenera.ext@nokia.com>
-Signed-off-by: Lorenzo Restelli <lorenzo.restelli.ext@nokia.com>
-Reviewed-by: Kinglong Mee <kinglongmee@gmail.com>
-Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfsd/nfsctl.c | 16 ++++++++--------
- 1 file changed, 8 insertions(+), 8 deletions(-)
-
-diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
-index aa47d75..9690cb4 100644
---- a/fs/nfsd/nfsctl.c
-+++ b/fs/nfsd/nfsctl.c
-@@ -1250,15 +1250,15 @@ static int __init init_nfsd(void)
- 	int retval;
- 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 	/*
+ 	 * The buf format can be <subsystem>:<event-name>
+@@ -590,7 +591,13 @@ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ 			event = NULL;
+ 	}
  
--	retval = register_cld_notifier();
--	if (retval)
--		return retval;
- 	retval = register_pernet_subsys(&nfsd_net_ops);
- 	if (retval < 0)
--		goto out_unregister_notifier;
--	retval = nfsd4_init_slabs();
-+		return retval;
-+	retval = register_cld_notifier();
- 	if (retval)
- 		goto out_unregister_pernet;
-+	retval = nfsd4_init_slabs();
-+	if (retval)
-+		goto out_unregister_notifier;
- 	retval = nfsd4_init_pnfs();
- 	if (retval)
- 		goto out_free_slabs;
-@@ -1290,10 +1290,10 @@ out_exit_pnfs:
- 	nfsd4_exit_pnfs();
- out_free_slabs:
- 	nfsd4_free_slabs();
--out_unregister_pernet:
--	unregister_pernet_subsys(&nfsd_net_ops);
- out_unregister_notifier:
- 	unregister_cld_notifier();
-+out_unregister_pernet:
-+	unregister_pernet_subsys(&nfsd_net_ops);
- 	return retval;
+-	return __ftrace_set_clr_event(tr, match, sub, event, set);
++	ret = __ftrace_set_clr_event(tr, match, sub, event, set);
++
++	/* Put back the colon to allow this to be called again */
++	if (buf)
++		*(buf - 1) = ':';
++
++	return ret;
  }
  
-@@ -1308,8 +1308,8 @@ static void __exit exit_nfsd(void)
- 	nfsd4_exit_pnfs();
- 	nfsd_fault_inject_cleanup();
- 	unregister_filesystem(&nfsd_fs_type);
--	unregister_pernet_subsys(&nfsd_net_ops);
- 	unregister_cld_notifier();
-+	unregister_pernet_subsys(&nfsd_net_ops);
+ /**
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 2d25ad1..b6fce36 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1309,15 +1309,19 @@ void graph_trace_open(struct trace_iterator *iter)
+ {
+ 	/* pid and depth on the last trace processed */
+ 	struct fgraph_data *data;
++	gfp_t gfpflags;
+ 	int cpu;
+ 
+ 	iter->private = NULL;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	/* We can be called in atomic context via ftrace_dump() */
++	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
++
++	data = kzalloc(sizeof(*data), gfpflags);
+ 	if (!data)
+ 		goto out_err;
+ 
+-	data->cpu_data = alloc_percpu(struct fgraph_cpu_data);
++	data->cpu_data = alloc_percpu_gfp(struct fgraph_cpu_data, gfpflags);
+ 	if (!data->cpu_data)
+ 		goto out_err_free;
+ 
+diff --git a/lib/string.c b/lib/string.c
+index ce81aae..a579201 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -607,7 +607,7 @@ EXPORT_SYMBOL(memset);
+ void memzero_explicit(void *s, size_t count)
+ {
+ 	memset(s, 0, count);
+-	OPTIMIZER_HIDE_VAR(s);
++	barrier();
  }
+ EXPORT_SYMBOL(memzero_explicit);
  
- MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
--- 
-2.3.6
-
-
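
The shape of the fix is the usual one for notifier/registration ordering:
make every resource a callback can touch live before the callback can fire,
and unwind in exact reverse order on error. Stripped of the nfsd specifics,
a sketch of the corrected bring-up:

	/* Sketch of the reordered init_nfsd() prologue. */
	static int __init init_sketch(void)
	{
		int ret;

		ret = register_pernet_subsys(&nfsd_net_ops); /* sets nfsd_net_id */
		if (ret < 0)
			return ret;

		ret = register_cld_notifier(); /* callbacks may now use the id */
		if (ret)
			goto out_unregister_pernet;

		return 0;

	out_unregister_pernet:
		unregister_pernet_subsys(&nfsd_net_ops);
		return ret;
	}
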
-From c59908b7a9d4b76f72367f055559663e1da274fc Mon Sep 17 00:00:00 2001
-From: Jeff Layton <jlayton@poochiereds.net>
-Date: Fri, 20 Mar 2015 15:15:14 -0400
-Subject: [PATCH 202/219] nfs: fix high load average due to callback thread
- sleeping
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-Cc: mpagano@gentoo.org
-
-commit 5d05e54af3cdbb13cf19c557ff2184781b91a22c upstream.
-
-Chuck pointed out a problem that crept in with commit 6ffa30d3f734 (nfs:
-don't call blocking operations while !TASK_RUNNING). Linux counts tasks
-in uninterruptible sleep against the load average, so this caused the
-system's load average to be pinned at at least 1 when there was a
-NFSv4.1+ mount active.
-
-Not a huge problem, but it's probably worth fixing before we get too
-many complaints about it. This patch converts the code back to use
-TASK_INTERRUPTIBLE sleep, and simply has it flush any signals on each loop
-iteration. In practice no one should really be signalling this thread at
-all, so I think this is reasonably safe.
-
-With this change, there's also no need to game the hung task watchdog so
-we can also convert the schedule_timeout call back to a normal schedule.
-
-Reported-by: Chuck Lever <chuck.lever@oracle.com>
-Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
-Tested-by: Chuck Lever <chuck.lever@oracle.com>
-Fixes: commit 6ffa30d3f734 (“nfs: don't call blocking . . .”)
-Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfs/callback.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
-index 351be920..8d129bb 100644
---- a/fs/nfs/callback.c
-+++ b/fs/nfs/callback.c
-@@ -128,7 +128,7 @@ nfs41_callback_svc(void *vrqstp)
- 		if (try_to_freeze())
- 			continue;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6817b03..956d4db 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2316,8 +2316,14 @@ static struct page
+ 		       struct vm_area_struct *vma, unsigned long address,
+ 		       int node)
+ {
++	gfp_t flags;
++
+ 	VM_BUG_ON_PAGE(*hpage, *hpage);
+ 
++	/* Only allocate from the target node */
++	flags = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
++	        __GFP_THISNODE;
++
+ 	/*
+ 	 * Before allocating the hugepage, release the mmap_sem read lock.
+ 	 * The allocation can take potentially a long time if it involves
+@@ -2326,8 +2332,7 @@ static struct page
+ 	 */
+ 	up_read(&mm->mmap_sem);
  
--		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_UNINTERRUPTIBLE);
-+		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
- 		spin_lock_bh(&serv->sv_cb_lock);
- 		if (!list_empty(&serv->sv_cb_list)) {
- 			req = list_first_entry(&serv->sv_cb_list,
-@@ -142,10 +142,10 @@ nfs41_callback_svc(void *vrqstp)
- 				error);
- 		} else {
- 			spin_unlock_bh(&serv->sv_cb_lock);
--			/* schedule_timeout to game the hung task watchdog */
--			schedule_timeout(60 * HZ);
-+			schedule();
- 			finish_wait(&serv->sv_cb_waitq, &wq);
+-	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
+-		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
++	*hpage = alloc_pages_exact_node(node, flags, HPAGE_PMD_ORDER);
+ 	if (unlikely(!*hpage)) {
+ 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+ 		*hpage = ERR_PTR(-ENOMEM);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c41b2a0..caad3c5 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3735,8 +3735,7 @@ retry:
+ 	if (!pmd_huge(*pmd))
+ 		goto out;
+ 	if (pmd_present(*pmd)) {
+-		page = pte_page(*(pte_t *)pmd) +
+-			((address & ~PMD_MASK) >> PAGE_SHIFT);
++		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+ 		if (flags & FOLL_GET)
+ 			get_page(page);
+ 	} else {
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 4721046..de5dc5e 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1985,7 +1985,8 @@ retry_cpuset:
+ 		nmask = policy_nodemask(gfp, pol);
+ 		if (!nmask || node_isset(node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = alloc_pages_exact_node(node, gfp, order);
++			page = alloc_pages_exact_node(node,
++						gfp | __GFP_THISNODE, order);
+ 			goto out;
  		}
-+		flush_signals(current);
  	}
- 	return 0;
- }
--- 
-2.3.6
-
-
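
The resulting idle loop: an interruptible sleep keeps the thread out of the
load-average bookkeeping, a plain schedule() replaces the watchdog-appeasing
timeout, and stray signals are discarded each iteration since nothing
legitimate signals this thread. In outline, with the wait queue and the
work-check names illustrative:

	/* Outline of the nfs41_callback_svc() loop after this patch. */
	while (!kthread_should_stop()) {
		prepare_to_wait(&cb_waitq, &wq, TASK_INTERRUPTIBLE);
		if (have_callback_request())	/* illustrative predicate */
			process_one_request();	/* illustrative */
		else
			schedule();	/* interruptible: not counted in loadavg */
		finish_wait(&cb_waitq, &wq);
		flush_signals(current);	/* nobody should signal us; drop any */
	}
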
-From dcd8d0c80e86b8821c5a453b5bf782328d8580e1 Mon Sep 17 00:00:00 2001
-From: Peng Tao <tao.peng@primarydata.com>
-Date: Thu, 9 Apr 2015 23:02:16 +0800
-Subject: [PATCH 203/219] nfs: fix DIO good bytes calculation
-Cc: mpagano@gentoo.org
-
-commit 1ccbad9f9f9bd36db26a10f0b17fbaf12b3ae93a upstream.
-
-For a direct read with an IO size larger than rsize, we'll split
-it into several READ requests and nfs_direct_good_bytes() would
-count completed bytes incorrectly by eating last zero count reply.
-
-Fix it by handling mirror and non-mirror cases differently such that
-we only count mirrored writes differently.
-
-This fixes 5fadeb47("nfs: count DIO good bytes correctly with mirroring").
-
-Reported-by: Jean Spector <jean@primarydata.com>
-Signed-off-by: Peng Tao <tao.peng@primarydata.com>
-Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfs/direct.c | 29 +++++++++++++++++------------
- 1 file changed, 17 insertions(+), 12 deletions(-)
-
-diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
-index e907c8c..5e451a7 100644
---- a/fs/nfs/direct.c
-+++ b/fs/nfs/direct.c
-@@ -131,20 +131,25 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
- 
- 	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
+diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
+index 0ee453f..f371cbf 100644
+--- a/net/bridge/br_netfilter.c
++++ b/net/bridge/br_netfilter.c
+@@ -651,6 +651,13 @@ static int br_nf_forward_finish(struct sk_buff *skb)
+ 	struct net_device *in;
  
--	count = dreq->mirrors[hdr->pgio_mirror_idx].count;
--	if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
--		count = hdr->io_start + hdr->good_bytes - dreq->io_start;
--		dreq->mirrors[hdr->pgio_mirror_idx].count = count;
--	}
--
--	/* update the dreq->count by finding the minimum agreed count from all
--	 * mirrors */
--	count = dreq->mirrors[0].count;
-+	if (dreq->mirror_count == 1) {
-+		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
-+		dreq->count += hdr->good_bytes;
-+	} else {
-+		/* mirrored writes */
-+		count = dreq->mirrors[hdr->pgio_mirror_idx].count;
-+		if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
-+			count = hdr->io_start + hdr->good_bytes - dreq->io_start;
-+			dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+ 	if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) {
++		int frag_max_size;
++
++		if (skb->protocol == htons(ETH_P_IP)) {
++			frag_max_size = IPCB(skb)->frag_max_size;
++			BR_INPUT_SKB_CB(skb)->frag_max_size = frag_max_size;
 +		}
-+		/* update the dreq->count by finding the minimum agreed count from all
-+		 * mirrors */
-+		count = dreq->mirrors[0].count;
- 
--	for (i = 1; i < dreq->mirror_count; i++)
--		count = min(count, dreq->mirrors[i].count);
-+		for (i = 1; i < dreq->mirror_count; i++)
-+			count = min(count, dreq->mirrors[i].count);
++
+ 		in = nf_bridge->physindev;
+ 		if (nf_bridge->mask & BRNF_PKT_TYPE) {
+ 			skb->pkt_type = PACKET_OTHERHOST;
+@@ -710,8 +717,14 @@ static unsigned int br_nf_forward_ip(const struct nf_hook_ops *ops,
+ 		nf_bridge->mask |= BRNF_PKT_TYPE;
+ 	}
  
--	dreq->count = count;
-+		dreq->count = count;
+-	if (pf == NFPROTO_IPV4 && br_parse_ip_options(skb))
+-		return NF_DROP;
++	if (pf == NFPROTO_IPV4) {
++		int frag_max = BR_INPUT_SKB_CB(skb)->frag_max_size;
++
++		if (br_parse_ip_options(skb))
++			return NF_DROP;
++
++		IPCB(skb)->frag_max_size = frag_max;
 +	}
- }
  
- /*
--- 
-2.3.6
-
-
-From 5efdfc74ab7d8ccfce9f8517012e3962939c91fc Mon Sep 17 00:00:00 2001
-From: Peng Tao <tao.peng@primarydata.com>
-Date: Thu, 9 Apr 2015 23:02:17 +0800
-Subject: [PATCH 204/219] nfs: remove WARN_ON_ONCE from nfs_direct_good_bytes
-Cc: mpagano@gentoo.org
-
-commit 05f54903d9d370a4cd302a85681304d3ec59e5c1 upstream.
-
-For flexfiles driver, we might choose to read from mirror index other
-than 0 while mirror_count is always 1 for read.
-
-Reported-by: Jean Spector <jean@primarydata.com>
-Cc: Weston Andros Adamson <dros@primarydata.com>
-Signed-off-by: Peng Tao <tao.peng@primarydata.com>
-Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfs/direct.c | 2 --
- 1 file changed, 2 deletions(-)
-
-diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
-index 5e451a7..ab21ef1 100644
---- a/fs/nfs/direct.c
-+++ b/fs/nfs/direct.c
-@@ -129,8 +129,6 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
- 	int i;
- 	ssize_t count;
+ 	/* The physdev module checks on this */
+ 	nf_bridge->mask |= BRNF_BRIDGED;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 45109b7..22a53ac 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3041,7 +3041,7 @@ static struct rps_dev_flow *
+ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 	    struct rps_dev_flow *rflow, u16 next_cpu)
+ {
+-	if (next_cpu != RPS_NO_CPU) {
++	if (next_cpu < nr_cpu_ids) {
+ #ifdef CONFIG_RFS_ACCEL
+ 		struct netdev_rx_queue *rxqueue;
+ 		struct rps_dev_flow_table *flow_table;
+@@ -3146,7 +3146,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 * If the desired CPU (where last recvmsg was done) is
+ 		 * different from current CPU (one in the rx-queue flow
+ 		 * table entry), switch if one of the following holds:
+-		 *   - Current CPU is unset (equal to RPS_NO_CPU).
++		 *   - Current CPU is unset (>= nr_cpu_ids).
+ 		 *   - Current CPU is offline.
+ 		 *   - The current CPU's queue tail has advanced beyond the
+ 		 *     last packet that was enqueued using this table entry.
+@@ -3154,14 +3154,14 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 *     have been dequeued, thus preserving in order delivery.
+ 		 */
+ 		if (unlikely(tcpu != next_cpu) &&
+-		    (tcpu == RPS_NO_CPU || !cpu_online(tcpu) ||
++		    (tcpu >= nr_cpu_ids || !cpu_online(tcpu) ||
+ 		     ((int)(per_cpu(softnet_data, tcpu).input_queue_head -
+ 		      rflow->last_qtail)) >= 0)) {
+ 			tcpu = next_cpu;
+ 			rflow = set_rps_cpu(dev, skb, rflow, next_cpu);
+ 		}
  
--	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
--
- 	if (dreq->mirror_count == 1) {
- 		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
- 		dreq->count += hdr->good_bytes;
--- 
-2.3.6
-
-
-From ecb403f5eaf05dd7a9160fae030d55e23a5a4445 Mon Sep 17 00:00:00 2001
-From: Anna Schumaker <Anna.Schumaker@netapp.com>
-Date: Tue, 14 Apr 2015 10:34:20 -0400
-Subject: [PATCH 205/219] NFS: Add a stub for GETDEVICELIST
-Cc: mpagano@gentoo.org
-
-commit 7c61f0d3897eeeff6f3294adb9f910ddefa8035a upstream.
-
-d4b18c3e (pnfs: remove GETDEVICELIST implementation) removed the
-GETDEVICELIST operation from the NFS client, but left a "hole" in the
-nfs4_procedures array.  This caused /proc/self/mountstats to report an
-operation named "51" where GETDEVICELIST used to be.  This patch adds a
-stub to fix mountstats.
-
-Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-Fixes: d4b18c3e (pnfs: remove GETDEVICELIST implementation)
-Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- fs/nfs/nfs4xdr.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
-index 5c399ec..d494ea2 100644
---- a/fs/nfs/nfs4xdr.c
-+++ b/fs/nfs/nfs4xdr.c
-@@ -7365,6 +7365,11 @@ nfs4_stat_to_errno(int stat)
- 	.p_name   = #proc,					\
- }
+-		if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) {
++		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
+ 			*rflowp = rflow;
+ 			cpu = tcpu;
+ 			goto done;
+@@ -3202,14 +3202,14 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
+ 	struct rps_dev_flow_table *flow_table;
+ 	struct rps_dev_flow *rflow;
+ 	bool expire = true;
+-	int cpu;
++	unsigned int cpu;
  
-+#define STUB(proc)		\
-+[NFSPROC4_CLNT_##proc] = {	\
-+	.p_name = #proc,	\
-+}
+ 	rcu_read_lock();
+ 	flow_table = rcu_dereference(rxqueue->rps_flow_table);
+ 	if (flow_table && flow_id <= flow_table->mask) {
+ 		rflow = &flow_table->flows[flow_id];
+ 		cpu = ACCESS_ONCE(rflow->cpu);
+-		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
++		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
+ 		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
+ 			   rflow->last_qtail) <
+ 		     (int)(10 * flow_table->mask)))
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 98d45fe..e9f9a15 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -280,13 +280,14 @@ nodata:
+ EXPORT_SYMBOL(__alloc_skb);
+ 
+ /**
+- * build_skb - build a network buffer
++ * __build_skb - build a network buffer
+  * @data: data buffer provided by caller
+- * @frag_size: size of fragment, or 0 if head was kmalloced
++ * @frag_size: size of data, or 0 if head was kmalloced
+  *
+  * Allocate a new &sk_buff. Caller provides space holding head and
+  * skb_shared_info. @data must have been allocated by kmalloc() only if
+- * @frag_size is 0, otherwise data should come from the page allocator.
++ * @frag_size is 0, otherwise data should come from the page allocator
++ *  or vmalloc()
+  * The return is the new skb buffer.
+  * On a failure the return is %NULL, and @data is not freed.
+  * Notes :
+@@ -297,7 +298,7 @@ EXPORT_SYMBOL(__alloc_skb);
+  *  before giving packet to stack.
+  *  RX rings only contains data buffers, not full skbs.
+  */
+-struct sk_buff *build_skb(void *data, unsigned int frag_size)
++struct sk_buff *__build_skb(void *data, unsigned int frag_size)
+ {
+ 	struct skb_shared_info *shinfo;
+ 	struct sk_buff *skb;
+@@ -311,7 +312,6 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	memset(skb, 0, offsetof(struct sk_buff, tail));
+ 	skb->truesize = SKB_TRUESIZE(size);
+-	skb->head_frag = frag_size != 0;
+ 	atomic_set(&skb->users, 1);
+ 	skb->head = data;
+ 	skb->data = data;
+@@ -328,6 +328,23 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	return skb;
+ }
 +
- struct rpc_procinfo	nfs4_procedures[] = {
- 	PROC(READ,		enc_read,		dec_read),
- 	PROC(WRITE,		enc_write,		dec_write),
-@@ -7417,6 +7422,7 @@ struct rpc_procinfo	nfs4_procedures[] = {
- 	PROC(SECINFO_NO_NAME,	enc_secinfo_no_name,	dec_secinfo_no_name),
- 	PROC(TEST_STATEID,	enc_test_stateid,	dec_test_stateid),
- 	PROC(FREE_STATEID,	enc_free_stateid,	dec_free_stateid),
-+	STUB(GETDEVICELIST),
- 	PROC(BIND_CONN_TO_SESSION,
- 			enc_bind_conn_to_session, dec_bind_conn_to_session),
- 	PROC(DESTROY_CLIENTID,	enc_destroy_clientid,	dec_destroy_clientid),
--- 
-2.3.6
-
-
-From a0e97e698901d058b984bcf1c13693f7a33375b3 Mon Sep 17 00:00:00 2001
-From: Juri Lelli <juri.lelli@arm.com>
-Date: Tue, 31 Mar 2015 09:53:36 +0100
-Subject: [PATCH 206/219] sched/deadline: Always enqueue on previous rq when
- dl_task_timer() fires
-Cc: mpagano@gentoo.org
-
-commit 4cd57f97135840f637431c92380c8da3edbe44ed upstream.
-
-dl_task_timer() may fire on a different rq from where a task was removed
-after throttling. Since the call path is:
-
-  dl_task_timer() ->
-    enqueue_task_dl() ->
-      enqueue_dl_entity() ->
-        replenish_dl_entity()
-
-and replenish_dl_entity() uses dl_se's rq, we can't use current's rq
-in dl_task_timer(), but we need to lock the task's previous one.
-
-Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
-Signed-off-by: Juri Lelli <juri.lelli@arm.com>
-Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-Acked-by: Kirill Tkhai <ktkhai@parallels.com>
-Cc: Juri Lelli <juri.lelli@gmail.com>
-Fixes: 3960c8c0c789 ("sched: Make dl_task_time() use task_rq_lock()")
-Link: http://lkml.kernel.org/r/1427792017-7356-1-git-send-email-juri.lelli@arm.com
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- kernel/sched/deadline.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
-index 3fa8fa6..f670cbb 100644
---- a/kernel/sched/deadline.c
-+++ b/kernel/sched/deadline.c
-@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
- 	unsigned long flags;
- 	struct rq *rq;
++/* build_skb() is wrapper over __build_skb(), that specifically
++ * takes care of skb->head and skb->pfmemalloc
++ * This means that if @frag_size is not zero, then @data must be backed
++ * by a page fragment, not kmalloc() or vmalloc()
++ */
++struct sk_buff *build_skb(void *data, unsigned int frag_size)
++{
++	struct sk_buff *skb = __build_skb(data, frag_size);
++
++	if (skb && frag_size) {
++		skb->head_frag = 1;
++		if (virt_to_head_page(data)->pfmemalloc)
++			skb->pfmemalloc = 1;
++	}
++	return skb;
++}
+ EXPORT_SYMBOL(build_skb);
  
--	rq = task_rq_lock(current, &flags);
-+	rq = task_rq_lock(p, &flags);
+ struct netdev_alloc_cache {
+@@ -348,7 +365,8 @@ static struct page *__page_frag_refill(struct netdev_alloc_cache *nc,
+ 	gfp_t gfp = gfp_mask;
  
- 	/*
- 	 * We need to take care of several possible races here:
-@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
- 		push_dl_task(rq);
- #endif
- unlock:
--	task_rq_unlock(rq, current, &flags);
-+	task_rq_unlock(rq, p, &flags);
+ 	if (order) {
+-		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
++		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
++			    __GFP_NOMEMALLOC;
+ 		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
+ 		nc->frag.size = PAGE_SIZE << (page ? order : 0);
+ 	}
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index d9bc28a..53bd53f 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -82,6 +82,9 @@ int ip_forward(struct sk_buff *skb)
+ 	if (skb->pkt_type != PACKET_HOST)
+ 		goto drop;
  
- 	return HRTIMER_NORESTART;
++	if (unlikely(skb->sk))
++		goto drop;
++
+ 	if (skb_warn_if_lro(skb))
+ 		goto drop;
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index d520492..9d48dc4 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2751,39 +2751,65 @@ begin_fwd:
+ 	}
  }
--- 
-2.3.6
-
-
-From 9279e1f98b13d5e5b40805114896ec33313ad019 Mon Sep 17 00:00:00 2001
-From: Sabrina Dubroca <sd@queasysnail.net>
-Date: Thu, 26 Feb 2015 05:35:41 +0000
-Subject: [PATCH 207/219] e1000: add dummy allocator to fix race condition
- between mtu change and netpoll
-Cc: mpagano@gentoo.org
-
-commit 08e8331654d1d7b2c58045e549005bc356aa7810 upstream.
-
-There is a race condition between e1000_change_mtu's cleanups and
-netpoll, when we change the MTU across jumbo size:
-
-Changing MTU frees all the rx buffers:
-    e1000_change_mtu -> e1000_down -> e1000_clean_all_rx_rings ->
-        e1000_clean_rx_ring
-
-Then, close to the end of e1000_change_mtu:
-    pr_info -> ... -> netpoll_poll_dev -> e1000_clean ->
-        e1000_clean_rx_irq -> e1000_alloc_rx_buffers -> e1000_alloc_frag
-
-And when we come back to do the rest of the MTU change:
-    e1000_up -> e1000_configure -> e1000_configure_rx ->
-        e1000_alloc_jumbo_rx_buffers
-
-alloc_jumbo finds the buffers already != NULL, since data (shared with
-page in e1000_rx_buffer->rxbuf) has been re-alloc'd, but it's garbage,
-or at least not what is expected when in jumbo state.
-
-This results in an unusable adapter (packets don't get through), and a
-NULL pointer dereference on the next call to e1000_clean_rx_ring
-(other mtu change, link down, shutdown):
-
-BUG: unable to handle kernel NULL pointer dereference at           (null)
-IP: [<ffffffff81194d6e>] put_compound_page+0x7e/0x330
-
-    [...]
-
-Call Trace:
- [<ffffffff81195445>] put_page+0x55/0x60
- [<ffffffff815d9f44>] e1000_clean_rx_ring+0x134/0x200
- [<ffffffff815da055>] e1000_clean_all_rx_rings+0x45/0x60
- [<ffffffff815df5e0>] e1000_down+0x1c0/0x1d0
- [<ffffffff811e2260>] ? deactivate_slab+0x7f0/0x840
- [<ffffffff815e21bc>] e1000_change_mtu+0xdc/0x170
- [<ffffffff81647050>] dev_set_mtu+0xa0/0x140
- [<ffffffff81664218>] do_setlink+0x218/0xac0
- [<ffffffff814459e9>] ? nla_parse+0xb9/0x120
- [<ffffffff816652d0>] rtnl_newlink+0x6d0/0x890
- [<ffffffff8104f000>] ? kvm_clock_read+0x20/0x40
- [<ffffffff810a2068>] ? sched_clock_cpu+0xa8/0x100
- [<ffffffff81663802>] rtnetlink_rcv_msg+0x92/0x260
-
-By setting the allocator to a dummy version, netpoll can't mess up our
-rx buffers.  The allocator is set back to a sane value in
-e1000_configure_rx.
-
-Fixes: edbbb3ca1077 ("e1000: implement jumbo receive with partial descriptors")
-Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
-Tested-by: Aaron Brown <aaron.f.brown@intel.com>
-Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/ethernet/intel/e1000/e1000_main.c | 10 +++++++++-
- 1 file changed, 9 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
-index 7f997d3..a71c446 100644
---- a/drivers/net/ethernet/intel/e1000/e1000_main.c
-+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
-@@ -144,6 +144,11 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
- static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
- 				     struct e1000_rx_ring *rx_ring,
- 				     int *work_done, int work_to_do);
-+static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
-+					 struct e1000_rx_ring *rx_ring,
-+					 int cleaned_count)
+ 
+-/* Send a fin.  The caller locks the socket for us.  This cannot be
+- * allowed to fail queueing a FIN frame under any circumstances.
++/* We allow to exceed memory limits for FIN packets to expedite
++ * connection tear down and (memory) recovery.
++ * Otherwise tcp_send_fin() could be tempted to either delay FIN
++ * or even be forced to close flow without any FIN.
++ */
++static void sk_forced_wmem_schedule(struct sock *sk, int size)
 +{
++	int amt, status;
++
++	if (size <= sk->sk_forward_alloc)
++		return;
++	amt = sk_mem_pages(size);
++	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
++	sk_memory_allocated_add(sk, amt, &status);
 +}
- static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
- 				   struct e1000_rx_ring *rx_ring,
- 				   int cleaned_count);
-@@ -3552,8 +3557,11 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
- 		msleep(1);
- 	/* e1000_down has a dependency on max_frame_size */
- 	hw->max_frame_size = max_frame;
--	if (netif_running(netdev))
-+	if (netif_running(netdev)) {
-+		/* prevent buffers from being reallocated */
-+		adapter->alloc_rx_buf = e1000_alloc_dummy_rx_buffers;
- 		e1000_down(adapter);
-+	}
++
++/* Send a FIN. The caller locks the socket for us.
++ * We should try to send a FIN packet really hard, but eventually give up.
+  */
+ void tcp_send_fin(struct sock *sk)
+ {
++	struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	struct sk_buff *skb = tcp_write_queue_tail(sk);
+-	int mss_now;
  
- 	/* NOTE: netdev_alloc_skb reserves 16 bytes, and typically NET_IP_ALIGN
- 	 * means we reserve 2 more, this pushes us to allocate from the next
--- 
-2.3.6
-
-
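
The trick is plain function-pointer substitution: for the window where the
rings must stay empty, the adapter's allocation hook points at a no-op, so
any netpoll-driven receive path finds nothing to refill, and
e1000_configure_rx() later restores the real allocator. The dummy itself,
taken from the hunk above, is the whole mechanism:

	/* No-op allocator installed for the duration of the MTU change. */
	static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
						 struct e1000_rx_ring *rx_ring,
						 int cleaned_count)
	{
		/* intentionally empty: keep rx buffers freed while down */
	}
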
-From dada7797e4595606cf730600d8c9a03955a8264b Mon Sep 17 00:00:00 2001
-From: Johannes Berg <johannes.berg@intel.com>
-Date: Sat, 21 Mar 2015 07:41:04 +0100
-Subject: [PATCH 208/219] mac80211: send AP probe as unicast again
-Cc: mpagano@gentoo.org
-
-commit a73f8e21f3f93159bc19e154e8f50891c22c11db upstream.
-
-Louis reported that a static checker was complaining that
-the 'dst' variable was set (multiple times) but not used.
-This is due to a previous commit having removed the usage
-(apparently erroneously), so add it back.
-
-Fixes: a344d6778a98 ("mac80211: allow drivers to support NL80211_SCAN_FLAG_RANDOM_ADDR")
-Reported-by: Louis Langholtz <lou_langholtz@me.com>
-Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/mac80211/mlme.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
+-	/* Optimization, tack on the FIN if we have a queue of
+-	 * unsent frames.  But be careful about outgoing SACKS
+-	 * and IP options.
++	/* Optimization, tack on the FIN if we have one skb in write queue and
++	 * this skb was not yet sent, or we are under memory pressure.
++	 * Note: in the latter case, FIN packet will be sent after a timeout,
++	 * as TCP stack thinks it has already been transmitted.
+ 	 */
+-	mss_now = tcp_current_mss(sk);
+-
+-	if (tcp_send_head(sk) != NULL) {
+-		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN;
+-		TCP_SKB_CB(skb)->end_seq++;
++	if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
++coalesce:
++		TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN;
++		TCP_SKB_CB(tskb)->end_seq++;
+ 		tp->write_seq++;
++		if (!tcp_send_head(sk)) {
++			/* This means tskb was already sent.
++			 * Pretend we included the FIN on previous transmit.
++			 * We need to set tp->snd_nxt to the value it would have
++			 * if FIN had been sent. This is because retransmit path
++			 * does not change tp->snd_nxt.
++			 */
++			tp->snd_nxt++;
++			return;
++		}
+ 	} else {
+-		/* Socket is locked, keep trying until memory is available. */
+-		for (;;) {
+-			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
+-			if (skb)
+-				break;
+-			yield();
++		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
++		if (unlikely(!skb)) {
++			if (tskb)
++				goto coalesce;
++			return;
+ 		}
++		skb_reserve(skb, MAX_TCP_HEADER);
++		sk_forced_wmem_schedule(sk, skb->truesize);
+ 		/* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */
+ 		tcp_init_nondata_skb(skb, tp->write_seq,
+ 				     TCPHDR_ACK | TCPHDR_FIN);
+ 		tcp_queue_skb(sk, skb);
+ 	}
+-	__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF);
++	__tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF);
+ }
+ 
+ /* We get here when a process closes a file descriptor (either due to
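
The tcp_send_fin() rewrite above reduces to a two-way decision: piggyback the FIN flag onto the tail skb when one exists and either has not been sent yet or the socket is under memory pressure, otherwise allocate a dedicated skb and force the accounting through sk_forced_wmem_schedule(). Below is a minimal standalone C sketch of that decision only; the toy_* types and names are illustrative stand-ins, not the kernel's.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for kernel types; illustrative only. */
struct toy_skb { int sent; int flags; };
#define TOY_FIN 0x1

struct toy_sock {
	struct toy_skb *tail;	/* tail of the write queue, or NULL */
	int mem_pressure;	/* non-zero when under memory pressure */
	long write_seq;
};

/* Mirror of the coalesce-or-allocate decision in tcp_send_fin(). */
static void toy_send_fin(struct toy_sock *sk)
{
	struct toy_skb *tskb = sk->tail;

	if (tskb && (!tskb->sent || sk->mem_pressure)) {
		/* Coalesce: FIN rides on the existing tail skb. */
		tskb->flags |= TOY_FIN;
		sk->write_seq++;	/* FIN consumes a sequence byte */
		printf("coalesced FIN onto tail skb\n");
		return;
	}

	/* Allocate a dedicated FIN skb; in the kernel this is where
	 * sk_forced_wmem_schedule() charges the memory even past limits. */
	struct toy_skb *skb = calloc(1, sizeof(*skb));
	if (!skb) {
		if (tskb) {	/* last resort, like the goto coalesce path */
			tskb->flags |= TOY_FIN;
			sk->write_seq++;
		}
		return;		/* otherwise give up; kernel relies on timers */
	}
	skb->flags = TOY_FIN;
	sk->write_seq++;
	printf("queued dedicated FIN skb\n");
	free(skb);
}

int main(void)
{
	struct toy_skb tail = { .sent = 0, .flags = 0 };
	struct toy_sock sk = { .tail = &tail, .mem_pressure = 0, .write_seq = 0 };

	toy_send_fin(&sk);	/* coalesces: tail not yet sent */
	sk.tail = NULL;
	toy_send_fin(&sk);	/* allocates a dedicated FIN skb */
	return 0;
}
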
 diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
 index 142f66a..0ca013d 100644
 --- a/net/mac80211/mlme.c
@@ -16114,744 +8177,411 @@ index 142f66a..0ca013d 100644
  					 ssid + 2, ssid_len, NULL,
  					 0, (u32) -1, true, 0,
  					 ifmgd->associated->channel, false);
--- 
-2.3.6
-
-
-From e86ecd8a7bbc590987b4046c523d8caaef8f8b5f Mon Sep 17 00:00:00 2001
-From: Daniel Borkmann <daniel@iogearbox.net>
-Date: Thu, 12 Mar 2015 17:21:42 +0100
-Subject: [PATCH 209/219] ebpf: verifier: check that call reg with ARG_ANYTHING
- is initialized
-Cc: mpagano@gentoo.org
-
-commit 80f1d68ccba70b1060c9c7360ca83da430f66bed upstream.
-
-I noticed that a helper function with argument type ARG_ANYTHING does
-not need to have an initialized value (register).
-
-This can in the worst case lead to unintended stack memory leakage in future
-helper functions if they are not carefully designed, or unintended
-application behaviour in case the application developer was not careful
-enough to match a correct helper function signature in the API.
-
-The underlying issue is that ARG_ANYTHING should actually be split
-into two different semantics:
-
-  1) ARG_DONTCARE for function arguments that the helper function
-     does not care about (in other words: the default for unused
-     function arguments), and
-
-  2) ARG_ANYTHING that is an argument actually being used by a
-     helper function and *guaranteed* to be an initialized register.
-
-The current risk is low: ARG_ANYTHING is only used for the 'flags'
-argument (r4) in bpf_map_update_elem() that internally does strict
-checking.
-
-Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
-Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-Acked-by: Alexei Starovoitov <ast@plumgrid.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- include/linux/bpf.h   | 4 +++-
- kernel/bpf/verifier.c | 5 ++++-
- 2 files changed, 7 insertions(+), 2 deletions(-)
-
-diff --git a/include/linux/bpf.h b/include/linux/bpf.h
-index bbfceb7..33b52fb 100644
---- a/include/linux/bpf.h
-+++ b/include/linux/bpf.h
-@@ -48,7 +48,7 @@ struct bpf_map *bpf_map_get(struct fd f);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 05919bf..d1d7a81 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1616,13 +1616,11 @@ static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
+ 	if (data == NULL)
+ 		return NULL;
  
- /* function argument constraints */
- enum bpf_arg_type {
--	ARG_ANYTHING = 0,	/* any argument is ok */
-+	ARG_DONTCARE = 0,	/* unused argument in helper function */
+-	skb = build_skb(data, size);
++	skb = __build_skb(data, size);
+ 	if (skb == NULL)
+ 		vfree(data);
+-	else {
+-		skb->head_frag = 0;
++	else
+ 		skb->destructor = netlink_skb_destructor;
+-	}
  
- 	/* the following constraints used to prototype
- 	 * bpf_map_lookup/update/delete_elem() functions
-@@ -62,6 +62,8 @@ enum bpf_arg_type {
- 	 */
- 	ARG_PTR_TO_STACK,	/* any pointer to eBPF program stack */
- 	ARG_CONST_STACK_SIZE,	/* number of bytes accessed from stack */
+ 	return skb;
+ }
+diff --git a/sound/pci/emu10k1/emuproc.c b/sound/pci/emu10k1/emuproc.c
+index 2ca9f2e..53745f4 100644
+--- a/sound/pci/emu10k1/emuproc.c
++++ b/sound/pci/emu10k1/emuproc.c
+@@ -241,31 +241,22 @@ static void snd_emu10k1_proc_spdif_read(struct snd_info_entry *entry,
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+ 	u32 value2;
+-	unsigned long flags;
+ 	u32 rate;
+ 
+ 	if (emu->card_capabilities->emu_model) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x38, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x1) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x2a, &value);
+ 			snd_emu1010_fpga_read(emu, 0x2b, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "ADAT Locked : %u\n", rate);
+ 		} else {
+ 			snd_iprintf(buffer, "ADAT Unlocked\n");
+ 		}
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x20, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x4) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x28, &value);
+ 			snd_emu1010_fpga_read(emu, 0x29, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "SPDIF Locked : %d\n", rate);
+ 		} else {
+@@ -410,14 +401,11 @@ static void snd_emu_proc_emu1010_reg_read(struct snd_info_entry *entry,
+ {
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+-	unsigned long flags;
+ 	int i;
+ 	snd_iprintf(buffer, "EMU1010 Registers:\n\n");
+ 
+ 	for(i = 0; i < 0x40; i+=1) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, i, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		snd_iprintf(buffer, "%02X: %08X, %02X\n", i, value, (value >> 8) & 0x7f);
+ 	}
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f9d12c0..2fd490b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5047,12 +5047,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+@@ -5142,6 +5144,16 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{0x1b, 0x411111f0}, \
+ 	{0x1e, 0x411111f0}
+ 
++#define ALC256_STANDARD_PINS \
++	{0x12, 0x90a60140}, \
++	{0x14, 0x90170110}, \
++	{0x19, 0x411111f0}, \
++	{0x1a, 0x411111f0}, \
++	{0x1b, 0x411111f0}, \
++	{0x1d, 0x40700001}, \
++	{0x1e, 0x411111f0}, \
++	{0x21, 0x02211020}
 +
-+	ARG_ANYTHING,		/* any (initialized) argument is ok */
- };
+ #define ALC282_STANDARD_PINS \
+ 	{0x14, 0x90170110}, \
+ 	{0x18, 0x411111f0}, \
+@@ -5235,15 +5247,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1d, 0x40700001},
+ 		{0x21, 0x02211050}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		{0x12, 0x90a60140},
+-		{0x13, 0x40000000},
+-		{0x14, 0x90170110},
+-		{0x19, 0x411111f0},
+-		{0x1a, 0x411111f0},
+-		{0x1b, 0x411111f0},
+-		{0x1d, 0x40700001},
+-		{0x1e, 0x411111f0},
+-		{0x21, 0x02211020}),
++		ALC256_STANDARD_PINS,
++		{0x13, 0x40000000}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC256_STANDARD_PINS,
++		{0x13, 0x411111f0}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4,
+ 		{0x12, 0x90a60130},
+ 		{0x13, 0x40000000},
+@@ -5563,6 +5571,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 		break;
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
++		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
++		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
+ 		break;
+ 	}
  
- /* type of values returned from helper functions */
-diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
-index 36508e6..5d8ea3d 100644
---- a/kernel/bpf/verifier.c
-+++ b/kernel/bpf/verifier.c
-@@ -755,7 +755,7 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
- 	enum bpf_reg_type expected_type;
- 	int err = 0;
+@@ -5576,8 +5586,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto error;
  
--	if (arg_type == ARG_ANYTHING)
-+	if (arg_type == ARG_DONTCARE)
- 		return 0;
+-	if (!spec->gen.no_analog && spec->gen.beep_nid)
+-		set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT);
++	if (!spec->gen.no_analog && spec->gen.beep_nid && spec->gen.mixer_nid)
++		set_beep_amp(spec, spec->gen.mixer_nid, 0x04, HDA_INPUT);
  
- 	if (reg->type == NOT_INIT) {
-@@ -763,6 +763,9 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
- 		return -EACCES;
+ 	codec->patch_ops = alc_patch_ops;
+ #ifdef CONFIG_PM
+diff --git a/sound/soc/codecs/cs4271.c b/sound/soc/codecs/cs4271.c
+index 7d3a6ac..e770ee6 100644
+--- a/sound/soc/codecs/cs4271.c
++++ b/sound/soc/codecs/cs4271.c
+@@ -561,10 +561,10 @@ static int cs4271_codec_probe(struct snd_soc_codec *codec)
+ 	if (gpio_is_valid(cs4271->gpio_nreset)) {
+ 		/* Reset codec */
+ 		gpio_direction_output(cs4271->gpio_nreset, 0);
+-		udelay(1);
++		mdelay(1);
+ 		gpio_set_value(cs4271->gpio_nreset, 1);
+ 		/* Give the codec time to wake up */
+-		udelay(1);
++		mdelay(1);
  	}
  
-+	if (arg_type == ARG_ANYTHING)
-+		return 0;
-+
- 	if (arg_type == ARG_PTR_TO_STACK || arg_type == ARG_PTR_TO_MAP_KEY ||
- 	    arg_type == ARG_PTR_TO_MAP_VALUE) {
- 		expected_type = PTR_TO_STACK;
--- 
-2.3.6
-
-
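
The split described in the commit message above is easy to demonstrate in isolation: an unused argument (ARG_DONTCARE) needs no check at all, while a used scalar argument (ARG_ANYTHING) must at least come from an initialized register. A self-contained sketch of that check ordering, with simplified enums rather than the kernel's verifier types:

#include <stdio.h>

enum arg_type { ARG_DONTCARE = 0, ARG_ANYTHING, ARG_PTR_TO_STACK };
enum reg_type { NOT_INIT = 0, SCALAR, PTR_TO_STACK };

/* Simplified mirror of check_func_arg(): the initialization check must
 * run before ARG_ANYTHING is accepted, which is the whole point of
 * splitting it away from ARG_DONTCARE. */
static int check_func_arg(enum reg_type reg, enum arg_type arg)
{
	if (arg == ARG_DONTCARE)
		return 0;		/* unused arg: anything goes */

	if (reg == NOT_INIT)
		return -1;		/* -EACCES in the kernel */

	if (arg == ARG_ANYTHING)
		return 0;		/* used arg: must be initialized */

	return (reg == PTR_TO_STACK) ? 0 : -1;
}

int main(void)
{
	printf("dontcare+uninit: %d\n", check_func_arg(NOT_INIT, ARG_DONTCARE));
	printf("anything+uninit: %d\n", check_func_arg(NOT_INIT, ARG_ANYTHING));
	printf("anything+scalar: %d\n", check_func_arg(SCALAR, ARG_ANYTHING));
	return 0;
}
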
-From 0b97a15f6fedf422d276245866319990c2c771c5 Mon Sep 17 00:00:00 2001
-From: David Rientjes <rientjes@google.com>
-Date: Tue, 14 Apr 2015 15:46:58 -0700
-Subject: [PATCH 210/219] mm, thp: really limit transparent hugepage allocation
- to local node
-Cc: mpagano@gentoo.org
-
-commit 5265047ac30191ea24b16503165000c225f54feb upstream.
-
-Commit 077fcf116c8c ("mm/thp: allocate transparent hugepages on local
-node") restructured alloc_hugepage_vma() with the intent of only
-allocating transparent hugepages locally when there was not an effective
-interleave mempolicy.
-
-alloc_pages_exact_node() does not, however, limit the allocation to the
-single node, but rather prefers it.  This is because __GFP_THISNODE is not set
-which would cause the node-local nodemask to be passed.  Without it, only
-a nodemask that prefers the local node is passed.
-
-Fix this by passing __GFP_THISNODE and falling back to small pages when
-the allocation fails.
-
-Commit 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target
-node") suffers from a similar problem for khugepaged, which is also fixed.
-
-Fixes: 077fcf116c8c ("mm/thp: allocate transparent hugepages on local node")
-Fixes: 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target node")
-Signed-off-by: David Rientjes <rientjes@google.com>
-Acked-by: Vlastimil Babka <vbabka@suse.cz>
-Cc: Christoph Lameter <cl@linux.com>
-Cc: Pekka Enberg <penberg@kernel.org>
-Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
-Cc: Johannes Weiner <hannes@cmpxchg.org>
-Cc: Mel Gorman <mgorman@suse.de>
-Cc: Pravin Shelar <pshelar@nicira.com>
-Cc: Jarno Rajahalme <jrajahalme@nicira.com>
-Cc: Li Zefan <lizefan@huawei.com>
-Cc: Greg Thelen <gthelen@google.com>
-Cc: Tejun Heo <tj@kernel.org>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- mm/huge_memory.c | 9 +++++++--
- mm/mempolicy.c   | 3 ++-
- 2 files changed, 9 insertions(+), 3 deletions(-)
-
-diff --git a/mm/huge_memory.c b/mm/huge_memory.c
-index 6817b03..956d4db 100644
---- a/mm/huge_memory.c
-+++ b/mm/huge_memory.c
-@@ -2316,8 +2316,14 @@ static struct page
- 		       struct vm_area_struct *vma, unsigned long address,
- 		       int node)
- {
-+	gfp_t flags;
-+
- 	VM_BUG_ON_PAGE(*hpage, *hpage);
- 
-+	/* Only allocate from the target node */
-+	flags = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
-+	        __GFP_THISNODE;
-+
- 	/*
- 	 * Before allocating the hugepage, release the mmap_sem read lock.
- 	 * The allocation can take potentially a long time if it involves
-@@ -2326,8 +2332,7 @@ static struct page
- 	 */
- 	up_read(&mm->mmap_sem);
+ 	ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2,
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 474cae8..8c09e3f 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -304,9 +304,9 @@ static const struct soc_enum pcm512x_veds =
+ static const struct snd_kcontrol_new pcm512x_controls[] = {
+ SOC_DOUBLE_R_TLV("Digital Playback Volume", PCM512x_DIGITAL_VOLUME_2,
+ 		 PCM512x_DIGITAL_VOLUME_3, 0, 255, 1, digital_tlv),
+-SOC_DOUBLE_TLV("Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
++SOC_DOUBLE_TLV("Analogue Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
+ 	       PCM512x_LAGN_SHIFT, PCM512x_RAGN_SHIFT, 1, 1, analog_tlv),
+-SOC_DOUBLE_TLV("Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
++SOC_DOUBLE_TLV("Analogue Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
+ 	       PCM512x_AGBL_SHIFT, PCM512x_AGBR_SHIFT, 1, 0, boost_tlv),
+ SOC_DOUBLE("Digital Playback Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
+ 	   PCM512x_RQMR_SHIFT, 1, 1),
+@@ -576,8 +576,8 @@ static int pcm512x_find_pll_coeff(struct snd_soc_dai *dai,
  
--	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
--		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
-+	*hpage = alloc_pages_exact_node(node, flags, HPAGE_PMD_ORDER);
- 	if (unlikely(!*hpage)) {
- 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
- 		*hpage = ERR_PTR(-ENOMEM);
-diff --git a/mm/mempolicy.c b/mm/mempolicy.c
-index 4721046..de5dc5e 100644
---- a/mm/mempolicy.c
-+++ b/mm/mempolicy.c
-@@ -1985,7 +1985,8 @@ retry_cpuset:
- 		nmask = policy_nodemask(gfp, pol);
- 		if (!nmask || node_isset(node, *nmask)) {
- 			mpol_cond_put(pol);
--			page = alloc_pages_exact_node(node, gfp, order);
-+			page = alloc_pages_exact_node(node,
-+						gfp | __GFP_THISNODE, order);
- 			goto out;
- 		}
+ 	/* pllin_rate / P (or here, den) cannot be greater than 20 MHz */
+ 	if (pllin_rate / den > 20000000 && num < 8) {
+-		num *= 20000000 / (pllin_rate / den);
+-		den *= 20000000 / (pllin_rate / den);
++		num *= DIV_ROUND_UP(pllin_rate / den, 20000000);
++		den *= DIV_ROUND_UP(pllin_rate / den, 20000000);
  	}
--- 
-2.3.6
-
-
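
The distinction the commit message draws is between a nodemask that merely prefers a node and a flag that pins the allocation to it. Below is a toy model of that fallback behaviour; toy_alloc_on_node() and the GFP_THISNODE value here are hypothetical stand-ins, not the real page allocator:

#include <stdio.h>

#define GFP_THISNODE 0x1	/* toy flag: no fallback to other nodes */

/* Toy per-node free-page counters: node 0 exhausted, node 1 has pages. */
static int free_pages[2] = { 0, 8 };

/* Hypothetical allocator: prefers 'node', falls back unless pinned. */
static int toy_alloc_on_node(int node, int flags)
{
	if (free_pages[node] > 0)
		return node;		/* satisfied locally */
	if (flags & GFP_THISNODE)
		return -1;		/* strict: fail, caller uses small pages */
	for (int n = 0; n < 2; n++)	/* preferred only: spill elsewhere */
		if (free_pages[n] > 0)
			return n;
	return -1;
}

int main(void)
{
	printf("preferred: got node %d\n", toy_alloc_on_node(0, 0));
	printf("pinned:    got node %d\n", toy_alloc_on_node(0, GFP_THISNODE));
	return 0;
}
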
-From 2649caa31cc3143b2ad3039ac581dacd7529a631 Mon Sep 17 00:00:00 2001
-From: mancha security <mancha1@zoho.com>
-Date: Wed, 18 Mar 2015 18:47:25 +0100
-Subject: [PATCH 211/219] lib: memzero_explicit: use barrier instead of
- OPTIMIZER_HIDE_VAR
-Cc: mpagano@gentoo.org
-
-commit 0b053c9518292705736329a8fe20ef4686ffc8e9 upstream.
-
-OPTIMIZER_HIDE_VAR(), as defined when using gcc, is insufficient to
-ensure protection from dead store optimization.
-
-For the random driver and crypto drivers, calls are emitted ...
-
-  $ gdb vmlinux
-  (gdb) disassemble memzero_explicit
-  Dump of assembler code for function memzero_explicit:
-    0xffffffff813a18b0 <+0>:	push   %rbp
-    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
-    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
-    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
-    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
-    0xffffffff813a18be <+14>:	pop    %rbp
-    0xffffffff813a18bf <+15>:	retq
-  End of assembler dump.
-
-  (gdb) disassemble extract_entropy
-  [...]
-    0xffffffff814a5009 <+313>:	mov    %r12,%rdi
-    0xffffffff814a500c <+316>:	mov    $0xa,%esi
-    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0 <memzero_explicit>
-    0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
-  [...]
-
-... but should we use facilities such as LTO in the future,
-OPTIMIZER_HIDE_VAR() is not sufficient to prevent gcc from possibly
-eliminating the memset(). We have to use a compiler barrier instead.
-
-Minimal test example when we assume memzero_explicit() would *not* be
-a call, but would have been *inlined* instead:
-
-  static inline void memzero_explicit(void *s, size_t count)
-  {
-    memset(s, 0, count);
-    <foo>
-  }
-
-  int main(void)
-  {
-    char buff[20];
-
-    snprintf(buff, sizeof(buff) - 1, "test");
-    printf("%s", buff);
-
-    memzero_explicit(buff, sizeof(buff));
-    return 0;
-  }
-
-With <foo> := OPTIMIZER_HIDE_VAR():
-
-  (gdb) disassemble main
-  Dump of assembler code for function main:
-  [...]
-   0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
-   0x0000000000400469 <+41>:	xor    %eax,%eax
-   0x000000000040046b <+43>:	add    $0x28,%rsp
-   0x000000000040046f <+47>:	retq
-  End of assembler dump.
-
-With <foo> := barrier():
-
-  (gdb) disassemble main
-  Dump of assembler code for function main:
-  [...]
-   0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
-   0x0000000000400469 <+41>:	movq   $0x0,(%rsp)
-   0x0000000000400471 <+49>:	movq   $0x0,0x8(%rsp)
-   0x000000000040047a <+58>:	movl   $0x0,0x10(%rsp)
-   0x0000000000400482 <+66>:	xor    %eax,%eax
-   0x0000000000400484 <+68>:	add    $0x28,%rsp
-   0x0000000000400488 <+72>:	retq
-  End of assembler dump.
-
-As can be seen, the movq, movq, movl stores are emitted inline
-via memset().
-
-Reference: http://thread.gmane.org/gmane.linux.kernel.cryptoapi/13764/
-Fixes: d4c5efdb9777 ("random: add and use memzero_explicit() for clearing data")
-Cc: Theodore Ts'o <tytso@mit.edu>
-Signed-off-by: mancha security <mancha1@zoho.com>
-Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
-Acked-by: Stephan Mueller <smueller@chronox.de>
-Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- lib/string.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/lib/string.c b/lib/string.c
-index ce81aae..a579201 100644
---- a/lib/string.c
-+++ b/lib/string.c
-@@ -607,7 +607,7 @@ EXPORT_SYMBOL(memset);
- void memzero_explicit(void *s, size_t count)
- {
- 	memset(s, 0, count);
--	OPTIMIZER_HIDE_VAR(s);
-+	barrier();
- }
- EXPORT_SYMBOL(memzero_explicit);
+ 	dev_dbg(dev, "num / den = %lu / %lu\n", num, den);
  
--- 
-2.3.6
-
-
-From 1cd176dfd9e5e4d0cae0545fa8c56ecd582b2e9a Mon Sep 17 00:00:00 2001
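
The fix itself ports straight to userspace: an empty asm with a "memory" clobber tells the compiler the zeroed buffer may still be observed, so the memset() cannot be dropped as a dead store. A minimal compilable rendition, assuming gcc or clang for the barrier() definition:

#include <string.h>
#include <stdio.h>

/* Same idea as the kernel's barrier(): a compiler-level memory clobber. */
#define barrier() __asm__ __volatile__("" : : : "memory")

static void memzero_explicit(void *s, size_t count)
{
	memset(s, 0, count);
	barrier();	/* keeps the stores even at -O2 / with inlining */
}

int main(void)
{
	char key[20];

	snprintf(key, sizeof(key), "secret");
	printf("%s\n", key);
	memzero_explicit(key, sizeof(key));	/* not elided as a dead store */
	return 0;
}
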
-From: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
-Date: Fri, 13 Mar 2015 15:17:14 +0800
-Subject: [PATCH 212/219] wl18xx: show rx_frames_per_rates as an array as it
- really is
-Cc: mpagano@gentoo.org
-
-commit a3fa71c40f1853d0c27e8f5bc01a722a705d9682 upstream.
-
-In struct wl18xx_acx_rx_rate_stat, rx_frames_per_rates field is an
-array, not a number.  This means WL18XX_DEBUGFS_FWSTATS_FILE can't be
-used to display this field in debugfs (it would display a pointer, not
-the actual data).  Use WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY instead.
-
-This bug has been found by adding a __printf attribute to
-wl1271_format_buffer.  gcc complained about "format '%u' expects
-argument of type 'unsigned int', but argument 5 has type 'u32 *'".
-
-Fixes: c5d94169e818 ("wl18xx: use new fw stats structures")
-Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
-Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/net/wireless/ti/wl18xx/debugfs.c | 2 +-
- drivers/net/wireless/ti/wlcore/debugfs.h | 4 ++--
- 2 files changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.c b/drivers/net/wireless/ti/wl18xx/debugfs.c
-index c93fae9..5fbd223 100644
---- a/drivers/net/wireless/ti/wl18xx/debugfs.c
-+++ b/drivers/net/wireless/ti/wl18xx/debugfs.c
-@@ -139,7 +139,7 @@ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, protection_filter, "%u");
- WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, accum_arp_pend_requests, "%u");
- WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, max_arp_queue_dep, "%u");
+diff --git a/sound/soc/codecs/wm8741.c b/sound/soc/codecs/wm8741.c
+index 31bb480..9e71c76 100644
+--- a/sound/soc/codecs/wm8741.c
++++ b/sound/soc/codecs/wm8741.c
+@@ -123,7 +123,7 @@ static struct {
+ };
  
--WL18XX_DEBUGFS_FWSTATS_FILE(rx_rate, rx_frames_per_rates, "%u");
-+WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(rx_rate, rx_frames_per_rates, 50);
+ static const unsigned int rates_11289[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
  
- WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, tx_agg_vs_rate,
- 				  AGGR_STATS_TX_AGG*AGGR_STATS_TX_RATE);
-diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
-index 0f2cfb0..bf14676 100644
---- a/drivers/net/wireless/ti/wlcore/debugfs.h
-+++ b/drivers/net/wireless/ti/wlcore/debugfs.h
-@@ -26,8 +26,8 @@
+ static const struct snd_pcm_hw_constraint_list constraints_11289 = {
+@@ -150,7 +150,7 @@ static const struct snd_pcm_hw_constraint_list constraints_16384 = {
+ };
  
- #include "wlcore.h"
+ static const unsigned int rates_16934[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
  
--int wl1271_format_buffer(char __user *userbuf, size_t count,
--			 loff_t *ppos, char *fmt, ...);
-+__printf(4, 5) int wl1271_format_buffer(char __user *userbuf, size_t count,
-+					loff_t *ppos, char *fmt, ...);
+ static const struct snd_pcm_hw_constraint_list constraints_16934 = {
+@@ -168,7 +168,7 @@ static const struct snd_pcm_hw_constraint_list constraints_18432 = {
+ };
  
- int wl1271_debugfs_init(struct wl1271 *wl);
- void wl1271_debugfs_exit(struct wl1271 *wl);
--- 
-2.3.6
-
-
-From 8a7e1640e89ee191d677e2d994476ce68e2160ea Mon Sep 17 00:00:00 2001
-From: "Vutla, Lokesh" <lokeshvutla@ti.com>
-Date: Tue, 31 Mar 2015 09:52:25 +0530
-Subject: [PATCH 213/219] crypto: omap-aes - Fix support for unequal lengths
-Cc: mpagano@gentoo.org
-
-commit 6d7e7e02a044025237b6f62a20521170b794537f upstream.
-
-For cases where the total length of the input SGs is not the same as
-the length of the input data for encryption, the omap-aes driver
-crashes. This happens when IPsec tries to use the
-omap-aes driver.
-
-To avoid this, we copy all the pages from the input SG list
-into a contiguous buffer and prepare a single element SG list
-for this buffer with length as the total bytes to crypt, which is
-similar thing that is done in case of unaligned lengths.
-
-Fixes: 6242332ff2f3 ("crypto: omap-aes - Add support for cases of unaligned lengths")
-Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
-Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/crypto/omap-aes.c | 14 +++++++++++---
- 1 file changed, 11 insertions(+), 3 deletions(-)
-
-diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
-index 42f95a4..9a28b7e 100644
---- a/drivers/crypto/omap-aes.c
-+++ b/drivers/crypto/omap-aes.c
-@@ -554,15 +554,23 @@ static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
- 	return err;
- }
+ static const unsigned int rates_22579[] = {
+-	44100, 88235, 1764000
++	44100, 88200, 176400
+ };
  
--static int omap_aes_check_aligned(struct scatterlist *sg)
-+static int omap_aes_check_aligned(struct scatterlist *sg, int total)
- {
-+	int len = 0;
-+
- 	while (sg) {
- 		if (!IS_ALIGNED(sg->offset, 4))
- 			return -1;
- 		if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
- 			return -1;
-+
-+		len += sg->length;
- 		sg = sg_next(sg);
- 	}
-+
-+	if (len != total)
-+		return -1;
-+
- 	return 0;
- }
+ static const struct snd_pcm_hw_constraint_list constraints_22579 = {
+@@ -186,7 +186,7 @@ static const struct snd_pcm_hw_constraint_list constraints_24576 = {
+ };
  
-@@ -633,8 +641,8 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
- 	dd->in_sg = req->src;
- 	dd->out_sg = req->dst;
+ static const unsigned int rates_36864[] = {
+-	48000, 96000, 19200
++	48000, 96000, 192000
+ };
  
--	if (omap_aes_check_aligned(dd->in_sg) ||
--	    omap_aes_check_aligned(dd->out_sg)) {
-+	if (omap_aes_check_aligned(dd->in_sg, dd->total) ||
-+	    omap_aes_check_aligned(dd->out_sg, dd->total)) {
- 		if (omap_aes_copy_sgs(dd))
- 			pr_err("Failed to copy SGs for unaligned cases\n");
- 		dd->sgs_copied = 1;
--- 
-2.3.6
-
-
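
The added total-length check is a generally useful pattern: walk the scatter list, validate each segment's alignment, and refuse to proceed unless the segment lengths sum to the advertised total. A standalone sketch with a toy list node (names are illustrative):

#include <stddef.h>
#include <stdio.h>

#define BLOCK_SIZE 16		/* stands in for AES_BLOCK_SIZE */

struct toy_sg { size_t offset, length; struct toy_sg *next; };

/* Mirror of omap_aes_check_aligned() after the fix: every segment must
 * be aligned, and the segment lengths must sum to 'total'. */
static int check_aligned(struct toy_sg *sg, size_t total)
{
	size_t len = 0;

	for (; sg; sg = sg->next) {
		if (sg->offset % 4 || sg->length % BLOCK_SIZE)
			return -1;
		len += sg->length;
	}
	return (len == total) ? 0 : -1;
}

int main(void)
{
	struct toy_sg b = { 0, 32, NULL };
	struct toy_sg a = { 0, 16, &b };

	printf("%d\n", check_aligned(&a, 48));	/* 0: lengths match total */
	printf("%d\n", check_aligned(&a, 64));	/* -1: SGs shorter than total */
	return 0;
}
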
-From 78775b31ea25fc6d25f2444c634b2eec0ed90bca Mon Sep 17 00:00:00 2001
-From: Nishanth Menon <nm@ti.com>
-Date: Sat, 7 Mar 2015 03:39:05 -0600
-Subject: [PATCH 214/219] C6x: time: Ensure consistency in __init
-Cc: mpagano@gentoo.org
-
-commit f4831605f2dacd12730fe73961c77253cc2ea425 upstream.
-
-time_init invokes timer64_init (which is __init annotated);
-since all of these are invoked at init time, let's maintain
-consistency by ensuring time_init is marked appropriately
-as well.
-
-This fixes the following warning with CONFIG_DEBUG_SECTION_MISMATCH=y
-
-WARNING: vmlinux.o(.text+0x3bfc): Section mismatch in reference from the function time_init() to the function .init.text:timer64_init()
-The function time_init() references
-the function __init timer64_init().
-This is often because time_init lacks a __init
-annotation or the annotation of timer64_init is wrong.
-
-Fixes: 546a39546c64 ("C6X: time management")
-Signed-off-by: Nishanth Menon <nm@ti.com>
-Signed-off-by: Mark Salter <msalter@redhat.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- arch/c6x/kernel/time.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/c6x/kernel/time.c b/arch/c6x/kernel/time.c
-index 356ee84..04845aa 100644
---- a/arch/c6x/kernel/time.c
-+++ b/arch/c6x/kernel/time.c
-@@ -49,7 +49,7 @@ u64 sched_clock(void)
- 	return (tsc * sched_clock_multiplier) >> SCHED_CLOCK_SHIFT;
+ static const struct snd_pcm_hw_constraint_list constraints_36864 = {
+diff --git a/sound/soc/davinci/davinci-evm.c b/sound/soc/davinci/davinci-evm.c
+index b6bb594..8c2b9be 100644
+--- a/sound/soc/davinci/davinci-evm.c
++++ b/sound/soc/davinci/davinci-evm.c
+@@ -425,18 +425,8 @@ static int davinci_evm_probe(struct platform_device *pdev)
+ 	return ret;
  }
  
--void time_init(void)
-+void __init time_init(void)
+-static int davinci_evm_remove(struct platform_device *pdev)
+-{
+-	struct snd_soc_card *card = platform_get_drvdata(pdev);
+-
+-	snd_soc_unregister_card(card);
+-
+-	return 0;
+-}
+-
+ static struct platform_driver davinci_evm_driver = {
+ 	.probe		= davinci_evm_probe,
+-	.remove		= davinci_evm_remove,
+ 	.driver		= {
+ 		.name	= "davinci_evm",
+ 		.pm	= &snd_soc_pm_ops,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 9a28365..32631a8 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1115,6 +1115,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
  {
- 	u64 tmp = (u64)NSEC_PER_SEC << SCHED_CLOCK_SHIFT;
+ 	/* devices which do not support reading the sample rate. */
+ 	switch (chip->usb_id) {
++	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ 		return true;
+diff --git a/tools/lib/traceevent/kbuffer-parse.c b/tools/lib/traceevent/kbuffer-parse.c
+index dcc6652..deb3569 100644
+--- a/tools/lib/traceevent/kbuffer-parse.c
++++ b/tools/lib/traceevent/kbuffer-parse.c
+@@ -372,7 +372,6 @@ translate_data(struct kbuffer *kbuf, void *data, void **rptr,
+ 	switch (type_len) {
+ 	case KBUFFER_TYPE_PADDING:
+ 		*length = read_4(kbuf, data);
+-		data += *length;
+ 		break;
  
--- 
-2.3.6
-
-
-From df0bffebd40ba332f01193e2b6694042a0a2f56c Mon Sep 17 00:00:00 2001
-From: Dan Carpenter <dan.carpenter@oracle.com>
-Date: Thu, 16 Apr 2015 12:48:35 -0700
-Subject: [PATCH 215/219] memstick: mspro_block: add missing curly braces
-Cc: mpagano@gentoo.org
-
-commit 13f6b191aaa11c7fd718d35a0c565f3c16bc1d99 upstream.
-
-Using the indenting we can see the curly braces were obviously intended.
-This is a static checker fix, but my guess is that we don't read enough
-bytes, because we don't calculate "t_len" correctly.
-
-Fixes: f1d82698029b ('memstick: use fully asynchronous request processing')
-Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
-Cc: Alex Dubov <oakad@yahoo.com>
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/memstick/core/mspro_block.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
-index fc145d2..922a750 100644
---- a/drivers/memstick/core/mspro_block.c
-+++ b/drivers/memstick/core/mspro_block.c
-@@ -758,7 +758,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 	case KBUFFER_TYPE_TIME_EXTEND:
+diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
+index cc22408..0884d31 100644
+--- a/tools/perf/config/Makefile
++++ b/tools/perf/config/Makefile
+@@ -651,7 +651,7 @@ ifeq (${IS_64_BIT}, 1)
+       NO_PERF_READ_VDSO32 := 1
+     endif
+   endif
+-  ifneq (${IS_X86_64}, 1)
++  ifneq ($(ARCH), x86)
+     NO_PERF_READ_VDSOX32 := 1
+   endif
+   ifndef NO_PERF_READ_VDSOX32
+@@ -699,7 +699,7 @@ sysconfdir = $(prefix)/etc
+ ETC_PERFCONFIG = etc/perfconfig
+ endif
+ ifndef lib
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+diff --git a/tools/perf/tests/make b/tools/perf/tests/make
+index 75709d2..bff8532 100644
+--- a/tools/perf/tests/make
++++ b/tools/perf/tests/make
+@@ -5,7 +5,7 @@ include config/Makefile.arch
  
- 		if (error || (card->current_mrq.tpc == MSPRO_CMD_STOP)) {
- 			if (msb->data_dir == READ) {
--				for (cnt = 0; cnt < msb->current_seg; cnt++)
-+				for (cnt = 0; cnt < msb->current_seg; cnt++) {
- 					t_len += msb->req_sg[cnt].length
- 						 / msb->page_size;
+ # FIXME looks like x86 is the only arch running tests ;-)
+ # we need some IS_(32/64) flag to make this generic
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
+index 6da965b..85b5238 100644
+--- a/tools/perf/util/cloexec.c
++++ b/tools/perf/util/cloexec.c
+@@ -7,6 +7,12 @@
  
-@@ -766,6 +766,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
- 						t_len += msb->current_page - 1;
+ static unsigned long flag = PERF_FLAG_FD_CLOEXEC;
  
- 					t_len *= msb->page_size;
-+				}
- 			}
- 		} else
- 			t_len = blk_rq_bytes(msb->block_req);
--- 
-2.3.6
-
-
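
The bug class behind this one-liner deserves a distilled illustration: without braces, only the first statement belongs to the loop body, however the indentation reads. A compilable demonstration of the difference:

#include <stdio.h>

int main(void)
{
	int lengths[3] = { 4, 8, 12 };
	int buggy = 0, fixed = 0, i;

	/* Buggy shape: indentation suggests both statements are in the
	 * loop, but without braces only the first one is. */
	for (i = 0; i < 3; i++)
		buggy += lengths[i];
		buggy -= 1;		/* runs ONCE, after the loop */

	/* Fixed shape, as in the mspro_block patch: braces make the
	 * intended body explicit, so both statements run per iteration. */
	for (i = 0; i < 3; i++) {
		fixed += lengths[i];
		fixed -= 1;		/* runs every iteration */
	}

	printf("buggy=%d fixed=%d\n", buggy, fixed);	/* 23 vs 21 */
	return 0;
}
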
-From 6361409a1274060993b246c688c24a7c863c7eeb Mon Sep 17 00:00:00 2001
-From: Linus Walleij <linus.walleij@linaro.org>
-Date: Wed, 18 Feb 2015 17:12:18 +0100
-Subject: [PATCH 216/219] drivers: platform: parse IRQ flags from resources
-Cc: mpagano@gentoo.org
-
-commit 7085a7401ba54e92bbb5aa24d6f428071e18e509 upstream.
-
-This fixes a regression from the net subsystem:
-After commit d52fdbb735c36a209f36a628d40ca9185b349ba7
-"smc91x: retrieve IRQ and trigger flags in a modern way"
-a regression would appear on some legacy platforms such
-as the ARM PXA Zylonite that specify IRQ resources like
-this:
-
-static struct resource r = {
-       .start  = X,
-       .end    = X,
-       .flags  = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
-};
-
-The previous code would retrieve the resource and parse
-the high edge setting in the SMC91x driver, a use pattern
-that means every driver specifying an IRQ flag from a
-static resource need to parse resource flags and apply
-them at runtime.
-
-As we switched the code to use IRQ descriptors to retrieve
-the trigger type like this:
-
-  irqd_get_trigger_type(irq_get_irq_data(...));
-
-the code would work for new platforms using e.g. device
-tree as the backing irq descriptor would have its flags
-properly set, whereas these kinds of old-style static
-resources at no point assign the trigger flags to the
-corresponding IRQ descriptor.
-
-To make the behaviour identical on modern device tree
-and legacy static platform data platforms, modify
-platform_get_irq() to assign the trigger flags to the
-irq descriptor when a client looks up an IRQ from static
-resources.
-
-Fixes: d52fdbb735c3 ("smc91x: retrieve IRQ and trigger flags in a modern way")
-Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
-Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/base/platform.c | 9 +++++++++
- 1 file changed, 9 insertions(+)
-
-diff --git a/drivers/base/platform.c b/drivers/base/platform.c
-index 9421fed..e68ab79 100644
---- a/drivers/base/platform.c
-+++ b/drivers/base/platform.c
-@@ -101,6 +101,15 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
- 	}
++int __weak sched_getcpu(void)
++{
++	errno = ENOSYS;
++	return -1;
++}
++
+ static int perf_flag_probe(void)
+ {
+ 	/* use 'safest' configuration as used in perf_evsel__fallback() */
+diff --git a/tools/perf/util/cloexec.h b/tools/perf/util/cloexec.h
+index 94a5a7d..68888c2 100644
+--- a/tools/perf/util/cloexec.h
++++ b/tools/perf/util/cloexec.h
+@@ -3,4 +3,10 @@
  
- 	r = platform_get_resource(dev, IORESOURCE_IRQ, num);
-+	/*
-+	 * The resources may pass trigger flags to the irqs that need
-+	 * to be set up. It so happens that the trigger flags for
-+	 * IORESOURCE_BITS correspond 1-to-1 to the IRQF_TRIGGER*
-+	 * settings.
-+	 */
-+	if (r && r->flags & IORESOURCE_BITS)
-+		irqd_set_trigger_type(irq_get_irq_data(r->start),
-+				      r->flags & IORESOURCE_BITS);
+ unsigned long perf_event_open_cloexec_flag(void);
  
- 	return r ? r->start : -ENXIO;
- #endif
--- 
-2.3.6
-
-
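
The core move of the fix is to read the trigger bits out of the resource flags and program them into the IRQ descriptor before the bare IRQ number is returned. A reduced sketch of that extraction; the constants and the trigger_type[] array are toy stand-ins for the IORESOURCE_IRQ_*/IRQF_TRIGGER_* machinery:

#include <stdio.h>

/* Toy flag space mirroring the 1-to-1 IORESOURCE/IRQF correspondence. */
#define IORESOURCE_IRQ_HIGHEDGE 0x1
#define IORESOURCE_IRQ_LOWEDGE  0x2
#define IORESOURCE_BITS         0xff

struct toy_resource { unsigned long start, flags; };

static unsigned int trigger_type[64];	/* toy stand-in for irq descriptors */

static long toy_platform_get_irq(const struct toy_resource *r)
{
	if (!r)
		return -1;	/* -ENXIO in the kernel */
	/* Propagate trigger flags into the descriptor, as the fix does,
	 * so drivers no longer have to parse resource flags themselves. */
	if (r->flags & IORESOURCE_BITS)
		trigger_type[r->start] = r->flags & IORESOURCE_BITS;
	return r->start;
}

int main(void)
{
	struct toy_resource r = { 9, IORESOURCE_IRQ_HIGHEDGE };
	long irq = toy_platform_get_irq(&r);

	printf("irq %ld, trigger 0x%x\n", irq, trigger_type[irq]);
	return 0;
}
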
-From 4c0a56b2ee7b3a3741339e943acd2692c146fcb1 Mon Sep 17 00:00:00 2001
-From: Junjie Mao <junjie_mao@yeah.net>
-Date: Wed, 28 Jan 2015 10:02:44 +0800
-Subject: [PATCH 217/219] driver core: bus: Goto appropriate labels on failure
- in bus_add_device
-Cc: mpagano@gentoo.org
-
-commit 1c34203a1496d1849ba978021b878b3447d433c8 upstream.
-
-It is not necessary to call device_remove_groups() when device_add_groups()
-fails.
-
-The group added by device_add_groups() should be removed if sysfs_create_link()
-fails.
-
-Fixes: fa6fdb33b486 ("driver core: bus_type: add dev_groups")
-Signed-off-by: Junjie Mao <junjie_mao@yeah.net>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/base/bus.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/base/bus.c b/drivers/base/bus.c
-index 876bae5..79bc203 100644
---- a/drivers/base/bus.c
-+++ b/drivers/base/bus.c
-@@ -515,11 +515,11 @@ int bus_add_device(struct device *dev)
- 			goto out_put;
- 		error = device_add_groups(dev, bus->dev_groups);
- 		if (error)
--			goto out_groups;
-+			goto out_id;
- 		error = sysfs_create_link(&bus->p->devices_kset->kobj,
- 						&dev->kobj, dev_name(dev));
- 		if (error)
--			goto out_id;
-+			goto out_groups;
- 		error = sysfs_create_link(&dev->kobj,
- 				&dev->bus->p->subsys.kobj, "subsystem");
- 		if (error)
--- 
-2.3.6
-
-
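
The label swap follows the standard kernel error-unwinding rule: each failure point jumps to the cleanup for everything acquired so far, so the labels must undo steps in reverse order of acquisition. A skeleton of that shape, with hypothetical step/undo pairs standing in for device_add_groups() and sysfs_create_link():

#include <stdio.h>

/* Hypothetical acquire/release pairs; each step returns 0 on success. */
static int step_a(void) { puts("a+"); return 0; }
static void undo_a(void) { puts("a-"); }
static int step_b(void) { puts("b+"); return 0; }
static void undo_b(void) { puts("b-"); }
static int step_c(void) { puts("c+"); return -1; }	/* fails */

static int setup(void)
{
	int err;

	if ((err = step_a()))
		goto out;
	if ((err = step_b()))
		goto out_a;	/* only a was acquired: undo a alone */
	if ((err = step_c()))
		goto out_b;	/* a and b acquired: undo b, then a */
	return 0;

out_b:
	undo_b();
out_a:
	undo_a();
out:
	return err;
}

int main(void)
{
	printf("setup -> %d\n", setup());
	return 0;
}
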
-From cf1cab07a20abcfa17f0cf431d103471ebd7b33c Mon Sep 17 00:00:00 2001
-From: Florian Westphal <fw@strlen.de>
-Date: Wed, 1 Apr 2015 22:36:27 +0200
-Subject: [PATCH 218/219] netfilter: bridge: really save frag_max_size between
- PRE and POST_ROUTING
-Cc: mpagano@gentoo.org
-
-commit 0b67c43ce36a9964f1d5e3f973ee19eefd3f9f8f upstream.
-
-We also need to save/restore in the forward path, else the
-br_parse_ip_options call will zero frag_max_size as well.
-
-Fixes: 93fdd47e5 ('bridge: Save frag_max_size between PRE_ROUTING and POST_ROUTING')
-Signed-off-by: Florian Westphal <fw@strlen.de>
-Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- net/bridge/br_netfilter.c | 17 +++++++++++++++--
- 1 file changed, 15 insertions(+), 2 deletions(-)
-
-diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
-index 0ee453f..f371cbf 100644
---- a/net/bridge/br_netfilter.c
-+++ b/net/bridge/br_netfilter.c
-@@ -651,6 +651,13 @@ static int br_nf_forward_finish(struct sk_buff *skb)
- 	struct net_device *in;
++#ifdef __GLIBC_PREREQ
++#if !__GLIBC_PREREQ(2, 6)
++extern int sched_getcpu(void) __THROW;
++#endif
++#endif
++
+ #endif /* __PERF_CLOEXEC_H */
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 33b7a2a..9bdf007 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -74,6 +74,10 @@ static inline uint8_t elf_sym__type(const GElf_Sym *sym)
+ 	return GELF_ST_TYPE(sym->st_info);
+ }
  
- 	if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) {
-+		int frag_max_size;
++#ifndef STT_GNU_IFUNC
++#define STT_GNU_IFUNC 10
++#endif
 +
-+		if (skb->protocol == htons(ETH_P_IP)) {
-+			frag_max_size = IPCB(skb)->frag_max_size;
-+			BR_INPUT_SKB_CB(skb)->frag_max_size = frag_max_size;
-+		}
+ static inline int elf_sym__is_function(const GElf_Sym *sym)
+ {
+ 	return (elf_sym__type(sym) == STT_FUNC ||
+diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
+index d1b3a36..4039854 100644
+--- a/tools/power/x86/turbostat/Makefile
++++ b/tools/power/x86/turbostat/Makefile
+@@ -1,8 +1,12 @@
+ CC		= $(CROSS_COMPILE)gcc
+-BUILD_OUTPUT	:= $(PWD)
++BUILD_OUTPUT	:= $(CURDIR)
+ PREFIX		:= /usr
+ DESTDIR		:=
+ 
++ifeq ("$(origin O)", "command line")
++	BUILD_OUTPUT := $(O)
++endif
 +
- 		in = nf_bridge->physindev;
- 		if (nf_bridge->mask & BRNF_PKT_TYPE) {
- 			skb->pkt_type = PACKET_OTHERHOST;
-@@ -710,8 +717,14 @@ static unsigned int br_nf_forward_ip(const struct nf_hook_ops *ops,
- 		nf_bridge->mask |= BRNF_PKT_TYPE;
+ turbostat : turbostat.c
+ CFLAGS +=	-Wall
+ CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/uapi/asm/msr-index.h"'
+diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
+index c9f60f5..e5abe7c 100644
+--- a/virt/kvm/arm/vgic.c
++++ b/virt/kvm/arm/vgic.c
+@@ -1371,6 +1371,9 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ 			goto out;
  	}
  
--	if (pf == NFPROTO_IPV4 && br_parse_ip_options(skb))
--		return NF_DROP;
-+	if (pf == NFPROTO_IPV4) {
-+		int frag_max = BR_INPUT_SKB_CB(skb)->frag_max_size;
-+
-+		if (br_parse_ip_options(skb))
-+			return NF_DROP;
++	if (irq_num >= kvm->arch.vgic.nr_irqs)
++		return -EINVAL;
 +
-+		IPCB(skb)->frag_max_size = frag_max;
-+	}
- 
- 	/* The physdev module checks on this */
- 	nf_bridge->mask |= BRNF_BRIDGED;
--- 
-2.3.6
-
-
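
The pattern is save-around-a-clobbering-call: stash the field, let the helper that re-initializes the control block run, then write the value back. In miniature, with a toy control block:

#include <stdio.h>

struct toy_cb { int frag_max_size; };

/* Stand-in for br_parse_ip_options(), which re-initializes the control
 * block and thereby zeroes frag_max_size as a side effect. */
static int parse_ip_options(struct toy_cb *cb)
{
	cb->frag_max_size = 0;
	return 0;
}

int main(void)
{
	struct toy_cb cb = { .frag_max_size = 1500 };

	/* Mirror of the br_nf_forward_ip() fix: save, call, restore. */
	int frag_max = cb.frag_max_size;
	if (parse_ip_options(&cb))
		return 1;		/* NF_DROP in the kernel */
	cb.frag_max_size = frag_max;

	printf("frag_max_size preserved: %d\n", cb.frag_max_size);
	return 0;
}
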
-From 072cab659c9368586d6417cfd6ec2d2c68469c67 Mon Sep 17 00:00:00 2001
-From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-Date: Wed, 6 May 2015 22:04:23 +0200
-Subject: [PATCH 219/219] Linux 4.0.2
-Cc: mpagano@gentoo.org
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- Makefile | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Makefile b/Makefile
-index f499cd2..0649a60 100644
---- a/Makefile
-+++ b/Makefile
-@@ -1,6 +1,6 @@
- VERSION = 4
- PATCHLEVEL = 0
--SUBLEVEL = 1
-+SUBLEVEL = 2
- EXTRAVERSION =
- NAME = Hurr durr I'ma sheep
- 
--- 
-2.3.6
-
+ 	vcpu_id = vgic_update_irq_pending(kvm, cpuid, irq_num, level);
+ 	if (vcpu_id >= 0) {
+ 		/* kick the specified vcpu */
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index cc6a25d..f8f3f5f 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1653,8 +1653,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	ghc->generation = slots->generation;
+ 	ghc->len = len;
+ 	ghc->memslot = gfn_to_memslot(kvm, start_gfn);
+-	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
+-	if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
++	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
++	if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
+ 		ghc->hva += offset;
+ 	} else {
+ 		/*



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-05-14 12:22 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-05-14 12:22 UTC (permalink / raw
  To: gentoo-commits

commit:     3c00c4432f861528e758a67ed7421c676afdbe8e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 14 12:22:54 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 14 12:22:54 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3c00c443

Linux patch 4.0.3

 0000_README            |    4 +
 1002_linux-4.0.3.patch | 2827 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2831 insertions(+)

diff --git a/0000_README b/0000_README
index 4fdafa3..b11f028 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.0.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.2
 
+Patch:  1002_linux-4.0.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.0.3.patch b/1002_linux-4.0.3.patch
new file mode 100644
index 0000000..d137bf2
--- /dev/null
+++ b/1002_linux-4.0.3.patch
@@ -0,0 +1,2827 @@
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index bfcb1a62a7b4..4d68ec841304 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -3746,6 +3746,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 					READ_CAPACITY_16 command);
+ 				f = NO_REPORT_OPCODES (don't use report opcodes
+ 					command, uas only);
++				g = MAX_SECTORS_240 (don't transfer more than
++					240 sectors at a time, uas only);
+ 				h = CAPACITY_HEURISTICS (decrease the
+ 					reported device capacity by one
+ 					sector if the number is odd);
+diff --git a/Makefile b/Makefile
+index 0649a6011a76..dc9f43a019d6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index ef7d112f5ce0..b0bd4e5fd5cf 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -67,8 +67,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
+ 
+ 		*ret_page = phys_to_page(phys);
+ 		ptr = (void *)val;
+-		if (flags & __GFP_ZERO)
+-			memset(ptr, 0, size);
++		memset(ptr, 0, size);
+ 	}
+ 
+ 	return ptr;
+@@ -105,7 +104,6 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
+ 		struct page *page;
+ 		void *addr;
+ 
+-		size = PAGE_ALIGN(size);
+ 		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+ 							get_order(size));
+ 		if (!page)
+@@ -113,8 +111,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
+ 
+ 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
+ 		addr = page_address(page);
+-		if (flags & __GFP_ZERO)
+-			memset(addr, 0, size);
++		memset(addr, 0, size);
+ 		return addr;
+ 	} else {
+ 		return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
+@@ -195,6 +192,8 @@ static void __dma_free(struct device *dev, size_t size,
+ {
+ 	void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle));
+ 
++	size = PAGE_ALIGN(size);
++
+ 	if (!is_device_dma_coherent(dev)) {
+ 		if (__free_from_pool(vaddr, size))
+ 			return;
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index c7a16904cd03..1a313c468d65 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -2072,7 +2072,7 @@ config MIPSR2_TO_R6_EMULATOR
+ 	help
+ 	  Choose this option if you want to run non-R6 MIPS userland code.
+ 	  Even if you say 'Y' here, the emulator will still be disabled by
+-	  default. You can enable it using the 'mipsr2emul' kernel option.
++	  default. You can enable it using the 'mipsr2emu' kernel option.
+ 	  The only reason this is a build-time option is to save ~14K from the
+ 	  final kernel image.
+ comment "MIPS R2-to-R6 emulator is only available for UP kernels"
+@@ -2142,7 +2142,7 @@ config MIPS_CMP
+ 
+ config MIPS_CPS
+ 	bool "MIPS Coherent Processing System support"
+-	depends on SYS_SUPPORTS_MIPS_CPS
++	depends on SYS_SUPPORTS_MIPS_CPS && !64BIT
+ 	select MIPS_CM
+ 	select MIPS_CPC
+ 	select MIPS_CPS_PM if HOTPLUG_CPU
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 8f57fc72d62c..1b4dab1e6ab8 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -197,11 +197,17 @@ endif
+ # Warning: the 64-bit MIPS architecture does not support the `smartmips' extension
+ # Pass -Wa,--no-warn to disable all assembler warnings until the kernel code has
+ # been fixed properly.
+-mips-cflags				:= "$(cflags-y)"
+-cflags-$(CONFIG_CPU_HAS_SMARTMIPS)	+= $(call cc-option,$(mips-cflags),-msmartmips) -Wa,--no-warn
+-cflags-$(CONFIG_CPU_MICROMIPS)		+= $(call cc-option,$(mips-cflags),-mmicromips)
++mips-cflags				:= $(cflags-y)
++ifeq ($(CONFIG_CPU_HAS_SMARTMIPS),y)
++smartmips-ase				:= $(call cc-option-yn,$(mips-cflags) -msmartmips)
++cflags-$(smartmips-ase)			+= -msmartmips -Wa,--no-warn
++endif
++ifeq ($(CONFIG_CPU_MICROMIPS),y)
++micromips-ase				:= $(call cc-option-yn,$(mips-cflags) -mmicromips)
++cflags-$(micromips-ase)			+= -mmicromips
++endif
+ ifeq ($(CONFIG_CPU_HAS_MSA),y)
+-toolchain-msa				:= $(call cc-option-yn,-$(mips-cflags),mhard-float -mfp64 -Wa$(comma)-mmsa)
++toolchain-msa				:= $(call cc-option-yn,$(mips-cflags) -mhard-float -mfp64 -Wa$(comma)-mmsa)
+ cflags-$(toolchain-msa)			+= -DTOOLCHAIN_SUPPORTS_MSA
+ endif
+ 
+diff --git a/arch/mips/bcm47xx/board.c b/arch/mips/bcm47xx/board.c
+index b3ae068ca4fa..3fd369d74444 100644
+--- a/arch/mips/bcm47xx/board.c
++++ b/arch/mips/bcm47xx/board.c
+@@ -247,8 +247,8 @@ static __init const struct bcm47xx_board_type *bcm47xx_board_get_nvram(void)
+ 	}
+ 
+ 	if (bcm47xx_nvram_getenv("hardware_version", buf1, sizeof(buf1)) >= 0 &&
+-	    bcm47xx_nvram_getenv("boardtype", buf2, sizeof(buf2)) >= 0) {
+-		for (e2 = bcm47xx_board_list_boot_hw; e2->value1; e2++) {
++	    bcm47xx_nvram_getenv("boardnum", buf2, sizeof(buf2)) >= 0) {
++		for (e2 = bcm47xx_board_list_hw_version_num; e2->value1; e2++) {
+ 			if (!strstarts(buf1, e2->value1) &&
+ 			    !strcmp(buf2, e2->value2))
+ 				return &e2->board;
+diff --git a/arch/mips/bcm63xx/prom.c b/arch/mips/bcm63xx/prom.c
+index e1f27d653f60..7019e2967009 100644
+--- a/arch/mips/bcm63xx/prom.c
++++ b/arch/mips/bcm63xx/prom.c
+@@ -17,7 +17,6 @@
+ #include <bcm63xx_cpu.h>
+ #include <bcm63xx_io.h>
+ #include <bcm63xx_regs.h>
+-#include <bcm63xx_gpio.h>
+ 
+ void __init prom_init(void)
+ {
+@@ -53,9 +52,6 @@ void __init prom_init(void)
+ 	reg &= ~mask;
+ 	bcm_perf_writel(reg, PERF_CKCTL_REG);
+ 
+-	/* register gpiochip */
+-	bcm63xx_gpio_init();
+-
+ 	/* do low level board init */
+ 	board_prom_init();
+ 
+diff --git a/arch/mips/bcm63xx/setup.c b/arch/mips/bcm63xx/setup.c
+index 6660c7ddf87b..240fb4ffa55c 100644
+--- a/arch/mips/bcm63xx/setup.c
++++ b/arch/mips/bcm63xx/setup.c
+@@ -20,6 +20,7 @@
+ #include <bcm63xx_cpu.h>
+ #include <bcm63xx_regs.h>
+ #include <bcm63xx_io.h>
++#include <bcm63xx_gpio.h>
+ 
+ void bcm63xx_machine_halt(void)
+ {
+@@ -160,6 +161,9 @@ void __init plat_mem_setup(void)
+ 
+ int __init bcm63xx_register_devices(void)
+ {
++	/* register gpiochip */
++	bcm63xx_gpio_init();
++
+ 	return board_register_devices();
+ }
+ 
+diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
+index 7d8987818ccf..d8960d46417b 100644
+--- a/arch/mips/cavium-octeon/dma-octeon.c
++++ b/arch/mips/cavium-octeon/dma-octeon.c
+@@ -306,7 +306,7 @@ void __init plat_swiotlb_setup(void)
+ 		swiotlbsize = 64 * (1<<20);
+ 	}
+ #endif
+-#ifdef CONFIG_USB_OCTEON_OHCI
++#ifdef CONFIG_USB_OHCI_HCD_PLATFORM
+ 	/* OCTEON II ohci is only 32-bit. */
+ 	if (OCTEON_IS_OCTEON2() && max_addr >= 0x100000000ul)
+ 		swiotlbsize = 64 * (1<<20);
+diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
+index a42110e7edbc..a7f40820e567 100644
+--- a/arch/mips/cavium-octeon/setup.c
++++ b/arch/mips/cavium-octeon/setup.c
+@@ -413,7 +413,10 @@ static void octeon_restart(char *command)
+ 
+ 	mb();
+ 	while (1)
+-		cvmx_write_csr(CVMX_CIU_SOFT_RST, 1);
++		if (OCTEON_IS_OCTEON3())
++			cvmx_write_csr(CVMX_RST_SOFT_RST, 1);
++		else
++			cvmx_write_csr(CVMX_CIU_SOFT_RST, 1);
+ }
+ 
+ 
+diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
+index e08381a37f8b..723229f4cf27 100644
+--- a/arch/mips/include/asm/cacheflush.h
++++ b/arch/mips/include/asm/cacheflush.h
+@@ -29,6 +29,20 @@
+  *  - flush_icache_all() flush the entire instruction cache
+  *  - flush_data_cache_page() flushes a page from the data cache
+  */
++
++ /*
++ * This flag is used to indicate that the page pointed to by a pte
++ * is dirty and requires cleaning before returning it to the user.
++ */
++#define PG_dcache_dirty			PG_arch_1
++
++#define Page_dcache_dirty(page)		\
++	test_bit(PG_dcache_dirty, &(page)->flags)
++#define SetPageDcacheDirty(page)	\
++	set_bit(PG_dcache_dirty, &(page)->flags)
++#define ClearPageDcacheDirty(page)	\
++	clear_bit(PG_dcache_dirty, &(page)->flags)
++
+ extern void (*flush_cache_all)(void);
+ extern void (*__flush_cache_all)(void);
+ extern void (*flush_cache_mm)(struct mm_struct *mm);
+@@ -37,13 +51,15 @@ extern void (*flush_cache_range)(struct vm_area_struct *vma,
+ 	unsigned long start, unsigned long end);
+ extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
+ extern void __flush_dcache_page(struct page *page);
++extern void __flush_icache_page(struct vm_area_struct *vma, struct page *page);
+ 
+ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+ static inline void flush_dcache_page(struct page *page)
+ {
+-	if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc)
++	if (cpu_has_dc_aliases)
+ 		__flush_dcache_page(page);
+-
++	else if (!cpu_has_ic_fills_f_dc)
++		SetPageDcacheDirty(page);
+ }
+ 
+ #define flush_dcache_mmap_lock(mapping)		do { } while (0)
+@@ -61,6 +77,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
+ static inline void flush_icache_page(struct vm_area_struct *vma,
+ 	struct page *page)
+ {
++	if (!cpu_has_ic_fills_f_dc && (vma->vm_flags & VM_EXEC) &&
++	    Page_dcache_dirty(page)) {
++		__flush_icache_page(vma, page);
++		ClearPageDcacheDirty(page);
++	}
+ }
+ 
+ extern void (*flush_icache_range)(unsigned long start, unsigned long end);
+@@ -95,19 +116,6 @@ extern void (*flush_icache_all)(void);
+ extern void (*local_flush_data_cache_page)(void * addr);
+ extern void (*flush_data_cache_page)(unsigned long addr);
+ 
+-/*
+- * This flag is used to indicate that the page pointed to by a pte
+- * is dirty and requires cleaning before returning it to the user.
+- */
+-#define PG_dcache_dirty			PG_arch_1
+-
+-#define Page_dcache_dirty(page)		\
+-	test_bit(PG_dcache_dirty, &(page)->flags)
+-#define SetPageDcacheDirty(page)	\
+-	set_bit(PG_dcache_dirty, &(page)->flags)
+-#define ClearPageDcacheDirty(page)	\
+-	clear_bit(PG_dcache_dirty, &(page)->flags)
+-
+ /* Run kernel code uncached, useful for cache probing functions. */
+ unsigned long run_uncached(void *func);
+ 
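
The cacheflush.h hunk above turns an eager flush into a lazy one: flush_dcache_page() merely sets PG_dcache_dirty when the I-cache does not fill from the D-cache, and the real work happens in flush_icache_page() only when a dirty page is about to be mapped executable. A toy model of that deferral, reduced to one flag and a flush counter:

#include <stdio.h>
#include <stdbool.h>

struct toy_page { bool dcache_dirty; };

static int flush_count;		/* counts actual (expensive) flushes */

static void flush_dcache_page(struct toy_page *p)
{
	/* Lazy path: just record that the page needs cleaning. */
	p->dcache_dirty = true;
}

static void flush_icache_page(struct toy_page *p, bool vm_exec)
{
	/* The real flush happens only for executable mappings of
	 * pages previously marked dirty. */
	if (vm_exec && p->dcache_dirty) {
		flush_count++;
		p->dcache_dirty = false;
	}
}

int main(void)
{
	struct toy_page p = { false };

	flush_dcache_page(&p);		/* write: deferred */
	flush_dcache_page(&p);		/* write again: still deferred */
	flush_icache_page(&p, false);	/* data mapping: no flush */
	flush_icache_page(&p, true);	/* exec mapping: one flush */
	printf("flushes performed: %d\n", flush_count);	/* 1, not 3 */
	return 0;
}
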
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 0d8208de9a3f..345fd7f80730 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -235,8 +235,39 @@
+ /* MIPSR2 and MIPSR6 have a lot of similarities */
+ #define cpu_has_mips_r2_r6	(cpu_has_mips_r2 | cpu_has_mips_r6)
+ 
++/*
++ * cpu_has_mips_r2_exec_hazard - return if IHB is required on current processor
++ *
++ * Returns non-zero value if the current processor implementation requires
++ * an IHB instruction to deal with an instruction hazard as per MIPS R2
++ * architecture specification, zero otherwise.
++ */
+ #ifndef cpu_has_mips_r2_exec_hazard
+-#define cpu_has_mips_r2_exec_hazard (cpu_has_mips_r2 | cpu_has_mips_r6)
++#define cpu_has_mips_r2_exec_hazard					\
++({									\
++	int __res;							\
++									\
++	switch (current_cpu_type()) {					\
++	case CPU_M14KC:							\
++	case CPU_74K:							\
++	case CPU_1074K:							\
++	case CPU_PROAPTIV:						\
++	case CPU_P5600:							\
++	case CPU_M5150:							\
++	case CPU_QEMU_GENERIC:						\
++	case CPU_CAVIUM_OCTEON:						\
++	case CPU_CAVIUM_OCTEON_PLUS:					\
++	case CPU_CAVIUM_OCTEON2:					\
++	case CPU_CAVIUM_OCTEON3:					\
++		__res = 0;						\
++		break;							\
++									\
++	default:							\
++		__res = 1;						\
++	}								\
++									\
++	__res;								\
++})
+ #endif
+ 
+ /*
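
The replacement macro above leans on gcc's statement-expression extension, ({ ... }), which lets a multi-line switch sit behind what callers treat as a plain expression. A compilable miniature of the same shape, with made-up CPU ids (requires gcc or clang):

#include <stdio.h>

enum { CPU_FOO, CPU_BAR, CPU_BAZ };	/* made-up ids for illustration */

/* Statement expression: the block evaluates to its last expression,
 * so the macro can be used anywhere an int is expected. */
#define cpu_needs_ihb(cpu)			\
({						\
	int __res;				\
						\
	switch (cpu) {				\
	case CPU_FOO:				\
	case CPU_BAR:				\
		__res = 0;			\
		break;				\
	default:				\
		__res = 1;			\
	}					\
	__res;					\
})

int main(void)
{
	printf("FOO: %d\n", cpu_needs_ihb(CPU_FOO));	/* 0 */
	printf("BAZ: %d\n", cpu_needs_ihb(CPU_BAZ));	/* 1 */
	return 0;
}
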
+diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h
+index 535f196ffe02..694925a26924 100644
+--- a/arch/mips/include/asm/elf.h
++++ b/arch/mips/include/asm/elf.h
+@@ -294,6 +294,9 @@ do {									\
+ 	if (personality(current->personality) != PER_LINUX)		\
+ 		set_personality(PER_LINUX);				\
+ 									\
++	clear_thread_flag(TIF_HYBRID_FPREGS);				\
++	set_thread_flag(TIF_32BIT_FPREGS);				\
++									\
+ 	mips_set_personality_fp(state);					\
+ 									\
+ 	current->thread.abi = &mips_abi;				\
+@@ -319,6 +322,8 @@ do {									\
+ 	do {								\
+ 		set_thread_flag(TIF_32BIT_REGS);			\
+ 		set_thread_flag(TIF_32BIT_ADDR);			\
++		clear_thread_flag(TIF_HYBRID_FPREGS);			\
++		set_thread_flag(TIF_32BIT_FPREGS);			\
+ 									\
+ 		mips_set_personality_fp(state);				\
+ 									\
+diff --git a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
+index fa1f3cfbae8d..d68e685cde60 100644
+--- a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
+@@ -50,7 +50,6 @@
+ #define cpu_has_mips32r2	0
+ #define cpu_has_mips64r1	0
+ #define cpu_has_mips64r2	1
+-#define cpu_has_mips_r2_exec_hazard 0
+ #define cpu_has_dsp		0
+ #define cpu_has_dsp2		0
+ #define cpu_has_mipsmt		0
+diff --git a/arch/mips/include/asm/octeon/cvmx.h b/arch/mips/include/asm/octeon/cvmx.h
+index 33db1c806b01..774bb45834cb 100644
+--- a/arch/mips/include/asm/octeon/cvmx.h
++++ b/arch/mips/include/asm/octeon/cvmx.h
+@@ -436,14 +436,6 @@ static inline uint64_t cvmx_get_cycle_global(void)
+ 
+ /***************************************************************************/
+ 
+-static inline void cvmx_reset_octeon(void)
+-{
+-	union cvmx_ciu_soft_rst ciu_soft_rst;
+-	ciu_soft_rst.u64 = 0;
+-	ciu_soft_rst.s.soft_rst = 1;
+-	cvmx_write_csr(CVMX_CIU_SOFT_RST, ciu_soft_rst.u64);
+-}
+-
+ /* Return the number of cores available in the chip */
+ static inline uint32_t cvmx_octeon_num_cores(void)
+ {
+diff --git a/arch/mips/include/asm/octeon/pci-octeon.h b/arch/mips/include/asm/octeon/pci-octeon.h
+index 64ba56a02843..1884609741a8 100644
+--- a/arch/mips/include/asm/octeon/pci-octeon.h
++++ b/arch/mips/include/asm/octeon/pci-octeon.h
+@@ -11,9 +11,6 @@
+ 
+ #include <linux/pci.h>
+ 
+-/* Some PCI cards require delays when accessing config space. */
+-#define PCI_CONFIG_SPACE_DELAY 10000
+-
+ /*
+  * The physical memory base mapped by BAR1.  256MB at the end of the
+  * first 4GB.
+diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
+index bef782c4a44b..f8f809fd6c6d 100644
+--- a/arch/mips/include/asm/pgtable.h
++++ b/arch/mips/include/asm/pgtable.h
+@@ -127,10 +127,6 @@ do {									\
+ 	}								\
+ } while(0)
+ 
+-
+-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+-	pte_t pteval);
+-
+ #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+ 
+ #define pte_none(pte)		(!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
+@@ -154,6 +150,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
+ 		}
+ 	}
+ }
++#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+ 
+ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+@@ -192,6 +189,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
+ 	}
+ #endif
+ }
++#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+ 
+ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+@@ -407,12 +405,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 
+ extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
+ 	pte_t pte);
++extern void __update_cache(struct vm_area_struct *vma, unsigned long address,
++	pte_t pte);
+ 
+ static inline void update_mmu_cache(struct vm_area_struct *vma,
+ 	unsigned long address, pte_t *ptep)
+ {
+ 	pte_t pte = *ptep;
+ 	__update_tlb(vma, address, pte);
++	__update_cache(vma, address, pte);
+ }
+ 
+ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+diff --git a/arch/mips/include/asm/r4kcache.h b/arch/mips/include/asm/r4kcache.h
+index 1b22d2da88a1..38902bf97adc 100644
+--- a/arch/mips/include/asm/r4kcache.h
++++ b/arch/mips/include/asm/r4kcache.h
+@@ -12,6 +12,8 @@
+ #ifndef _ASM_R4KCACHE_H
+ #define _ASM_R4KCACHE_H
+ 
++#include <linux/stringify.h>
++
+ #include <asm/asm.h>
+ #include <asm/cacheops.h>
+ #include <asm/compiler.h>
+@@ -344,7 +346,7 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	cache %1, 0x0a0(%0); cache %1, 0x0b0(%0)\n"	\
+ 	"	cache %1, 0x0c0(%0); cache %1, 0x0d0(%0)\n"	\
+ 	"	cache %1, 0x0e0(%0); cache %1, 0x0f0(%0)\n"	\
+-	"	addiu $1, $0, 0x100			\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100	\n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x010($1)\n"	\
+ 	"	cache %1, 0x020($1); cache %1, 0x030($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x050($1)\n"	\
+@@ -368,17 +370,17 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	cache %1, 0x040(%0); cache %1, 0x060(%0)\n"	\
+ 	"	cache %1, 0x080(%0); cache %1, 0x0a0(%0)\n"	\
+ 	"	cache %1, 0x0c0(%0); cache %1, 0x0e0(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+ 	"	cache %1, 0x0c0($1); cache %1, 0x0e0($1)\n"	\
+-	"	addiu $1, $1, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+ 	"	cache %1, 0x0c0($1); cache %1, 0x0e0($1)\n"	\
+-	"	addiu $1, $1, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100\n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+@@ -396,25 +398,25 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	.set noat\n"					\
+ 	"	cache %1, 0x000(%0); cache %1, 0x040(%0)\n"	\
+ 	"	cache %1, 0x080(%0); cache %1, 0x0c0(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+ 	"	.set pop\n"					\
+@@ -429,39 +431,38 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	.set mips64r6\n"				\
+ 	"	.set noat\n"					\
+ 	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
+ 	"	.set pop\n"					\
+ 		:						\
+ 		: "r" (base),					\
+diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
+index b4548690ade9..1fca2e0793dc 100644
+--- a/arch/mips/include/asm/spinlock.h
++++ b/arch/mips/include/asm/spinlock.h
+@@ -263,7 +263,7 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
+ 	if (R10000_LLSC_WAR) {
+ 		__asm__ __volatile__(
+ 		"1:	ll	%1, %2		# arch_read_unlock	\n"
+-		"	addiu	%1, 1					\n"
++		"	addiu	%1, -1					\n"
+ 		"	sc	%1, %0					\n"
+ 		"	beqzl	%1, 1b					\n"
+ 		: "=" GCC_OFF_SMALL_ASM() (rw->lock), "=&r" (tmp)
+diff --git a/arch/mips/kernel/entry.S b/arch/mips/kernel/entry.S
+index af41ba6db960..7791840cf22c 100644
+--- a/arch/mips/kernel/entry.S
++++ b/arch/mips/kernel/entry.S
+@@ -10,6 +10,7 @@
+ 
+ #include <asm/asm.h>
+ #include <asm/asmmacro.h>
++#include <asm/compiler.h>
+ #include <asm/regdef.h>
+ #include <asm/mipsregs.h>
+ #include <asm/stackframe.h>
+@@ -185,7 +186,7 @@ syscall_exit_work:
+  * For C code use the inline version named instruction_hazard().
+  */
+ LEAF(mips_ihb)
+-	.set	mips32r2
++	.set	MIPS_ISA_LEVEL_RAW
+ 	jr.hb	ra
+ 	nop
+ 	END(mips_ihb)
+diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
+index bed7590e475f..d5589bedd0a4 100644
+--- a/arch/mips/kernel/smp-cps.c
++++ b/arch/mips/kernel/smp-cps.c
+@@ -88,6 +88,12 @@ static void __init cps_smp_setup(void)
+ 
+ 	/* Make core 0 coherent with everything */
+ 	write_gcr_cl_coherence(0xff);
++
++#ifdef CONFIG_MIPS_MT_FPAFF
++	/* If we have an FPU, enroll ourselves in the FPU-full mask */
++	if (cpu_has_fpu)
++		cpu_set(0, mt_fpu_cpumask);
++#endif /* CONFIG_MIPS_MT_FPAFF */
+ }
+ 
+ static void __init cps_prepare_cpus(unsigned int max_cpus)
+diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
+index 7e3ea7766822..77d96db8253c 100644
+--- a/arch/mips/mm/cache.c
++++ b/arch/mips/mm/cache.c
+@@ -119,36 +119,37 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
+ 
+ EXPORT_SYMBOL(__flush_anon_page);
+ 
+-static void mips_flush_dcache_from_pte(pte_t pteval, unsigned long address)
++void __flush_icache_page(struct vm_area_struct *vma, struct page *page)
++{
++	unsigned long addr;
++
++	if (PageHighMem(page))
++		return;
++
++	addr = (unsigned long) page_address(page);
++	flush_data_cache_page(addr);
++}
++EXPORT_SYMBOL_GPL(__flush_icache_page);
++
++void __update_cache(struct vm_area_struct *vma, unsigned long address,
++	pte_t pte)
+ {
+ 	struct page *page;
+-	unsigned long pfn = pte_pfn(pteval);
++	unsigned long pfn, addr;
++	int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+ 
++	pfn = pte_pfn(pte);
+ 	if (unlikely(!pfn_valid(pfn)))
+ 		return;
+-
+ 	page = pfn_to_page(pfn);
+ 	if (page_mapping(page) && Page_dcache_dirty(page)) {
+-		unsigned long page_addr = (unsigned long) page_address(page);
+-
+-		if (!cpu_has_ic_fills_f_dc ||
+-		    pages_do_alias(page_addr, address & PAGE_MASK))
+-			flush_data_cache_page(page_addr);
++		addr = (unsigned long) page_address(page);
++		if (exec || pages_do_alias(addr, address & PAGE_MASK))
++			flush_data_cache_page(addr);
+ 		ClearPageDcacheDirty(page);
+ 	}
+ }
+ 
+-void set_pte_at(struct mm_struct *mm, unsigned long addr,
+-        pte_t *ptep, pte_t pteval)
+-{
+-        if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc) {
+-                if (pte_present(pteval))
+-                        mips_flush_dcache_from_pte(pteval, addr);
+-        }
+-
+-        set_pte(ptep, pteval);
+-}
+-
+ unsigned long _page_cachable_default;
+ EXPORT_SYMBOL(_page_cachable_default);
+ 
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index d75ff73a2012..a79fd0af0224 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -501,26 +501,9 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
+ 	case tlb_indexed: tlbw = uasm_i_tlbwi; break;
+ 	}
+ 
+-	if (cpu_has_mips_r2_exec_hazard) {
+-		/*
+-		 * The architecture spec says an ehb is required here,
+-		 * but a number of cores do not have the hazard and
+-		 * using an ehb causes an expensive pipeline stall.
+-		 */
+-		switch (current_cpu_type()) {
+-		case CPU_M14KC:
+-		case CPU_74K:
+-		case CPU_1074K:
+-		case CPU_PROAPTIV:
+-		case CPU_P5600:
+-		case CPU_M5150:
+-		case CPU_QEMU_GENERIC:
+-			break;
+-
+-		default:
++	if (cpu_has_mips_r2_r6) {
++		if (cpu_has_mips_r2_exec_hazard)
+ 			uasm_i_ehb(p);
+-			break;
+-		}
+ 		tlbw(p);
+ 		return;
+ 	}
+diff --git a/arch/mips/netlogic/xlp/ahci-init-xlp2.c b/arch/mips/netlogic/xlp/ahci-init-xlp2.c
+index c83dbf3689e2..7b066a44e679 100644
+--- a/arch/mips/netlogic/xlp/ahci-init-xlp2.c
++++ b/arch/mips/netlogic/xlp/ahci-init-xlp2.c
+@@ -203,6 +203,7 @@ static u8 read_phy_reg(u64 regbase, u32 addr, u32 physel)
+ static void config_sata_phy(u64 regbase)
+ {
+ 	u32 port, i, reg;
++	u8 val;
+ 
+ 	for (port = 0; port < 2; port++) {
+ 		for (i = 0, reg = RXCDRCALFOSC0; reg <= CALDUTY; reg++, i++)
+@@ -210,6 +211,18 @@ static void config_sata_phy(u64 regbase)
+ 
+ 		for (i = 0, reg = RXDPIF; reg <= PPMDRIFTMAX_HI; reg++, i++)
+ 			write_phy_reg(regbase, reg, port, sata_phy_config2[i]);
++
++		/* Fix for PHY link up failures at lower temperatures */
++		write_phy_reg(regbase, 0x800F, port, 0x1f);
++
++		val = read_phy_reg(regbase, 0x0029, port);
++		write_phy_reg(regbase, 0x0029, port, val | (0x7 << 1));
++
++		val = read_phy_reg(regbase, 0x0056, port);
++		write_phy_reg(regbase, 0x0056, port, val & ~(1 << 3));
++
++		val = read_phy_reg(regbase, 0x0018, port);
++		write_phy_reg(regbase, 0x0018, port, val & ~(0x7 << 0));
+ 	}
+ }
+ 
+diff --git a/arch/mips/pci/Makefile b/arch/mips/pci/Makefile
+index 300591c6278d..2eda01e6e08f 100644
+--- a/arch/mips/pci/Makefile
++++ b/arch/mips/pci/Makefile
+@@ -43,7 +43,7 @@ obj-$(CONFIG_SIBYTE_BCM1x80)	+= pci-bcm1480.o pci-bcm1480ht.o
+ obj-$(CONFIG_SNI_RM)		+= fixup-sni.o ops-sni.o
+ obj-$(CONFIG_LANTIQ)		+= fixup-lantiq.o
+ obj-$(CONFIG_PCI_LANTIQ)	+= pci-lantiq.o ops-lantiq.o
+-obj-$(CONFIG_SOC_RT2880)	+= pci-rt2880.o
++obj-$(CONFIG_SOC_RT288X)	+= pci-rt2880.o
+ obj-$(CONFIG_SOC_RT3883)	+= pci-rt3883.o
+ obj-$(CONFIG_TANBAC_TB0219)	+= fixup-tb0219.o
+ obj-$(CONFIG_TANBAC_TB0226)	+= fixup-tb0226.o
+diff --git a/arch/mips/pci/pci-octeon.c b/arch/mips/pci/pci-octeon.c
+index a04af55d89f1..c258cd406fbb 100644
+--- a/arch/mips/pci/pci-octeon.c
++++ b/arch/mips/pci/pci-octeon.c
+@@ -214,6 +214,8 @@ const char *octeon_get_pci_interrupts(void)
+ 		return "AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
+ 	case CVMX_BOARD_TYPE_BBGW_REF:
+ 		return "AABCD";
++	case CVMX_BOARD_TYPE_CUST_DSR1000N:
++		return "CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC";
+ 	case CVMX_BOARD_TYPE_THUNDER:
+ 	case CVMX_BOARD_TYPE_EBH3000:
+ 	default:
+@@ -271,9 +273,6 @@ static int octeon_read_config(struct pci_bus *bus, unsigned int devfn,
+ 	pci_addr.s.func = devfn & 0x7;
+ 	pci_addr.s.reg = reg;
+ 
+-#if PCI_CONFIG_SPACE_DELAY
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	switch (size) {
+ 	case 4:
+ 		*val = le32_to_cpu(cvmx_read64_uint32(pci_addr.u64));
+@@ -308,9 +307,6 @@ static int octeon_write_config(struct pci_bus *bus, unsigned int devfn,
+ 	pci_addr.s.func = devfn & 0x7;
+ 	pci_addr.s.reg = reg;
+ 
+-#if PCI_CONFIG_SPACE_DELAY
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	switch (size) {
+ 	case 4:
+ 		cvmx_write64_uint32(pci_addr.u64, cpu_to_le32(val));
+diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c
+index 1bb0b2bf8d6e..99f3db4f0a9b 100644
+--- a/arch/mips/pci/pcie-octeon.c
++++ b/arch/mips/pci/pcie-octeon.c
+@@ -1762,14 +1762,6 @@ static int octeon_pcie_write_config(unsigned int pcie_port, struct pci_bus *bus,
+ 	default:
+ 		return PCIBIOS_FUNC_NOT_SUPPORTED;
+ 	}
+-#if PCI_CONFIG_SPACE_DELAY
+-	/*
+-	 * Delay on writes so that devices have time to come up. Some
+-	 * bridges need this to allow time for the secondary busses to
+-	 * work
+-	 */
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	return PCIBIOS_SUCCESSFUL;
+ }
+ 
+diff --git a/arch/mips/ralink/Kconfig b/arch/mips/ralink/Kconfig
+index b1c52ca580f9..e9bc8c96174e 100644
+--- a/arch/mips/ralink/Kconfig
++++ b/arch/mips/ralink/Kconfig
+@@ -7,6 +7,11 @@ config CLKEVT_RT3352
+ 	select CLKSRC_OF
+ 	select CLKSRC_MMIO
+ 
++config RALINK_ILL_ACC
++	bool
++	depends on SOC_RT305X
++	default y
++
+ choice
+ 	prompt "Ralink SoC selection"
+ 	default SOC_RT305X
+diff --git a/drivers/acpi/sbs.c b/drivers/acpi/sbs.c
+index a7a3edd28beb..f23179e84128 100644
+--- a/drivers/acpi/sbs.c
++++ b/drivers/acpi/sbs.c
+@@ -670,7 +670,7 @@ static int acpi_sbs_add(struct acpi_device *device)
+ 	if (!sbs_manager_broken) {
+ 		result = acpi_manager_get_info(sbs);
+ 		if (!result) {
+-			sbs->manager_present = 0;
++			sbs->manager_present = 1;
+ 			for (id = 0; id < MAX_SBS_BAT; ++id)
+ 				if ((sbs->batteries_supported & (1 << id)))
+ 					acpi_battery_add(sbs, id);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index b40af3203089..b67066d0d9a6 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -2264,6 +2264,11 @@ static bool rbd_img_obj_end_request(struct rbd_obj_request *obj_request)
+ 			result, xferred);
+ 		if (!img_request->result)
+ 			img_request->result = result;
++		/*
++		 * Need to end I/O on the entire obj_request worth of
++		 * bytes in case of error.
++		 */
++		xferred = obj_request->length;
+ 	}
+ 
+ 	/* Image object requests don't own their page array */
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 9bd56116fd5a..1afc0b419da2 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -580,6 +580,9 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
+ 		else
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
+ 
++		/* if there is no audio, set MINM_OVER_MAXP  */
++		if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
++			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		if (rdev->family < CHIP_RV770)
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		/* use frac fb div on APUs */
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index c39c1d0d9d4e..f20eb32406d1 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -1729,17 +1729,15 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
+ 	struct drm_device *dev = encoder->dev;
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+-	struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 	int encoder_mode = atombios_get_encoder_mode(encoder);
+ 
+ 	DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n",
+ 		  radeon_encoder->encoder_id, mode, radeon_encoder->devices,
+ 		  radeon_encoder->active_device);
+ 
+-	if (connector && (radeon_audio != 0) &&
++	if ((radeon_audio != 0) &&
+ 	    ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
+-	     (ENCODER_MODE_IS_DP(encoder_mode) &&
+-	      drm_detect_monitor_audio(radeon_connector_edid(connector)))))
++	     ENCODER_MODE_IS_DP(encoder_mode)))
+ 		radeon_audio_dpms(encoder, mode);
+ 
+ 	switch (radeon_encoder->encoder_id) {
+diff --git a/drivers/gpu/drm/radeon/dce6_afmt.c b/drivers/gpu/drm/radeon/dce6_afmt.c
+index 3adc2afe32aa..68fd9fc677e3 100644
+--- a/drivers/gpu/drm/radeon/dce6_afmt.c
++++ b/drivers/gpu/drm/radeon/dce6_afmt.c
+@@ -295,28 +295,3 @@ void dce6_dp_audio_set_dto(struct radeon_device *rdev,
+ 		WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
+ 	}
+ }
+-
+-void dce6_dp_enable(struct drm_encoder *encoder, bool enable)
+-{
+-	struct drm_device *dev = encoder->dev;
+-	struct radeon_device *rdev = dev->dev_private;
+-	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+-	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+-
+-	if (!dig || !dig->afmt)
+-		return;
+-
+-	if (enable) {
+-		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
+-		       EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
+-		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
+-		       EVERGREEN_DP_SEC_ASP_ENABLE |		/* Audio packet transmission */
+-		       EVERGREEN_DP_SEC_ATP_ENABLE |		/* Audio timestamp packet transmission */
+-		       EVERGREEN_DP_SEC_AIP_ENABLE |		/* Audio infoframe packet transmission */
+-		       EVERGREEN_DP_SEC_STREAM_ENABLE);	/* Master enable for secondary stream engine */
+-	} else {
+-		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
+-	}
+-
+-	dig->afmt->enabled = enable;
+-}
+diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+index c18d4ecbd95d..0926739c9fa7 100644
+--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+@@ -219,13 +219,9 @@ void evergreen_set_avi_packet(struct radeon_device *rdev, u32 offset,
+ 	WREG32(AFMT_AVI_INFO3 + offset,
+ 		frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
+ 
+-	WREG32_OR(HDMI_INFOFRAME_CONTROL0 + offset,
+-		HDMI_AVI_INFO_SEND |	/* enable AVI info frames */
+-		HDMI_AVI_INFO_CONT);	/* required for audio info values to be updated */
+-
+ 	WREG32_P(HDMI_INFOFRAME_CONTROL1 + offset,
+-		HDMI_AVI_INFO_LINE(2),	/* anything other than 0 */
+-		~HDMI_AVI_INFO_LINE_MASK);
++		 HDMI_AVI_INFO_LINE(2),	/* anything other than 0 */
++		 ~HDMI_AVI_INFO_LINE_MASK);
+ }
+ 
+ void dce4_hdmi_audio_set_dto(struct radeon_device *rdev,
+@@ -370,9 +366,13 @@ void dce4_set_audio_packet(struct drm_encoder *encoder, u32 offset)
+ 	WREG32(AFMT_AUDIO_PACKET_CONTROL2 + offset,
+ 		AFMT_AUDIO_CHANNEL_ENABLE(0xff));
+ 
++	WREG32(HDMI_AUDIO_PACKET_CONTROL + offset,
++	       HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
++	       HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be sufficient for all audio modes and small enough for all hblanks */
++
+ 	/* allow 60958 channel status and send audio packets fields to be updated */
+-	WREG32(AFMT_AUDIO_PACKET_CONTROL + offset,
+-		AFMT_AUDIO_SAMPLE_SEND | AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
++	WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + offset,
++		  AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
+ }
+ 
+ 
+@@ -398,17 +398,26 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
+ 		return;
+ 
+ 	if (enable) {
+-		WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset,
+-		       HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
+-
+-		WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset,
+-		       HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
+-		       HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be sufficient for all audio modes and small enough for all hblanks */
++		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+-		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
+-		       HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
+-		       HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
++		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
++			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
++			       HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
++			       HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
++			       HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
++			WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++				  AFMT_AUDIO_SAMPLE_SEND);
++		} else {
++			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
++			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
++			       HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */
++			WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++				   ~AFMT_AUDIO_SAMPLE_SEND);
++		}
+ 	} else {
++		WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			   ~AFMT_AUDIO_SAMPLE_SEND);
+ 		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0);
+ 	}
+ 
+@@ -424,20 +433,24 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ 	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
++	struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+ 	if (!dig || !dig->afmt)
+ 		return;
+ 
+-	if (enable) {
++	if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 		struct radeon_connector_atom_dig *dig_connector;
+ 		uint32_t val;
+ 
++		WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			  AFMT_AUDIO_SAMPLE_SEND);
++
+ 		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
+ 		       EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
+ 
+-		if (radeon_connector->con_priv) {
++		if (!ASIC_IS_DCE6(rdev) && radeon_connector->con_priv) {
+ 			dig_connector = radeon_connector->con_priv;
+ 			val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset);
+ 			val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
+@@ -457,6 +470,8 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 			EVERGREEN_DP_SEC_STREAM_ENABLE);	/* Master enable for secondary stream engine */
+ 	} else {
+ 		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
++		WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			   ~AFMT_AUDIO_SAMPLE_SEND);
+ 	}
+ 
+ 	dig->afmt->enabled = enable;
+diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
+index dd6606b8e23c..e85894ade95c 100644
+--- a/drivers/gpu/drm/radeon/r600_hdmi.c
++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
+@@ -228,12 +228,13 @@ void r600_set_avi_packet(struct radeon_device *rdev, u32 offset,
+ 	WREG32(HDMI0_AVI_INFO3 + offset,
+ 		frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
+ 
++	WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset,
++		  HDMI0_AVI_INFO_LINE(2));	/* anything other than 0 */
++
+ 	WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset,
+-		HDMI0_AVI_INFO_SEND |	/* enable AVI info frames */
+-		HDMI0_AVI_INFO_CONT);	/* send AVI info frames every frame/field */
++		  HDMI0_AVI_INFO_SEND |	/* enable AVI info frames */
++		  HDMI0_AVI_INFO_CONT);	/* send AVI info frames every frame/field */
+ 
+-	WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset,
+-		HDMI0_AVI_INFO_LINE(2));	/* anything other than 0 */
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b21ef69a34ac..b7d33a13db9f 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -102,7 +102,6 @@ static void radeon_audio_dp_mode_set(struct drm_encoder *encoder,
+ void r600_hdmi_enable(struct drm_encoder *encoder, bool enable);
+ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable);
+ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable);
+-void dce6_dp_enable(struct drm_encoder *encoder, bool enable);
+ 
+ static const u32 pin_offsets[7] =
+ {
+@@ -240,7 +239,7 @@ static struct radeon_audio_funcs dce6_dp_funcs = {
+ 	.set_avi_packet = evergreen_set_avi_packet,
+ 	.set_audio_packet = dce4_set_audio_packet,
+ 	.mode_set = radeon_audio_dp_mode_set,
+-	.dpms = dce6_dp_enable,
++	.dpms = evergreen_dp_enable,
+ };
+ 
+ static void radeon_audio_interface_init(struct radeon_device *rdev)
+@@ -461,30 +460,33 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	if (!connector || !connector->encoder)
+ 		return;
+ 
++	if (!radeon_encoder_is_digital(connector->encoder))
++		return;
++
+ 	rdev = connector->encoder->dev->dev_private;
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+-	if (status == connector_status_connected) {
+-		struct radeon_connector *radeon_connector;
+-		int sink_type;
+-
+-		if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+-			radeon_encoder->audio = NULL;
+-			return;
+-		}
++	if (!dig->afmt)
++		return;
+ 
+-		radeon_connector = to_radeon_connector(connector);
+-		sink_type = radeon_dp_getsinktype(radeon_connector);
++	if (status == connector_status_connected) {
++		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 
+ 		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
+-			sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
++		    radeon_dp_getsinktype(radeon_connector) ==
++		    CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ 			radeon_encoder->audio = rdev->audio.dp_funcs;
+ 		else
+ 			radeon_encoder->audio = rdev->audio.hdmi_funcs;
+ 
+ 		dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
+-		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
++		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
++		} else {
++			radeon_audio_enable(rdev, dig->afmt->pin, 0);
++			dig->afmt->pin = NULL;
++		}
+ 	} else {
+ 		radeon_audio_enable(rdev, dig->afmt->pin, 0);
+ 		dig->afmt->pin = NULL;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 27def67cb6be..27973e3faf0e 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1333,8 +1333,10 @@ out:
+ 	/* updated in get modes as well since we need to know if it's analog or digital */
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0)
++	if (radeon_audio != 0) {
++		radeon_connector_get_edid(connector);
+ 		radeon_audio_detect(connector, ret);
++	}
+ 
+ exit:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+@@ -1659,8 +1661,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ 
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0)
++	if (radeon_audio != 0) {
++		radeon_connector_get_edid(connector);
+ 		radeon_audio_detect(connector, ret);
++	}
+ 
+ out:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
+index 4d0f96cc3da4..ab39b85e0f76 100644
+--- a/drivers/gpu/drm/radeon/radeon_cs.c
++++ b/drivers/gpu/drm/radeon/radeon_cs.c
+@@ -88,7 +88,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
+ 	p->dma_reloc_idx = 0;
+ 	/* FIXME: we assume that each reloc uses 4 dwords */
+ 	p->nrelocs = chunk->length_dw / 4;
+-	p->relocs = kcalloc(p->nrelocs, sizeof(struct radeon_bo_list), GFP_KERNEL);
++	p->relocs = drm_calloc_large(p->nrelocs, sizeof(struct radeon_bo_list));
+ 	if (p->relocs == NULL) {
+ 		return -ENOMEM;
+ 	}
+@@ -428,7 +428,7 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
+ 		}
+ 	}
+ 	kfree(parser->track);
+-	kfree(parser->relocs);
++	drm_free_large(parser->relocs);
+ 	drm_free_large(parser->vm_bos);
+ 	for (i = 0; i < parser->nchunks; i++)
+ 		drm_free_large(parser->chunks[i].kdata);
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index 2a5a4a9e772d..de42fc4a22b8 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -473,6 +473,23 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 	}
+ 
+ 	mutex_lock(&vm->mutex);
++	soffset /= RADEON_GPU_PAGE_SIZE;
++	eoffset /= RADEON_GPU_PAGE_SIZE;
++	if (soffset || eoffset) {
++		struct interval_tree_node *it;
++		it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
++		if (it && it != &bo_va->it) {
++			struct radeon_bo_va *tmp;
++			tmp = container_of(it, struct radeon_bo_va, it);
++			/* bo and tmp overlap, invalid offset */
++			dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
++				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
++				soffset, tmp->bo, tmp->it.start, tmp->it.last);
++			mutex_unlock(&vm->mutex);
++			return -EINVAL;
++		}
++	}
++
+ 	if (bo_va->it.start || bo_va->it.last) {
+ 		if (bo_va->addr) {
+ 			/* add a clone of the bo_va to clear the old address */
+@@ -490,6 +507,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 			spin_lock(&vm->status_lock);
+ 			list_add(&tmp->vm_status, &vm->freed);
+ 			spin_unlock(&vm->status_lock);
++
++			bo_va->addr = 0;
+ 		}
+ 
+ 		interval_tree_remove(&bo_va->it, &vm->va);
+@@ -497,21 +516,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		bo_va->it.last = 0;
+ 	}
+ 
+-	soffset /= RADEON_GPU_PAGE_SIZE;
+-	eoffset /= RADEON_GPU_PAGE_SIZE;
+ 	if (soffset || eoffset) {
+-		struct interval_tree_node *it;
+-		it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
+-		if (it) {
+-			struct radeon_bo_va *tmp;
+-			tmp = container_of(it, struct radeon_bo_va, it);
+-			/* bo and tmp overlap, invalid offset */
+-			dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
+-				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
+-				soffset, tmp->bo, tmp->it.start, tmp->it.last);
+-			mutex_unlock(&vm->mutex);
+-			return -EINVAL;
+-		}
+ 		bo_va->it.start = soffset;
+ 		bo_va->it.last = eoffset - 1;
+ 		interval_tree_insert(&bo_va->it, &vm->va);
+@@ -1107,7 +1112,8 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
+ 	list_del(&bo_va->bo_list);
+ 
+ 	mutex_lock(&vm->mutex);
+-	interval_tree_remove(&bo_va->it, &vm->va);
++	if (bo_va->it.start || bo_va->it.last)
++		interval_tree_remove(&bo_va->it, &vm->va);
+ 	spin_lock(&vm->status_lock);
+ 	list_del(&bo_va->vm_status);
+ 
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 7be11651b7e6..9dbb3154d559 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2924,6 +2924,7 @@ struct si_dpm_quirk {
+ static struct si_dpm_quirk si_dpm_quirk_list[] = {
+ 	/* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */
+ 	{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
++	{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 },
+ 	{ 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 3736f71bdec5..18def3022f6e 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -787,7 +787,7 @@ int vmbus_request_offers(void)
+ {
+ 	struct vmbus_channel_message_header *msg;
+ 	struct vmbus_channel_msginfo *msginfo;
+-	int ret, t;
++	int ret;
+ 
+ 	msginfo = kmalloc(sizeof(*msginfo) +
+ 			  sizeof(struct vmbus_channel_message_header),
+@@ -795,8 +795,6 @@ int vmbus_request_offers(void)
+ 	if (!msginfo)
+ 		return -ENOMEM;
+ 
+-	init_completion(&msginfo->waitevent);
+-
+ 	msg = (struct vmbus_channel_message_header *)msginfo->msg;
+ 
+ 	msg->msgtype = CHANNELMSG_REQUESTOFFERS;
+@@ -810,14 +808,6 @@ int vmbus_request_offers(void)
+ 		goto cleanup;
+ 	}
+ 
+-	t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
+-	if (t == 0) {
+-		ret = -ETIMEDOUT;
+-		goto cleanup;
+-	}
+-
+-
+-
+ cleanup:
+ 	kfree(msginfo);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index ee394dc68303..ec1ea8ba7aac 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -492,7 +492,7 @@ int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr,
+ 		memoffset = (mtype * (edc_size * 1024 * 1024));
+ 	else {
+ 		mc_size = EXT_MEM0_SIZE_G(t4_read_reg(adap,
+-						      MA_EXT_MEMORY1_BAR_A));
++						      MA_EXT_MEMORY0_BAR_A));
+ 		memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 3485acf03014..2f1324bed7b3 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -1467,6 +1467,7 @@ static void mlx4_en_service_task(struct work_struct *work)
+ 		if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS)
+ 			mlx4_en_ptp_overflow_check(mdev);
+ 
++		mlx4_en_recover_from_oom(priv);
+ 		queue_delayed_work(mdev->workqueue, &priv->service_task,
+ 				   SERVICE_TASK_DELAY);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index 698d60de1255..05ec5e151ded 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -244,6 +244,12 @@ static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv,
+ 	return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
+ }
+ 
++static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
++{
++	BUG_ON((u32)(ring->prod - ring->cons) > ring->actual_size);
++	return ring->prod == ring->cons;
++}
++
+ static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
+ {
+ 	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
+@@ -315,8 +321,7 @@ static void mlx4_en_free_rx_buf(struct mlx4_en_priv *priv,
+ 	       ring->cons, ring->prod);
+ 
+ 	/* Unmap and free Rx buffers */
+-	BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size);
+-	while (ring->cons != ring->prod) {
++	while (!mlx4_en_is_ring_empty(ring)) {
+ 		index = ring->cons & ring->size_mask;
+ 		en_dbg(DRV, priv, "Processing descriptor:%d\n", index);
+ 		mlx4_en_free_rx_desc(priv, ring, index);
+@@ -491,6 +496,23 @@ err_allocator:
+ 	return err;
+ }
+ 
++/* We recover from out of memory by scheduling our napi poll
++ * function (mlx4_en_process_cq), which tries to allocate
++ * all missing RX buffers (call to mlx4_en_refill_rx_buffers).
++ */
++void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
++{
++	int ring;
++
++	if (!priv->port_up)
++		return;
++
++	for (ring = 0; ring < priv->rx_ring_num; ring++) {
++		if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
++			napi_reschedule(&priv->rx_cq[ring]->napi);
++	}
++}
++
+ void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
+ 			     struct mlx4_en_rx_ring **pring,
+ 			     u32 size, u16 stride)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 55f9f5c5344e..8c234ec1d8aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -143,8 +143,10 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
+ 	ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type;
+ 	ring->queue_index = queue_index;
+ 
+-	if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
+-		cpumask_set_cpu(queue_index, &ring->affinity_mask);
++	if (queue_index < priv->num_tx_rings_p_up)
++		cpumask_set_cpu_local_first(queue_index,
++					    priv->mdev->dev->numa_node,
++					    &ring->affinity_mask);
+ 
+ 	*pring = ring;
+ 	return 0;
+@@ -213,7 +215,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
+ 
+ 	err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
+ 			       &ring->qp, &ring->qp_state);
+-	if (!user_prio && cpu_online(ring->queue_index))
++	if (!cpumask_empty(&ring->affinity_mask))
+ 		netif_set_xps_queue(priv->dev, &ring->affinity_mask,
+ 				    ring->queue_index);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index ebbe244e80dd..8687c8d54227 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -790,6 +790,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
+ void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
+ 				struct mlx4_en_tx_ring *ring);
+ void mlx4_en_set_num_rx_rings(struct mlx4_en_dev *mdev);
++void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv);
+ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
+ 			   struct mlx4_en_rx_ring **pring,
+ 			   u32 size, u16 stride, int node);
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 7600639db4c4..add419d6ff34 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -149,7 +149,6 @@ static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset);
+ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry *sglistarg);
+ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id);
+ static char *twa_string_lookup(twa_message_type *table, unsigned int aen_code);
+-static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id);
+ 
+ /* Functions */
+ 
+@@ -1340,11 +1339,11 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
+ 				}
+ 
+ 				/* Now complete the io */
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 				tw_dev->state[request_id] = TW_S_COMPLETED;
+ 				twa_free_request_id(tw_dev, request_id);
+ 				tw_dev->posted_request_count--;
+-				tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-				twa_unmap_scsi_data(tw_dev, request_id);
+ 			}
+ 
+ 			/* Check for valid status after each drain */
+@@ -1402,26 +1401,6 @@ static void twa_load_sgl(TW_Device_Extension *tw_dev, TW_Command_Full *full_comm
+ 	}
+ } /* End twa_load_sgl() */
+ 
+-/* This function will perform a pci-dma mapping for a scatter gather list */
+-static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	int use_sg;
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (!use_sg)
+-		return 0;
+-	else if (use_sg < 0) {
+-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End twa_map_scsi_sg_data() */
+-
+ /* This function will poll for a response interrupt of a request */
+ static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds)
+ {
+@@ -1600,9 +1579,11 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
+ 		    (tw_dev->state[i] != TW_S_INITIAL) &&
+ 		    (tw_dev->state[i] != TW_S_COMPLETED)) {
+ 			if (tw_dev->srb[i]) {
+-				tw_dev->srb[i]->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				twa_unmap_scsi_data(tw_dev, i);
++				struct scsi_cmnd *cmd = tw_dev->srb[i];
++
++				cmd->result = (DID_RESET << 16);
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 			}
+ 		}
+ 	}
+@@ -1781,21 +1762,18 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ 	switch (retval) {
+ 	case SCSI_MLQUEUE_HOST_BUSY:
++		scsi_dma_unmap(SCpnt);
+ 		twa_free_request_id(tw_dev, request_id);
+-		twa_unmap_scsi_data(tw_dev, request_id);
+ 		break;
+ 	case 1:
+-		tw_dev->state[request_id] = TW_S_COMPLETED;
+-		twa_free_request_id(tw_dev, request_id);
+-		twa_unmap_scsi_data(tw_dev, request_id);
+ 		SCpnt->result = (DID_ERROR << 16);
++		scsi_dma_unmap(SCpnt);
+ 		done(SCpnt);
++		tw_dev->state[request_id] = TW_S_COMPLETED;
++		twa_free_request_id(tw_dev, request_id);
+ 		retval = 0;
+ 	}
+ out:
+@@ -1863,8 +1841,8 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
+ 				command_packet->sg_list[0].address = TW_CPU_TO_SGL(tw_dev->generic_buffer_phys[request_id]);
+ 				command_packet->sg_list[0].length = cpu_to_le32(TW_MIN_SGL_LENGTH);
+ 			} else {
+-				sg_count = twa_map_scsi_sg_data(tw_dev, request_id);
+-				if (sg_count == 0)
++				sg_count = scsi_dma_map(srb);
++				if (sg_count < 0)
+ 					goto out;
+ 
+ 				scsi_for_each_sg(srb, sg, sg_count, i) {
+@@ -1979,15 +1957,6 @@ static char *twa_string_lookup(twa_message_type *table, unsigned int code)
+ 	return(table[index].text);
+ } /* End twa_string_lookup() */
+ 
+-/* This function will perform a pci-dma unmap */
+-static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End twa_unmap_scsi_data() */
+-
+ /* This function gets called when a disk is coming on-line */
+ static int twa_slave_configure(struct scsi_device *sdev)
+ {
+diff --git a/drivers/scsi/3w-9xxx.h b/drivers/scsi/3w-9xxx.h
+index 040f7214e5b7..0fdc83cfa0e1 100644
+--- a/drivers/scsi/3w-9xxx.h
++++ b/drivers/scsi/3w-9xxx.h
+@@ -324,11 +324,6 @@ static twa_message_type twa_error_table[] = {
+ #define TW_CURRENT_DRIVER_BUILD 0
+ #define TW_CURRENT_DRIVER_BRANCH 0
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SINGLE  1
+-#define TW_PHASE_SGLIST  2
+-
+ /* Misc defines */
+ #define TW_9550SX_DRAIN_COMPLETED	      0xFFFF
+ #define TW_SECTOR_SIZE                        512
+diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
+index 2361772d5909..f8374850f714 100644
+--- a/drivers/scsi/3w-sas.c
++++ b/drivers/scsi/3w-sas.c
+@@ -290,26 +290,6 @@ static int twl_post_command_packet(TW_Device_Extension *tw_dev, int request_id)
+ 	return 0;
+ } /* End twl_post_command_packet() */
+ 
+-/* This function will perform a pci-dma mapping for a scatter gather list */
+-static int twl_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	int use_sg;
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (!use_sg)
+-		return 0;
+-	else if (use_sg < 0) {
+-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1, "Failed to map scatter gather list");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End twl_map_scsi_sg_data() */
+-
+ /* This function hands scsi cdb's to the firmware */
+ static int twl_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry_ISO *sglistarg)
+ {
+@@ -357,8 +337,8 @@ static int twl_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
+ 	if (!sglistarg) {
+ 		/* Map sglist from scsi layer to cmd packet */
+ 		if (scsi_sg_count(srb)) {
+-			sg_count = twl_map_scsi_sg_data(tw_dev, request_id);
+-			if (sg_count == 0)
++			sg_count = scsi_dma_map(srb);
++			if (sg_count <= 0)
+ 				goto out;
+ 
+ 			scsi_for_each_sg(srb, sg, sg_count, i) {
+@@ -1102,15 +1082,6 @@ out:
+ 	return retval;
+ } /* End twl_initialize_device_extension() */
+ 
+-/* This function will perform a pci-dma unmap */
+-static void twl_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End twl_unmap_scsi_data() */
+-
+ /* This function will handle attention interrupts */
+ static int twl_handle_attention_interrupt(TW_Device_Extension *tw_dev)
+ {
+@@ -1251,11 +1222,11 @@ static irqreturn_t twl_interrupt(int irq, void *dev_instance)
+ 			}
+ 
+ 			/* Now complete the io */
++			scsi_dma_unmap(cmd);
++			cmd->scsi_done(cmd);
+ 			tw_dev->state[request_id] = TW_S_COMPLETED;
+ 			twl_free_request_id(tw_dev, request_id);
+ 			tw_dev->posted_request_count--;
+-			tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-			twl_unmap_scsi_data(tw_dev, request_id);
+ 		}
+ 
+ 		/* Check for another response interrupt */
+@@ -1400,10 +1371,12 @@ static int twl_reset_device_extension(TW_Device_Extension *tw_dev, int ioctl_res
+ 		if ((tw_dev->state[i] != TW_S_FINISHED) &&
+ 		    (tw_dev->state[i] != TW_S_INITIAL) &&
+ 		    (tw_dev->state[i] != TW_S_COMPLETED)) {
+-			if (tw_dev->srb[i]) {
+-				tw_dev->srb[i]->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				twl_unmap_scsi_data(tw_dev, i);
++			struct scsi_cmnd *cmd = tw_dev->srb[i];
++
++			if (cmd) {
++				cmd->result = (DID_RESET << 16);
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 			}
+ 		}
+ 	}
+@@ -1507,9 +1480,6 @@ static int twl_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	retval = twl_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ 	if (retval) {
+ 		tw_dev->state[request_id] = TW_S_COMPLETED;
+diff --git a/drivers/scsi/3w-sas.h b/drivers/scsi/3w-sas.h
+index d474892701d4..fec6449c7595 100644
+--- a/drivers/scsi/3w-sas.h
++++ b/drivers/scsi/3w-sas.h
+@@ -103,10 +103,6 @@ static char *twl_aen_severity_table[] =
+ #define TW_CURRENT_DRIVER_BUILD 0
+ #define TW_CURRENT_DRIVER_BRANCH 0
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SGLIST  2
+-
+ /* Misc defines */
+ #define TW_SECTOR_SIZE                        512
+ #define TW_MAX_UNITS			      32
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index c75f2048319f..2940bd769936 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -1271,32 +1271,6 @@ static int tw_initialize_device_extension(TW_Device_Extension *tw_dev)
+ 	return 0;
+ } /* End tw_initialize_device_extension() */
+ 
+-static int tw_map_scsi_sg_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
+-{
+-	int use_sg;
+-
+-	dprintk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data()\n");
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (use_sg < 0) {
+-		printk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data(): pci_map_sg() failed.\n");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End tw_map_scsi_sg_data() */
+-
+-static void tw_unmap_scsi_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
+-{
+-	dprintk(KERN_WARNING "3w-xxxx: tw_unmap_scsi_data()\n");
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End tw_unmap_scsi_data() */
+-
+ /* This function will reset a device extension */
+ static int tw_reset_device_extension(TW_Device_Extension *tw_dev)
+ {
+@@ -1319,8 +1293,8 @@ static int tw_reset_device_extension(TW_Device_Extension *tw_dev)
+ 			srb = tw_dev->srb[i];
+ 			if (srb != NULL) {
+ 				srb->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[i]);
++				scsi_dma_unmap(srb);
++				srb->scsi_done(srb);
+ 			}
+ 		}
+ 	}
+@@ -1767,8 +1741,8 @@ static int tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id)
+ 	command_packet->byte8.io.lba = lba;
+ 	command_packet->byte6.block_count = num_sectors;
+ 
+-	use_sg = tw_map_scsi_sg_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
+-	if (!use_sg)
++	use_sg = scsi_dma_map(srb);
++	if (use_sg <= 0)
+ 		return 1;
+ 
+ 	scsi_for_each_sg(tw_dev->srb[request_id], sg, use_sg, i) {
+@@ -1955,9 +1929,6 @@ static int tw_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_c
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	switch (*command) {
+ 		case READ_10:
+ 		case READ_6:
+@@ -2185,12 +2156,11 @@ static irqreturn_t tw_interrupt(int irq, void *dev_instance)
+ 
+ 				/* Now complete the io */
+ 				if ((error != TW_ISR_DONT_COMPLETE)) {
++					scsi_dma_unmap(tw_dev->srb[request_id]);
++					tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+ 					tw_dev->state[request_id] = TW_S_COMPLETED;
+ 					tw_state_request_finish(tw_dev, request_id);
+ 					tw_dev->posted_request_count--;
+-					tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-					
+-					tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
+ 				}
+ 			}
+ 				
+diff --git a/drivers/scsi/3w-xxxx.h b/drivers/scsi/3w-xxxx.h
+index 29b0b84ed69e..6f65e663d393 100644
+--- a/drivers/scsi/3w-xxxx.h
++++ b/drivers/scsi/3w-xxxx.h
+@@ -195,11 +195,6 @@ static unsigned char tw_sense_table[][4] =
+ #define TW_AEN_SMART_FAIL        0x000F
+ #define TW_AEN_SBUF_FAIL         0x0024
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SINGLE 1
+-#define TW_PHASE_SGLIST 2
+-
+ /* Misc defines */
+ #define TW_ALIGNMENT_6000		      64 /* 64 bytes */
+ #define TW_ALIGNMENT_7000                     4  /* 4 bytes */
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index 262ab837a704..9f77d23239a2 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -226,6 +226,7 @@ static struct {
+ 	{"PIONEER", "CD-ROM DRM-624X", NULL, BLIST_FORCELUN | BLIST_SINGLELUN},
+ 	{"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC},
+ 	{"Promise", "", NULL, BLIST_SPARSELUN},
++	{"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024},
+ 	{"QUANTUM", "XP34301", "1071", BLIST_NOTQ},
+ 	{"REGAL", "CDC-4X", NULL, BLIST_MAX5LUN | BLIST_SINGLELUN},
+ 	{"SanDisk", "ImageMate CF-SD1", NULL, BLIST_FORCELUN},
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 9c0a520d933c..3e6142f61499 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -897,6 +897,12 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 	 */
+ 	if (*bflags & BLIST_MAX_512)
+ 		blk_queue_max_hw_sectors(sdev->request_queue, 512);
++	/*
++	 * Max 1024 sector transfer length for targets that report incorrect
++	 * max/optimal lengths and relied on the old block layer safe default
++	 */
++	else if (*bflags & BLIST_MAX_1024)
++		blk_queue_max_hw_sectors(sdev->request_queue, 1024);
+ 
+ 	/*
+ 	 * Some devices may not want to have a start command automatically
+diff --git a/drivers/ssb/Kconfig b/drivers/ssb/Kconfig
+index 75b3603906c1..f0d22cdb51cd 100644
+--- a/drivers/ssb/Kconfig
++++ b/drivers/ssb/Kconfig
+@@ -130,6 +130,7 @@ config SSB_DRIVER_MIPS
+ 	bool "SSB Broadcom MIPS core driver"
+ 	depends on SSB && MIPS
+ 	select SSB_SERIAL
++	select SSB_SFLASH
+ 	help
+ 	  Driver for the Sonics Silicon Backplane attached
+ 	  Broadcom MIPS core.
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 4e959c43f680..6afce7eb3d74 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -880,6 +880,7 @@ static int atmel_prepare_tx_dma(struct uart_port *port)
+ 	config.direction = DMA_MEM_TO_DEV;
+ 	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	config.dst_addr = port->mapbase + ATMEL_US_THR;
++	config.dst_maxburst = 1;
+ 
+ 	ret = dmaengine_slave_config(atmel_port->chan_tx,
+ 				     &config);
+@@ -1059,6 +1060,7 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 	config.direction = DMA_DEV_TO_MEM;
+ 	config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	config.src_addr = port->mapbase + ATMEL_US_RHR;
++	config.src_maxburst = 1;
+ 
+ 	ret = dmaengine_slave_config(atmel_port->chan_rx,
+ 				     &config);
+diff --git a/drivers/tty/serial/of_serial.c b/drivers/tty/serial/of_serial.c
+index 33fb94f78967..0a52c8b55a5f 100644
+--- a/drivers/tty/serial/of_serial.c
++++ b/drivers/tty/serial/of_serial.c
+@@ -344,7 +344,6 @@ static struct of_device_id of_platform_serial_table[] = {
+ 	{ .compatible = "ibm,qpace-nwp-serial",
+ 		.data = (void *)PORT_NWPSERIAL, },
+ #endif
+-	{ .type = "serial",         .data = (void *)PORT_UNKNOWN, },
+ 	{ /* end of list */ },
+ };
+ 
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index 189f52e3111f..a0099a7f60d4 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -632,7 +632,8 @@ MODULE_DEVICE_TABLE(of, ulite_of_match);
+ 
+ static int ulite_probe(struct platform_device *pdev)
+ {
+-	struct resource *res, *res2;
++	struct resource *res;
++	int irq;
+ 	int id = pdev->id;
+ #ifdef CONFIG_OF
+ 	const __be32 *prop;
+@@ -646,11 +647,11 @@ static int ulite_probe(struct platform_device *pdev)
+ 	if (!res)
+ 		return -ENODEV;
+ 
+-	res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!res2)
+-		return -ENODEV;
++	irq = platform_get_irq(pdev, 0);
++	if (irq <= 0)
++		return -ENXIO;
+ 
+-	return ulite_assign(&pdev->dev, id, res->start, res2->start);
++	return ulite_assign(&pdev->dev, id, res->start, irq);
+ }
+ 
+ static int ulite_remove(struct platform_device *pdev)
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index cff531a51a78..54853a02ce9e 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1325,9 +1325,9 @@ static SIMPLE_DEV_PM_OPS(cdns_uart_dev_pm_ops, cdns_uart_suspend,
+  */
+ static int cdns_uart_probe(struct platform_device *pdev)
+ {
+-	int rc, id;
++	int rc, id, irq;
+ 	struct uart_port *port;
+-	struct resource *res, *res2;
++	struct resource *res;
+ 	struct cdns_uart *cdns_uart_data;
+ 
+ 	cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data),
+@@ -1374,9 +1374,9 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		goto err_out_clk_disable;
+ 	}
+ 
+-	res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!res2) {
+-		rc = -ENODEV;
++	irq = platform_get_irq(pdev, 0);
++	if (irq <= 0) {
++		rc = -ENXIO;
+ 		goto err_out_clk_disable;
+ 	}
+ 
+@@ -1405,7 +1405,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		 * and triggers invocation of the config_port() entry point.
+ 		 */
+ 		port->mapbase = res->start;
+-		port->irq = res2->start;
++		port->irq = irq;
+ 		port->dev = &pdev->dev;
+ 		port->uartclk = clk_get_rate(cdns_uart_data->uartclk);
+ 		port->private_data = cdns_uart_data;
+diff --git a/drivers/usb/chipidea/otg_fsm.c b/drivers/usb/chipidea/otg_fsm.c
+index 562e581f6765..3770330a2201 100644
+--- a/drivers/usb/chipidea/otg_fsm.c
++++ b/drivers/usb/chipidea/otg_fsm.c
+@@ -537,7 +537,6 @@ static int ci_otg_start_host(struct otg_fsm *fsm, int on)
+ {
+ 	struct ci_hdrc	*ci = container_of(fsm, struct ci_hdrc, fsm);
+ 
+-	mutex_unlock(&fsm->lock);
+ 	if (on) {
+ 		ci_role_stop(ci);
+ 		ci_role_start(ci, CI_ROLE_HOST);
+@@ -546,7 +545,6 @@ static int ci_otg_start_host(struct otg_fsm *fsm, int on)
+ 		hw_device_reset(ci);
+ 		ci_role_start(ci, CI_ROLE_GADGET);
+ 	}
+-	mutex_lock(&fsm->lock);
+ 	return 0;
+ }
+ 
+@@ -554,12 +552,10 @@ static int ci_otg_start_gadget(struct otg_fsm *fsm, int on)
+ {
+ 	struct ci_hdrc	*ci = container_of(fsm, struct ci_hdrc, fsm);
+ 
+-	mutex_unlock(&fsm->lock);
+ 	if (on)
+ 		usb_gadget_vbus_connect(&ci->gadget);
+ 	else
+ 		usb_gadget_vbus_disconnect(&ci->gadget);
+-	mutex_lock(&fsm->lock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 683617714e7c..220c0fd059bb 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1133,11 +1133,16 @@ static int acm_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	while (buflen > 0) {
++		elength = buffer[0];
++		if (!elength) {
++			dev_err(&intf->dev, "skipping garbage byte\n");
++			elength = 1;
++			goto next_desc;
++		}
+ 		if (buffer[1] != USB_DT_CS_INTERFACE) {
+ 			dev_err(&intf->dev, "skipping garbage\n");
+ 			goto next_desc;
+ 		}
+-		elength = buffer[0];
+ 
+ 		switch (buffer[2]) {
+ 		case USB_CDC_UNION_TYPE: /* we've found it */
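The reordering matters because every iteration must consume at least one byte: a descriptor reporting bLength == 0 would otherwise never advance the cursor and the loop would spin forever, and reading buffer[1] before validating the length risks running past the end of the buffer. A sketch of the loop shape this hunk assumes; the advance step at next_desc is not visible in the hunk and is paraphrased here:

	while (buflen > 0) {
		elength = buffer[0];
		if (!elength)
			elength = 1;	/* consume the garbage byte */
		/* ... validate and parse the descriptor ... */
	next_desc:
		buflen -= elength;
		buffer += elength;
	}
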
+diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
+index 9893d696fc97..f58caa9e6a27 100644
+--- a/drivers/usb/storage/uas-detect.h
++++ b/drivers/usb/storage/uas-detect.h
+@@ -51,7 +51,8 @@ static int uas_find_endpoints(struct usb_host_interface *alt,
+ }
+ 
+ static int uas_use_uas_driver(struct usb_interface *intf,
+-			      const struct usb_device_id *id)
++			      const struct usb_device_id *id,
++			      unsigned long *flags_ret)
+ {
+ 	struct usb_host_endpoint *eps[4] = { };
+ 	struct usb_device *udev = interface_to_usbdev(intf);
+@@ -73,7 +74,7 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 	 * this writing the following versions exist:
+ 	 * ASM1051 - no uas support version
+ 	 * ASM1051 - with broken (*) uas support
+-	 * ASM1053 - with working uas support
++	 * ASM1053 - with working uas support, but problems with large xfers
+ 	 * ASM1153 - with working uas support
+ 	 *
+ 	 * Devices with these chips re-use a number of device-ids over the
+@@ -103,6 +104,9 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 		} else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) {
+ 			/* Possibly an ASM1051, disable uas */
+ 			flags |= US_FL_IGNORE_UAS;
++		} else {
++			/* ASM1053, these have issues with large transfers */
++			flags |= US_FL_MAX_SECTORS_240;
+ 		}
+ 	}
+ 
+@@ -132,5 +136,8 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 		return 0;
+ 	}
+ 
++	if (flags_ret)
++		*flags_ret = flags;
++
+ 	return 1;
+ }
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 6cdabdc119a7..6d3122afeed3 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -759,7 +759,10 @@ static int uas_eh_bus_reset_handler(struct scsi_cmnd *cmnd)
+ 
+ static int uas_slave_alloc(struct scsi_device *sdev)
+ {
+-	sdev->hostdata = (void *)sdev->host->hostdata;
++	struct uas_dev_info *devinfo =
++		(struct uas_dev_info *)sdev->host->hostdata;
++
++	sdev->hostdata = devinfo;
+ 
+ 	/* USB has unusual DMA-alignment requirements: Although the
+ 	 * starting address of each scatter-gather element doesn't matter,
+@@ -778,6 +781,11 @@ static int uas_slave_alloc(struct scsi_device *sdev)
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
++	if (devinfo->flags & US_FL_MAX_SECTORS_64)
++		blk_queue_max_hw_sectors(sdev->request_queue, 64);
++	else if (devinfo->flags & US_FL_MAX_SECTORS_240)
++		blk_queue_max_hw_sectors(sdev->request_queue, 240);
++
+ 	return 0;
+ }
+ 
+@@ -887,8 +895,9 @@ static int uas_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	struct Scsi_Host *shost = NULL;
+ 	struct uas_dev_info *devinfo;
+ 	struct usb_device *udev = interface_to_usbdev(intf);
++	unsigned long dev_flags;
+ 
+-	if (!uas_use_uas_driver(intf, id))
++	if (!uas_use_uas_driver(intf, id, &dev_flags))
+ 		return -ENODEV;
+ 
+ 	if (uas_switch_interface(udev, intf))
+@@ -910,8 +919,7 @@ static int uas_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	devinfo->udev = udev;
+ 	devinfo->resetting = 0;
+ 	devinfo->shutdown = 0;
+-	devinfo->flags = id->driver_info;
+-	usb_stor_adjust_quirks(udev, &devinfo->flags);
++	devinfo->flags = dev_flags;
+ 	init_usb_anchor(&devinfo->cmd_urbs);
+ 	init_usb_anchor(&devinfo->sense_urbs);
+ 	init_usb_anchor(&devinfo->data_urbs);
+diff --git a/drivers/usb/storage/usb.c b/drivers/usb/storage/usb.c
+index 5600c33fcadb..6c10c888f35f 100644
+--- a/drivers/usb/storage/usb.c
++++ b/drivers/usb/storage/usb.c
+@@ -479,7 +479,8 @@ void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags)
+ 			US_FL_SINGLE_LUN | US_FL_NO_WP_DETECT |
+ 			US_FL_NO_READ_DISC_INFO | US_FL_NO_READ_CAPACITY_16 |
+ 			US_FL_INITIAL_READ10 | US_FL_WRITE_CACHE |
+-			US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES);
++			US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES |
++			US_FL_MAX_SECTORS_240);
+ 
+ 	p = quirks;
+ 	while (*p) {
+@@ -520,6 +521,9 @@ void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags)
+ 		case 'f':
+ 			f |= US_FL_NO_REPORT_OPCODES;
+ 			break;
++		case 'g':
++			f |= US_FL_MAX_SECTORS_240;
++			break;
+ 		case 'h':
+ 			f |= US_FL_CAPACITY_HEURISTICS;
+ 			break;
+@@ -1080,7 +1084,7 @@ static int storage_probe(struct usb_interface *intf,
+ 
+ 	/* If uas is enabled and this device can do uas then ignore it. */
+ #if IS_ENABLED(CONFIG_USB_UAS)
+-	if (uas_use_uas_driver(intf, id))
++	if (uas_use_uas_driver(intf, id, NULL))
+ 		return -ENXIO;
+ #endif
+ 
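With the new 'g' letter, the same 240-sector cap can be forced from the kernel command line through the existing quirks syntax; for example (the VID:PID pair below is a placeholder, not a known-bad device):

	usb-storage.quirks=abcd:1234:g
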
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f23d4be3280e..2b4c5423672d 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2403,7 +2403,7 @@ static noinline int btrfs_ioctl_snap_destroy(struct file *file,
+ 			"Attempt to delete subvolume %llu during send",
+ 			dest->root_key.objectid);
+ 		err = -EPERM;
+-		goto out_dput;
++		goto out_unlock_inode;
+ 	}
+ 
+ 	d_invalidate(dentry);
+@@ -2498,6 +2498,7 @@ out_up_write:
+ 				root_flags & ~BTRFS_ROOT_SUBVOL_DEAD);
+ 		spin_unlock(&dest->root_item_lock);
+ 	}
++out_unlock_inode:
+ 	mutex_unlock(&inode->i_mutex);
+ 	if (!err) {
+ 		shrink_dcache_sb(root->fs_info->sb);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index bed43081720f..16f6365f65e7 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4934,13 +4934,6 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 	if (ret)
+ 		return ret;
+ 
+-	/*
+-	 * currently supporting (pre)allocate mode for extent-based
+-	 * files _only_
+-	 */
+-	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-		return -EOPNOTSUPP;
+-
+ 	if (mode & FALLOC_FL_COLLAPSE_RANGE)
+ 		return ext4_collapse_range(inode, offset, len);
+ 
+@@ -4962,6 +4955,14 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 
+ 	mutex_lock(&inode->i_mutex);
+ 
++	/*
++	 * We only support preallocation for extent-based files
++	 */
++	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
+ 	     offset + len > i_size_read(inode)) {
+ 		new_size = offset + len;
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index e04d45733976..9a0121376358 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -705,6 +705,14 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	BUG_ON(end < lblk);
+ 
++	if ((status & EXTENT_STATUS_DELAYED) &&
++	    (status & EXTENT_STATUS_WRITTEN)) {
++		ext4_warning(inode->i_sb, "Inserting extent [%u/%u] as "
++				" delayed and written which can potentially "
++				" cause data loss.\n", lblk, len);
++		WARN_ON(1);
++	}
++
+ 	newes.es_lblk = lblk;
+ 	newes.es_len = len;
+ 	ext4_es_store_pblock_status(&newes, pblk, status);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 5cb9a212b86f..852cc521f327 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -534,6 +534,7 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
+ 		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
++		    !(status & EXTENT_STATUS_WRITTEN) &&
+ 		    ext4_find_delalloc_range(inode, map->m_lblk,
+ 					     map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+@@ -638,6 +639,7 @@ found:
+ 		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
++		    !(status & EXTENT_STATUS_WRITTEN) &&
+ 		    ext4_find_delalloc_range(inode, map->m_lblk,
+ 					     map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index d98094a9f476..ff10f3decbc9 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -806,9 +806,6 @@ end_removexattr:
+ static int hfsplus_osx_getxattr(struct dentry *dentry, const char *name,
+ 					void *buffer, size_t size, int type)
+ {
+-	char *xattr_name;
+-	int res;
+-
+ 	if (!strcmp(name, ""))
+ 		return -EINVAL;
+ 
+@@ -818,24 +815,19 @@ static int hfsplus_osx_getxattr(struct dentry *dentry, const char *name,
+ 	 */
+ 	if (is_known_namespace(name))
+ 		return -EOPNOTSUPP;
+-	xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN
+-		+ XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL);
+-	if (!xattr_name)
+-		return -ENOMEM;
+-	strcpy(xattr_name, XATTR_MAC_OSX_PREFIX);
+-	strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name);
+ 
+-	res = hfsplus_getxattr(dentry, xattr_name, buffer, size);
+-	kfree(xattr_name);
+-	return res;
++	/*
++	 * osx is the namespace we use to indicate an unprefixed
++	 * attribute on the filesystem (like the ones that OS X
++	 * creates), so we pass the name through unmodified (after
++	 * ensuring it doesn't conflict with another namespace).
++	 */
++	return hfsplus_getxattr(dentry, name, buffer, size);
+ }
+ 
+ static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name,
+ 		const void *buffer, size_t size, int flags, int type)
+ {
+-	char *xattr_name;
+-	int res;
+-
+ 	if (!strcmp(name, ""))
+ 		return -EINVAL;
+ 
+@@ -845,16 +837,14 @@ static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name,
+ 	 */
+ 	if (is_known_namespace(name))
+ 		return -EOPNOTSUPP;
+-	xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN
+-		+ XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL);
+-	if (!xattr_name)
+-		return -ENOMEM;
+-	strcpy(xattr_name, XATTR_MAC_OSX_PREFIX);
+-	strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name);
+ 
+-	res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags);
+-	kfree(xattr_name);
+-	return res;
++	/*
++	 * osx is the namespace we use to indicate an unprefixed
++	 * attribute on the filesystem (like the ones that OS X
++	 * creates), so we pass the name through unmodified (after
++	 * ensuring it doesn't conflict with another namespace).
++	 */
++	return hfsplus_setxattr(dentry, name, buffer, size, flags);
+ }
+ 
+ static size_t hfsplus_osx_listxattr(struct dentry *dentry, char *list,
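In userspace terms: the handler already receives the attribute name with the "osx." prefix stripped, so re-adding the on-disk prefix effectively double-prefixed the lookup. After this change an unprefixed attribute written by OS X reads back directly, e.g. (the attribute name is illustrative):

	getfattr -n osx.com.apple.FinderInfo /mnt/hfsplus/somefile
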
+diff --git a/include/linux/usb_usual.h b/include/linux/usb_usual.h
+index a7f2604c5f25..7f5f78bd15ad 100644
+--- a/include/linux/usb_usual.h
++++ b/include/linux/usb_usual.h
+@@ -77,6 +77,8 @@
+ 		/* Cannot handle ATA_12 or ATA_16 CDBs */	\
+ 	US_FLAG(NO_REPORT_OPCODES,	0x04000000)		\
+ 		/* Cannot handle MI_REPORT_SUPPORTED_OPERATION_CODES */	\
++	US_FLAG(MAX_SECTORS_240,	0x08000000)		\
++		/* Sets max_sectors to 240 */			\
+ 
+ #define US_FLAG(name, value)	US_FL_##name = value ,
+ enum { US_DO_ALL_FLAGS };
+diff --git a/include/scsi/scsi_devinfo.h b/include/scsi/scsi_devinfo.h
+index 183eaab7c380..96e3f56519e7 100644
+--- a/include/scsi/scsi_devinfo.h
++++ b/include/scsi/scsi_devinfo.h
+@@ -36,5 +36,6 @@
+ 					     for sequential scan */
+ #define BLIST_TRY_VPD_PAGES	0x10000000 /* Attempt to read VPD pages */
+ #define BLIST_NO_RSOC		0x20000000 /* don't try to issue RSOC */
++#define BLIST_MAX_1024		0x40000000 /* maximum 1024 sector cdb length */
+ 
+ #endif
+diff --git a/include/sound/emu10k1.h b/include/sound/emu10k1.h
+index 0de95ccb92cf..5bd134651f5e 100644
+--- a/include/sound/emu10k1.h
++++ b/include/sound/emu10k1.h
+@@ -41,7 +41,8 @@
+ 
+ #define EMUPAGESIZE     4096
+ #define MAXREQVOICES    8
+-#define MAXPAGES        8192
++#define MAXPAGES0       4096	/* 32 bit mode */
++#define MAXPAGES1       8192	/* 31 bit mode */
+ #define RESERVED        0
+ #define NUM_MIDI        16
+ #define NUM_G           64              /* use all channels */
+@@ -50,8 +51,7 @@
+ 
+ /* FIXME? - according to the OSS driver the EMU10K1 needs a 29 bit DMA mask */
+ #define EMU10K1_DMA_MASK	0x7fffffffUL	/* 31bit */
+-#define AUDIGY_DMA_MASK		0x7fffffffUL	/* 31bit FIXME - 32 should work? */
+-						/* See ALSA bug #1276 - rlrevell */
++#define AUDIGY_DMA_MASK		0xffffffffUL	/* 32bit mode */
+ 
+ #define TMEMSIZE        256*1024
+ #define TMEMSIZEREG     4
+@@ -466,8 +466,11 @@
+ 
+ #define MAPB			0x0d		/* Cache map B						*/
+ 
+-#define MAP_PTE_MASK		0xffffe000	/* The 19 MSBs of the PTE indexed by the PTI		*/
+-#define MAP_PTI_MASK		0x00001fff	/* The 13 bit index to one of the 8192 PTE dwords      	*/
++#define MAP_PTE_MASK0		0xfffff000	/* The 20 MSBs of the PTE indexed by the PTI		*/
++#define MAP_PTI_MASK0		0x00000fff	/* The 12 bit index to one of the 4096 PTE dwords      	*/
++
++#define MAP_PTE_MASK1		0xffffe000	/* The 19 MSBs of the PTE indexed by the PTI		*/
++#define MAP_PTI_MASK1		0x00001fff	/* The 13 bit index to one of the 8192 PTE dwords      	*/
+ 
+ /* 0x0e, 0x0f: Not used */
+ 
+@@ -1704,6 +1707,7 @@ struct snd_emu10k1 {
+ 	unsigned short model;			/* subsystem id */
+ 	unsigned int card_type;			/* EMU10K1_CARD_* */
+ 	unsigned int ecard_ctrl;		/* ecard control bits */
++	unsigned int address_mode;		/* address mode */
+ 	unsigned long dma_mask;			/* PCI DMA mask */
+ 	unsigned int delay_pcm_irq;		/* in samples */
+ 	int max_cache_pages;			/* max memory size / PAGE_SIZE */
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index 8d7416e46861..15355892a0ff 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -287,7 +287,7 @@ struct device;
+ 	.access = SNDRV_CTL_ELEM_ACCESS_TLV_READ | SNDRV_CTL_ELEM_ACCESS_READWRITE,\
+ 	.tlv.p = (tlv_array), \
+ 	.get = snd_soc_dapm_get_volsw, .put = snd_soc_dapm_put_volsw, \
+-	.private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 0) }
++	.private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 1) }
+ #define SOC_DAPM_SINGLE_TLV_VIRT(xname, max, tlv_array) \
+ 	SOC_DAPM_SINGLE(xname, SND_SOC_NOPM, 0, max, 0, tlv_array)
+ #define SOC_DAPM_ENUM(xname, xenum) \
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index a64e7a207d2b..0c5796eadae1 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -357,8 +357,8 @@ select_insn:
+ 	ALU64_MOD_X:
+ 		if (unlikely(SRC == 0))
+ 			return 0;
+-		tmp = DST;
+-		DST = do_div(tmp, SRC);
++		div64_u64_rem(DST, SRC, &tmp);
++		DST = tmp;
+ 		CONT;
+ 	ALU_MOD_X:
+ 		if (unlikely(SRC == 0))
+@@ -367,8 +367,8 @@ select_insn:
+ 		DST = do_div(tmp, (u32) SRC);
+ 		CONT;
+ 	ALU64_MOD_K:
+-		tmp = DST;
+-		DST = do_div(tmp, IMM);
++		div64_u64_rem(DST, IMM, &tmp);
++		DST = tmp;
+ 		CONT;
+ 	ALU_MOD_K:
+ 		tmp = (u32) DST;
+@@ -377,7 +377,7 @@ select_insn:
+ 	ALU64_DIV_X:
+ 		if (unlikely(SRC == 0))
+ 			return 0;
+-		do_div(DST, SRC);
++		DST = div64_u64(DST, SRC);
+ 		CONT;
+ 	ALU_DIV_X:
+ 		if (unlikely(SRC == 0))
+@@ -387,7 +387,7 @@ select_insn:
+ 		DST = (u32) tmp;
+ 		CONT;
+ 	ALU64_DIV_K:
+-		do_div(DST, IMM);
++		DST = div64_u64(DST, IMM);
+ 		CONT;
+ 	ALU_DIV_K:
+ 		tmp = (u32) DST;
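The underlying bug: do_div(n, base) divides a 64-bit dividend by a 32-bit divisor (returning the 32-bit remainder), so 64-bit SRC and IMM operands were silently truncated in the 64-bit ALU ops. div64_u64() and div64_u64_rem() keep the full divisor width. Roughly:

	u64 q, r;

	q = div64_u64(dst, src);	/* BPF_ALU64 | BPF_DIV: full 64-bit divide */
	div64_u64_rem(dst, src, &r);	/* BPF_ALU64 | BPF_MOD: remainder in r */
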
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 208d5439e59b..787b0d699969 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -158,6 +158,7 @@ void ping_unhash(struct sock *sk)
+ 	if (sk_hashed(sk)) {
+ 		write_lock_bh(&ping_table.lock);
+ 		hlist_nulls_del(&sk->sk_nulls_node);
++		sk_nulls_node_init(&sk->sk_nulls_node);
+ 		sock_put(sk);
+ 		isk->inet_num = 0;
+ 		isk->inet_sport = 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index ad5064362c5c..20fc0202cbbe 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -963,10 +963,7 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ 	if (dst_metric_locked(dst, RTAX_MTU))
+ 		return;
+ 
+-	if (dst->dev->mtu < mtu)
+-		return;
+-
+-	if (rt->rt_pmtu && rt->rt_pmtu < mtu)
++	if (ipv4_mtu(dst) < mtu)
+ 		return;
+ 
+ 	if (mtu < ip_rt_min_pmtu)
+diff --git a/sound/pci/emu10k1/emu10k1.c b/sound/pci/emu10k1/emu10k1.c
+index 37d0220a094c..db7a2e5e4a14 100644
+--- a/sound/pci/emu10k1/emu10k1.c
++++ b/sound/pci/emu10k1/emu10k1.c
+@@ -183,8 +183,10 @@ static int snd_card_emu10k1_probe(struct pci_dev *pci,
+ 	}
+ #endif
+  
+-	strcpy(card->driver, emu->card_capabilities->driver);
+-	strcpy(card->shortname, emu->card_capabilities->name);
++	strlcpy(card->driver, emu->card_capabilities->driver,
++		sizeof(card->driver));
++	strlcpy(card->shortname, emu->card_capabilities->name,
++		sizeof(card->shortname));
+ 	snprintf(card->longname, sizeof(card->longname),
+ 		 "%s (rev.%d, serial:0x%x) at 0x%lx, irq %i",
+ 		 card->shortname, emu->revision, emu->serial, emu->port, emu->irq);
+diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c
+index 874cd76c7b7f..d2c7ea3a7610 100644
+--- a/sound/pci/emu10k1/emu10k1_callback.c
++++ b/sound/pci/emu10k1/emu10k1_callback.c
+@@ -415,7 +415,7 @@ start_voice(struct snd_emux_voice *vp)
+ 	snd_emu10k1_ptr_write(hw, Z2, ch, 0);
+ 
+ 	/* invalidate maps */
+-	temp = (hw->silent_page.addr << 1) | MAP_PTI_MASK;
++	temp = (hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	snd_emu10k1_ptr_write(hw, MAPA, ch, temp);
+ 	snd_emu10k1_ptr_write(hw, MAPB, ch, temp);
+ #if 0
+@@ -436,7 +436,7 @@ start_voice(struct snd_emux_voice *vp)
+ 		snd_emu10k1_ptr_write(hw, CDF, ch, sample);
+ 
+ 		/* invalidate maps */
+-		temp = ((unsigned int)hw->silent_page.addr << 1) | MAP_PTI_MASK;
++		temp = ((unsigned int)hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 		snd_emu10k1_ptr_write(hw, MAPA, ch, temp);
+ 		snd_emu10k1_ptr_write(hw, MAPB, ch, temp);
+ 		
+diff --git a/sound/pci/emu10k1/emu10k1_main.c b/sound/pci/emu10k1/emu10k1_main.c
+index b4458a630a7c..df9f5c7c9c77 100644
+--- a/sound/pci/emu10k1/emu10k1_main.c
++++ b/sound/pci/emu10k1/emu10k1_main.c
+@@ -282,7 +282,7 @@ static int snd_emu10k1_init(struct snd_emu10k1 *emu, int enable_ir, int resume)
+ 	snd_emu10k1_ptr_write(emu, TCB, 0, 0);	/* taken from original driver */
+ 	snd_emu10k1_ptr_write(emu, TCBS, 0, 4);	/* taken from original driver */
+ 
+-	silent_page = (emu->silent_page.addr << 1) | MAP_PTI_MASK;
++	silent_page = (emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	for (ch = 0; ch < NUM_G; ch++) {
+ 		snd_emu10k1_ptr_write(emu, MAPA, ch, silent_page);
+ 		snd_emu10k1_ptr_write(emu, MAPB, ch, silent_page);
+@@ -348,6 +348,11 @@ static int snd_emu10k1_init(struct snd_emu10k1 *emu, int enable_ir, int resume)
+ 		outl(reg | A_IOCFG_GPOUT0, emu->port + A_IOCFG);
+ 	}
+ 
++	if (emu->address_mode == 0) {
++		/* use 16M in 4G */
++		outl(inl(emu->port + HCFG) | HCFG_EXPANDED_MEM, emu->port + HCFG);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1421,7 +1426,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
+ 	 *
+ 	 */
+ 	{.vendor = 0x1102, .device = 0x0008, .subsystem = 0x20011102,
+-	 .driver = "Audigy2", .name = "SB Audigy 2 ZS Notebook [SB0530]",
++	 .driver = "Audigy2", .name = "Audigy 2 ZS Notebook [SB0530]",
+ 	 .id = "Audigy2",
+ 	 .emu10k2_chip = 1,
+ 	 .ca0108_chip = 1,
+@@ -1571,7 +1576,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
+ 	 .adc_1361t = 1,  /* 24 bit capture instead of 16bit */
+ 	 .ac97_chip = 1} ,
+ 	{.vendor = 0x1102, .device = 0x0004, .subsystem = 0x10051102,
+-	 .driver = "Audigy2", .name = "SB Audigy 2 Platinum EX [SB0280]",
++	 .driver = "Audigy2", .name = "Audigy 2 Platinum EX [SB0280]",
+ 	 .id = "Audigy2",
+ 	 .emu10k2_chip = 1,
+ 	 .ca0102_chip = 1,
+@@ -1877,8 +1882,10 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	is_audigy = emu->audigy = c->emu10k2_chip;
+ 
++	/* set addressing mode */
++	emu->address_mode = is_audigy ? 0 : 1;
+ 	/* set the DMA transfer mask */
+-	emu->dma_mask = is_audigy ? AUDIGY_DMA_MASK : EMU10K1_DMA_MASK;
++	emu->dma_mask = emu->address_mode ? EMU10K1_DMA_MASK : AUDIGY_DMA_MASK;
+ 	if (pci_set_dma_mask(pci, emu->dma_mask) < 0 ||
+ 	    pci_set_consistent_dma_mask(pci, emu->dma_mask) < 0) {
+ 		dev_err(card->dev,
+@@ -1903,7 +1910,7 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	emu->max_cache_pages = max_cache_bytes >> PAGE_SHIFT;
+ 	if (snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(pci),
+-				32 * 1024, &emu->ptb_pages) < 0) {
++				(emu->address_mode ? 32 : 16) * 1024, &emu->ptb_pages) < 0) {
+ 		err = -ENOMEM;
+ 		goto error;
+ 	}
+@@ -2002,8 +2009,8 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	/* Clear silent pages and set up pointers */
+ 	memset(emu->silent_page.area, 0, PAGE_SIZE);
+-	silent_page = emu->silent_page.addr << 1;
+-	for (idx = 0; idx < MAXPAGES; idx++)
++	silent_page = emu->silent_page.addr << emu->address_mode;
++	for (idx = 0; idx < (emu->address_mode ? MAXPAGES1 : MAXPAGES0); idx++)
+ 		((u32 *)emu->ptb_pages.area)[idx] = cpu_to_le32(silent_page | idx);
+ 
+ 	/* set up voice indices */
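Throughout this patch the shift amount doubles as the mode selector: page-table entries are built as

	pte = (dma_addr << address_mode) | page_index;

so address_mode == 1 (EMU10K1) keeps the old 19-bit PTE / 13-bit index layout (8192 pages, 31-bit DMA), while address_mode == 0 (Audigy) uses a 20-bit PTE / 12-bit index (4096 pages) together with the full 32-bit DMA mask set above.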
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index 0dc07385af0e..14a305bd8a98 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -380,7 +380,7 @@ static void snd_emu10k1_pcm_init_voice(struct snd_emu10k1 *emu,
+ 	snd_emu10k1_ptr_write(emu, Z1, voice, 0);
+ 	snd_emu10k1_ptr_write(emu, Z2, voice, 0);
+ 	/* invalidate maps */
+-	silent_page = ((unsigned int)emu->silent_page.addr << 1) | MAP_PTI_MASK;
++	silent_page = ((unsigned int)emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	snd_emu10k1_ptr_write(emu, MAPA, voice, silent_page);
+ 	snd_emu10k1_ptr_write(emu, MAPB, voice, silent_page);
+ 	/* modulation envelope */
+diff --git a/sound/pci/emu10k1/memory.c b/sound/pci/emu10k1/memory.c
+index c68e6dd2fa67..4f1f69be1865 100644
+--- a/sound/pci/emu10k1/memory.c
++++ b/sound/pci/emu10k1/memory.c
+@@ -34,10 +34,11 @@
+  * aligned pages in others
+  */
+ #define __set_ptb_entry(emu,page,addr) \
+-	(((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << 1) | (page)))
++	(((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << (emu->address_mode)) | (page)))
+ 
+ #define UNIT_PAGES		(PAGE_SIZE / EMUPAGESIZE)
+-#define MAX_ALIGN_PAGES		(MAXPAGES / UNIT_PAGES)
++#define MAX_ALIGN_PAGES0		(MAXPAGES0 / UNIT_PAGES)
++#define MAX_ALIGN_PAGES1		(MAXPAGES1 / UNIT_PAGES)
+ /* get aligned page from offset address */
+ #define get_aligned_page(offset)	((offset) >> PAGE_SHIFT)
+ /* get offset address from aligned page */
+@@ -124,7 +125,7 @@ static int search_empty_map_area(struct snd_emu10k1 *emu, int npages, struct lis
+ 		}
+ 		page = blk->mapped_page + blk->pages;
+ 	}
+-	size = MAX_ALIGN_PAGES - page;
++	size = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0) - page;
+ 	if (size >= max_size) {
+ 		*nextp = pos;
+ 		return page;
+@@ -181,7 +182,7 @@ static int unmap_memblk(struct snd_emu10k1 *emu, struct snd_emu10k1_memblk *blk)
+ 		q = get_emu10k1_memblk(p, mapped_link);
+ 		end_page = q->mapped_page;
+ 	} else
+-		end_page = MAX_ALIGN_PAGES;
++		end_page = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0);
+ 
+ 	/* remove links */
+ 	list_del(&blk->mapped_link);
+@@ -307,7 +308,7 @@ snd_emu10k1_alloc_pages(struct snd_emu10k1 *emu, struct snd_pcm_substream *subst
+ 	if (snd_BUG_ON(!emu))
+ 		return NULL;
+ 	if (snd_BUG_ON(runtime->dma_bytes <= 0 ||
+-		       runtime->dma_bytes >= MAXPAGES * EMUPAGESIZE))
++		       runtime->dma_bytes >= (emu->address_mode ? MAXPAGES1 : MAXPAGES0) * EMUPAGESIZE))
+ 		return NULL;
+ 	hdr = emu->memhdr;
+ 	if (snd_BUG_ON(!hdr))
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 2fe86d2e1b09..a63a86332deb 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3027,6 +3027,16 @@ static struct snd_kcontrol_new vmaster_mute_mode = {
+ 	.put = vmaster_mute_mode_put,
+ };
+ 
++/* meta hook to call each driver's vmaster hook */
++static void vmaster_hook(void *private_data, int enabled)
++{
++	struct hda_vmaster_mute_hook *hook = private_data;
++
++	if (hook->mute_mode != HDA_VMUTE_FOLLOW_MASTER)
++		enabled = hook->mute_mode;
++	hook->hook(hook->codec, enabled);
++}
++
+ /**
+  * snd_hda_add_vmaster_hook - Add a vmaster hook for mute-LED
+  * @codec: the HDA codec
+@@ -3045,9 +3055,9 @@ int snd_hda_add_vmaster_hook(struct hda_codec *codec,
+ 
+ 	if (!hook->hook || !hook->sw_kctl)
+ 		return 0;
+-	snd_ctl_add_vmaster_hook(hook->sw_kctl, hook->hook, codec);
+ 	hook->codec = codec;
+ 	hook->mute_mode = HDA_VMUTE_FOLLOW_MASTER;
++	snd_ctl_add_vmaster_hook(hook->sw_kctl, vmaster_hook, hook);
+ 	if (!expose_enum_ctl)
+ 		return 0;
+ 	kctl = snd_ctl_new1(&vmaster_mute_mode, hook);
+@@ -3073,14 +3083,7 @@ void snd_hda_sync_vmaster_hook(struct hda_vmaster_mute_hook *hook)
+ 	 */
+ 	if (hook->codec->bus->shutdown)
+ 		return;
+-	switch (hook->mute_mode) {
+-	case HDA_VMUTE_FOLLOW_MASTER:
+-		snd_ctl_sync_vmaster_hook(hook->sw_kctl);
+-		break;
+-	default:
+-		hook->hook(hook->codec, hook->mute_mode);
+-		break;
+-	}
++	snd_ctl_sync_vmaster_hook(hook->sw_kctl);
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_sync_vmaster_hook);
+ 
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index 6ba0b5517c40..2341fc334163 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -72,6 +72,7 @@ static void hda_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 		if (led_set_func(TPACPI_LED_MUTE, false) >= 0) {
+ 			old_vmaster_hook = spec->vmaster_mute.hook;
+ 			spec->vmaster_mute.hook = update_tpacpi_mute_led;
++			spec->vmaster_mute_enum = 1;
+ 			removefunc = false;
+ 		}
+ 		if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
+index fb9c20eace3f..97b33e96439a 100644
+--- a/sound/soc/codecs/rt5677.c
++++ b/sound/soc/codecs/rt5677.c
+@@ -62,6 +62,9 @@ static const struct reg_default init_list[] = {
+ 	{RT5677_PR_BASE + 0x1e,	0x0000},
+ 	{RT5677_PR_BASE + 0x12,	0x0eaa},
+ 	{RT5677_PR_BASE + 0x14,	0x018a},
++	{RT5677_PR_BASE + 0x15,	0x0490},
++	{RT5677_PR_BASE + 0x38,	0x0f71},
++	{RT5677_PR_BASE + 0x39,	0x0f71},
+ };
+ #define RT5677_INIT_REG_LEN ARRAY_SIZE(init_list)
+ 
+@@ -901,7 +904,7 @@ static int set_dmic_clk(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm);
+ 	struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+-	int idx = rl6231_calc_dmic_clk(rt5677->sysclk);
++	int idx = rl6231_calc_dmic_clk(rt5677->lrck[RT5677_AIF1] << 8);
+ 
+ 	if (idx < 0)
+ 		dev_err(codec->dev, "Failed to set DMIC clock\n");
+diff --git a/sound/soc/codecs/tfa9879.c b/sound/soc/codecs/tfa9879.c
+index 16f1b71edb55..aab0af681e8c 100644
+--- a/sound/soc/codecs/tfa9879.c
++++ b/sound/soc/codecs/tfa9879.c
+@@ -280,8 +280,8 @@ static int tfa9879_i2c_probe(struct i2c_client *i2c,
+ 	int i;
+ 
+ 	tfa9879 = devm_kzalloc(&i2c->dev, sizeof(*tfa9879), GFP_KERNEL);
+-	if (IS_ERR(tfa9879))
+-		return PTR_ERR(tfa9879);
++	if (!tfa9879)
++		return -ENOMEM;
+ 
+ 	i2c_set_clientdata(i2c, tfa9879);
+ 
+diff --git a/sound/soc/samsung/s3c24xx-i2s.c b/sound/soc/samsung/s3c24xx-i2s.c
+index 326d3c3804e3..5bf723689692 100644
+--- a/sound/soc/samsung/s3c24xx-i2s.c
++++ b/sound/soc/samsung/s3c24xx-i2s.c
+@@ -461,8 +461,8 @@ static int s3c24xx_iis_dev_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 	s3c24xx_i2s.regs = devm_ioremap_resource(&pdev->dev, res);
+-	if (s3c24xx_i2s.regs == NULL)
+-		return -ENXIO;
++	if (IS_ERR(s3c24xx_i2s.regs))
++		return PTR_ERR(s3c24xx_i2s.regs);
+ 
+ 	s3c24xx_i2s_pcm_stereo_out.dma_addr = res->start + S3C2410_IISFIFO;
+ 	s3c24xx_i2s_pcm_stereo_in.dma_addr = res->start + S3C2410_IISFIFO;
+diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
+index ab37add269ae..82e350e9501c 100644
+--- a/sound/synth/emux/emux_oss.c
++++ b/sound/synth/emux/emux_oss.c
+@@ -118,12 +118,8 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	if (snd_BUG_ON(!arg || !emu))
+ 		return -ENXIO;
+ 
+-	mutex_lock(&emu->register_mutex);
+-
+-	if (!snd_emux_inc_count(emu)) {
+-		mutex_unlock(&emu->register_mutex);
++	if (!snd_emux_inc_count(emu))
+ 		return -EFAULT;
+-	}
+ 
+ 	memset(&callback, 0, sizeof(callback));
+ 	callback.owner = THIS_MODULE;
+@@ -135,7 +131,6 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	if (p == NULL) {
+ 		snd_printk(KERN_ERR "can't create port\n");
+ 		snd_emux_dec_count(emu);
+-		mutex_unlock(&emu->register_mutex);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -148,8 +143,6 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	reset_port_mode(p, arg->seq_mode);
+ 
+ 	snd_emux_reset_port(p);
+-
+-	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+ 
+@@ -195,13 +188,11 @@ snd_emux_close_seq_oss(struct snd_seq_oss_arg *arg)
+ 	if (snd_BUG_ON(!emu))
+ 		return -ENXIO;
+ 
+-	mutex_lock(&emu->register_mutex);
+ 	snd_emux_sounds_off_all(p);
+ 	snd_soundfont_close_check(emu->sflist, SF_CLIENT_NO(p->chset.port));
+ 	snd_seq_event_port_detach(p->chset.client, p->chset.port);
+ 	snd_emux_dec_count(emu);
+ 
+-	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/sound/synth/emux/emux_seq.c b/sound/synth/emux/emux_seq.c
+index 7778b8e19782..a0209204ae48 100644
+--- a/sound/synth/emux/emux_seq.c
++++ b/sound/synth/emux/emux_seq.c
+@@ -124,12 +124,10 @@ snd_emux_detach_seq(struct snd_emux *emu)
+ 	if (emu->voices)
+ 		snd_emux_terminate_all(emu);
+ 		
+-	mutex_lock(&emu->register_mutex);
+ 	if (emu->client >= 0) {
+ 		snd_seq_delete_kernel_client(emu->client);
+ 		emu->client = -1;
+ 	}
+-	mutex_unlock(&emu->register_mutex);
+ }
+ 
+ 
+@@ -269,8 +267,8 @@ snd_emux_event_input(struct snd_seq_event *ev, int direct, void *private_data,
+ /*
+  * increment usage count
+  */
+-int
+-snd_emux_inc_count(struct snd_emux *emu)
++static int
++__snd_emux_inc_count(struct snd_emux *emu)
+ {
+ 	emu->used++;
+ 	if (!try_module_get(emu->ops.owner))
+@@ -284,12 +282,21 @@ snd_emux_inc_count(struct snd_emux *emu)
+ 	return 1;
+ }
+ 
++int snd_emux_inc_count(struct snd_emux *emu)
++{
++	int ret;
++
++	mutex_lock(&emu->register_mutex);
++	ret = __snd_emux_inc_count(emu);
++	mutex_unlock(&emu->register_mutex);
++	return ret;
++}
+ 
+ /*
+  * decrease usage count
+  */
+-void
+-snd_emux_dec_count(struct snd_emux *emu)
++static void
++__snd_emux_dec_count(struct snd_emux *emu)
+ {
+ 	module_put(emu->card->module);
+ 	emu->used--;
+@@ -298,6 +305,12 @@ snd_emux_dec_count(struct snd_emux *emu)
+ 	module_put(emu->ops.owner);
+ }
+ 
++void snd_emux_dec_count(struct snd_emux *emu)
++{
++	mutex_lock(&emu->register_mutex);
++	__snd_emux_dec_count(emu);
++	mutex_unlock(&emu->register_mutex);
++}
+ 
+ /*
+  * Routine that is called upon a first use of a particular port
+@@ -317,7 +330,7 @@ snd_emux_use(void *private_data, struct snd_seq_port_subscribe *info)
+ 
+ 	mutex_lock(&emu->register_mutex);
+ 	snd_emux_init_port(p);
+-	snd_emux_inc_count(emu);
++	__snd_emux_inc_count(emu);
+ 	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+@@ -340,7 +353,7 @@ snd_emux_unuse(void *private_data, struct snd_seq_port_subscribe *info)
+ 
+ 	mutex_lock(&emu->register_mutex);
+ 	snd_emux_sounds_off_all(p);
+-	snd_emux_dec_count(emu);
++	__snd_emux_dec_count(emu);
+ 	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
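The shape of this refactor is the usual locked-wrapper split: the double-underscore helpers assume register_mutex is already held, so callers that take the lock themselves (snd_emux_use/snd_emux_unuse above) use them directly, while the exported snd_emux_inc_count/snd_emux_dec_count wrap them in lock/unlock. Schematically, with abbreviated names:

	static int __op(struct snd_emux *emu)	/* caller holds register_mutex */
	{
		/* ... work that must not re-take the lock ... */
		return 0;
	}

	int op(struct snd_emux *emu)		/* public entry point */
	{
		int ret;

		mutex_lock(&emu->register_mutex);
		ret = __op(emu);
		mutex_unlock(&emu->register_mutex);
		return ret;
	}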



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-05-17 19:55 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-05-17 19:55 UTC (permalink / raw
  To: gentoo-commits

commit:     58e7c3a053a0e6b0a9836db809f579db10b9f883
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 17 15:54:56 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 17 15:54:56 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=58e7c3a0

Linux patch 4.0.4

 0000_README            |    4 +
 1003_linux-4.0.4.patch | 2713 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2717 insertions(+)

diff --git a/0000_README b/0000_README
index b11f028..3bcb0f8 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.0.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.3
 
+Patch:  1003_linux-4.0.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.0.4.patch b/1003_linux-4.0.4.patch
new file mode 100644
index 0000000..e5c793a
--- /dev/null
+++ b/1003_linux-4.0.4.patch
@@ -0,0 +1,2713 @@
+diff --git a/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt b/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
+index a4873e5e3e36..e30e184f50c7 100644
+--- a/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
++++ b/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
+@@ -38,7 +38,7 @@ dma_apbx: dma-apbx@80024000 {
+ 		      80 81 68 69
+ 		      70 71 72 73
+ 		      74 75 76 77>;
+-	interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
++	interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
+ 			  "saif0", "saif1", "i2c0", "i2c1",
+ 			  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
+ 			  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+diff --git a/Makefile b/Makefile
+index dc9f43a019d6..3d16bcc87585 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
+index 0c76d9f05fd0..f4838ebd918b 100644
+--- a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
++++ b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
+@@ -105,6 +105,10 @@
+ 		};
+ 
+ 		internal-regs {
++			rtc@10300 {
++				/* No crystal connected to the internal RTC */
++				status = "disabled";
++			};
+ 			serial@12000 {
+ 				status = "okay";
+ 			};
+diff --git a/arch/arm/boot/dts/imx23-olinuxino.dts b/arch/arm/boot/dts/imx23-olinuxino.dts
+index 7e6eef2488e8..82045398bf1f 100644
+--- a/arch/arm/boot/dts/imx23-olinuxino.dts
++++ b/arch/arm/boot/dts/imx23-olinuxino.dts
+@@ -12,6 +12,7 @@
+  */
+ 
+ /dts-v1/;
++#include <dt-bindings/gpio/gpio.h>
+ #include "imx23.dtsi"
+ 
+ / {
+@@ -93,6 +94,7 @@
+ 
+ 	ahb@80080000 {
+ 		usb0: usb@80080000 {
++			dr_mode = "host";
+ 			vbus-supply = <&reg_usb0_vbus>;
+ 			status = "okay";
+ 		};
+@@ -122,7 +124,7 @@
+ 
+ 		user {
+ 			label = "green";
+-			gpios = <&gpio2 1 1>;
++			gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
+index e4d3aecc4ed2..677f81d9dcd5 100644
+--- a/arch/arm/boot/dts/imx25.dtsi
++++ b/arch/arm/boot/dts/imx25.dtsi
+@@ -428,6 +428,7 @@
+ 
+ 			pwm4: pwm@53fc8000 {
+ 				compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
++				#pwm-cells = <2>;
+ 				reg = <0x53fc8000 0x4000>;
+ 				clocks = <&clks 108>, <&clks 52>;
+ 				clock-names = "ipg", "per";
+diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
+index 47f68ac868d4..5ed245a3f9ac 100644
+--- a/arch/arm/boot/dts/imx28.dtsi
++++ b/arch/arm/boot/dts/imx28.dtsi
+@@ -900,7 +900,7 @@
+ 					      80 81 68 69
+ 					      70 71 72 73
+ 					      74 75 76 77>;
+-				interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
++				interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
+ 						  "saif0", "saif1", "i2c0", "i2c1",
+ 						  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
+ 						  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index 19cc269a08d4..1ce6133b67f5 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -31,6 +31,7 @@
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+ 			gpio = <&gpio4 15 0>;
++			enable-active-high;
+ 		};
+ 
+ 		reg_usb_h1_vbus: regulator@1 {
+@@ -40,6 +41,7 @@
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+ 			gpio = <&gpio1 0 0>;
++			enable-active-high;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index db80f9d376fa..9c8bdf2c93a1 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -484,6 +484,8 @@
+ 		DRVDD-supply = <&vmmc2>;
+ 		IOVDD-supply = <&vio>;
+ 		DVDD-supply = <&vio>;
++
++		ai3x-micbias-vg = <1>;
+ 	};
+ 
+ 	tlv320aic3x_aux: tlv320aic3x@19 {
+@@ -495,6 +497,8 @@
+ 		DRVDD-supply = <&vmmc2>;
+ 		IOVDD-supply = <&vio>;
+ 		DVDD-supply = <&vio>;
++
++		ai3x-micbias-vg = <2>;
+ 	};
+ 
+ 	tsl2563: tsl2563@29 {
+diff --git a/arch/arm/boot/dts/ste-dbx5x0.dtsi b/arch/arm/boot/dts/ste-dbx5x0.dtsi
+index bfd3f1c734b8..2201cd5da3bb 100644
+--- a/arch/arm/boot/dts/ste-dbx5x0.dtsi
++++ b/arch/arm/boot/dts/ste-dbx5x0.dtsi
+@@ -1017,23 +1017,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		vmmci: regulator-gpio {
+-			compatible = "regulator-gpio";
+-
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <2900000>;
+-			regulator-name = "mmci-reg";
+-			regulator-type = "voltage";
+-
+-			startup-delay-us = <100>;
+-			enable-active-high;
+-
+-			states = <1800000 0x1
+-				  2900000 0x0>;
+-
+-			status = "disabled";
+-		};
+-
+ 		mcde@a0350000 {
+ 			compatible = "stericsson,mcde";
+ 			reg = <0xa0350000 0x1000>, /* MCDE */
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index bf8f0eddc2c0..744c1e3a744d 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -111,6 +111,21 @@
+ 			pinctrl-1 = <&i2c3_sleep_mode>;
+ 		};
+ 
++		vmmci: regulator-gpio {
++			compatible = "regulator-gpio";
++
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <2900000>;
++			regulator-name = "mmci-reg";
++			regulator-type = "voltage";
++
++			startup-delay-us = <100>;
++			enable-active-high;
++
++			states = <1800000 0x1
++				  2900000 0x0>;
++		};
++
+ 		// External Micro SD slot
+ 		sdi0_per1@80126000 {
+ 			arm,primecell-periphid = <0x10480180>;
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index 206826a855c0..1bc84ebdccaa 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -146,8 +146,21 @@
+ 		};
+ 
+ 		vmmci: regulator-gpio {
++			compatible = "regulator-gpio";
++
+ 			gpios = <&gpio7 4 0x4>;
+ 			enable-gpio = <&gpio6 25 0x4>;
++
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <2900000>;
++			regulator-name = "mmci-reg";
++			regulator-type = "voltage";
++
++			startup-delay-us = <100>;
++			enable-active-high;
++
++			states = <1800000 0x1
++				  2900000 0x0>;
+ 		};
+ 
+ 		// External Micro SD slot
+diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
+index 902397dd1000..1c1cdfa566ac 100644
+--- a/arch/arm/kernel/Makefile
++++ b/arch/arm/kernel/Makefile
+@@ -86,7 +86,7 @@ obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+ 
+ obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
+ ifeq ($(CONFIG_ARM_PSCI),y)
+-obj-y				+= psci.o
++obj-y				+= psci.o psci-call.o
+ obj-$(CONFIG_SMP)		+= psci_smp.o
+ endif
+ 
+diff --git a/arch/arm/kernel/psci-call.S b/arch/arm/kernel/psci-call.S
+new file mode 100644
+index 000000000000..a78e9e1e206d
+--- /dev/null
++++ b/arch/arm/kernel/psci-call.S
+@@ -0,0 +1,31 @@
++/*
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * Copyright (C) 2015 ARM Limited
++ *
++ * Author: Mark Rutland <mark.rutland@arm.com>
++ */
++
++#include <linux/linkage.h>
++
++#include <asm/opcodes-sec.h>
++#include <asm/opcodes-virt.h>
++
++/* int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
++ENTRY(__invoke_psci_fn_hvc)
++	__HVC(0)
++	bx	lr
++ENDPROC(__invoke_psci_fn_hvc)
++
++/* int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
++ENTRY(__invoke_psci_fn_smc)
++	__SMC(0)
++	bx	lr
++ENDPROC(__invoke_psci_fn_smc)
+diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
+index f73891b6b730..f90fdf4ce7c7 100644
+--- a/arch/arm/kernel/psci.c
++++ b/arch/arm/kernel/psci.c
+@@ -23,8 +23,6 @@
+ 
+ #include <asm/compiler.h>
+ #include <asm/errno.h>
+-#include <asm/opcodes-sec.h>
+-#include <asm/opcodes-virt.h>
+ #include <asm/psci.h>
+ #include <asm/system_misc.h>
+ 
+@@ -33,6 +31,9 @@ struct psci_operations psci_ops;
+ static int (*invoke_psci_fn)(u32, u32, u32, u32);
+ typedef int (*psci_initcall_t)(const struct device_node *);
+ 
++asmlinkage int __invoke_psci_fn_hvc(u32, u32, u32, u32);
++asmlinkage int __invoke_psci_fn_smc(u32, u32, u32, u32);
++
+ enum psci_function {
+ 	PSCI_FN_CPU_SUSPEND,
+ 	PSCI_FN_CPU_ON,
+@@ -71,40 +72,6 @@ static u32 psci_power_state_pack(struct psci_power_state state)
+ 		 & PSCI_0_2_POWER_STATE_AFFL_MASK);
+ }
+ 
+-/*
+- * The following two functions are invoked via the invoke_psci_fn pointer
+- * and will not be inlined, allowing us to piggyback on the AAPCS.
+- */
+-static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
+-					 u32 arg2)
+-{
+-	asm volatile(
+-			__asmeq("%0", "r0")
+-			__asmeq("%1", "r1")
+-			__asmeq("%2", "r2")
+-			__asmeq("%3", "r3")
+-			__HVC(0)
+-		: "+r" (function_id)
+-		: "r" (arg0), "r" (arg1), "r" (arg2));
+-
+-	return function_id;
+-}
+-
+-static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
+-					 u32 arg2)
+-{
+-	asm volatile(
+-			__asmeq("%0", "r0")
+-			__asmeq("%1", "r1")
+-			__asmeq("%2", "r2")
+-			__asmeq("%3", "r3")
+-			__SMC(0)
+-		: "+r" (function_id)
+-		: "r" (arg0), "r" (arg1), "r" (arg2));
+-
+-	return function_id;
+-}
+-
+ static int psci_get_version(void)
+ {
+ 	int err;
+diff --git a/arch/arm/mach-omap2/prm-regbits-34xx.h b/arch/arm/mach-omap2/prm-regbits-34xx.h
+index cbefbd7cfdb5..661d753df584 100644
+--- a/arch/arm/mach-omap2/prm-regbits-34xx.h
++++ b/arch/arm/mach-omap2/prm-regbits-34xx.h
+@@ -112,6 +112,7 @@
+ #define OMAP3430_VC_CMD_ONLP_SHIFT			16
+ #define OMAP3430_VC_CMD_RET_SHIFT			8
+ #define OMAP3430_VC_CMD_OFF_SHIFT			0
++#define OMAP3430_SREN_MASK				(1 << 4)
+ #define OMAP3430_HSEN_MASK				(1 << 3)
+ #define OMAP3430_MCODE_MASK				(0x7 << 0)
+ #define OMAP3430_VALID_MASK				(1 << 24)
+diff --git a/arch/arm/mach-omap2/prm-regbits-44xx.h b/arch/arm/mach-omap2/prm-regbits-44xx.h
+index b1c7a33e00e7..e794828dee55 100644
+--- a/arch/arm/mach-omap2/prm-regbits-44xx.h
++++ b/arch/arm/mach-omap2/prm-regbits-44xx.h
+@@ -35,6 +35,7 @@
+ #define OMAP4430_GLOBAL_WARM_SW_RST_SHIFT				1
+ #define OMAP4430_GLOBAL_WUEN_MASK					(1 << 16)
+ #define OMAP4430_HSMCODE_MASK						(0x7 << 0)
++#define OMAP4430_SRMODEEN_MASK						(1 << 4)
+ #define OMAP4430_HSMODEEN_MASK						(1 << 3)
+ #define OMAP4430_HSSCLL_SHIFT						24
+ #define OMAP4430_ICEPICK_RST_SHIFT					9
+diff --git a/arch/arm/mach-omap2/vc.c b/arch/arm/mach-omap2/vc.c
+index be9ef834fa81..076fd20d7e5a 100644
+--- a/arch/arm/mach-omap2/vc.c
++++ b/arch/arm/mach-omap2/vc.c
+@@ -316,7 +316,8 @@ static void __init omap3_vc_init_pmic_signaling(struct voltagedomain *voltdm)
+ 	 * idle. And we can also scale voltages to zero for off-idle.
+ 	 * Note that no actual voltage scaling during off-idle will
+ 	 * happen unless the board specific twl4030 PMIC scripts are
+-	 * loaded.
++	 * loaded. See also omap_vc_i2c_init for comments regarding
++	 * erratum i531.
+ 	 */
+ 	val = voltdm->read(OMAP3_PRM_VOLTCTRL_OFFSET);
+ 	if (!(val & OMAP3430_PRM_VOLTCTRL_SEL_OFF)) {
+@@ -704,9 +705,16 @@ static void __init omap_vc_i2c_init(struct voltagedomain *voltdm)
+ 		return;
+ 	}
+ 
++	/*
++	 * Note that for omap3 OMAP3430_SREN_MASK clears SREN to work around
++	 * erratum i531 "Extra Power Consumed When Repeated Start Operation
++	 * Mode Is Enabled on I2C Interface Dedicated for Smart Reflex (I2C4)".
++	 * Otherwise I2C4 eventually leads to about 23mW of extra power being
++	 * consumed even during off idle using VMODE.
++	 */
+ 	i2c_high_speed = voltdm->pmic->i2c_high_speed;
+ 	if (i2c_high_speed)
+-		voltdm->rmw(vc->common->i2c_cfg_hsen_mask,
++		voltdm->rmw(vc->common->i2c_cfg_clear_mask,
+ 			    vc->common->i2c_cfg_hsen_mask,
+ 			    vc->common->i2c_cfg_reg);
+ 
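For readers following the hunk: voltdm->rmw() is a read-modify-write helper whose first argument names the bits to clear and whose second the bits to set back, roughly

	val = voltdm->read(offset);
	val = (val & ~clear_mask) | set_bits;
	voltdm->write(val, offset);

so widening the first argument from the HSEN mask to i2c_cfg_clear_mask (SREN | HSEN) guarantees SREN ends up cleared whether or not high-speed mode is enabled.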
+diff --git a/arch/arm/mach-omap2/vc.h b/arch/arm/mach-omap2/vc.h
+index cdbdd78e755e..89b83b7ff3ec 100644
+--- a/arch/arm/mach-omap2/vc.h
++++ b/arch/arm/mach-omap2/vc.h
+@@ -34,6 +34,7 @@ struct voltagedomain;
+  * @cmd_ret_shift: RET field shift in PRM_VC_CMD_VAL_* register
+  * @cmd_off_shift: OFF field shift in PRM_VC_CMD_VAL_* register
+  * @i2c_cfg_reg: I2C configuration register offset
++ * @i2c_cfg_clear_mask: high-speed mode bit clear mask in I2C config register
+  * @i2c_cfg_hsen_mask: high-speed mode bit field mask in I2C config register
+  * @i2c_mcode_mask: MCODE field mask for I2C config register
+  *
+@@ -52,6 +53,7 @@ struct omap_vc_common {
+ 	u8 cmd_ret_shift;
+ 	u8 cmd_off_shift;
+ 	u8 i2c_cfg_reg;
++	u8 i2c_cfg_clear_mask;
+ 	u8 i2c_cfg_hsen_mask;
+ 	u8 i2c_mcode_mask;
+ };
+diff --git a/arch/arm/mach-omap2/vc3xxx_data.c b/arch/arm/mach-omap2/vc3xxx_data.c
+index 75bc4aa22b3a..71d74c9172c1 100644
+--- a/arch/arm/mach-omap2/vc3xxx_data.c
++++ b/arch/arm/mach-omap2/vc3xxx_data.c
+@@ -40,6 +40,7 @@ static struct omap_vc_common omap3_vc_common = {
+ 	.cmd_onlp_shift	 = OMAP3430_VC_CMD_ONLP_SHIFT,
+ 	.cmd_ret_shift	 = OMAP3430_VC_CMD_RET_SHIFT,
+ 	.cmd_off_shift	 = OMAP3430_VC_CMD_OFF_SHIFT,
++	.i2c_cfg_clear_mask = OMAP3430_SREN_MASK | OMAP3430_HSEN_MASK,
+ 	.i2c_cfg_hsen_mask = OMAP3430_HSEN_MASK,
+ 	.i2c_cfg_reg	 = OMAP3_PRM_VC_I2C_CFG_OFFSET,
+ 	.i2c_mcode_mask	 = OMAP3430_MCODE_MASK,
+diff --git a/arch/arm/mach-omap2/vc44xx_data.c b/arch/arm/mach-omap2/vc44xx_data.c
+index 085e5d6a04fd..2abd5fa8a697 100644
+--- a/arch/arm/mach-omap2/vc44xx_data.c
++++ b/arch/arm/mach-omap2/vc44xx_data.c
+@@ -42,6 +42,7 @@ static const struct omap_vc_common omap4_vc_common = {
+ 	.cmd_ret_shift = OMAP4430_RET_SHIFT,
+ 	.cmd_off_shift = OMAP4430_OFF_SHIFT,
+ 	.i2c_cfg_reg = OMAP4_PRM_VC_CFG_I2C_MODE_OFFSET,
++	.i2c_cfg_clear_mask = OMAP4430_SRMODEEN_MASK | OMAP4430_HSMODEEN_MASK,
+ 	.i2c_cfg_hsen_mask = OMAP4430_HSMODEEN_MASK,
+ 	.i2c_mcode_mask	 = OMAP4430_HSMCODE_MASK,
+ };
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index e1268f905026..f412b53ed268 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -449,10 +449,21 @@ static inline void emit_udiv(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx)
+ 		return;
+ 	}
+ #endif
+-	if (rm != ARM_R0)
+-		emit(ARM_MOV_R(ARM_R0, rm), ctx);
++
++	/*
++	 * For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4
++	 * (r_A) and rn is ARM_R0 (r_scratch) so load rn first into
++	 * ARM_R1 to avoid accidentally overwriting ARM_R0 with rm
++	 * before using it as a source for ARM_R1.
++	 *
++	 * For BPF_ALU | BPF_DIV | BPF_X rm is ARM_R4 (r_A) and rn is
++	 * ARM_R5 (r_X) so there are no particular register overlap
++	 * issues.
++	 */
+ 	if (rn != ARM_R1)
+ 		emit(ARM_MOV_R(ARM_R1, rn), ctx);
++	if (rm != ARM_R0)
++		emit(ARM_MOV_R(ARM_R0, rm), ctx);
+ 
+ 	ctx->seen |= SEEN_CALL;
+ 	emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
+diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
+index cf87de3fc390..64b611782ef0 100644
+--- a/arch/x86/include/asm/spinlock.h
++++ b/arch/x86/include/asm/spinlock.h
+@@ -169,7 +169,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
+ 	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
+ 
+ 	tmp.head &= ~TICKET_SLOWPATH_FLAG;
+-	return (tmp.tail - tmp.head) > TICKET_LOCK_INC;
++	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
+ }
+ #define arch_spin_is_contended	arch_spin_is_contended
+ 
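The cast matters because __ticket_t is a narrow unsigned type (u8 or u16): in tmp.tail - tmp.head both operands promote to int, so once the ticket counters wrap the difference goes negative instead of wrapping modulo the ticket width, and the contention test misfires. A standalone illustration of the promotion pitfall:

	unsigned char head = 2, tail = 1;		/* tail has wrapped around */

	int broken = (tail - head) > 1;			/* int math: -1 > 1 is false */
	int fixed  = (unsigned char)(tail - head) > 1;	/* 255 > 1 is true */
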
+diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
+index e4695985f9de..d93963340c3c 100644
+--- a/arch/x86/pci/acpi.c
++++ b/arch/x86/pci/acpi.c
+@@ -325,6 +325,26 @@ static void release_pci_root_info(struct pci_host_bridge *bridge)
+ 	kfree(info);
+ }
+ 
++/*
++ * An IO port or MMIO resource assigned to a PCI host bridge may be
++ * consumed by the host bridge itself or available to its child
++ * bus/devices. The ACPI specification defines a bit (Producer/Consumer)
++ * to tell whether the resource is consumed by the host bridge itself,
++ * but firmware hasn't used that bit consistently, so we can't rely on it.
++ *
++ * On x86 and IA64 platforms, all IO port and MMIO resources are assumed
++ * to be available to child bus/devices except one special case:
++ *     IO port [0xCF8-0xCFF] is consumed by the host bridge itself
++ *     to access PCI configuration space.
++ *
++ * So explicitly filter out PCI CFG IO ports[0xCF8-0xCFF].
++ */
++static bool resource_is_pcicfg_ioport(struct resource *res)
++{
++	return (res->flags & IORESOURCE_IO) &&
++		res->start == 0xCF8 && res->end == 0xCFF;
++}
++
+ static void probe_pci_root_info(struct pci_root_info *info,
+ 				struct acpi_device *device,
+ 				int busnum, int domain,
+@@ -346,8 +366,8 @@ static void probe_pci_root_info(struct pci_root_info *info,
+ 			"no IO and memory resources present in _CRS\n");
+ 	else
+ 		resource_list_for_each_entry_safe(entry, tmp, list) {
+-			if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
+-			    (entry->res->flags & IORESOURCE_DISABLED))
++			if ((entry->res->flags & IORESOURCE_DISABLED) ||
++			    resource_is_pcicfg_ioport(entry->res))
+ 				resource_list_destroy_entry(entry);
+ 			else
+ 				entry->res->name = info->name;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 794c3e7f01cf..66406474f0c4 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -552,6 +552,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ 		q->queue_lock = &q->__queue_lock;
+ 	spin_unlock_irq(lock);
+ 
++	bdi_destroy(&q->backing_dev_info);
++
+ 	/* @q is and will stay empty, shutdown and put */
+ 	blk_put_queue(q);
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 33c428530193..5c39703e644f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -675,8 +675,11 @@ static void blk_mq_rq_timer(unsigned long priv)
+ 		data.next = blk_rq_timeout(round_jiffies_up(data.next));
+ 		mod_timer(&q->timeout, data.next);
+ 	} else {
+-		queue_for_each_hw_ctx(q, hctx, i)
+-			blk_mq_tag_idle(hctx);
++		queue_for_each_hw_ctx(q, hctx, i) {
++			/* the hctx may be unmapped, so check it here */
++			if (blk_mq_hw_queue_mapped(hctx))
++				blk_mq_tag_idle(hctx);
++		}
+ 	}
+ }
+ 
+@@ -1570,22 +1573,6 @@ static int blk_mq_hctx_cpu_offline(struct blk_mq_hw_ctx *hctx, int cpu)
+ 	return NOTIFY_OK;
+ }
+ 
+-static int blk_mq_hctx_cpu_online(struct blk_mq_hw_ctx *hctx, int cpu)
+-{
+-	struct request_queue *q = hctx->queue;
+-	struct blk_mq_tag_set *set = q->tag_set;
+-
+-	if (set->tags[hctx->queue_num])
+-		return NOTIFY_OK;
+-
+-	set->tags[hctx->queue_num] = blk_mq_init_rq_map(set, hctx->queue_num);
+-	if (!set->tags[hctx->queue_num])
+-		return NOTIFY_STOP;
+-
+-	hctx->tags = set->tags[hctx->queue_num];
+-	return NOTIFY_OK;
+-}
+-
+ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 			      unsigned int cpu)
+ {
+@@ -1593,8 +1580,11 @@ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 
+ 	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
+ 		return blk_mq_hctx_cpu_offline(hctx, cpu);
+-	else if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN)
+-		return blk_mq_hctx_cpu_online(hctx, cpu);
++
++	/*
++	 * In case of CPU online, tags may be reallocated
++	 * in blk_mq_map_swqueue() after mapping is updated.
++	 */
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -1776,6 +1766,7 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 	unsigned int i;
+ 	struct blk_mq_hw_ctx *hctx;
+ 	struct blk_mq_ctx *ctx;
++	struct blk_mq_tag_set *set = q->tag_set;
+ 
+ 	queue_for_each_hw_ctx(q, hctx, i) {
+ 		cpumask_clear(hctx->cpumask);
+@@ -1802,16 +1793,20 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 		 * disable it and free the request entries.
+ 		 */
+ 		if (!hctx->nr_ctx) {
+-			struct blk_mq_tag_set *set = q->tag_set;
+-
+ 			if (set->tags[i]) {
+ 				blk_mq_free_rq_map(set, set->tags[i], i);
+ 				set->tags[i] = NULL;
+-				hctx->tags = NULL;
+ 			}
++			hctx->tags = NULL;
+ 			continue;
+ 		}
+ 
++		/* unmapped hw queue can be remapped after CPU topo changed */
++		if (!set->tags[i])
++			set->tags[i] = blk_mq_init_rq_map(set, i);
++		hctx->tags = set->tags[i];
++		WARN_ON(!hctx->tags);
++
+ 		/*
+ 		 * Initialize batch roundrobin counts
+ 		 */
+@@ -2075,9 +2070,16 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
+ 	 */
+ 	list_for_each_entry(q, &all_q_list, all_q_node)
+ 		blk_mq_freeze_queue_start(q);
+-	list_for_each_entry(q, &all_q_list, all_q_node)
++	list_for_each_entry(q, &all_q_list, all_q_node) {
+ 		blk_mq_freeze_queue_wait(q);
+ 
++		/*
++		 * timeout handler can't touch hw queue during the
++		 * reinitialization
++		 */
++		del_timer_sync(&q->timeout);
++	}
++
+ 	list_for_each_entry(q, &all_q_list, all_q_node)
+ 		blk_mq_queue_reinit(q);
+ 
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index faaf36ade7eb..2b8fd302f677 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -522,8 +522,6 @@ static void blk_release_queue(struct kobject *kobj)
+ 
+ 	blk_trace_shutdown(q);
+ 
+-	bdi_destroy(&q->backing_dev_info);
+-
+ 	ida_simple_remove(&blk_queue_ida, q->id);
+ 	call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ }
+diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c
+index b193f8425999..ff6d8adc9cda 100644
+--- a/drivers/acpi/acpi_pnp.c
++++ b/drivers/acpi/acpi_pnp.c
+@@ -304,6 +304,8 @@ static const struct acpi_device_id acpi_pnp_device_ids[] = {
+ 	{"PNPb006"},
+ 	/* cs423x-pnpbios */
+ 	{"CSC0100"},
++	{"CSC0103"},
++	{"CSC0110"},
+ 	{"CSC0000"},
+ 	{"GIM0100"},		/* Guillemot Turtlebeach something appears to be cs4232 compatible */
+ 	/* es18xx-pnpbios */
+diff --git a/drivers/acpi/acpica/acmacros.h b/drivers/acpi/acpica/acmacros.h
+index cf607fe69dbd..c240bdf824f2 100644
+--- a/drivers/acpi/acpica/acmacros.h
++++ b/drivers/acpi/acpica/acmacros.h
+@@ -63,23 +63,12 @@
+ #define ACPI_SET64(ptr, val)            (*ACPI_CAST64 (ptr) = (u64) (val))
+ 
+ /*
+- * printf() format helpers. These macros are workarounds for the difficulties
++ * printf() format helper. This macro is a workaround for the difficulties
+  * with emitting 64-bit integers and 64-bit pointers with the same code
+  * for both 32-bit and 64-bit hosts.
+  */
+ #define ACPI_FORMAT_UINT64(i)           ACPI_HIDWORD(i), ACPI_LODWORD(i)
+ 
+-#if ACPI_MACHINE_WIDTH == 64
+-#define ACPI_FORMAT_NATIVE_UINT(i)      ACPI_FORMAT_UINT64(i)
+-#define ACPI_FORMAT_TO_UINT(i)          ACPI_FORMAT_UINT64(i)
+-#define ACPI_PRINTF_UINT                 "0x%8.8X%8.8X"
+-
+-#else
+-#define ACPI_FORMAT_NATIVE_UINT(i)      0, (u32) (i)
+-#define ACPI_FORMAT_TO_UINT(i)          (u32) (i)
+-#define ACPI_PRINTF_UINT                 "0x%8.8X"
+-#endif
+-
+ /*
+  * Macros for moving data around to/from buffers that are possibly unaligned.
+  * If the hardware supports the transfer of unaligned data, just do the store.
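
[Note: the acmacros.h hunk above drops ACPI_FORMAT_NATIVE_UINT/ACPI_FORMAT_TO_UINT and standardizes on ACPI_FORMAT_UINT64, which splits a 64-bit value into two 32-bit words so one "%8.8X%8.8X" format string prints identically on 32- and 64-bit hosts, unlike %p, whose zero-padding varies. A standalone sketch of the idiom — the macro names below are illustrative, not the ACPICA definitions:

#include <stdio.h>
#include <stdint.h>

#define HIDWORD(v) ((unsigned int)((uint64_t)(v) >> 32))
#define LODWORD(v) ((unsigned int)((uint64_t)(v) & 0xFFFFFFFFu))
#define FORMAT_UINT64(v) HIDWORD(v), LODWORD(v)

int main(void)
{
	uint64_t addr = 0x00000000FEE00000ULL;

	/* Prints "Address 00000000FEE00000" on any host word size. */
	printf("Address %8.8X%8.8X\n", FORMAT_UINT64(addr));
	return 0;
}

The rest of the ACPICA hunks below are the mechanical conversion of every %p-based debug print to this one helper.]
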
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index 77244182ff02..ea0cc4e08f80 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -446,7 +446,7 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
+ 			  obj_desc,
+-			  ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
++			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
+ 	/* Now the address and length are valid for this opregion */
+@@ -539,13 +539,12 @@ acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
+ 		return_ACPI_STATUS(AE_NOT_EXIST);
+ 	}
+ 
+-	obj_desc->region.address =
+-	    (acpi_physical_address) ACPI_TO_INTEGER(table);
++	obj_desc->region.address = ACPI_PTR_TO_PHYSADDR(table);
+ 	obj_desc->region.length = table->length;
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
+ 			  obj_desc,
+-			  ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
++			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
+ 	/* Now the address and length are valid for this opregion */
+diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
+index 9abace3401f9..2ba28a63fb68 100644
+--- a/drivers/acpi/acpica/evregion.c
++++ b/drivers/acpi/acpica/evregion.c
+@@ -272,7 +272,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
+ 			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
+ 			  &region_obj->region.handler->address_space, handler,
+-			  ACPI_FORMAT_NATIVE_UINT(address),
++			  ACPI_FORMAT_UINT64(address),
+ 			  acpi_ut_get_region_name(region_obj->region.
+ 						  space_id)));
+ 
+diff --git a/drivers/acpi/acpica/exdump.c b/drivers/acpi/acpica/exdump.c
+index 7c213b6b6472..1da52bef632e 100644
+--- a/drivers/acpi/acpica/exdump.c
++++ b/drivers/acpi/acpica/exdump.c
+@@ -767,8 +767,8 @@ void acpi_ex_dump_operand(union acpi_operand_object *obj_desc, u32 depth)
+ 			acpi_os_printf("\n");
+ 		} else {
+ 			acpi_os_printf(" base %8.8X%8.8X Length %X\n",
+-				       ACPI_FORMAT_NATIVE_UINT(obj_desc->region.
+-							       address),
++				       ACPI_FORMAT_UINT64(obj_desc->region.
++							  address),
+ 				       obj_desc->region.length);
+ 		}
+ 		break;
+diff --git a/drivers/acpi/acpica/exfldio.c b/drivers/acpi/acpica/exfldio.c
+index 49479927e7f7..725a3746a2df 100644
+--- a/drivers/acpi/acpica/exfldio.c
++++ b/drivers/acpi/acpica/exfldio.c
+@@ -263,17 +263,15 @@ acpi_ex_access_region(union acpi_operand_object *obj_desc,
+ 	}
+ 
+ 	ACPI_DEBUG_PRINT_RAW((ACPI_DB_BFIELD,
+-			      " Region [%s:%X], Width %X, ByteBase %X, Offset %X at %p\n",
++			      " Region [%s:%X], Width %X, ByteBase %X, Offset %X at %8.8X%8.8X\n",
+ 			      acpi_ut_get_region_name(rgn_desc->region.
+ 						      space_id),
+ 			      rgn_desc->region.space_id,
+ 			      obj_desc->common_field.access_byte_width,
+ 			      obj_desc->common_field.base_byte_offset,
+-			      field_datum_byte_offset, ACPI_CAST_PTR(void,
+-								     (rgn_desc->
+-								      region.
+-								      address +
+-								      region_offset))));
++			      field_datum_byte_offset,
++			      ACPI_FORMAT_UINT64(rgn_desc->region.address +
++						 region_offset)));
+ 
+ 	/* Invoke the appropriate address_space/op_region handler */
+ 
+diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
+index 0fe188e238ef..b4bbf3150bc1 100644
+--- a/drivers/acpi/acpica/exregion.c
++++ b/drivers/acpi/acpica/exregion.c
+@@ -181,7 +181,7 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 		if (!mem_info->mapped_logical_address) {
+ 			ACPI_ERROR((AE_INFO,
+ 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
+-				    ACPI_FORMAT_NATIVE_UINT(address),
++				    ACPI_FORMAT_UINT64(address),
+ 				    (u32) map_length));
+ 			mem_info->mapped_length = 0;
+ 			return_ACPI_STATUS(AE_NO_MEMORY);
+@@ -202,8 +202,7 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ 			  "System-Memory (width %u) R/W %u Address=%8.8X%8.8X\n",
+-			  bit_width, function,
+-			  ACPI_FORMAT_NATIVE_UINT(address)));
++			  bit_width, function, ACPI_FORMAT_UINT64(address)));
+ 
+ 	/*
+ 	 * Perform the memory read or write
+@@ -318,8 +317,7 @@ acpi_ex_system_io_space_handler(u32 function,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ 			  "System-IO (width %u) R/W %u Address=%8.8X%8.8X\n",
+-			  bit_width, function,
+-			  ACPI_FORMAT_NATIVE_UINT(address)));
++			  bit_width, function, ACPI_FORMAT_UINT64(address)));
+ 
+ 	/* Decode the function parameter */
+ 
+diff --git a/drivers/acpi/acpica/hwvalid.c b/drivers/acpi/acpica/hwvalid.c
+index 2bd33fe56cb3..29033d71417b 100644
+--- a/drivers/acpi/acpica/hwvalid.c
++++ b/drivers/acpi/acpica/hwvalid.c
+@@ -142,17 +142,17 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+ 	byte_width = ACPI_DIV_8(bit_width);
+ 	last_address = address + byte_width - 1;
+ 
+-	ACPI_DEBUG_PRINT((ACPI_DB_IO, "Address %p LastAddress %p Length %X",
+-			  ACPI_CAST_PTR(void, address), ACPI_CAST_PTR(void,
+-								      last_address),
+-			  byte_width));
++	ACPI_DEBUG_PRINT((ACPI_DB_IO,
++			  "Address %8.8X%8.8X LastAddress %8.8X%8.8X Length %X",
++			  ACPI_FORMAT_UINT64(address),
++			  ACPI_FORMAT_UINT64(last_address), byte_width));
+ 
+ 	/* Maximum 16-bit address in I/O space */
+ 
+ 	if (last_address > ACPI_UINT16_MAX) {
+ 		ACPI_ERROR((AE_INFO,
+-			    "Illegal I/O port address/length above 64K: %p/0x%X",
+-			    ACPI_CAST_PTR(void, address), byte_width));
++			    "Illegal I/O port address/length above 64K: %8.8X%8.8X/0x%X",
++			    ACPI_FORMAT_UINT64(address), byte_width));
+ 		return_ACPI_STATUS(AE_LIMIT);
+ 	}
+ 
+@@ -181,8 +181,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+ 
+ 			if (acpi_gbl_osi_data >= port_info->osi_dependency) {
+ 				ACPI_DEBUG_PRINT((ACPI_DB_IO,
+-						  "Denied AML access to port 0x%p/%X (%s 0x%.4X-0x%.4X)",
+-						  ACPI_CAST_PTR(void, address),
++						  "Denied AML access to port 0x%8.8X%8.8X/%X (%s 0x%.4X-0x%.4X)",
++						  ACPI_FORMAT_UINT64(address),
+ 						  byte_width, port_info->name,
+ 						  port_info->start,
+ 						  port_info->end));
+diff --git a/drivers/acpi/acpica/nsdump.c b/drivers/acpi/acpica/nsdump.c
+index 80f097eb7381..d259393505fa 100644
+--- a/drivers/acpi/acpica/nsdump.c
++++ b/drivers/acpi/acpica/nsdump.c
+@@ -271,12 +271,11 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
+ 		switch (type) {
+ 		case ACPI_TYPE_PROCESSOR:
+ 
+-			acpi_os_printf("ID %02X Len %02X Addr %p\n",
++			acpi_os_printf("ID %02X Len %02X Addr %8.8X%8.8X\n",
+ 				       obj_desc->processor.proc_id,
+ 				       obj_desc->processor.length,
+-				       ACPI_CAST_PTR(void,
+-						     obj_desc->processor.
+-						     address));
++				       ACPI_FORMAT_UINT64(obj_desc->processor.
++							  address));
+ 			break;
+ 
+ 		case ACPI_TYPE_DEVICE:
+@@ -347,8 +346,9 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
+ 							       space_id));
+ 			if (obj_desc->region.flags & AOPOBJ_DATA_VALID) {
+ 				acpi_os_printf(" Addr %8.8X%8.8X Len %.4X\n",
+-					       ACPI_FORMAT_NATIVE_UINT
+-					       (obj_desc->region.address),
++					       ACPI_FORMAT_UINT64(obj_desc->
++								  region.
++								  address),
+ 					       obj_desc->region.length);
+ 			} else {
+ 				acpi_os_printf
+diff --git a/drivers/acpi/acpica/tbdata.c b/drivers/acpi/acpica/tbdata.c
+index 6a144957aadd..fd5998b2b46b 100644
+--- a/drivers/acpi/acpica/tbdata.c
++++ b/drivers/acpi/acpica/tbdata.c
+@@ -113,9 +113,9 @@ acpi_tb_acquire_table(struct acpi_table_desc *table_desc,
+ 	case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL:
+ 	case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL:
+ 
+-		table =
+-		    ACPI_CAST_PTR(struct acpi_table_header,
+-				  table_desc->address);
++		table = ACPI_CAST_PTR(struct acpi_table_header,
++				      ACPI_PHYSADDR_TO_PTR(table_desc->
++							   address));
+ 		break;
+ 
+ 	default:
+@@ -214,7 +214,8 @@ acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc,
+ 	case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL:
+ 	case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL:
+ 
+-		table_header = ACPI_CAST_PTR(struct acpi_table_header, address);
++		table_header = ACPI_CAST_PTR(struct acpi_table_header,
++					     ACPI_PHYSADDR_TO_PTR(address));
+ 		if (!table_header) {
+ 			return (AE_NO_MEMORY);
+ 		}
+@@ -398,14 +399,14 @@ acpi_tb_verify_temp_table(struct acpi_table_desc * table_desc, char *signature)
+ 					    table_desc->length);
+ 		if (ACPI_FAILURE(status)) {
+ 			ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY,
+-					"%4.4s " ACPI_PRINTF_UINT
++					"%4.4s 0x%8.8X%8.8X"
+ 					" Attempted table install failed",
+ 					acpi_ut_valid_acpi_name(table_desc->
+ 								signature.
+ 								ascii) ?
+ 					table_desc->signature.ascii : "????",
+-					ACPI_FORMAT_TO_UINT(table_desc->
+-							    address)));
++					ACPI_FORMAT_UINT64(table_desc->
++							   address)));
+ 			goto invalidate_and_exit;
+ 		}
+ 	}
+diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
+index 7fbc2b9dcbbb..7e69bc73bd16 100644
+--- a/drivers/acpi/acpica/tbinstal.c
++++ b/drivers/acpi/acpica/tbinstal.c
+@@ -187,8 +187,9 @@ acpi_tb_install_fixed_table(acpi_physical_address address,
+ 	status = acpi_tb_acquire_temp_table(&new_table_desc, address,
+ 					    ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL);
+ 	if (ACPI_FAILURE(status)) {
+-		ACPI_ERROR((AE_INFO, "Could not acquire table length at %p",
+-			    ACPI_CAST_PTR(void, address)));
++		ACPI_ERROR((AE_INFO,
++			    "Could not acquire table length at %8.8X%8.8X",
++			    ACPI_FORMAT_UINT64(address)));
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+@@ -246,8 +247,9 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 
+ 	status = acpi_tb_acquire_temp_table(&new_table_desc, address, flags);
+ 	if (ACPI_FAILURE(status)) {
+-		ACPI_ERROR((AE_INFO, "Could not acquire table length at %p",
+-			    ACPI_CAST_PTR(void, address)));
++		ACPI_ERROR((AE_INFO,
++			    "Could not acquire table length at %8.8X%8.8X",
++			    ACPI_FORMAT_UINT64(address)));
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+@@ -258,9 +260,10 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 	if (!reload &&
+ 	    acpi_gbl_disable_ssdt_table_install &&
+ 	    ACPI_COMPARE_NAME(&new_table_desc.signature, ACPI_SIG_SSDT)) {
+-		ACPI_INFO((AE_INFO, "Ignoring installation of %4.4s at %p",
+-			   new_table_desc.signature.ascii, ACPI_CAST_PTR(void,
+-									 address)));
++		ACPI_INFO((AE_INFO,
++			   "Ignoring installation of %4.4s at %8.8X%8.8X",
++			   new_table_desc.signature.ascii,
++			   ACPI_FORMAT_UINT64(address)));
+ 		goto release_and_exit;
+ 	}
+ 
+@@ -428,11 +431,11 @@ finish_override:
+ 		return;
+ 	}
+ 
+-	ACPI_INFO((AE_INFO, "%4.4s " ACPI_PRINTF_UINT
+-		   " %s table override, new table: " ACPI_PRINTF_UINT,
++	ACPI_INFO((AE_INFO, "%4.4s 0x%8.8X%8.8X"
++		   " %s table override, new table: 0x%8.8X%8.8X",
+ 		   old_table_desc->signature.ascii,
+-		   ACPI_FORMAT_TO_UINT(old_table_desc->address),
+-		   override_type, ACPI_FORMAT_TO_UINT(new_table_desc.address)));
++		   ACPI_FORMAT_UINT64(old_table_desc->address),
++		   override_type, ACPI_FORMAT_UINT64(new_table_desc.address)));
+ 
+ 	/* We can now uninstall the original table */
+ 
+@@ -516,7 +519,7 @@ void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc)
+ 
+ 	if ((table_desc->flags & ACPI_TABLE_ORIGIN_MASK) ==
+ 	    ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL) {
+-		ACPI_FREE(ACPI_CAST_PTR(void, table_desc->address));
++		ACPI_FREE(ACPI_PHYSADDR_TO_PTR(table_desc->address));
+ 	}
+ 
+ 	table_desc->address = ACPI_PTR_TO_PHYSADDR(NULL);
+diff --git a/drivers/acpi/acpica/tbprint.c b/drivers/acpi/acpica/tbprint.c
+index ef16c06e5091..77ba5c71c6e7 100644
+--- a/drivers/acpi/acpica/tbprint.c
++++ b/drivers/acpi/acpica/tbprint.c
+@@ -127,18 +127,12 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ {
+ 	struct acpi_table_header local_header;
+ 
+-	/*
+-	 * The reason that we use ACPI_PRINTF_UINT and ACPI_FORMAT_TO_UINT is to
+-	 * support both 32-bit and 64-bit hosts/addresses in a consistent manner.
+-	 * The %p specifier does not emit uniform output on all hosts. On some,
+-	 * leading zeros are not supported.
+-	 */
+ 	if (ACPI_COMPARE_NAME(header->signature, ACPI_SIG_FACS)) {
+ 
+ 		/* FACS only has signature and length fields */
+ 
+-		ACPI_INFO((AE_INFO, "%-4.4s " ACPI_PRINTF_UINT " %06X",
+-			   header->signature, ACPI_FORMAT_TO_UINT(address),
++		ACPI_INFO((AE_INFO, "%-4.4s 0x%8.8X%8.8X %06X",
++			   header->signature, ACPI_FORMAT_UINT64(address),
+ 			   header->length));
+ 	} else if (ACPI_VALIDATE_RSDP_SIG(header->signature)) {
+ 
+@@ -149,9 +143,8 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ 					  header)->oem_id, ACPI_OEM_ID_SIZE);
+ 		acpi_tb_fix_string(local_header.oem_id, ACPI_OEM_ID_SIZE);
+ 
+-		ACPI_INFO((AE_INFO,
+-			   "RSDP " ACPI_PRINTF_UINT " %06X (v%.2d %-6.6s)",
+-			   ACPI_FORMAT_TO_UINT(address),
++		ACPI_INFO((AE_INFO, "RSDP 0x%8.8X%8.8X %06X (v%.2d %-6.6s)",
++			   ACPI_FORMAT_UINT64(address),
+ 			   (ACPI_CAST_PTR(struct acpi_table_rsdp, header)->
+ 			    revision >
+ 			    0) ? ACPI_CAST_PTR(struct acpi_table_rsdp,
+@@ -165,9 +158,9 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ 		acpi_tb_cleanup_table_header(&local_header, header);
+ 
+ 		ACPI_INFO((AE_INFO,
+-			   "%-4.4s " ACPI_PRINTF_UINT
++			   "%-4.4s 0x%8.8X%8.8X"
+ 			   " %06X (v%.2d %-6.6s %-8.8s %08X %-4.4s %08X)",
+-			   local_header.signature, ACPI_FORMAT_TO_UINT(address),
++			   local_header.signature, ACPI_FORMAT_UINT64(address),
+ 			   local_header.length, local_header.revision,
+ 			   local_header.oem_id, local_header.oem_table_id,
+ 			   local_header.oem_revision,
+diff --git a/drivers/acpi/acpica/tbxfroot.c b/drivers/acpi/acpica/tbxfroot.c
+index eac52cf14f1a..fa76a3603aa1 100644
+--- a/drivers/acpi/acpica/tbxfroot.c
++++ b/drivers/acpi/acpica/tbxfroot.c
+@@ -142,7 +142,7 @@ acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp * rsdp)
+  *
+  ******************************************************************************/
+ 
+-acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
++acpi_status __init acpi_find_root_pointer(acpi_physical_address * table_address)
+ {
+ 	u8 *table_ptr;
+ 	u8 *mem_rover;
+@@ -200,7 +200,8 @@ acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
+ 			physical_address +=
+ 			    (u32) ACPI_PTR_DIFF(mem_rover, table_ptr);
+ 
+-			*table_address = physical_address;
++			*table_address =
++			    (acpi_physical_address) physical_address;
+ 			return_ACPI_STATUS(AE_OK);
+ 		}
+ 	}
+@@ -233,7 +234,7 @@ acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
+ 		    (ACPI_HI_RSDP_WINDOW_BASE +
+ 		     ACPI_PTR_DIFF(mem_rover, table_ptr));
+ 
+-		*table_address = physical_address;
++		*table_address = (acpi_physical_address) physical_address;
+ 		return_ACPI_STATUS(AE_OK);
+ 	}
+ 
+diff --git a/drivers/acpi/acpica/utaddress.c b/drivers/acpi/acpica/utaddress.c
+index 1279f50da757..911ea8e7fe87 100644
+--- a/drivers/acpi/acpica/utaddress.c
++++ b/drivers/acpi/acpica/utaddress.c
+@@ -107,10 +107,10 @@ acpi_ut_add_address_range(acpi_adr_space_type space_id,
+ 	acpi_gbl_address_range_list[space_id] = range_info;
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_NAMES,
+-			  "\nAdded [%4.4s] address range: 0x%p-0x%p\n",
++			  "\nAdded [%4.4s] address range: 0x%8.8X%8.8X-0x%8.8X%8.8X\n",
+ 			  acpi_ut_get_node_name(range_info->region_node),
+-			  ACPI_CAST_PTR(void, address),
+-			  ACPI_CAST_PTR(void, range_info->end_address)));
++			  ACPI_FORMAT_UINT64(address),
++			  ACPI_FORMAT_UINT64(range_info->end_address)));
+ 
+ 	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+ 	return_ACPI_STATUS(AE_OK);
+@@ -160,15 +160,13 @@ acpi_ut_remove_address_range(acpi_adr_space_type space_id,
+ 			}
+ 
+ 			ACPI_DEBUG_PRINT((ACPI_DB_NAMES,
+-					  "\nRemoved [%4.4s] address range: 0x%p-0x%p\n",
++					  "\nRemoved [%4.4s] address range: 0x%8.8X%8.8X-0x%8.8X%8.8X\n",
+ 					  acpi_ut_get_node_name(range_info->
+ 								region_node),
+-					  ACPI_CAST_PTR(void,
+-							range_info->
+-							start_address),
+-					  ACPI_CAST_PTR(void,
+-							range_info->
+-							end_address)));
++					  ACPI_FORMAT_UINT64(range_info->
++							     start_address),
++					  ACPI_FORMAT_UINT64(range_info->
++							     end_address)));
+ 
+ 			ACPI_FREE(range_info);
+ 			return_VOID;
+@@ -245,16 +243,14 @@ acpi_ut_check_address_range(acpi_adr_space_type space_id,
+ 								  region_node);
+ 
+ 				ACPI_WARNING((AE_INFO,
+-					      "%s range 0x%p-0x%p conflicts with OpRegion 0x%p-0x%p (%s)",
++					      "%s range 0x%8.8X%8.8X-0x%8.8X%8.8X conflicts with OpRegion 0x%8.8X%8.8X-0x%8.8X%8.8X (%s)",
+ 					      acpi_ut_get_region_name(space_id),
+-					      ACPI_CAST_PTR(void, address),
+-					      ACPI_CAST_PTR(void, end_address),
+-					      ACPI_CAST_PTR(void,
+-							    range_info->
+-							    start_address),
+-					      ACPI_CAST_PTR(void,
+-							    range_info->
+-							    end_address),
++					      ACPI_FORMAT_UINT64(address),
++					      ACPI_FORMAT_UINT64(end_address),
++					      ACPI_FORMAT_UINT64(range_info->
++								 start_address),
++					      ACPI_FORMAT_UINT64(range_info->
++								 end_address),
+ 					      pathname));
+ 				ACPI_FREE(pathname);
+ 			}
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 5589a6e2a023..8244f013f210 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -573,7 +573,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_get_resources);
+  * @ares: Input ACPI resource object.
+  * @types: Valid resource types of IORESOURCE_XXX
+  *
+- * This is a hepler function to support acpi_dev_get_resources(), which filters
++ * This is a helper function to support acpi_dev_get_resources(), which filters
+  * ACPI resource objects according to resource types.
+  */
+ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+diff --git a/drivers/acpi/sbshc.c b/drivers/acpi/sbshc.c
+index 26e5b5060523..bf034f8b7c1a 100644
+--- a/drivers/acpi/sbshc.c
++++ b/drivers/acpi/sbshc.c
+@@ -14,6 +14,7 @@
+ #include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/interrupt.h>
++#include <linux/dmi.h>
+ #include "sbshc.h"
+ 
+ #define PREFIX "ACPI: "
+@@ -87,6 +88,8 @@ enum acpi_smb_offset {
+ 	ACPI_SMB_ALARM_DATA = 0x26,	/* 2 bytes alarm data */
+ };
+ 
++static bool macbook;
++
+ static inline int smb_hc_read(struct acpi_smb_hc *hc, u8 address, u8 *data)
+ {
+ 	return ec_read(hc->offset + address, data);
+@@ -132,6 +135,8 @@ static int acpi_smbus_transaction(struct acpi_smb_hc *hc, u8 protocol,
+ 	}
+ 
+ 	mutex_lock(&hc->lock);
++	if (macbook)
++		udelay(5);
+ 	if (smb_hc_read(hc, ACPI_SMB_PROTOCOL, &temp))
+ 		goto end;
+ 	if (temp) {
+@@ -257,12 +262,29 @@ extern int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
+ 			      acpi_handle handle, acpi_ec_query_func func,
+ 			      void *data);
+ 
++static int macbook_dmi_match(const struct dmi_system_id *d)
++{
++	pr_debug("Detected MacBook, enabling workaround\n");
++	macbook = true;
++	return 0;
++}
++
++static struct dmi_system_id acpi_smbus_dmi_table[] = {
++	{ macbook_dmi_match, "Apple MacBook", {
++	  DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
++	  DMI_MATCH(DMI_PRODUCT_NAME, "MacBook") },
++	},
++	{ },
++};
++
+ static int acpi_smbus_hc_add(struct acpi_device *device)
+ {
+ 	int status;
+ 	unsigned long long val;
+ 	struct acpi_smb_hc *hc;
+ 
++	dmi_check_system(acpi_smbus_dmi_table);
++
+ 	if (!device)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d1f168b73634..773e964f14d9 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1672,8 +1672,8 @@ out:
+ 
+ static void loop_remove(struct loop_device *lo)
+ {
+-	del_gendisk(lo->lo_disk);
+ 	blk_cleanup_queue(lo->lo_queue);
++	del_gendisk(lo->lo_disk);
+ 	blk_mq_free_tag_set(&lo->tag_set);
+ 	put_disk(lo->lo_disk);
+ 	kfree(lo);
+diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
+index 7722ed53bd65..af3bc7a8033b 100644
+--- a/drivers/gpio/gpiolib-sysfs.c
++++ b/drivers/gpio/gpiolib-sysfs.c
+@@ -551,6 +551,7 @@ static struct class gpio_class = {
+  */
+ int gpiod_export(struct gpio_desc *desc, bool direction_may_change)
+ {
++	struct gpio_chip	*chip;
+ 	unsigned long		flags;
+ 	int			status;
+ 	const char		*ioname = NULL;
+@@ -568,8 +569,16 @@ int gpiod_export(struct gpio_desc *desc, bool direction_may_change)
+ 		return -EINVAL;
+ 	}
+ 
++	chip = desc->chip;
++
+ 	mutex_lock(&sysfs_lock);
+ 
++	/* check if chip is being removed */
++	if (!chip || !chip->exported) {
++		status = -ENODEV;
++		goto fail_unlock;
++	}
++
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 	if (!test_bit(FLAG_REQUESTED, &desc->flags) ||
+ 	     test_bit(FLAG_EXPORT, &desc->flags)) {
+@@ -783,12 +792,15 @@ void gpiochip_unexport(struct gpio_chip *chip)
+ {
+ 	int			status;
+ 	struct device		*dev;
++	struct gpio_desc *desc;
++	unsigned int i;
+ 
+ 	mutex_lock(&sysfs_lock);
+ 	dev = class_find_device(&gpio_class, NULL, chip, match_export);
+ 	if (dev) {
+ 		put_device(dev);
+ 		device_unregister(dev);
++		/* prevent further gpiod exports */
+ 		chip->exported = false;
+ 		status = 0;
+ 	} else
+@@ -797,6 +809,13 @@ void gpiochip_unexport(struct gpio_chip *chip)
+ 
+ 	if (status)
+ 		chip_dbg(chip, "%s: status %d\n", __func__, status);
++
++	/* unregister gpiod class devices owned by sysfs */
++	for (i = 0; i < chip->ngpio; i++) {
++		desc = &chip->desc[i];
++		if (test_and_clear_bit(FLAG_SYSFS, &desc->flags))
++			gpiod_free(desc);
++	}
+ }
+ 
+ static int __init gpiolib_sysfs_init(void)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index d8135adb2238..39762a7d2ec7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -429,9 +429,10 @@ static int unregister_process_nocpsch(struct device_queue_manager *dqm,
+ 
+ 	BUG_ON(!dqm || !qpd);
+ 
+-	BUG_ON(!list_empty(&qpd->queues_list));
++	pr_debug("In func %s\n", __func__);
+ 
+-	pr_debug("kfd: In func %s\n", __func__);
++	pr_debug("qpd->queues_list is %s\n",
++			list_empty(&qpd->queues_list) ? "empty" : "not empty");
+ 
+ 	retval = 0;
+ 	mutex_lock(&dqm->lock);
+@@ -878,6 +879,8 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
+ 		return -ENOMEM;
+ 	}
+ 
++	init_sdma_vm(dqm, q, qpd);
++
+ 	retval = mqd->init_mqd(mqd, &q->mqd, &q->mqd_mem_obj,
+ 				&q->gart_mqd_addr, &q->properties);
+ 	if (retval != 0)
+diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
+index 10574a0c3a55..5769db4f51f3 100644
+--- a/drivers/gpu/drm/drm_irq.c
++++ b/drivers/gpu/drm/drm_irq.c
+@@ -131,12 +131,11 @@ static void drm_update_vblank_count(struct drm_device *dev, int crtc)
+ 
+ 	/* Reinitialize corresponding vblank timestamp if high-precision query
+ 	 * available. Skip this step if query unsupported or failed. Will
+-	 * reinitialize delayed at next vblank interrupt in that case.
++	 * reinitialize delayed at next vblank interrupt in that case and
++	 * assign 0 for now, to mark the vblanktimestamp as invalid.
+ 	 */
+-	if (rc) {
+-		tslot = atomic_read(&vblank->count) + diff;
+-		vblanktimestamp(dev, crtc, tslot) = t_vblank;
+-	}
++	tslot = atomic_read(&vblank->count) + diff;
++	vblanktimestamp(dev, crtc, tslot) = rc ? t_vblank : (struct timeval) {0, 0};
+ 
+ 	smp_mb__before_atomic();
+ 	atomic_add(diff, &vblank->count);
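
[Note: the drm_irq.c hunk above stops skipping the timestamp slot when the high-precision vblank query fails: the slot is now always written, with a zeroed timeval serving as an explicit "invalid" marker instead of leaving a stale value behind. A small sketch of the pattern, with simplified types:

#include <stdio.h>
#include <stdbool.h>

struct timeval_s { long tv_sec, tv_usec; };

static struct timeval_s slots[8];   /* stand-in for the vblank time ring */

static void update_slot(unsigned int idx, bool query_ok, struct timeval_s t)
{
	/* On query failure store {0, 0} so readers can detect staleness
	 * instead of consuming a leftover value from a previous frame. */
	slots[idx & 7] = query_ok ? t : (struct timeval_s){ 0, 0 };
}

int main(void)
{
	update_slot(3, false, (struct timeval_s){ 42, 7 });
	printf("slot3 %ld.%06ld (invalid if 0.000000)\n",
	       slots[3].tv_sec, slots[3].tv_usec);
	return 0;
}
]
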
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index a74aaf9242b9..88b36a9173c9 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1176,7 +1176,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ 
+ 	pipe_config->has_dp_encoder = true;
+ 	pipe_config->has_drrs = false;
+-	pipe_config->has_audio = intel_dp->has_audio;
++	pipe_config->has_audio = intel_dp->has_audio && port != PORT_A;
+ 
+ 	if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) {
+ 		intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
+@@ -2026,8 +2026,8 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
+ 	int dotclock;
+ 
+ 	tmp = I915_READ(intel_dp->output_reg);
+-	if (tmp & DP_AUDIO_OUTPUT_ENABLE)
+-		pipe_config->has_audio = true;
++
++	pipe_config->has_audio = tmp & DP_AUDIO_OUTPUT_ENABLE && port != PORT_A;
+ 
+ 	if ((port == PORT_A) || !HAS_PCH_CPT(dev)) {
+ 		if (tmp & DP_SYNC_HS_HIGH)
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index 071b96d6e146..fbc2a83795fa 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -812,12 +812,28 @@ static int intel_dual_link_lvds_callback(const struct dmi_system_id *id)
+ static const struct dmi_system_id intel_dual_link_lvds[] = {
+ 	{
+ 		.callback = intel_dual_link_lvds_callback,
+-		.ident = "Apple MacBook Pro (Core i5/i7 Series)",
++		.ident = "Apple MacBook Pro 15\" (2010)",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro6,2"),
++		},
++	},
++	{
++		.callback = intel_dual_link_lvds_callback,
++		.ident = "Apple MacBook Pro 15\" (2011)",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro8,2"),
+ 		},
+ 	},
++	{
++		.callback = intel_dual_link_lvds_callback,
++		.ident = "Apple MacBook Pro 15\" (2012)",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,1"),
++		},
++	},
+ 	{ }	/* terminating entry */
+ };
+ 
+@@ -847,6 +863,11 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
+ 	if (i915.lvds_channel_mode > 0)
+ 		return i915.lvds_channel_mode == 2;
+ 
++	/* single channel LVDS is limited to 112 MHz */
++	if (lvds_encoder->attached_connector->base.panel.fixed_mode->clock
++	    > 112999)
++		return true;
++
+ 	if (dmi_check_system(intel_dual_link_lvds))
+ 		return true;
+ 
+@@ -1104,6 +1125,8 @@ void intel_lvds_init(struct drm_device *dev)
+ out:
+ 	mutex_unlock(&dev->mode_config.mutex);
+ 
++	intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
++
+ 	lvds_encoder->is_dual_link = compute_is_dual_link_lvds(lvds_encoder);
+ 	DRM_DEBUG_KMS("detected %s-link lvds configuration\n",
+ 		      lvds_encoder->is_dual_link ? "dual" : "single");
+@@ -1118,7 +1141,6 @@ out:
+ 	}
+ 	drm_connector_register(connector);
+ 
+-	intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
+ 	intel_panel_setup_backlight(connector, INVALID_PIPE);
+ 
+ 	return;
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
+index c0ecd128b14b..7348f222684d 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.c
++++ b/drivers/gpu/drm/radeon/radeon_asic.c
+@@ -1180,7 +1180,7 @@ static struct radeon_asic rs780_asic = {
+ static struct radeon_asic_ring rv770_uvd_ring = {
+ 	.ib_execute = &uvd_v1_0_ib_execute,
+ 	.emit_fence = &uvd_v2_2_fence_emit,
+-	.emit_semaphore = &uvd_v1_0_semaphore_emit,
++	.emit_semaphore = &uvd_v2_2_semaphore_emit,
+ 	.cs_parse = &radeon_uvd_cs_parse,
+ 	.ring_test = &uvd_v1_0_ring_test,
+ 	.ib_test = &uvd_v1_0_ib_test,
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
+index 72bdd3bf0d8e..c2fd3a5e6c55 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.h
++++ b/drivers/gpu/drm/radeon/radeon_asic.h
+@@ -919,6 +919,10 @@ void uvd_v1_0_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
+ int uvd_v2_2_resume(struct radeon_device *rdev);
+ void uvd_v2_2_fence_emit(struct radeon_device *rdev,
+ 			 struct radeon_fence *fence);
++bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev,
++			     struct radeon_ring *ring,
++			     struct radeon_semaphore *semaphore,
++			     bool emit_wait);
+ 
+ /* uvd v3.1 */
+ bool uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b7d33a13db9f..b7c6bb69f3c7 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -464,6 +464,10 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 		return;
+ 
+ 	rdev = connector->encoder->dev->dev_private;
++
++	if (!radeon_audio_chipset_supported(rdev))
++		return;
++
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index b292aca0f342..edafd3c2b170 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -591,8 +591,7 @@ static void radeon_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
+ {
+ 	struct radeon_device *rdev = radeon_get_rdev(ttm->bdev);
+ 	struct radeon_ttm_tt *gtt = (void *)ttm;
+-	struct scatterlist *sg;
+-	int i;
++	struct sg_page_iter sg_iter;
+ 
+ 	int write = !(gtt->userflags & RADEON_GEM_USERPTR_READONLY);
+ 	enum dma_data_direction direction = write ?
+@@ -605,9 +604,8 @@ static void radeon_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
+ 	/* free the sg table and pages again */
+ 	dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
+ 
+-	for_each_sg(ttm->sg->sgl, sg, ttm->sg->nents, i) {
+-		struct page *page = sg_page(sg);
+-
++	for_each_sg_page(ttm->sg->sgl, &sg_iter, ttm->sg->nents, 0) {
++		struct page *page = sg_page_iter_page(&sg_iter);
+ 		if (!(gtt->userflags & RADEON_GEM_USERPTR_READONLY))
+ 			set_page_dirty(page);
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
+index c10b2aec6450..cd630287cf0a 100644
+--- a/drivers/gpu/drm/radeon/radeon_uvd.c
++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
+@@ -396,6 +396,29 @@ static int radeon_uvd_cs_msg_decode(uint32_t *msg, unsigned buf_sizes[])
+ 	return 0;
+ }
+ 
++static int radeon_uvd_validate_codec(struct radeon_cs_parser *p,
++				     unsigned stream_type)
++{
++	switch (stream_type) {
++	case 0: /* H264 */
++	case 1: /* VC1 */
++		/* always supported */
++		return 0;
++
++	case 3: /* MPEG2 */
++	case 4: /* MPEG4 */
++		/* only since UVD 3 */
++		if (p->rdev->family >= CHIP_PALM)
++			return 0;
++
++		/* fall through */
++	default:
++		DRM_ERROR("UVD codec not supported by hardware %d!\n",
++			  stream_type);
++		return -EINVAL;
++	}
++}
++
+ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo,
+ 			     unsigned offset, unsigned buf_sizes[])
+ {
+@@ -436,50 +459,70 @@ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (msg_type == 1) {
+-		/* it's a decode msg, calc buffer sizes */
+-		r = radeon_uvd_cs_msg_decode(msg, buf_sizes);
+-		/* calc image size (width * height) */
+-		img_size = msg[6] * msg[7];
++	switch (msg_type) {
++	case 0:
++		/* it's a create msg, calc image size (width * height) */
++		img_size = msg[7] * msg[8];
++
++		r = radeon_uvd_validate_codec(p, msg[4]);
++		radeon_bo_kunmap(bo);
++		if (r)
++			return r;
++
++		/* try to alloc a new handle */
++		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&p->rdev->uvd.handles[i]) == handle) {
++				DRM_ERROR("Handle 0x%x already in use!\n", handle);
++				return -EINVAL;
++			}
++
++			if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) {
++				p->rdev->uvd.filp[i] = p->filp;
++				p->rdev->uvd.img_size[i] = img_size;
++				return 0;
++			}
++		}
++
++		DRM_ERROR("No more free UVD handles!\n");
++		return -EINVAL;
++
++	case 1:
++		/* it's a decode msg, validate codec and calc buffer sizes */
++		r = radeon_uvd_validate_codec(p, msg[4]);
++		if (!r)
++			r = radeon_uvd_cs_msg_decode(msg, buf_sizes);
+ 		radeon_bo_kunmap(bo);
+ 		if (r)
+ 			return r;
+ 
+-	} else if (msg_type == 2) {
++		/* validate the handle */
++		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&p->rdev->uvd.handles[i]) == handle) {
++				if (p->rdev->uvd.filp[i] != p->filp) {
++					DRM_ERROR("UVD handle collision detected!\n");
++					return -EINVAL;
++				}
++				return 0;
++			}
++		}
++
++		DRM_ERROR("Invalid UVD handle 0x%x!\n", handle);
++		return -ENOENT;
++
++	case 2:
+ 		/* it's a destroy msg, free the handle */
+ 		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i)
+ 			atomic_cmpxchg(&p->rdev->uvd.handles[i], handle, 0);
+ 		radeon_bo_kunmap(bo);
+ 		return 0;
+-	} else {
+-		/* it's a create msg, calc image size (width * height) */
+-		img_size = msg[7] * msg[8];
+-		radeon_bo_kunmap(bo);
+ 
+-		if (msg_type != 0) {
+-			DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
+-			return -EINVAL;
+-		}
+-
+-		/* it's a create msg, no special handling needed */
+-	}
+-
+-	/* create or decode, validate the handle */
+-	for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
+-		if (atomic_read(&p->rdev->uvd.handles[i]) == handle)
+-			return 0;
+-	}
++	default:
+ 
+-	/* handle not found try to alloc a new one */
+-	for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
+-		if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) {
+-			p->rdev->uvd.filp[i] = p->filp;
+-			p->rdev->uvd.img_size[i] = img_size;
+-			return 0;
+-		}
++		DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
++		return -EINVAL;
+ 	}
+ 
+-	DRM_ERROR("No more free UVD handles!\n");
++	BUG();
+ 	return -EINVAL;
+ }
+ 
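[Note: the radeon_uvd.c rework above turns the msg_type if/else chain into an explicit switch and validates the requested codec before touching the handle tables; MPEG2/MPEG4 are only accepted from UVD 3 (CHIP_PALM) onward. A compressed sketch of that dispatch, with hypothetical constants in place of the driver's chip checks:

#include <stdio.h>

enum { CODEC_H264 = 0, CODEC_VC1 = 1, CODEC_MPEG2 = 3, CODEC_MPEG4 = 4 };

/* Hypothetical stand-in for p->rdev->family >= CHIP_PALM. */
static int uvd_generation = 2;

static int validate_codec(unsigned int stream_type)
{
	switch (stream_type) {
	case CODEC_H264:
	case CODEC_VC1:
		return 0;                     /* always supported */
	case CODEC_MPEG2:
	case CODEC_MPEG4:
		if (uvd_generation >= 3)      /* only since UVD 3 */
			return 0;
		/* fall through */
	default:
		fprintf(stderr, "UVD codec %u not supported\n", stream_type);
		return -1;
	}
}

int main(void)
{
	printf("h264: %d, mpeg2 on UVD2: %d\n",
	       validate_codec(CODEC_H264), validate_codec(CODEC_MPEG2));
	return 0;
}
]
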
+diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
+index 976fe432f4e2..7ed561225007 100644
+--- a/drivers/gpu/drm/radeon/radeon_vce.c
++++ b/drivers/gpu/drm/radeon/radeon_vce.c
+@@ -493,18 +493,27 @@ int radeon_vce_cs_reloc(struct radeon_cs_parser *p, int lo, int hi,
+  *
+  * @p: parser context
+  * @handle: handle to validate
++ * @allocated: allocated a new handle?
+  *
+  * Validates the handle and returns the found session index, or -EINVAL
+  * when we don't have another free session index.
+  */
+-int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
++static int radeon_vce_validate_handle(struct radeon_cs_parser *p,
++				      uint32_t handle, bool *allocated)
+ {
+ 	unsigned i;
+ 
++	*allocated = false;
++
+ 	/* validate the handle */
+ 	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
+-		if (atomic_read(&p->rdev->vce.handles[i]) == handle)
++		if (atomic_read(&p->rdev->vce.handles[i]) == handle) {
++			if (p->rdev->vce.filp[i] != p->filp) {
++				DRM_ERROR("VCE handle collision detected!\n");
++				return -EINVAL;
++			}
+ 			return i;
++		}
+ 	}
+ 
+ 	/* handle not found try to alloc a new one */
+@@ -512,6 +521,7 @@ int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
+ 		if (!atomic_cmpxchg(&p->rdev->vce.handles[i], 0, handle)) {
+ 			p->rdev->vce.filp[i] = p->filp;
+ 			p->rdev->vce.img_size[i] = 0;
++			*allocated = true;
+ 			return i;
+ 		}
+ 	}
+@@ -529,10 +539,10 @@ int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
+ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ {
+ 	int session_idx = -1;
+-	bool destroyed = false;
++	bool destroyed = false, created = false, allocated = false;
+ 	uint32_t tmp, handle = 0;
+ 	uint32_t *size = &tmp;
+-	int i, r;
++	int i, r = 0;
+ 
+ 	while (p->idx < p->chunk_ib->length_dw) {
+ 		uint32_t len = radeon_get_ib_value(p, p->idx);
+@@ -540,18 +550,21 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 
+ 		if ((len < 8) || (len & 3)) {
+ 			DRM_ERROR("invalid VCE command length (%d)!\n", len);
+-                	return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		if (destroyed) {
+ 			DRM_ERROR("No other command allowed after destroy!\n");
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		switch (cmd) {
+ 		case 0x00000001: // session
+ 			handle = radeon_get_ib_value(p, p->idx + 2);
+-			session_idx = radeon_vce_validate_handle(p, handle);
++			session_idx = radeon_vce_validate_handle(p, handle,
++								 &allocated);
+ 			if (session_idx < 0)
+ 				return session_idx;
+ 			size = &p->rdev->vce.img_size[session_idx];
+@@ -561,6 +574,13 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			break;
+ 
+ 		case 0x01000001: // create
++			created = true;
++			if (!allocated) {
++				DRM_ERROR("Handle already in use!\n");
++				r = -EINVAL;
++				goto out;
++			}
++
+ 			*size = radeon_get_ib_value(p, p->idx + 8) *
+ 				radeon_get_ib_value(p, p->idx + 10) *
+ 				8 * 3 / 2;
+@@ -577,12 +597,12 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 10, p->idx + 9,
+ 						*size);
+ 			if (r)
+-				return r;
++				goto out;
+ 
+ 			r = radeon_vce_cs_reloc(p, p->idx + 12, p->idx + 11,
+ 						*size / 3);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x02000001: // destroy
+@@ -593,7 +613,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						*size * 2);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x05000004: // video bitstream buffer
+@@ -601,36 +621,47 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						tmp);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x05000005: // feedback buffer
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						4096);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		default:
+ 			DRM_ERROR("invalid VCE command (0x%x)!\n", cmd);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		if (session_idx == -1) {
+ 			DRM_ERROR("no session command at start of IB\n");
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		p->idx += len / 4;
+ 	}
+ 
+-	if (destroyed) {
+-		/* IB contains a destroy msg, free the handle */
++	if (allocated && !created) {
++		DRM_ERROR("New session without create command!\n");
++		r = -ENOENT;
++	}
++
++out:
++	if ((!r && destroyed) || (r && allocated)) {
++		/*
++		 * IB contains a destroy msg, or we allocated a
++		 * handle and hit an error; either way, free the handle
++		 */
+ 		for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i)
+ 			atomic_cmpxchg(&p->rdev->vce.handles[i], handle, 0);
+ 	}
+ 
+-	return 0;
++	return r;
+ }
+ 
+ /**
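[Note: the radeon_vce.c change above adds two guards to handle validation: a handle may only be reused by the file that created it, and a newly allocated handle is flagged so the parser can insist on a create command and free the slot again on any error path. A userspace sketch of the allocate-or-match loop — plain ints instead of atomics, illustrative names:

#include <stdio.h>
#include <stdbool.h>

#define MAX_HANDLES 4

static unsigned int handles[MAX_HANDLES];     /* 0 means "slot free" */
static const void *owners[MAX_HANDLES];

static int validate_handle(unsigned int handle, const void *filp,
			   bool *allocated)
{
	*allocated = false;

	for (int i = 0; i < MAX_HANDLES; i++) {
		if (handles[i] == handle) {
			if (owners[i] != filp) {  /* another file's handle */
				fprintf(stderr, "handle collision\n");
				return -1;
			}
			return i;
		}
	}
	for (int i = 0; i < MAX_HANDLES; i++) {
		if (!handles[i]) {                /* free slot: claim it */
			handles[i] = handle;
			owners[i] = filp;
			*allocated = true;
			return i;
		}
	}
	return -1;                                /* no free session */
}

int main(void)
{
	bool alloc;
	int a = validate_handle(0x10, (void *)1, &alloc);
	int b = validate_handle(0x10, (void *)2, &alloc); /* rejected */
	printf("first=%d second=%d\n", a, b);
	return 0;
}
]
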
+diff --git a/drivers/gpu/drm/radeon/rv770d.h b/drivers/gpu/drm/radeon/rv770d.h
+index 3cf1e2921545..9ef2064b1c9c 100644
+--- a/drivers/gpu/drm/radeon/rv770d.h
++++ b/drivers/gpu/drm/radeon/rv770d.h
+@@ -989,6 +989,9 @@
+ 			 ((n) & 0x3FFF) << 16)
+ 
+ /* UVD */
++#define UVD_SEMA_ADDR_LOW				0xef00
++#define UVD_SEMA_ADDR_HIGH				0xef04
++#define UVD_SEMA_CMD					0xef08
+ #define UVD_GPCOM_VCPU_CMD				0xef0c
+ #define UVD_GPCOM_VCPU_DATA0				0xef10
+ #define UVD_GPCOM_VCPU_DATA1				0xef14
+diff --git a/drivers/gpu/drm/radeon/uvd_v1_0.c b/drivers/gpu/drm/radeon/uvd_v1_0.c
+index e72b3cb59358..c6b1cbca47fc 100644
+--- a/drivers/gpu/drm/radeon/uvd_v1_0.c
++++ b/drivers/gpu/drm/radeon/uvd_v1_0.c
+@@ -466,18 +466,8 @@ bool uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
+ 			     struct radeon_semaphore *semaphore,
+ 			     bool emit_wait)
+ {
+-	uint64_t addr = semaphore->gpu_addr;
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0));
+-	radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF);
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0));
+-	radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF);
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
+-	radeon_ring_write(ring, emit_wait ? 1 : 0);
+-
+-	return true;
++	/* disable semaphores for UVD V1 hardware */
++	return false;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/radeon/uvd_v2_2.c b/drivers/gpu/drm/radeon/uvd_v2_2.c
+index 89193519f8a1..7ed778cec7c6 100644
+--- a/drivers/gpu/drm/radeon/uvd_v2_2.c
++++ b/drivers/gpu/drm/radeon/uvd_v2_2.c
+@@ -60,6 +60,35 @@ void uvd_v2_2_fence_emit(struct radeon_device *rdev,
+ }
+ 
+ /**
++ * uvd_v2_2_semaphore_emit - emit semaphore command
++ *
++ * @rdev: radeon_device pointer
++ * @ring: radeon_ring pointer
++ * @semaphore: semaphore to emit commands for
++ * @emit_wait: true if we should emit a wait command
++ *
++ * Emit a semaphore command (either wait or signal) to the UVD ring.
++ */
++bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev,
++			     struct radeon_ring *ring,
++			     struct radeon_semaphore *semaphore,
++			     bool emit_wait)
++{
++	uint64_t addr = semaphore->gpu_addr;
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0));
++	radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF);
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0));
++	radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF);
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
++	radeon_ring_write(ring, emit_wait ? 1 : 0);
++
++	return true;
++}
++
++/**
+  * uvd_v2_2_resume - memory controller programming
+  *
+  * @rdev: radeon_device pointer
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index d570030d899c..06441a43c3aa 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -859,19 +859,27 @@ static void cma_save_ib_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id
+ 	memcpy(&ib->sib_addr, &path->dgid, 16);
+ }
+ 
++static __be16 ss_get_port(const struct sockaddr_storage *ss)
++{
++	if (ss->ss_family == AF_INET)
++		return ((struct sockaddr_in *)ss)->sin_port;
++	else if (ss->ss_family == AF_INET6)
++		return ((struct sockaddr_in6 *)ss)->sin6_port;
++	BUG();
++}
++
+ static void cma_save_ip4_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id,
+ 			      struct cma_hdr *hdr)
+ {
+-	struct sockaddr_in *listen4, *ip4;
++	struct sockaddr_in *ip4;
+ 
+-	listen4 = (struct sockaddr_in *) &listen_id->route.addr.src_addr;
+ 	ip4 = (struct sockaddr_in *) &id->route.addr.src_addr;
+-	ip4->sin_family = listen4->sin_family;
++	ip4->sin_family = AF_INET;
+ 	ip4->sin_addr.s_addr = hdr->dst_addr.ip4.addr;
+-	ip4->sin_port = listen4->sin_port;
++	ip4->sin_port = ss_get_port(&listen_id->route.addr.src_addr);
+ 
+ 	ip4 = (struct sockaddr_in *) &id->route.addr.dst_addr;
+-	ip4->sin_family = listen4->sin_family;
++	ip4->sin_family = AF_INET;
+ 	ip4->sin_addr.s_addr = hdr->src_addr.ip4.addr;
+ 	ip4->sin_port = hdr->port;
+ }
+@@ -879,16 +887,15 @@ static void cma_save_ip4_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_i
+ static void cma_save_ip6_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id,
+ 			      struct cma_hdr *hdr)
+ {
+-	struct sockaddr_in6 *listen6, *ip6;
++	struct sockaddr_in6 *ip6;
+ 
+-	listen6 = (struct sockaddr_in6 *) &listen_id->route.addr.src_addr;
+ 	ip6 = (struct sockaddr_in6 *) &id->route.addr.src_addr;
+-	ip6->sin6_family = listen6->sin6_family;
++	ip6->sin6_family = AF_INET6;
+ 	ip6->sin6_addr = hdr->dst_addr.ip6;
+-	ip6->sin6_port = listen6->sin6_port;
++	ip6->sin6_port = ss_get_port(&listen_id->route.addr.src_addr);
+ 
+ 	ip6 = (struct sockaddr_in6 *) &id->route.addr.dst_addr;
+-	ip6->sin6_family = listen6->sin6_family;
++	ip6->sin6_family = AF_INET6;
+ 	ip6->sin6_addr = hdr->src_addr.ip6;
+ 	ip6->sin6_port = hdr->port;
+ }
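
[Note: the cma.c hunk above stops copying the address family and port blindly from the listener's sockaddr; the new ss_get_port() helper reads the port according to the storage's actual family. A userspace sketch of the same helper, with the kernel's BUG() softened to a plain return:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static unsigned short ss_get_port(const struct sockaddr_storage *ss)
{
	if (ss->ss_family == AF_INET)
		return ((const struct sockaddr_in *)ss)->sin_port;
	if (ss->ss_family == AF_INET6)
		return ((const struct sockaddr_in6 *)ss)->sin6_port;
	return 0;	/* the kernel helper BUG()s here instead */
}

int main(void)
{
	struct sockaddr_storage ss;
	struct sockaddr_in *in4 = (struct sockaddr_in *)&ss;

	memset(&ss, 0, sizeof(ss));
	in4->sin_family = AF_INET;
	in4->sin_port = htons(4791);
	printf("port %u\n", ntohs(ss_get_port(&ss)));
	return 0;
}
]
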
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 414739295d04..713a96237a80 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -925,10 +925,11 @@ static int crypt_convert(struct crypt_config *cc,
+ 
+ 		switch (r) {
+ 		/* async */
+-		case -EINPROGRESS:
+ 		case -EBUSY:
+ 			wait_for_completion(&ctx->restart);
+ 			reinit_completion(&ctx->restart);
++			/* fall through*/
++		case -EINPROGRESS:
+ 			ctx->req = NULL;
+ 			ctx->cc_sector++;
+ 			continue;
+@@ -1345,8 +1346,10 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+ 	struct crypt_config *cc = io->cc;
+ 
+-	if (error == -EINPROGRESS)
++	if (error == -EINPROGRESS) {
++		complete(&ctx->restart);
+ 		return;
++	}
+ 
+ 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
+ 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
+@@ -1357,15 +1360,12 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
+ 
+ 	if (!atomic_dec_and_test(&ctx->cc_pending))
+-		goto done;
++		return;
+ 
+ 	if (bio_data_dir(io->base_bio) == READ)
+ 		kcryptd_crypt_read_done(io);
+ 	else
+ 		kcryptd_crypt_write_io_submit(io, 1);
+-done:
+-	if (!completion_done(&ctx->restart))
+-		complete(&ctx->restart);
+ }
+ 
+ static void kcryptd_crypt(struct work_struct *work)
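
[Note: the dm-crypt hunks above reorder the async cases: -EBUSY now means the crypto layer backlogged the request, so the submitter waits for the restart completion (signalled by kcryptd_async_done() on -EINPROGRESS) and then falls through to the same bookkeeping as a directly accepted request. A stripped-down sketch of that control flow, with the completion replaced by a stub:

#include <stdio.h>

#define SK_EINPROGRESS 115
#define SK_EBUSY        16

static void wait_for_restart(void)
{
	/* Stand-in for wait_for_completion(&ctx->restart); the real wait
	 * is completed from the async callback once the backlog drains. */
}

static int submit_one(int r)
{
	switch (r) {
	case -SK_EBUSY:
		wait_for_restart();       /* backlogged: wait to be accepted */
		/* fall through */
	case -SK_EINPROGRESS:
		return 0;                 /* async: callback finishes the I/O */
	case 0:
		return 0;                 /* completed synchronously */
	default:
		return r;                 /* hard error */
	}
}

int main(void)
{
	printf("busy path -> %d\n", submit_one(-SK_EBUSY));
	return 0;
}
]
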
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index e6178787ce3d..e47d1dd046da 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4754,12 +4754,12 @@ static void md_free(struct kobject *ko)
+ 	if (mddev->sysfs_state)
+ 		sysfs_put(mddev->sysfs_state);
+ 
++	if (mddev->queue)
++		blk_cleanup_queue(mddev->queue);
+ 	if (mddev->gendisk) {
+ 		del_gendisk(mddev->gendisk);
+ 		put_disk(mddev->gendisk);
+ 	}
+-	if (mddev->queue)
+-		blk_cleanup_queue(mddev->queue);
+ 
+ 	kfree(mddev);
+ }
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index dd5b1415f974..f902eb4ee569 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -116,8 +116,8 @@ static struct mcam_format_struct {
+ 		.planar		= false,
+ 	},
+ 	{
+-		.desc		= "UYVY 4:2:2",
+-		.pixelformat	= V4L2_PIX_FMT_UYVY,
++		.desc		= "YVYU 4:2:2",
++		.pixelformat	= V4L2_PIX_FMT_YVYU,
+ 		.mbus_code	= MEDIA_BUS_FMT_YUYV8_2X8,
+ 		.bpp		= 2,
+ 		.planar		= false,
+@@ -748,7 +748,7 @@ static void mcam_ctlr_image(struct mcam_camera *cam)
+ 
+ 	switch (fmt->pixelformat) {
+ 	case V4L2_PIX_FMT_YUYV:
+-	case V4L2_PIX_FMT_UYVY:
++	case V4L2_PIX_FMT_YVYU:
+ 		widthy = fmt->width * 2;
+ 		widthuv = 0;
+ 		break;
+@@ -784,15 +784,15 @@ static void mcam_ctlr_image(struct mcam_camera *cam)
+ 	case V4L2_PIX_FMT_YUV420:
+ 	case V4L2_PIX_FMT_YVU420:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_420PL | C0_YUVE_YVYU, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_420PL | C0_YUVE_VYUY, C0_DF_MASK);
+ 		break;
+ 	case V4L2_PIX_FMT_YUYV:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_UYVY, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_NOSWAP, C0_DF_MASK);
+ 		break;
+-	case V4L2_PIX_FMT_UYVY:
++	case V4L2_PIX_FMT_YVYU:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_YUYV, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_SWAP24, C0_DF_MASK);
+ 		break;
+ 	case V4L2_PIX_FMT_JPEG:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.h b/drivers/media/platform/marvell-ccic/mcam-core.h
+index aa0c6eac254a..7ffdf4dbaf8c 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.h
++++ b/drivers/media/platform/marvell-ccic/mcam-core.h
+@@ -330,10 +330,10 @@ int mccic_resume(struct mcam_camera *cam);
+ #define	  C0_YUVE_YVYU	  0x00010000	/* Y1CrY0Cb		*/
+ #define	  C0_YUVE_VYUY	  0x00020000	/* CrY1CbY0		*/
+ #define	  C0_YUVE_UYVY	  0x00030000	/* CbY1CrY0		*/
+-#define	  C0_YUVE_XYUV	  0x00000000	/* 420: .YUV		*/
+-#define	  C0_YUVE_XYVU	  0x00010000	/* 420: .YVU		*/
+-#define	  C0_YUVE_XUVY	  0x00020000	/* 420: .UVY		*/
+-#define	  C0_YUVE_XVUY	  0x00030000	/* 420: .VUY		*/
++#define	  C0_YUVE_NOSWAP  0x00000000	/* no bytes swapping	*/
++#define	  C0_YUVE_SWAP13  0x00010000	/* swap byte 1 and 3	*/
++#define	  C0_YUVE_SWAP24  0x00020000	/* swap byte 2 and 4	*/
++#define	  C0_YUVE_SWAP1324 0x00030000	/* swap bytes 1&3 and 2&4 */
+ /* Bayer bits 18,19 if needed */
+ #define	  C0_EOF_VSYNC	  0x00400000	/* Generate EOF by VSYNC */
+ #define	  C0_VEDGE_CTRL   0x00800000	/* Detect falling edge of VSYNC */
+diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
+index c69afb5e264e..ed2e71a74a58 100644
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -1029,6 +1029,18 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
+ 	md->reset_done &= ~type;
+ }
+ 
++int mmc_access_rpmb(struct mmc_queue *mq)
++{
++	struct mmc_blk_data *md = mq->data;
++	/*
++	 * If this is an RPMB partition access, return true
++	 */
++	if (md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
++		return true;
++
++	return false;
++}
++
+ static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ {
+ 	struct mmc_blk_data *md = mq->data;
+diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
+index 236d194c2883..8efa3684aef8 100644
+--- a/drivers/mmc/card/queue.c
++++ b/drivers/mmc/card/queue.c
+@@ -38,7 +38,7 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
+ 		return BLKPREP_KILL;
+ 	}
+ 
+-	if (mq && mmc_card_removed(mq->card))
++	if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq)))
+ 		return BLKPREP_KILL;
+ 
+ 	req->cmd_flags |= REQ_DONTPREP;
+diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
+index 5752d50049a3..99e6521e6169 100644
+--- a/drivers/mmc/card/queue.h
++++ b/drivers/mmc/card/queue.h
+@@ -73,4 +73,6 @@ extern void mmc_queue_bounce_post(struct mmc_queue_req *);
+ extern int mmc_packed_init(struct mmc_queue *, struct mmc_card *);
+ extern void mmc_packed_clean(struct mmc_queue *);
+ 
++extern int mmc_access_rpmb(struct mmc_queue *);
++
+ #endif
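
[Note: the three mmc hunks above make the block layer refuse ordinary requests while an RPMB partition is selected: mmc_access_rpmb() reports whether the queue's current partition is RPMB, and mmc_prep_request() then returns BLKPREP_KILL, since generic I/O must not interleave with RPMB's authenticated command sequence. A toy model of that gate:

#include <stdio.h>
#include <stdbool.h>

enum prep { PREP_OK, PREP_KILL };

struct queue { bool card_removed; bool rpmb_selected; };

static bool access_rpmb(const struct queue *q)
{
	return q->rpmb_selected;      /* the kernel checks md->part_type */
}

static enum prep prep_request(const struct queue *q)
{
	if (q->card_removed || access_rpmb(q))
		return PREP_KILL;     /* refuse: RPMB owns the device now */
	return PREP_OK;
}

int main(void)
{
	struct queue q = { .card_removed = false, .rpmb_selected = true };
	printf("prep -> %s\n", prep_request(&q) == PREP_KILL ? "kill" : "ok");
	return 0;
}
]
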
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 23f10f72e5f3..57a8d00672d3 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2648,6 +2648,7 @@ int mmc_pm_notify(struct notifier_block *notify_block,
+ 	switch (mode) {
+ 	case PM_HIBERNATION_PREPARE:
+ 	case PM_SUSPEND_PREPARE:
++	case PM_RESTORE_PREPARE:
+ 		spin_lock_irqsave(&host->lock, flags);
+ 		host->rescan_disable = 1;
+ 		spin_unlock_irqrestore(&host->lock, flags);
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index 7d9d6a321521..5165ae75d540 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -1402,7 +1402,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
+ 	host		= mmc_priv(mmc);
+ 	host->mmc	= mmc;
+ 	host->addr	= reg;
+-	host->timeout	= msecs_to_jiffies(1000);
++	host->timeout	= msecs_to_jiffies(10000);
+ 	host->ccs_enable = !pd || !pd->ccs_unsupported;
+ 	host->clk_ctrl2_enable = pd && pd->clk_ctrl2_present;
+ 
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 89dca77ca038..18ee2089df4a 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1110,7 +1110,7 @@ void devm_pinctrl_put(struct pinctrl *p)
+ EXPORT_SYMBOL_GPL(devm_pinctrl_put);
+ 
+ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+-			 bool dup, bool locked)
++			 bool dup)
+ {
+ 	int i, ret;
+ 	struct pinctrl_maps *maps_node;
+@@ -1178,11 +1178,9 @@ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+ 		maps_node->maps = maps;
+ 	}
+ 
+-	if (!locked)
+-		mutex_lock(&pinctrl_maps_mutex);
++	mutex_lock(&pinctrl_maps_mutex);
+ 	list_add_tail(&maps_node->node, &pinctrl_maps);
+-	if (!locked)
+-		mutex_unlock(&pinctrl_maps_mutex);
++	mutex_unlock(&pinctrl_maps_mutex);
+ 
+ 	return 0;
+ }
+@@ -1197,7 +1195,7 @@ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+ int pinctrl_register_mappings(struct pinctrl_map const *maps,
+ 			      unsigned num_maps)
+ {
+-	return pinctrl_register_map(maps, num_maps, true, false);
++	return pinctrl_register_map(maps, num_maps, true);
+ }
+ 
+ void pinctrl_unregister_map(struct pinctrl_map const *map)
+diff --git a/drivers/pinctrl/core.h b/drivers/pinctrl/core.h
+index 75476b3d87da..b24ea846c867 100644
+--- a/drivers/pinctrl/core.h
++++ b/drivers/pinctrl/core.h
+@@ -183,7 +183,7 @@ static inline struct pin_desc *pin_desc_get(struct pinctrl_dev *pctldev,
+ }
+ 
+ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+-			 bool dup, bool locked);
++			 bool dup);
+ void pinctrl_unregister_map(struct pinctrl_map const *map);
+ 
+ extern int pinctrl_force_sleep(struct pinctrl_dev *pctldev);
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index eda13de2e7c0..0bbf7d71b281 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -92,7 +92,7 @@ static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
+ 	dt_map->num_maps = num_maps;
+ 	list_add_tail(&dt_map->node, &p->dt_maps);
+ 
+-	return pinctrl_register_map(map, num_maps, false, true);
++	return pinctrl_register_map(map, num_maps, false);
+ }
+ 
+ struct pinctrl_dev *of_pinctrl_get(struct device_node *np)
+diff --git a/drivers/rtc/rtc-armada38x.c b/drivers/rtc/rtc-armada38x.c
+index 43e04af39e09..cb70ced7e0db 100644
+--- a/drivers/rtc/rtc-armada38x.c
++++ b/drivers/rtc/rtc-armada38x.c
+@@ -40,6 +40,13 @@ struct armada38x_rtc {
+ 	void __iomem	    *regs;
+ 	void __iomem	    *regs_soc;
+ 	spinlock_t	    lock;
++	/*
++	 * While setting the time, the RTC TIME register should not be
++	 * accessed. Setting the RTC time involves sleeping for
++	 * 100ms, so a mutex instead of a spinlock is used to protect
++	 * it.
++	 */
++	struct mutex	    mutex_time;
+ 	int		    irq;
+ };
+ 
+@@ -59,8 +66,7 @@ static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	struct armada38x_rtc *rtc = dev_get_drvdata(dev);
+ 	unsigned long time, time_check, flags;
+ 
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
++	mutex_lock(&rtc->mutex_time);
+ 	time = readl(rtc->regs + RTC_TIME);
+ 	/*
+ 	 * WA for failing time set attempts. As stated in HW ERRATA if
+@@ -71,7 +77,7 @@ static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	if ((time_check - time) > 1)
+ 		time_check = readl(rtc->regs + RTC_TIME);
+ 
+-	spin_unlock_irqrestore(&rtc->lock, flags);
++	mutex_unlock(&rtc->mutex_time);
+ 
+ 	rtc_time_to_tm(time_check, tm);
+ 
+@@ -94,19 +100,12 @@ static int armada38x_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ 	 * then wait for 100ms before writing to the time register to be
+ 	 * sure that the data will be taken into account.
+ 	 */
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
++	mutex_lock(&rtc->mutex_time);
+ 	rtc_delayed_write(0, rtc, RTC_STATUS);
+-
+-	spin_unlock_irqrestore(&rtc->lock, flags);
+-
+ 	msleep(100);
+-
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
+ 	rtc_delayed_write(time, rtc, RTC_TIME);
++	mutex_unlock(&rtc->mutex_time);
+ 
+-	spin_unlock_irqrestore(&rtc->lock, flags);
+ out:
+ 	return ret;
+ }
+@@ -230,6 +229,7 @@ static __init int armada38x_rtc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	spin_lock_init(&rtc->lock);
++	mutex_init(&rtc->mutex_time);
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rtc");
+ 	rtc->regs = devm_ioremap_resource(&pdev->dev, res);
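
[Note: the rtc-armada38x change above swaps a spinlock for a mutex around the time accessors because the set path sleeps for 100ms between register writes, and sleeping while holding a spinlock (with interrupts off) is illegal. A minimal pthread sketch of the same shape, assuming nothing about the real register layout:

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t time_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long rtc_time;

static void set_time(unsigned long t)
{
	pthread_mutex_lock(&time_lock);   /* a mutex may be held across sleep */
	rtc_time = 0;                     /* model: clear the status register */
	usleep(100 * 1000);               /* hardware needs 100ms to settle */
	rtc_time = t;
	pthread_mutex_unlock(&time_lock);
}

static unsigned long read_time(void)
{
	pthread_mutex_lock(&time_lock);   /* readers wait out the settle time */
	unsigned long t = rtc_time;
	pthread_mutex_unlock(&time_lock);
	return t;
}

int main(void)
{
	set_time(1431648000UL);
	printf("time %lu\n", read_time());
	return 0;
}
]
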
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index f1e57425e39f..5bab1c684bb1 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -299,11 +299,27 @@ static int xen_initial_domain_console_init(void)
+ 	return 0;
+ }
+ 
++static void xen_console_update_evtchn(struct xencons_info *info)
++{
++	if (xen_hvm_domain()) {
++		uint64_t v;
++		int err;
++
++		err = hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v);
++		if (!err && v)
++			info->evtchn = v;
++	} else
++		info->evtchn = xen_start_info->console.domU.evtchn;
++}
++
+ void xen_console_resume(void)
+ {
+ 	struct xencons_info *info = vtermno_to_xencons(HVC_COOKIE);
+-	if (info != NULL && info->irq)
++	if (info != NULL && info->irq) {
++		if (!xen_initial_domain())
++			xen_console_update_evtchn(info);
+ 		rebind_evtchn_irq(info->evtchn, info->irq);
++	}
+ }
+ 
+ static void xencons_disconnect_backend(struct xencons_info *info)
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 4cde85501444..837d1778970b 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -711,6 +711,8 @@ void *vfio_del_group_dev(struct device *dev)
+ 	void *device_data = device->device_data;
+ 	struct vfio_unbound_dev *unbound;
+ 	unsigned int i = 0;
++	long ret;
++	bool interrupted = false;
+ 
+ 	/*
+ 	 * The group exists so long as we have a device reference.  Get
+@@ -756,9 +758,22 @@ void *vfio_del_group_dev(struct device *dev)
+ 
+ 		vfio_device_put(device);
+ 
+-	} while (wait_event_interruptible_timeout(vfio.release_q,
+-						  !vfio_dev_present(group, dev),
+-						  HZ * 10) <= 0);
++		if (interrupted) {
++			ret = wait_event_timeout(vfio.release_q,
++					!vfio_dev_present(group, dev), HZ * 10);
++		} else {
++			ret = wait_event_interruptible_timeout(vfio.release_q,
++					!vfio_dev_present(group, dev), HZ * 10);
++			if (ret == -ERESTARTSYS) {
++				interrupted = true;
++				dev_warn(dev,
++					 "Device is currently in use, task"
++					 " \"%s\" (%d) "
++					 "blocked until device is released",
++					 current->comm, task_pid_nr(current));
++			}
++		}
++	} while (ret <= 0);
+ 
+ 	vfio_group_put(group);
+ 
+diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
+index 5db43fc100a4..7dd46312c180 100644
+--- a/drivers/xen/events/events_2l.c
++++ b/drivers/xen/events/events_2l.c
+@@ -345,6 +345,15 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void evtchn_2l_resume(void)
++{
++	int i;
++
++	for_each_online_cpu(i)
++		memset(per_cpu(cpu_evtchn_mask, i), 0, sizeof(xen_ulong_t) *
++				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
++}
++
+ static const struct evtchn_ops evtchn_ops_2l = {
+ 	.max_channels      = evtchn_2l_max_channels,
+ 	.nr_channels       = evtchn_2l_max_channels,
+@@ -356,6 +365,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
+ 	.mask              = evtchn_2l_mask,
+ 	.unmask            = evtchn_2l_unmask,
+ 	.handle_events     = evtchn_2l_handle_events,
++	.resume	           = evtchn_2l_resume,
+ };
+ 
+ void __init xen_evtchn_2l_init(void)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 70fba973a107..2b8553bd8715 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -529,8 +529,8 @@ static unsigned int __startup_pirq(unsigned int irq)
+ 	if (rc)
+ 		goto err;
+ 
+-	bind_evtchn_to_cpu(evtchn, 0);
+ 	info->evtchn = evtchn;
++	bind_evtchn_to_cpu(evtchn, 0);
+ 
+ 	rc = xen_evtchn_port_setup(info);
+ 	if (rc)
+@@ -1279,8 +1279,9 @@ void rebind_evtchn_irq(int evtchn, int irq)
+ 
+ 	mutex_unlock(&irq_mapping_update_lock);
+ 
+-	/* new event channels are always bound to cpu 0 */
+-	irq_set_affinity(irq, cpumask_of(0));
++	bind_evtchn_to_cpu(evtchn, info->cpu);
++	/* This will be deferred until interrupt is processed */
++	irq_set_affinity(irq, cpumask_of(info->cpu));
+ 
+ 	/* Unmask the event channel. */
+ 	enable_irq(irq);
+diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
+index 75fe3d466515..9c234209d8b5 100644
+--- a/drivers/xen/xen-pciback/conf_space.c
++++ b/drivers/xen/xen-pciback/conf_space.c
+@@ -16,8 +16,8 @@
+ #include "conf_space.h"
+ #include "conf_space_quirks.h"
+ 
+-bool permissive;
+-module_param(permissive, bool, 0644);
++bool xen_pcibk_permissive;
++module_param_named(permissive, xen_pcibk_permissive, bool, 0644);
+ 
+ /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+  * xen_pcibk_write_config_word, and xen_pcibk_write_config_byte are created. */
+@@ -262,7 +262,7 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value)
+ 		 * This means that some fields may still be read-only because
+ 		 * they have entries in the config_field list that intercept
+ 		 * the write and do nothing. */
+-		if (dev_data->permissive || permissive) {
++		if (dev_data->permissive || xen_pcibk_permissive) {
+ 			switch (size) {
+ 			case 1:
+ 				err = pci_write_config_byte(dev, offset,
+diff --git a/drivers/xen/xen-pciback/conf_space.h b/drivers/xen/xen-pciback/conf_space.h
+index 2e1d73d1d5d0..62461a8ba1d6 100644
+--- a/drivers/xen/xen-pciback/conf_space.h
++++ b/drivers/xen/xen-pciback/conf_space.h
+@@ -64,7 +64,7 @@ struct config_field_entry {
+ 	void *data;
+ };
+ 
+-extern bool permissive;
++extern bool xen_pcibk_permissive;
+ 
+ #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset)
+ 
+diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c
+index 2d7369391472..f8baf463dd35 100644
+--- a/drivers/xen/xen-pciback/conf_space_header.c
++++ b/drivers/xen/xen-pciback/conf_space_header.c
+@@ -105,7 +105,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
+ 
+ 	cmd->val = value;
+ 
+-	if (!permissive && (!dev_data || !dev_data->permissive))
++	if (!xen_pcibk_permissive && (!dev_data || !dev_data->permissive))
+ 		return 0;
+ 
+ 	/* Only allow the guest to control certain bits. */
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 564b31584860..5390a674b5e3 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -57,6 +57,7 @@
+ #include <xen/xen.h>
+ #include <xen/xenbus.h>
+ #include <xen/events.h>
++#include <xen/xen-ops.h>
+ #include <xen/page.h>
+ 
+ #include <xen/hvm.h>
+@@ -735,6 +736,30 @@ static int __init xenstored_local_init(void)
+ 	return err;
+ }
+ 
++static int xenbus_resume_cb(struct notifier_block *nb,
++			    unsigned long action, void *data)
++{
++	int err = 0;
++
++	if (xen_hvm_domain()) {
++		uint64_t v;
++
++		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
++		if (!err && v)
++			xen_store_evtchn = v;
++		else
++			pr_warn("Cannot update xenstore event channel: %d\n",
++				err);
++	} else
++		xen_store_evtchn = xen_start_info->store_evtchn;
++
++	return err;
++}
++
++static struct notifier_block xenbus_resume_nb = {
++	.notifier_call = xenbus_resume_cb,
++};
++
+ static int __init xenbus_init(void)
+ {
+ 	int err = 0;
+@@ -793,6 +818,10 @@ static int __init xenbus_init(void)
+ 		goto out_error;
+ 	}
+ 
++	if ((xen_store_domain_type != XS_LOCAL) &&
++	    (xen_store_domain_type != XS_UNKNOWN))
++		xen_resume_notifier_register(&xenbus_resume_nb);
++
+ #ifdef CONFIG_XEN_COMPAT_XENFS
+ 	/*
+ 	 * Create xenfs mountpoint in /proc for compatibility with
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f319926ddf8c..bbbe139ab280 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -657,7 +657,7 @@ void do_coredump(const siginfo_t *siginfo)
+ 		 */
+ 		if (!uid_eq(inode->i_uid, current_fsuid()))
+ 			goto close_fail;
+-		if (!cprm.file->f_op->write)
++		if (!(cprm.file->f_mode & FMODE_CAN_WRITE))
+ 			goto close_fail;
+ 		if (do_truncate(cprm.file->f_path.dentry, 0, 0, cprm.file))
+ 			goto close_fail;
+diff --git a/fs/namei.c b/fs/namei.c
+index caa38a24e1f7..50a8583e8156 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3228,7 +3228,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+ 
+ 	if (unlikely(file->f_flags & __O_TMPFILE)) {
+ 		error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
+-		goto out;
++		goto out2;
+ 	}
+ 
+ 	error = path_init(dfd, pathname->name, flags, nd);
+@@ -3258,6 +3258,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+ 	}
+ out:
+ 	path_cleanup(nd);
++out2:
+ 	if (!(opened & FILE_OPENED)) {
+ 		BUG_ON(!error);
+ 		put_filp(file);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 4622ee32a5e2..38ed1e1bed41 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -3178,6 +3178,12 @@ bool fs_fully_visible(struct file_system_type *type)
+ 		if (mnt->mnt.mnt_sb->s_type != type)
+ 			continue;
+ 
++		/* This mount is not fully visible if its root directory
++		 * is not the root directory of the filesystem.
++		 */
++		if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
++			continue;
++
+ 		/* This mount is not fully visible if there are any child mounts
+ 		 * that cover anything except for empty directories.
+ 		 */
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index ecdbae19a766..090d8ce25bd1 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -388,7 +388,7 @@ static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
+ 	nchildren = nilfs_btree_node_get_nchildren(node);
+ 
+ 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+-		     level > NILFS_BTREE_LEVEL_MAX ||
++		     level >= NILFS_BTREE_LEVEL_MAX ||
+ 		     nchildren < 0 ||
+ 		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) {
+ 		pr_crit("NILFS: bad btree root (inode number=%lu): level = %d, flags = 0x%x, nchildren = %d\n",
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index a6944b25fd5b..fdf4b41d0609 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -757,6 +757,19 @@ lookup:
+ 	if (tmpres) {
+ 		spin_unlock(&dlm->spinlock);
+ 		spin_lock(&tmpres->spinlock);
++
++		/*
++		 * Right after the dlm spinlock was released, dlm_thread could
++		 * have purged the lockres. Check if the lockres got unhashed.
++		 * If so, start over.
++		 */
++		if (hlist_unhashed(&tmpres->hash_node)) {
++			spin_unlock(&tmpres->spinlock);
++			dlm_lockres_put(tmpres);
++			tmpres = NULL;
++			goto lookup;
++		}
++
+ 		/* Wait on the thread that is mastering the resource */
+ 		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
+ 			__dlm_wait_on_lockres(tmpres);
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index d56f5d722138..65aa4fa0ae4e 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -431,13 +431,13 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_load_tables(void))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_reallocate_root_table(void))
+ 
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
+-			    acpi_find_root_pointer(acpi_size * rsdp_address))
+-
++			    acpi_find_root_pointer(acpi_physical_address *
++						   rsdp_address))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+-			    acpi_get_table_header(acpi_string signature,
+-						  u32 instance,
+-						  struct acpi_table_header
+-						  *out_table_header))
++			     acpi_get_table_header(acpi_string signature,
++						   u32 instance,
++						   struct acpi_table_header
++						   *out_table_header))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+ 			     acpi_get_table(acpi_string signature, u32 instance,
+ 					    struct acpi_table_header
+diff --git a/include/linux/nilfs2_fs.h b/include/linux/nilfs2_fs.h
+index ff3fea3194c6..9abb763e4b86 100644
+--- a/include/linux/nilfs2_fs.h
++++ b/include/linux/nilfs2_fs.h
+@@ -460,7 +460,7 @@ struct nilfs_btree_node {
+ /* level */
+ #define NILFS_BTREE_LEVEL_DATA          0
+ #define NILFS_BTREE_LEVEL_NODE_MIN      (NILFS_BTREE_LEVEL_DATA + 1)
+-#define NILFS_BTREE_LEVEL_MAX           14
++#define NILFS_BTREE_LEVEL_MAX           14	/* Max level (exclusive) */
+ 
+ /**
+  * struct nilfs_palloc_group_desc - block group descriptor
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index d487f8dc6d39..72a5224c8084 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1141,10 +1141,10 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ 	 * The check (unnecessarily) ignores LRU pages being isolated and
+ 	 * walked by the page reclaim code, however that's not a big loss.
+ 	 */
+-	if (!PageHuge(p) && !PageTransTail(p)) {
+-		if (!PageLRU(p))
+-			shake_page(p, 0);
+-		if (!PageLRU(p)) {
++	if (!PageHuge(p)) {
++		if (!PageLRU(hpage))
++			shake_page(hpage, 0);
++		if (!PageLRU(hpage)) {
+ 			/*
+ 			 * shake_page could have turned it free.
+ 			 */
+@@ -1721,12 +1721,12 @@ int soft_offline_page(struct page *page, int flags)
+ 	} else if (ret == 0) { /* for free pages */
+ 		if (PageHuge(page)) {
+ 			set_page_hwpoison_huge_page(hpage);
+-			dequeue_hwpoisoned_huge_page(hpage);
+-			atomic_long_add(1 << compound_order(hpage),
++			if (!dequeue_hwpoisoned_huge_page(hpage))
++				atomic_long_add(1 << compound_order(hpage),
+ 					&num_poisoned_pages);
+ 		} else {
+-			SetPageHWPoison(page);
+-			atomic_long_inc(&num_poisoned_pages);
++			if (!TestSetPageHWPoison(page))
++				atomic_long_inc(&num_poisoned_pages);
+ 		}
+ 	}
+ 	unset_migratetype_isolate(page, MIGRATE_MOVABLE);
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 644bcb665773..ad05f2f7bb65 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -580,7 +580,7 @@ static long long pos_ratio_polynom(unsigned long setpoint,
+ 	long x;
+ 
+ 	x = div64_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT,
+-		    limit - setpoint + 1);
++		      (limit - setpoint) | 1);
+ 	pos_ratio = x;
+ 	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
+ 	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
+@@ -807,7 +807,7 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi,
+ 	 * scale global setpoint to bdi's:
+ 	 *	bdi_setpoint = setpoint * bdi_thresh / thresh
+ 	 */
+-	x = div_u64((u64)bdi_thresh << 16, thresh + 1);
++	x = div_u64((u64)bdi_thresh << 16, thresh | 1);
+ 	bdi_setpoint = setpoint * (u64)x >> 16;
+ 	/*
+ 	 * Use span=(8*write_bw) in single bdi case as indicated by
+@@ -822,7 +822,7 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi,
+ 
+ 	if (bdi_dirty < x_intercept - span / 4) {
+ 		pos_ratio = div64_u64(pos_ratio * (x_intercept - bdi_dirty),
+-				    x_intercept - bdi_setpoint + 1);
++				      (x_intercept - bdi_setpoint) | 1);
+ 	} else
+ 		pos_ratio /= 4;
+ 
+diff --git a/sound/oss/sequencer.c b/sound/oss/sequencer.c
+index c0eea1dfe90f..f19da4b47c1d 100644
+--- a/sound/oss/sequencer.c
++++ b/sound/oss/sequencer.c
+@@ -681,13 +681,8 @@ static int seq_timing_event(unsigned char *event_rec)
+ 			break;
+ 
+ 		case TMR_ECHO:
+-			if (seq_mode == SEQ_2)
+-				seq_copy_to_input(event_rec, 8);
+-			else
+-			{
+-				parm = (parm << 8 | SEQ_ECHO);
+-				seq_copy_to_input((unsigned char *) &parm, 4);
+-			}
++			parm = (parm << 8 | SEQ_ECHO);
++			seq_copy_to_input((unsigned char *) &parm, 4);
+ 			break;
+ 
+ 		default:;
+@@ -1324,7 +1319,6 @@ int sequencer_ioctl(int dev, struct file *file, unsigned int cmd, void __user *a
+ 	int mode = translate_mode(file);
+ 	struct synth_info inf;
+ 	struct seq_event_rec event_rec;
+-	unsigned long flags;
+ 	int __user *p = arg;
+ 
+ 	orig_dev = dev = dev >> 4;
+@@ -1479,9 +1473,7 @@ int sequencer_ioctl(int dev, struct file *file, unsigned int cmd, void __user *a
+ 		case SNDCTL_SEQ_OUTOFBAND:
+ 			if (copy_from_user(&event_rec, arg, sizeof(event_rec)))
+ 				return -EFAULT;
+-			spin_lock_irqsave(&lock,flags);
+ 			play_event(event_rec.arr);
+-			spin_unlock_irqrestore(&lock,flags);
+ 			return 0;
+ 
+ 		case SNDCTL_MIDI_INFO:


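A note on the (limit - setpoint) | 1 pattern in the mm/page-writeback.c hunks
above: OR-ing a divisor with 1 guarantees it is nonzero (avoiding a
divide-by-zero when limit == setpoint) while changing its value by at most 1,
so large ratios are barely perturbed. A minimal standalone sketch of the
idiom, with safe_div as a hypothetical name that is not part of the patch:

#include <stdio.h>
#include <stdint.h>

/*
 * (den | 1) is never zero, and differs from den by at most 1, so the
 * quotient is essentially unchanged for any large denominator.
 */
static uint64_t safe_div(uint64_t num, uint64_t den)
{
	return num / (den | 1);
}

int main(void)
{
	printf("%llu\n", (unsigned long long)safe_div(100, 0));  /* 100 / 1  = 100 */
	printf("%llu\n", (unsigned long long)safe_div(100, 50)); /* 100 / 51 = 1   */
	return 0;
}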

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-06 22:03 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-06-06 22:03 UTC (permalink / raw
  To: gentoo-commits

commit:     b80f0b1fd45f663435ca84e9a9694c636e502613
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun  6 22:03:54 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun  6 22:03:54 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b80f0b1f

Linux patch 4.0.5

 0000_README            |    4 +
 1004_linux-4.0.5.patch | 4937 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4941 insertions(+)

diff --git a/0000_README b/0000_README
index 3bcb0f8..0f63559 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-4.0.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.4
 
+Patch:  1004_linux-4.0.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-4.0.5.patch b/1004_linux-4.0.5.patch
new file mode 100644
index 0000000..84509c0
--- /dev/null
+++ b/1004_linux-4.0.5.patch
@@ -0,0 +1,4937 @@
+diff --git a/Documentation/hwmon/tmp401 b/Documentation/hwmon/tmp401
+index 8eb88e974055..711f75e189eb 100644
+--- a/Documentation/hwmon/tmp401
++++ b/Documentation/hwmon/tmp401
+@@ -20,7 +20,7 @@ Supported chips:
+     Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp432.html
+   * Texas Instruments TMP435
+     Prefix: 'tmp435'
+-    Addresses scanned: I2C 0x37, 0x48 - 0x4f
++    Addresses scanned: I2C 0x48 - 0x4f
+     Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp435.html
+ 
+ Authors:
+diff --git a/Documentation/serial/tty.txt b/Documentation/serial/tty.txt
+index 1e52d67d0abf..dbe6623fed1c 100644
+--- a/Documentation/serial/tty.txt
++++ b/Documentation/serial/tty.txt
+@@ -198,6 +198,9 @@ TTY_IO_ERROR		If set, causes all subsequent userspace read/write
+ 
+ TTY_OTHER_CLOSED	Device is a pty and the other side has closed.
+ 
++TTY_OTHER_DONE		Device is a pty and the other side has closed and
++			all pending input processing has been completed.
++
+ TTY_NO_WRITE_SPLIT	Prevent driver from splitting up writes into
+ 			smaller chunks.
+ 
+diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
+index 53838d9c6295..c59bd9bc41ef 100644
+--- a/Documentation/virtual/kvm/mmu.txt
++++ b/Documentation/virtual/kvm/mmu.txt
+@@ -169,6 +169,10 @@ Shadow pages contain the following information:
+     Contains the value of cr4.smep && !cr0.wp for which the page is valid
+     (pages for which this is true are different from other pages; see the
+     treatment of cr0.wp=0 below).
++  role.smap_andnot_wp:
++    Contains the value of cr4.smap && !cr0.wp for which the page is valid
++    (pages for which this is true are different from other pages; see the
++    treatment of cr0.wp=0 below).
+   gfn:
+     Either the guest page table containing the translations shadowed by this
+     page, or the base page frame for linear translations.  See role.direct.
+@@ -344,10 +348,16 @@ on fault type:
+ 
+ (user write faults generate a #PF)
+ 
+-In the first case there is an additional complication if CR4.SMEP is
+-enabled: since we've turned the page into a kernel page, the kernel may now
+-execute it.  We handle this by also setting spte.nx.  If we get a user
+-fetch or read fault, we'll change spte.u=1 and spte.nx=gpte.nx back.
++In the first case there are two additional complications:
++- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
++  the kernel may now execute it.  We handle this by also setting spte.nx.
++  If we get a user fetch or read fault, we'll change spte.u=1 and
++  spte.nx=gpte.nx back.
++- if CR4.SMAP is disabled: since the page has been changed to a kernel
++  page, it cannot be reused when CR4.SMAP is enabled. We set
++  CR4.SMAP && !CR0.WP in the shadow page's role to avoid this case. Note
++  that we do not care about the case where CR4.SMAP is enabled, since KVM
++  will directly inject a #PF into the guest due to the failed permission check.
+ 
+ To prevent an spte that was converted into a kernel page with cr0.wp=0
+ from being written by the kernel after cr0.wp has changed to 1, we make
+diff --git a/Makefile b/Makefile
+index 3d16bcc87585..1880cf77059b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 067551b6920a..9917a45fc430 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -99,7 +99,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
+ 	atomic_ops_unlock(flags);					\
+ }
+ 
+-#define ATOMIC_OP_RETURN(op, c_op)					\
++#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ {									\
+ 	unsigned long flags;						\
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index a1c776b8dcec..992ea0b063d5 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -215,7 +215,7 @@ dtb-$(CONFIG_SOC_IMX25) += \
+ 	imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dtb \
+ 	imx25-karo-tx25.dtb \
+ 	imx25-pdk.dtb
+-dtb-$(CONFIG_SOC_IMX31) += \
++dtb-$(CONFIG_SOC_IMX27) += \
+ 	imx27-apf27.dtb \
+ 	imx27-apf27dev.dtb \
+ 	imx27-eukrea-mbimxsd27-baseboard.dtb \
+diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
+index 173ffa479ad3..792394dd0f2a 100644
+--- a/arch/arm/boot/dts/exynos4412-trats2.dts
++++ b/arch/arm/boot/dts/exynos4412-trats2.dts
+@@ -736,7 +736,7 @@
+ 
+ 			display-timings {
+ 				timing-0 {
+-					clock-frequency = <0>;
++					clock-frequency = <57153600>;
+ 					hactive = <720>;
+ 					vactive = <1280>;
+ 					hfront-porch = <5>;
+diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi
+index 4b063b68db44..9ce1d2128749 100644
+--- a/arch/arm/boot/dts/imx27.dtsi
++++ b/arch/arm/boot/dts/imx27.dtsi
+@@ -531,7 +531,7 @@
+ 
+ 			fec: ethernet@1002b000 {
+ 				compatible = "fsl,imx27-fec";
+-				reg = <0x1002b000 0x4000>;
++				reg = <0x1002b000 0x1000>;
+ 				interrupts = <50>;
+ 				clocks = <&clks IMX27_CLK_FEC_IPG_GATE>,
+ 					 <&clks IMX27_CLK_FEC_AHB_GATE>;
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index f8ccc21fa032..4e7f40c577e6 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -33,7 +33,9 @@ ret_fast_syscall:
+  UNWIND(.fnstart	)
+  UNWIND(.cantunwind	)
+ 	disable_irq				@ disable interrupts
+-	ldr	r1, [tsk, #TI_FLAGS]
++	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
++	tst	r1, #_TIF_SYSCALL_WORK
++	bne	__sys_trace_return
+ 	tst	r1, #_TIF_WORK_MASK
+ 	bne	fast_work_pending
+ 	asm_trace_hardirqs_on
+diff --git a/arch/arm/mach-exynos/pm_domains.c b/arch/arm/mach-exynos/pm_domains.c
+index 37266a826437..1f02bcb350e5 100644
+--- a/arch/arm/mach-exynos/pm_domains.c
++++ b/arch/arm/mach-exynos/pm_domains.c
+@@ -169,7 +169,7 @@ no_clk:
+ 		args.np = np;
+ 		args.args_count = 0;
+ 		child_domain = of_genpd_get_from_provider(&args);
+-		if (!child_domain)
++		if (IS_ERR(child_domain))
+ 			continue;
+ 
+ 		if (of_parse_phandle_with_args(np, "power-domains",
+@@ -177,7 +177,7 @@ no_clk:
+ 			continue;
+ 
+ 		parent_domain = of_genpd_get_from_provider(&args);
+-		if (!parent_domain)
++		if (IS_ERR(parent_domain))
+ 			continue;
+ 
+ 		if (pm_genpd_add_subdomain(parent_domain, child_domain))
+diff --git a/arch/arm/mach-exynos/sleep.S b/arch/arm/mach-exynos/sleep.S
+index 31d25834b9c4..cf950790fbdc 100644
+--- a/arch/arm/mach-exynos/sleep.S
++++ b/arch/arm/mach-exynos/sleep.S
+@@ -23,14 +23,7 @@
+ #define CPU_MASK	0xff0ffff0
+ #define CPU_CORTEX_A9	0x410fc090
+ 
+-	/*
+-	 * The following code is located into the .data section. This is to
+-	 * allow l2x0_regs_phys to be accessed with a relative load while we
+-	 * can't rely on any MMU translation. We could have put l2x0_regs_phys
+-	 * in the .text section as well, but some setups might insist on it to
+-	 * be truly read-only. (Reference from: arch/arm/kernel/sleep.S)
+-	 */
+-	.data
++	.text
+ 	.align
+ 
+ 	/*
+@@ -69,10 +62,12 @@ ENTRY(exynos_cpu_resume_ns)
+ 	cmp	r0, r1
+ 	bne	skip_cp15
+ 
+-	adr	r0, cp15_save_power
++	adr	r0, _cp15_save_power
+ 	ldr	r1, [r0]
+-	adr	r0, cp15_save_diag
++	ldr	r1, [r0, r1]
++	adr	r0, _cp15_save_diag
+ 	ldr	r2, [r0]
++	ldr	r2, [r0, r2]
+ 	mov	r0, #SMC_CMD_C15RESUME
+ 	dsb
+ 	smc	#0
+@@ -118,14 +113,20 @@ skip_l2x0:
+ skip_cp15:
+ 	b	cpu_resume
+ ENDPROC(exynos_cpu_resume_ns)
++
++	.align
++_cp15_save_power:
++	.long	cp15_save_power - .
++_cp15_save_diag:
++	.long	cp15_save_diag - .
++#ifdef CONFIG_CACHE_L2X0
++1:	.long	l2x0_saved_regs - .
++#endif /* CONFIG_CACHE_L2X0 */
++
++	.data
+ 	.globl cp15_save_diag
+ cp15_save_diag:
+ 	.long	0	@ cp15 diagnostic
+ 	.globl cp15_save_power
+ cp15_save_power:
+ 	.long	0	@ cp15 power control
+-
+-#ifdef CONFIG_CACHE_L2X0
+-	.align
+-1:	.long	l2x0_saved_regs - .
+-#endif /* CONFIG_CACHE_L2X0 */
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 4e6ef896c619..7186382672b5 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
+ 			}
+ 
+ 			/*
+-			 * Find the first non-section-aligned page, and point
++			 * Find the first non-pmd-aligned page, and point
+ 			 * memblock_limit at it. This relies on rounding the
+-			 * limit down to be section-aligned, which happens at
+-			 * the end of this function.
++			 * limit down to be pmd-aligned, which happens at the
++			 * end of this function.
+ 			 *
+ 			 * With this algorithm, the start or end of almost any
+-			 * bank can be non-section-aligned. The only exception
+-			 * is that the start of the bank 0 must be section-
++			 * bank can be non-pmd-aligned. The only exception is
++			 * that the start of the bank 0 must be section-
+ 			 * aligned, since otherwise memory would need to be
+ 			 * allocated when mapping the start of bank 0, which
+ 			 * occurs before any free memory is mapped.
+ 			 */
+ 			if (!memblock_limit) {
+-				if (!IS_ALIGNED(block_start, SECTION_SIZE))
++				if (!IS_ALIGNED(block_start, PMD_SIZE))
+ 					memblock_limit = block_start;
+-				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
++				else if (!IS_ALIGNED(block_end, PMD_SIZE))
+ 					memblock_limit = arm_lowmem_limit;
+ 			}
+ 
+@@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
+ 	high_memory = __va(arm_lowmem_limit - 1) + 1;
+ 
+ 	/*
+-	 * Round the memblock limit down to a section size.  This
++	 * Round the memblock limit down to a pmd size.  This
+ 	 * helps to ensure that we will allocate memory from the
+-	 * last full section, which should be mapped.
++	 * last full pmd, which should be mapped.
+ 	 */
+ 	if (memblock_limit)
+-		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
++		memblock_limit = round_down(memblock_limit, PMD_SIZE);
+ 	if (!memblock_limit)
+ 		memblock_limit = arm_lowmem_limit;
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index edba042b2325..dc6a4842683a 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -487,7 +487,7 @@ emit_cond_jmp:
+ 			return -EINVAL;
+ 		}
+ 
+-		imm64 = (u64)insn1.imm << 32 | imm;
++		imm64 = (u64)insn1.imm << 32 | (u32)imm;
+ 		emit_a64_mov_i64(dst, imm64, ctx);
+ 
+ 		return 1;
+diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
+index d2c09f6475c5..f20cedcb50f1 100644
+--- a/arch/mips/kernel/elf.c
++++ b/arch/mips/kernel/elf.c
+@@ -76,14 +76,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
+ 
+ 	/* Lets see if this is an O32 ELF */
+ 	if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) {
+-		/* FR = 1 for N32 */
+-		if (ehdr32->e_flags & EF_MIPS_ABI2)
+-			state->overall_fp_mode = FP_FR1;
+-		else
+-			/* Set a good default FPU mode for O32 */
+-			state->overall_fp_mode = cpu_has_mips_r6 ?
+-				FP_FRE : FP_FR0;
+-
+ 		if (ehdr32->e_flags & EF_MIPS_FP64) {
+ 			/*
+ 			 * Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it
+@@ -104,9 +96,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
+ 				  (char *)&abiflags,
+ 				  sizeof(abiflags));
+ 	} else {
+-		/* FR=1 is really the only option for 64-bit */
+-		state->overall_fp_mode = FP_FR1;
+-
+ 		if (phdr64->p_type != PT_MIPS_ABIFLAGS)
+ 			return 0;
+ 		if (phdr64->p_filesz < sizeof(abiflags))
+@@ -147,6 +136,7 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
+ 	struct elf32_hdr *ehdr = _ehdr;
+ 	struct mode_req prog_req, interp_req;
+ 	int fp_abi, interp_fp_abi, abi0, abi1, max_abi;
++	bool is_mips64;
+ 
+ 	if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT))
+ 		return 0;
+@@ -162,10 +152,22 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
+ 		abi0 = abi1 = fp_abi;
+ 	}
+ 
+-	/* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */
+-	max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) &&
+-		   (!(ehdr->e_flags & EF_MIPS_ABI2))) ?
+-		MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT;
++	is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) ||
++		    (ehdr->e_flags & EF_MIPS_ABI2);
++
++	if (is_mips64) {
++		/* MIPS64 code always uses FR=1, thus the default is easy */
++		state->overall_fp_mode = FP_FR1;
++
++		/* Disallow access to the various FPXX & FP64 ABIs */
++		max_abi = MIPS_ABI_FP_SOFT;
++	} else {
++		/* Default to a mode capable of running code expecting FR=0 */
++		state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0;
++
++		/* Allow all ABIs we know about */
++		max_abi = MIPS_ABI_FP_64A;
++	}
+ 
+ 	if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) ||
+ 	    (abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))
+diff --git a/arch/parisc/include/asm/elf.h b/arch/parisc/include/asm/elf.h
+index 3391d061eccc..78c9fd32c554 100644
+--- a/arch/parisc/include/asm/elf.h
++++ b/arch/parisc/include/asm/elf.h
+@@ -348,6 +348,10 @@ struct pt_regs;	/* forward declaration... */
+ 
+ #define ELF_HWCAP	0
+ 
++#define STACK_RND_MASK	(is_32bit_task() ? \
++				0x7ff >> (PAGE_SHIFT - 12) : \
++				0x3ffff >> (PAGE_SHIFT - 12))
++
+ struct mm_struct;
+ extern unsigned long arch_randomize_brk(struct mm_struct *);
+ #define arch_randomize_brk arch_randomize_brk
+diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
+index e1ffea2f9a0b..5aba01ac457f 100644
+--- a/arch/parisc/kernel/sys_parisc.c
++++ b/arch/parisc/kernel/sys_parisc.c
+@@ -77,6 +77,9 @@ static unsigned long mmap_upper_limit(void)
+ 	if (stack_base > STACK_SIZE_MAX)
+ 		stack_base = STACK_SIZE_MAX;
+ 
++	/* Add space for stack randomization. */
++	stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++
+ 	return PAGE_ALIGN(STACK_TOP - stack_base);
+ }
+ 
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index 15c99b649b04..b2eb4686bd8f 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -73,7 +73,7 @@ void save_mce_event(struct pt_regs *regs, long handled,
+ 		    uint64_t nip, uint64_t addr)
+ {
+ 	uint64_t srr1;
+-	int index = __this_cpu_inc_return(mce_nest_count);
++	int index = __this_cpu_inc_return(mce_nest_count) - 1;
+ 	struct machine_check_event *mce = this_cpu_ptr(&mce_event[index]);
+ 
+ 	/*
+@@ -184,7 +184,7 @@ void machine_check_queue_event(void)
+ 	if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
+ 		return;
+ 
+-	index = __this_cpu_inc_return(mce_queue_count);
++	index = __this_cpu_inc_return(mce_queue_count) - 1;
+ 	/* If queue is full, just return for now. */
+ 	if (index >= MAX_MC_EVT) {
+ 		__this_cpu_dec(mce_queue_count);
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index f096e72262f4..1db685104ffc 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -213,6 +213,7 @@ SECTIONS
+ 		*(.opd)
+ 	}
+ 
++	. = ALIGN(256);
+ 	.got : AT(ADDR(.got) - LOAD_OFFSET) {
+ 		__toc_start = .;
+ #ifndef CONFIG_RELOCATABLE
+diff --git a/arch/s390/crypto/ghash_s390.c b/arch/s390/crypto/ghash_s390.c
+index 7940dc90e80b..b258110da952 100644
+--- a/arch/s390/crypto/ghash_s390.c
++++ b/arch/s390/crypto/ghash_s390.c
+@@ -16,11 +16,12 @@
+ #define GHASH_DIGEST_SIZE	16
+ 
+ struct ghash_ctx {
+-	u8 icv[16];
+-	u8 key[16];
++	u8 key[GHASH_BLOCK_SIZE];
+ };
+ 
+ struct ghash_desc_ctx {
++	u8 icv[GHASH_BLOCK_SIZE];
++	u8 key[GHASH_BLOCK_SIZE];
+ 	u8 buffer[GHASH_BLOCK_SIZE];
+ 	u32 bytes;
+ };
+@@ -28,8 +29,10 @@ struct ghash_desc_ctx {
+ static int ghash_init(struct shash_desc *desc)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
++	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 
+ 	memset(dctx, 0, sizeof(*dctx));
++	memcpy(dctx->key, ctx->key, GHASH_BLOCK_SIZE);
+ 
+ 	return 0;
+ }
+@@ -45,7 +48,6 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ 	}
+ 
+ 	memcpy(ctx->key, key, GHASH_BLOCK_SIZE);
+-	memset(ctx->icv, 0, GHASH_BLOCK_SIZE);
+ 
+ 	return 0;
+ }
+@@ -54,7 +56,6 @@ static int ghash_update(struct shash_desc *desc,
+ 			 const u8 *src, unsigned int srclen)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 	unsigned int n;
+ 	u8 *buf = dctx->buffer;
+ 	int ret;
+@@ -70,7 +71,7 @@ static int ghash_update(struct shash_desc *desc,
+ 		src += n;
+ 
+ 		if (!dctx->bytes) {
+-			ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf,
++			ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf,
+ 					      GHASH_BLOCK_SIZE);
+ 			if (ret != GHASH_BLOCK_SIZE)
+ 				return -EIO;
+@@ -79,7 +80,7 @@ static int ghash_update(struct shash_desc *desc,
+ 
+ 	n = srclen & ~(GHASH_BLOCK_SIZE - 1);
+ 	if (n) {
+-		ret = crypt_s390_kimd(KIMD_GHASH, ctx, src, n);
++		ret = crypt_s390_kimd(KIMD_GHASH, dctx, src, n);
+ 		if (ret != n)
+ 			return -EIO;
+ 		src += n;
+@@ -94,7 +95,7 @@ static int ghash_update(struct shash_desc *desc,
+ 	return 0;
+ }
+ 
+-static int ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
++static int ghash_flush(struct ghash_desc_ctx *dctx)
+ {
+ 	u8 *buf = dctx->buffer;
+ 	int ret;
+@@ -104,24 +105,24 @@ static int ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
+ 
+ 		memset(pos, 0, dctx->bytes);
+ 
+-		ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf, GHASH_BLOCK_SIZE);
++		ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf, GHASH_BLOCK_SIZE);
+ 		if (ret != GHASH_BLOCK_SIZE)
+ 			return -EIO;
++
++		dctx->bytes = 0;
+ 	}
+ 
+-	dctx->bytes = 0;
+ 	return 0;
+ }
+ 
+ static int ghash_final(struct shash_desc *desc, u8 *dst)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 	int ret;
+ 
+-	ret = ghash_flush(ctx, dctx);
++	ret = ghash_flush(dctx);
+ 	if (!ret)
+-		memcpy(dst, ctx->icv, GHASH_BLOCK_SIZE);
++		memcpy(dst, dctx->icv, GHASH_BLOCK_SIZE);
+ 	return ret;
+ }
+ 
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index e08ec38f8c6e..e10112da008d 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -600,7 +600,7 @@ static inline int pmd_large(pmd_t pmd)
+ 	return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
+ }
+ 
+-static inline int pmd_pfn(pmd_t pmd)
++static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+ 	unsigned long origin_mask;
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index a236e39cc385..1c0fb570b5c2 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -212,6 +212,7 @@ union kvm_mmu_page_role {
+ 		unsigned nxe:1;
+ 		unsigned cr0_wp:1;
+ 		unsigned smep_andnot_wp:1;
++		unsigned smap_andnot_wp:1;
+ 	};
+ };
+ 
+@@ -404,6 +405,7 @@ struct kvm_vcpu_arch {
+ 	struct kvm_mmu_memory_cache mmu_page_header_cache;
+ 
+ 	struct fpu guest_fpu;
++	bool eager_fpu;
+ 	u64 xcr0;
+ 	u64 guest_supported_xcr0;
+ 	u32 guest_xstate_size;
+@@ -735,6 +737,7 @@ struct kvm_x86_ops {
+ 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+ 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
+ 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
++	void (*fpu_activate)(struct kvm_vcpu *vcpu);
+ 	void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
+ 
+ 	void (*tlb_flush)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 3c036cb4a370..11dd8f23fcea 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -705,6 +705,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ 			  struct pt_regs *regs)
+ {
+ 	int i, ret = 0;
++	char *tmp;
+ 
+ 	for (i = 0; i < mca_cfg.banks; i++) {
+ 		m->status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
+@@ -713,9 +714,11 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ 			if (quirk_no_way_out)
+ 				quirk_no_way_out(i, m, regs);
+ 		}
+-		if (mce_severity(m, mca_cfg.tolerant, msg, true) >=
+-		    MCE_PANIC_SEVERITY)
++
++		if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) {
++			*msg = tmp;
+ 			ret = 1;
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+index c4bb8b8e5017..76d8cbe5a10f 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+@@ -680,6 +680,7 @@ static int __init rapl_pmu_init(void)
+ 		break;
+ 	case 60: /* Haswell */
+ 	case 69: /* Haswell-Celeron */
++	case 61: /* Broadwell */
+ 		rapl_cntr_mask = RAPL_IDX_HSW;
+ 		rapl_pmu_events_group.attrs = rapl_events_hsw_attr;
+ 		break;
+diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
+index d5651fce0b71..f341d56b7883 100644
+--- a/arch/x86/kernel/i387.c
++++ b/arch/x86/kernel/i387.c
+@@ -169,6 +169,21 @@ static void init_thread_xstate(void)
+ 		xstate_size = sizeof(struct i387_fxsave_struct);
+ 	else
+ 		xstate_size = sizeof(struct i387_fsave_struct);
++
++	/*
++	 * Quirk: we don't yet handle the XSAVES* instructions
++	 * correctly, as we don't correctly convert between
++	 * standard and compacted format when interfacing
++	 * with user-space - so disable it for now.
++	 *
++	 * The difference is small: with recent CPUs the
++	 * compacted format is only marginally smaller than
++	 * the standard FPU state format.
++	 *
++	 * ( This is easy to backport while we are fixing
++	 *   XSAVES* support. )
++	 */
++	setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 8a80737ee6e6..307f9ec28e08 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -16,6 +16,8 @@
+ #include <linux/module.h>
+ #include <linux/vmalloc.h>
+ #include <linux/uaccess.h>
++#include <asm/i387.h> /* For use_eager_fpu.  Ugh! */
++#include <asm/fpu-internal.h> /* For use_eager_fpu.  Ugh! */
+ #include <asm/user.h>
+ #include <asm/xsave.h>
+ #include "cpuid.h"
+@@ -95,6 +97,8 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
+ 	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
+ 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+ 
++	vcpu->arch.eager_fpu = guest_cpuid_has_mpx(vcpu);
++
+ 	/*
+ 	 * The existing code assumes virtual address is 48-bit in the canonical
+ 	 * address checks; exit if it is ever changed.
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index 4452eedfaedd..9bec2b8cdced 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -111,4 +111,12 @@ static inline bool guest_cpuid_has_rtm(struct kvm_vcpu *vcpu)
+ 	best = kvm_find_cpuid_entry(vcpu, 7, 0);
+ 	return best && (best->ebx & bit(X86_FEATURE_RTM));
+ }
++
++static inline bool guest_cpuid_has_mpx(struct kvm_vcpu *vcpu)
++{
++	struct kvm_cpuid_entry2 *best;
++
++	best = kvm_find_cpuid_entry(vcpu, 7, 0);
++	return best && (best->ebx & bit(X86_FEATURE_MPX));
++}
+ #endif
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index cee759299a35..88ee9282a57e 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3736,8 +3736,8 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
+ 	}
+ }
+ 
+-void update_permission_bitmask(struct kvm_vcpu *vcpu,
+-		struct kvm_mmu *mmu, bool ept)
++static void update_permission_bitmask(struct kvm_vcpu *vcpu,
++				      struct kvm_mmu *mmu, bool ept)
+ {
+ 	unsigned bit, byte, pfec;
+ 	u8 map;
+@@ -3918,6 +3918,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
+ {
+ 	bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
++	bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
+ 	struct kvm_mmu *context = &vcpu->arch.mmu;
+ 
+ 	MMU_WARN_ON(VALID_PAGE(context->root_hpa));
+@@ -3936,6 +3937,8 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
+ 	context->base_role.cr0_wp  = is_write_protection(vcpu);
+ 	context->base_role.smep_andnot_wp
+ 		= smep && !is_write_protection(vcpu);
++	context->base_role.smap_andnot_wp
++		= smap && !is_write_protection(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
+ 
+@@ -4207,12 +4210,18 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 		       const u8 *new, int bytes)
+ {
+ 	gfn_t gfn = gpa >> PAGE_SHIFT;
+-	union kvm_mmu_page_role mask = { .word = 0 };
+ 	struct kvm_mmu_page *sp;
+ 	LIST_HEAD(invalid_list);
+ 	u64 entry, gentry, *spte;
+ 	int npte;
+ 	bool remote_flush, local_flush, zap_page;
++	union kvm_mmu_page_role mask = (union kvm_mmu_page_role) {
++		.cr0_wp = 1,
++		.cr4_pae = 1,
++		.nxe = 1,
++		.smep_andnot_wp = 1,
++		.smap_andnot_wp = 1,
++	};
+ 
+ 	/*
+ 	 * If we don't have indirect shadow pages, it means no page is
+@@ -4238,7 +4247,6 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 	++vcpu->kvm->stat.mmu_pte_write;
+ 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
+ 
+-	mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
+ 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
+ 		if (detect_write_misaligned(sp, gpa, bytes) ||
+ 		      detect_write_flooding(sp)) {
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index c7d65637c851..0ada65ecddcf 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -71,8 +71,6 @@ enum {
+ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct);
+ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu);
+ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly);
+-void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+-		bool ept);
+ 
+ static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
+ {
+@@ -166,6 +164,8 @@ static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 	int index = (pfec >> 1) +
+ 		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+ 
++	WARN_ON(pfec & PFERR_RSVD_MASK);
++
+ 	return (mmu->permissions[index] >> pte_access) & 1;
+ }
+ 
+diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
+index fd49c867b25a..6e6d115fe9b5 100644
+--- a/arch/x86/kvm/paging_tmpl.h
++++ b/arch/x86/kvm/paging_tmpl.h
+@@ -718,6 +718,13 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
+ 					      mmu_is_nested(vcpu));
+ 		if (likely(r != RET_MMIO_PF_INVALID))
+ 			return r;
++
++		/*
++		 * A page fault with PFEC.RSVD = 1 is caused by a shadow
++		 * page fault and should not be used to walk the guest
++		 * page table.
++		 */
++		error_code &= ~PFERR_RSVD_MASK;
+ 	};
+ 
+ 	r = mmu_topup_memory_caches(vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index cc618c882f90..a4e62fcfabcb 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -4374,6 +4374,7 @@ static struct kvm_x86_ops svm_x86_ops = {
+ 	.cache_reg = svm_cache_reg,
+ 	.get_rflags = svm_get_rflags,
+ 	.set_rflags = svm_set_rflags,
++	.fpu_activate = svm_fpu_activate,
+ 	.fpu_deactivate = svm_fpu_deactivate,
+ 
+ 	.tlb_flush = svm_flush_tlb,
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index a60bd3aa0965..5318d64674b0 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -10179,6 +10179,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
+ 	.cache_reg = vmx_cache_reg,
+ 	.get_rflags = vmx_get_rflags,
+ 	.set_rflags = vmx_set_rflags,
++	.fpu_activate = vmx_fpu_activate,
+ 	.fpu_deactivate = vmx_fpu_deactivate,
+ 
+ 	.tlb_flush = vmx_flush_tlb,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e222ba5d2beb..8838057da9c3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -702,8 +702,9 @@ EXPORT_SYMBOL_GPL(kvm_set_xcr);
+ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ 	unsigned long old_cr4 = kvm_read_cr4(vcpu);
+-	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE |
+-				   X86_CR4_PAE | X86_CR4_SMEP;
++	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
++				   X86_CR4_SMEP | X86_CR4_SMAP;
++
+ 	if (cr4 & CR4_RESERVED_BITS)
+ 		return 1;
+ 
+@@ -744,9 +745,6 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+ 		kvm_mmu_reset_context(vcpu);
+ 
+-	if ((cr4 ^ old_cr4) & X86_CR4_SMAP)
+-		update_permission_bitmask(vcpu, vcpu->arch.walk_mmu, false);
+-
+ 	if ((cr4 ^ old_cr4) & X86_CR4_OSXSAVE)
+ 		kvm_update_cpuid(vcpu);
+ 
+@@ -6141,6 +6139,8 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+ 		return;
+ 
+ 	page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
++	if (is_error_page(page))
++		return;
+ 	kvm_x86_ops->set_apic_access_page_addr(vcpu, page_to_phys(page));
+ 
+ 	/*
+@@ -6996,7 +6996,9 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
+ 	fpu_save_init(&vcpu->arch.guest_fpu);
+ 	__kernel_fpu_end();
+ 	++vcpu->stat.fpu_reload;
+-	kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
++	if (!vcpu->arch.eager_fpu)
++		kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
++
+ 	trace_kvm_fpu(0);
+ }
+ 
+@@ -7012,11 +7014,21 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+ 						unsigned int id)
+ {
++	struct kvm_vcpu *vcpu;
++
+ 	if (check_tsc_unstable() && atomic_read(&kvm->online_vcpus) != 0)
+ 		printk_once(KERN_WARNING
+ 		"kvm: SMP vm created on host with unstable TSC; "
+ 		"guest TSC will not be reliable\n");
+-	return kvm_x86_ops->vcpu_create(kvm, id);
++
++	vcpu = kvm_x86_ops->vcpu_create(kvm, id);
++
++	/*
++	 * Activate fpu unconditionally in case the guest needs eager FPU.  It will be
++	 * deactivated soon if it doesn't.
++	 */
++	kvm_x86_ops->fpu_activate(vcpu);
++	return vcpu;
+ }
+ 
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index f9eeae871593..5aa1f6e281d2 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -182,7 +182,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
+ 		request_mem_region(addr, length, desc);
+ }
+ 
+-static int __init acpi_reserve_resources(void)
++static void __init acpi_reserve_resources(void)
+ {
+ 	acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
+ 		"ACPI PM1a_EVT_BLK");
+@@ -211,10 +211,7 @@ static int __init acpi_reserve_resources(void)
+ 	if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
+ 		acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
+ 			       acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
+-
+-	return 0;
+ }
+-device_initcall(acpi_reserve_resources);
+ 
+ void acpi_os_printf(const char *fmt, ...)
+ {
+@@ -1845,6 +1842,7 @@ acpi_status __init acpi_os_initialize(void)
+ 
+ acpi_status __init acpi_os_initialize1(void)
+ {
++	acpi_reserve_resources();
+ 	kacpid_wq = alloc_workqueue("kacpid", 0, 1);
+ 	kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
+ 	kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 33bb06e006c9..adce56fa9cef 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -66,6 +66,7 @@ enum board_ids {
+ 	board_ahci_yes_fbs,
+ 
+ 	/* board IDs for specific chipsets in alphabetical order */
++	board_ahci_avn,
+ 	board_ahci_mcp65,
+ 	board_ahci_mcp77,
+ 	board_ahci_mcp89,
+@@ -84,6 +85,8 @@ enum board_ids {
+ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+ static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
+ 				 unsigned long deadline);
++static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
++			      unsigned long deadline);
+ static void ahci_mcp89_apple_enable(struct pci_dev *pdev);
+ static bool is_mcp89_apple(struct pci_dev *pdev);
+ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
+@@ -107,6 +110,11 @@ static struct ata_port_operations ahci_p5wdh_ops = {
+ 	.hardreset		= ahci_p5wdh_hardreset,
+ };
+ 
++static struct ata_port_operations ahci_avn_ops = {
++	.inherits		= &ahci_ops,
++	.hardreset		= ahci_avn_hardreset,
++};
++
+ static const struct ata_port_info ahci_port_info[] = {
+ 	/* by features */
+ 	[board_ahci] = {
+@@ -151,6 +159,12 @@ static const struct ata_port_info ahci_port_info[] = {
+ 		.port_ops	= &ahci_ops,
+ 	},
+ 	/* by chipsets */
++	[board_ahci_avn] = {
++		.flags		= AHCI_FLAG_COMMON,
++		.pio_mask	= ATA_PIO4,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &ahci_avn_ops,
++	},
+ 	[board_ahci_mcp65] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP |
+ 				 AHCI_HFLAG_YES_NCQ),
+@@ -290,14 +304,14 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
+@@ -670,6 +684,79 @@ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
+ 	return rc;
+ }
+ 
++/*
++ * ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports.
++ *
++ * It has been observed with some SSDs that the timing of events in the
++ * link synchronization phase can leave the port in a state that cannot
++ * be recovered by a SATA hard reset alone.  The failing signature is
++ * SStatus.DET stuck at 1 ("Device presence detected but Phy
++ * communication not established").  It was found that unloading and
++ * reloading the driver when this problem occurs allows the drive
++ * connection to be recovered (DET advanced to 0x3).  The critical
++ * component of reloading the driver is that the port state machines are
++ * reset by bouncing "port enable" in the AHCI PCS configuration
++ * register.  So, reproduce that effect by bouncing a port whenever we
++ * see DET==1 after a reset.
++ */
++static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
++			      unsigned long deadline)
++{
++	const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
++	struct ata_port *ap = link->ap;
++	struct ahci_port_priv *pp = ap->private_data;
++	struct ahci_host_priv *hpriv = ap->host->private_data;
++	u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
++	unsigned long tmo = deadline - jiffies;
++	struct ata_taskfile tf;
++	bool online;
++	int rc, i;
++
++	DPRINTK("ENTER\n");
++
++	ahci_stop_engine(ap);
++
++	for (i = 0; i < 2; i++) {
++		u16 val;
++		u32 sstatus;
++		int port = ap->port_no;
++		struct ata_host *host = ap->host;
++		struct pci_dev *pdev = to_pci_dev(host->dev);
++
++		/* clear D2H reception area to properly wait for D2H FIS */
++		ata_tf_init(link->device, &tf);
++		tf.command = ATA_BUSY;
++		ata_tf_to_fis(&tf, 0, 0, d2h_fis);
++
++		rc = sata_link_hardreset(link, timing, deadline, &online,
++				ahci_check_ready);
++
++		if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 ||
++				(sstatus & 0xf) != 1)
++			break;
++
++		ata_link_printk(link, KERN_INFO, "avn bounce port%d\n",
++				port);
++
++		pci_read_config_word(pdev, 0x92, &val);
++		val &= ~(1 << port);
++		pci_write_config_word(pdev, 0x92, val);
++		ata_msleep(ap, 1000);
++		val |= 1 << port;
++		pci_write_config_word(pdev, 0x92, val);
++		deadline += tmo;
++	}
++
++	hpriv->start_engine(ap);
++
++	if (online)
++		*class = ahci_dev_classify(ap);
++
++	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
++	return rc;
++}
++
++
+ #ifdef CONFIG_PM
+ static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
+ {
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 61a9c07e0dff..287c4ba0219f 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -1707,8 +1707,7 @@ static void ahci_handle_port_interrupt(struct ata_port *ap,
+ 	if (unlikely(resetting))
+ 		status &= ~PORT_IRQ_BAD_PMP;
+ 
+-	/* if LPM is enabled, PHYRDY doesn't mean anything */
+-	if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) {
++	if (sata_lpm_ignore_phy_events(&ap->link)) {
+ 		status &= ~PORT_IRQ_PHYRDY;
+ 		ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG);
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 23dac3babfe3..87b4b7f9fdc6 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4214,7 +4214,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Crucial_CT*MX100*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+-	{ "Samsung SSD 850 PRO*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++	{ "Samsung SSD 8*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+ 	/*
+@@ -6728,6 +6728,38 @@ u32 ata_wait_register(struct ata_port *ap, void __iomem *reg, u32 mask, u32 val,
+ 	return tmp;
+ }
+ 
++/**
++ *	sata_lpm_ignore_phy_events - test if PHY event should be ignored
++ *	@link: Link receiving the event
++ *
++ *	Test whether the received PHY event has to be ignored or not.
++ *
++ *	LOCKING:
++ *	None.
++ *
++ *	RETURNS:
++ *	True if the event has to be ignored.
++ */
++bool sata_lpm_ignore_phy_events(struct ata_link *link)
++{
++	unsigned long lpm_timeout = link->last_lpm_change +
++				    msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY);
++
++	/* if LPM is enabled, PHYRDY doesn't mean anything */
++	if (link->lpm_policy > ATA_LPM_MAX_POWER)
++		return true;
++
++	/* ignore the first PHY event after the LPM policy changed
++	 * as it might be spurious
++	 */
++	if ((link->flags & ATA_LFLAG_CHANGED) &&
++	    time_before(jiffies, lpm_timeout))
++		return true;
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);
++
+ /*
+  * Dummy port_ops
+  */
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index d2029a462e2c..89c3d83e1ca7 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -3489,6 +3489,9 @@ static int ata_eh_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
+ 		}
+ 	}
+ 
++	link->last_lpm_change = jiffies;
++	link->flags |= ATA_LFLAG_CHANGED;
++
+ 	return 0;
+ 
+ fail:
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 237f23f68bfc..1daa0ea2f1ac 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1443,8 +1443,10 @@ static struct clk_core *__clk_set_parent_before(struct clk_core *clk,
+ 	 */
+ 	if (clk->prepare_count) {
+ 		clk_core_prepare(parent);
++		flags = clk_enable_lock();
+ 		clk_core_enable(parent);
+ 		clk_core_enable(clk);
++		clk_enable_unlock(flags);
+ 	}
+ 
+ 	/* update the clk tree topology */
+@@ -1459,13 +1461,17 @@ static void __clk_set_parent_after(struct clk_core *core,
+ 				   struct clk_core *parent,
+ 				   struct clk_core *old_parent)
+ {
++	unsigned long flags;
++
+ 	/*
+ 	 * Finish the migration of prepare state and undo the changes done
+ 	 * for preventing a race with clk_enable().
+ 	 */
+ 	if (core->prepare_count) {
++		flags = clk_enable_lock();
+ 		clk_core_disable(core);
+ 		clk_core_disable(old_parent);
++		clk_enable_unlock(flags);
+ 		clk_core_unprepare(old_parent);
+ 	}
+ }
+@@ -1489,8 +1495,10 @@ static int __clk_set_parent(struct clk_core *clk, struct clk_core *parent,
+ 		clk_enable_unlock(flags);
+ 
+ 		if (clk->prepare_count) {
++			flags = clk_enable_lock();
+ 			clk_core_disable(clk);
+ 			clk_core_disable(parent);
++			clk_enable_unlock(flags);
+ 			clk_core_unprepare(parent);
+ 		}
+ 		return ret;
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index 07d666cc6a29..bea4a173eef5 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -271,6 +271,7 @@ static const struct samsung_clk_reg_dump exynos5420_set_clksrc[] = {
+ 	{ .offset = SRC_MASK_PERIC0,		.value = 0x11111110, },
+ 	{ .offset = SRC_MASK_PERIC1,		.value = 0x11111100, },
+ 	{ .offset = SRC_MASK_ISP,		.value = 0x11111000, },
++	{ .offset = GATE_BUS_TOP,		.value = 0xffffffff, },
+ 	{ .offset = GATE_BUS_DISP1,		.value = 0xffffffff, },
+ 	{ .offset = GATE_IP_PERIC,		.value = 0xffffffff, },
+ };
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index 2eebd28b4c40..ccc20188f00c 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -499,18 +499,19 @@ static int __init dmi_present(const u8 *buf)
+ 	buf += 16;
+ 
+ 	if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf, 15)) {
++		if (smbios_ver)
++			dmi_ver = smbios_ver;
++		else
++			dmi_ver = (buf[14] & 0xF0) << 4 | (buf[14] & 0x0F);
+ 		dmi_num = get_unaligned_le16(buf + 12);
+ 		dmi_len = get_unaligned_le16(buf + 6);
+ 		dmi_base = get_unaligned_le32(buf + 8);
+ 
+ 		if (dmi_walk_early(dmi_decode) == 0) {
+ 			if (smbios_ver) {
+-				dmi_ver = smbios_ver;
+ 				pr_info("SMBIOS %d.%d present.\n",
+ 				       dmi_ver >> 8, dmi_ver & 0xFF);
+ 			} else {
+-				dmi_ver = (buf[14] & 0xF0) << 4 |
+-					   (buf[14] & 0x0F);
+ 				pr_info("Legacy DMI %d.%d present.\n",
+ 				       dmi_ver >> 8, dmi_ver & 0xFF);
+ 			}
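
The dmi_scan fix moves the version computation ahead of dmi_walk_early() so
the decode callbacks already see the correct dmi_ver. For the legacy _DMI_
entry point the version lives in byte 14 as packed BCD; the expression widens
it into the "major << 8 | minor" layout used for SMBIOS versions. A
standalone illustration (plain userspace C, not kernel code):

    #include <assert.h>

    int main(void)
    {
    	unsigned char b = 0x24;	/* BCD encoding of version 2.4 */
    	unsigned int ver = (b & 0xF0) << 4 | (b & 0x0F);

    	assert(ver == 0x0204);				/* major << 8 | minor */
    	assert((ver >> 8) == 2 && (ver & 0xFF) == 4);
    	return 0;
    }
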
+diff --git a/drivers/gpio/gpio-kempld.c b/drivers/gpio/gpio-kempld.c
+index 443518f63f15..a6b0def4bd7b 100644
+--- a/drivers/gpio/gpio-kempld.c
++++ b/drivers/gpio/gpio-kempld.c
+@@ -117,7 +117,7 @@ static int kempld_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
+ 		= container_of(chip, struct kempld_gpio_data, chip);
+ 	struct kempld_device_data *pld = gpio->pld;
+ 
+-	return kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
++	return !kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
+ }
+ 
+ static int kempld_gpio_pincount(struct kempld_device_data *pld)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 498399323a8c..406624a0b201 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -729,7 +729,7 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 				kfd2kgd->get_max_engine_clock_in_mhz(
+ 					dev->gpu->kgd));
+ 		sysfs_show_64bit_prop(buffer, "local_mem_size",
+-				kfd2kgd->get_vmem_size(dev->gpu->kgd));
++				(unsigned long long int) 0);
+ 
+ 		sysfs_show_32bit_prop(buffer, "fw_version",
+ 				kfd2kgd->get_fw_version(
+diff --git a/drivers/gpu/drm/drm_plane_helper.c b/drivers/gpu/drm/drm_plane_helper.c
+index 5ba5792bfdba..98b125763ecd 100644
+--- a/drivers/gpu/drm/drm_plane_helper.c
++++ b/drivers/gpu/drm/drm_plane_helper.c
+@@ -476,6 +476,9 @@ int drm_plane_helper_commit(struct drm_plane *plane,
+ 		if (!crtc[i])
+ 			continue;
+ 
++		if (crtc[i]->cursor == plane)
++			continue;
++
+ 		/* There's no other way to figure out whether the crtc is running. */
+ 		ret = drm_crtc_vblank_get(crtc[i]);
+ 		if (ret == 0) {
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 1afc0b419da2..965a45619f6b 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -1789,7 +1789,9 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
+ 			if ((crtc->mode.clock == test_crtc->mode.clock) &&
+ 			    (adjusted_clock == test_adjusted_clock) &&
+ 			    (radeon_crtc->ss_enabled == test_radeon_crtc->ss_enabled) &&
+-			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID))
++			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID) &&
++			    (drm_detect_monitor_audio(radeon_connector_edid(test_radeon_crtc->connector)) ==
++			     drm_detect_monitor_audio(radeon_connector_edid(radeon_crtc->connector))))
+ 				return test_radeon_crtc->pll_id;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
+index 8d74de82456e..8b2c4c890507 100644
+--- a/drivers/gpu/drm/radeon/atombios_dp.c
++++ b/drivers/gpu/drm/radeon/atombios_dp.c
+@@ -412,19 +412,21 @@ bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector)
+ {
+ 	struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ 	u8 msg[DP_DPCD_SIZE];
+-	int ret;
++	int ret, i;
+ 
+-	ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
+-			       DP_DPCD_SIZE);
+-	if (ret > 0) {
+-		memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
++	for (i = 0; i < 7; i++) {
++		ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
++				       DP_DPCD_SIZE);
++		if (ret == DP_DPCD_SIZE) {
++			memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
+ 
+-		DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+-			      dig_connector->dpcd);
++			DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
++				      dig_connector->dpcd);
+ 
+-		radeon_dp_probe_oui(radeon_connector);
++			radeon_dp_probe_oui(radeon_connector);
+ 
+-		return true;
++			return true;
++		}
+ 	}
+ 	dig_connector->dpcd[0] = 0;
+ 	return false;
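
Two things change in radeon_dp_getdpcd(): the read is retried a bounded
number of times for flaky links, and success now requires ret == DP_DPCD_SIZE,
so a short read no longer counts. The generic shape, with read_block() and
MAX_TRIES as placeholders rather than radeon symbols:

    #define MAX_TRIES 7

    static bool read_all(u8 *buf, size_t size)
    {
    	int i;

    	for (i = 0; i < MAX_TRIES; i++) {
    		ssize_t ret = read_block(buf, size);	/* hypothetical helper */

    		if (ret == (ssize_t)size)	/* short or failed reads retry */
    			return true;
    	}
    	return false;
    }
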
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index 3e670d344a20..19aafb71fd8e 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -5804,7 +5804,7 @@ static int cik_pcie_gart_enable(struct radeon_device *rdev)
+ 	/* restore context1-15 */
+ 	/* set vm size, must be a multiple of 4 */
+ 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
+-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
++	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
+ 	for (i = 1; i < 16; i++) {
+ 		if (i < 8)
+ 			WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+index 0926739c9fa7..9953356fe263 100644
+--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+@@ -400,7 +400,7 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
+ 	if (enable) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+-		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++		if (connector && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
+ 			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
+ 			       HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
+@@ -438,7 +438,8 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 	if (!dig || !dig->afmt)
+ 		return;
+ 
+-	if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++	if (enable && connector &&
++	    drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 		struct radeon_connector_atom_dig *dig_connector;
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index dab00812abaa..02d585455f49 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -1272,7 +1272,8 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
+ 	 */
+ 	for (i = 1; i < 8; i++) {
+ 		WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
+-		WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
++		WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2),
++			rdev->vm_manager.max_pfn - 1);
+ 		WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+ 		       rdev->vm_manager.saved_table_addr[i]);
+ 	}
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b7c6bb69f3c7..88c04bc0a7f6 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -460,9 +460,6 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	if (!connector || !connector->encoder)
+ 		return;
+ 
+-	if (!radeon_encoder_is_digital(connector->encoder))
+-		return;
+-
+ 	rdev = connector->encoder->dev->dev_private;
+ 
+ 	if (!radeon_audio_chipset_supported(rdev))
+@@ -471,26 +468,26 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+-	if (!dig->afmt)
+-		return;
+-
+ 	if (status == connector_status_connected) {
+-		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
++		struct radeon_connector *radeon_connector;
++		int sink_type;
++
++		if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			radeon_encoder->audio = NULL;
++			return;
++		}
++
++		radeon_connector = to_radeon_connector(connector);
++		sink_type = radeon_dp_getsinktype(radeon_connector);
+ 
+ 		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
+-		    radeon_dp_getsinktype(radeon_connector) ==
+-		    CONNECTOR_OBJECT_ID_DISPLAYPORT)
++			sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ 			radeon_encoder->audio = rdev->audio.dp_funcs;
+ 		else
+ 			radeon_encoder->audio = rdev->audio.hdmi_funcs;
+ 
+ 		dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
+-		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+-			radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
+-		} else {
+-			radeon_audio_enable(rdev, dig->afmt->pin, 0);
+-			dig->afmt->pin = NULL;
+-		}
++		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
+ 	} else {
+ 		radeon_audio_enable(rdev, dig->afmt->pin, 0);
+ 		dig->afmt->pin = NULL;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 27973e3faf0e..27def67cb6be 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1333,10 +1333,8 @@ out:
+ 	/* updated in get modes as well since we need to know if it's analog or digital */
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0) {
+-		radeon_connector_get_edid(connector);
++	if (radeon_audio != 0)
+ 		radeon_audio_detect(connector, ret);
+-	}
+ 
+ exit:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+@@ -1661,10 +1659,8 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ 
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0) {
+-		radeon_connector_get_edid(connector);
++	if (radeon_audio != 0)
+ 		radeon_audio_detect(connector, ret);
+-	}
+ 
+ out:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index a7fb2735d4a9..f433491fab6f 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -4288,7 +4288,7 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
+ 	/* empty context1-15 */
+ 	/* set vm size, must be a multiple of 4 */
+ 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
+-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
++	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
+ 	/* Assign the pt base to something valid for now; the pts used for
+ 	 * the VMs are determined by the application and setup and assigned
+ 	 * on the fly in the vm part of radeon_gart.c
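
The cik.c, ni.c and si.c hunks above are the same one-line fix in three
places: the *_PAGE_TABLE_END_ADDR registers take the last valid page frame
number, an inclusive bound, so writing max_pfn mapped one page past the end
of the VM range. In outline:

    /* pages covered = END - START + 1, so END = START + max_pfn - 1 */
    WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
    WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
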
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index e77658cd037c..2caf5b2f3446 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -39,7 +39,6 @@ MODULE_AUTHOR("Nestor Lopez Casado <nlopezcasad@logitech.com>");
+ /* bits 1..20 are reserved for classes */
+ #define HIDPP_QUIRK_DELAYED_INIT		BIT(21)
+ #define HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS	BIT(22)
+-#define HIDPP_QUIRK_MULTI_INPUT			BIT(23)
+ 
+ /*
+  * There are two hidpp protocols in use, the first version hidpp10 is known
+@@ -701,12 +700,6 @@ static int wtp_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		struct hid_field *field, struct hid_usage *usage,
+ 		unsigned long **bit, int *max)
+ {
+-	struct hidpp_device *hidpp = hid_get_drvdata(hdev);
+-
+-	if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) &&
+-	    (field->application == HID_GD_KEYBOARD))
+-		return 0;
+-
+ 	return -1;
+ }
+ 
+@@ -715,10 +708,6 @@ static void wtp_populate_input(struct hidpp_device *hidpp,
+ {
+ 	struct wtp_data *wd = hidpp->private_data;
+ 
+-	if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) && origin_is_hid_core)
+-		/* this is the generic hid-input call */
+-		return;
+-
+ 	__set_bit(EV_ABS, input_dev->evbit);
+ 	__set_bit(EV_KEY, input_dev->evbit);
+ 	__clear_bit(EV_REL, input_dev->evbit);
+@@ -1234,10 +1223,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT)
+ 		connect_mask &= ~HID_CONNECT_HIDINPUT;
+ 
+-	/* Re-enable hidinput for multi-input devices */
+-	if (hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT)
+-		connect_mask |= HID_CONNECT_HIDINPUT;
+-
+ 	ret = hid_hw_start(hdev, connect_mask);
+ 	if (ret) {
+ 		hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+@@ -1285,11 +1270,6 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_T651),
+ 	  .driver_data = HIDPP_QUIRK_CLASS_WTP },
+-	{ /* Keyboard TK820 */
+-	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+-		USB_VENDOR_ID_LOGITECH, 0x4102),
+-	  .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_MULTI_INPUT |
+-			 HIDPP_QUIRK_CLASS_WTP },
+ 
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ 		USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},
+diff --git a/drivers/hwmon/nct6683.c b/drivers/hwmon/nct6683.c
+index f3830db02d46..37f01702d081 100644
+--- a/drivers/hwmon/nct6683.c
++++ b/drivers/hwmon/nct6683.c
+@@ -439,6 +439,7 @@ nct6683_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				 (*t)->dev_attr.attr.name, tg->base + i);
+ 			if ((*t)->s2) {
+ 				a2 = &su->u.a2;
++				sysfs_attr_init(&a2->dev_attr.attr);
+ 				a2->dev_attr.attr.name = su->name;
+ 				a2->nr = (*t)->u.s.nr + i;
+ 				a2->index = (*t)->u.s.index;
+@@ -449,6 +450,7 @@ nct6683_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				*attrs = &a2->dev_attr.attr;
+ 			} else {
+ 				a = &su->u.a1;
++				sysfs_attr_init(&a->dev_attr.attr);
+ 				a->dev_attr.attr.name = su->name;
+ 				a->index = (*t)->u.index + i;
+ 				a->dev_attr.attr.mode =
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index 1be41177b620..0773930c110e 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -994,6 +994,7 @@ nct6775_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				 (*t)->dev_attr.attr.name, tg->base + i);
+ 			if ((*t)->s2) {
+ 				a2 = &su->u.a2;
++				sysfs_attr_init(&a2->dev_attr.attr);
+ 				a2->dev_attr.attr.name = su->name;
+ 				a2->nr = (*t)->u.s.nr + i;
+ 				a2->index = (*t)->u.s.index;
+@@ -1004,6 +1005,7 @@ nct6775_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				*attrs = &a2->dev_attr.attr;
+ 			} else {
+ 				a = &su->u.a1;
++				sysfs_attr_init(&a->dev_attr.attr);
+ 				a->dev_attr.attr.name = su->name;
+ 				a->index = (*t)->u.index + i;
+ 				a->dev_attr.attr.mode =
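
Both hwmon hunks add the same call for the same reason: when an attribute
lives in dynamically allocated memory, lockdep needs sysfs_attr_init() to
assign it a lock class before registration, or it warns at attribute-creation
time. A sketch of the required order, with my_show() as a hypothetical
callback:

    static ssize_t my_show(struct device *dev,
    		       struct device_attribute *a, char *buf)
    {
    	return sprintf(buf, "42\n");
    }

    struct device_attribute *attr;

    attr = devm_kzalloc(dev, sizeof(*attr), GFP_KERNEL);
    if (!attr)
    	return -ENOMEM;

    sysfs_attr_init(&attr->attr);	/* must precede registration */
    attr->attr.name = "temp1_input";
    attr->attr.mode = 0444;
    attr->show = my_show;

    return device_create_file(dev, attr);
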
+diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
+index 112e4d45e4a0..68800115876b 100644
+--- a/drivers/hwmon/ntc_thermistor.c
++++ b/drivers/hwmon/ntc_thermistor.c
+@@ -239,8 +239,10 @@ static struct ntc_thermistor_platform_data *
+ ntc_thermistor_parse_dt(struct platform_device *pdev)
+ {
+ 	struct iio_channel *chan;
++	enum iio_chan_type type;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct ntc_thermistor_platform_data *pdata;
++	int ret;
+ 
+ 	if (!np)
+ 		return NULL;
+@@ -253,6 +255,13 @@ ntc_thermistor_parse_dt(struct platform_device *pdev)
+ 	if (IS_ERR(chan))
+ 		return ERR_CAST(chan);
+ 
++	ret = iio_get_channel_type(chan, &type);
++	if (ret < 0)
++		return ERR_PTR(ret);
++
++	if (type != IIO_VOLTAGE)
++		return ERR_PTR(-EINVAL);
++
+ 	if (of_property_read_u32(np, "pullup-uv", &pdata->pullup_uv))
+ 		return ERR_PTR(-ENODEV);
+ 	if (of_property_read_u32(np, "pullup-ohm", &pdata->pullup_ohm))
+diff --git a/drivers/hwmon/tmp401.c b/drivers/hwmon/tmp401.c
+index 99664ebc738d..ccf4cffe0ee1 100644
+--- a/drivers/hwmon/tmp401.c
++++ b/drivers/hwmon/tmp401.c
+@@ -44,7 +44,7 @@
+ #include <linux/sysfs.h>
+ 
+ /* Addresses to scan */
+-static const unsigned short normal_i2c[] = { 0x37, 0x48, 0x49, 0x4a, 0x4c, 0x4d,
++static const unsigned short normal_i2c[] = { 0x48, 0x49, 0x4a, 0x4c, 0x4d,
+ 	0x4e, 0x4f, I2C_CLIENT_END };
+ 
+ enum chips { tmp401, tmp411, tmp431, tmp432, tmp435 };
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 53f32629283a..6805db0e4f07 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -465,6 +465,7 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &accel_info;
++	mutex_init(&adata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/adc/axp288_adc.c b/drivers/iio/adc/axp288_adc.c
+index 08bcfb061ca5..56008a86b78f 100644
+--- a/drivers/iio/adc/axp288_adc.c
++++ b/drivers/iio/adc/axp288_adc.c
+@@ -53,39 +53,42 @@ static const struct iio_chan_spec const axp288_adc_channels[] = {
+ 		.channel = 0,
+ 		.address = AXP288_TS_ADC_H,
+ 		.datasheet_name = "TS_PIN",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_TEMP,
+ 		.channel = 1,
+ 		.address = AXP288_PMIC_ADC_H,
+ 		.datasheet_name = "PMIC_TEMP",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_TEMP,
+ 		.channel = 2,
+ 		.address = AXP288_GP_ADC_H,
+ 		.datasheet_name = "GPADC",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_CURRENT,
+ 		.channel = 3,
+ 		.address = AXP20X_BATT_CHRG_I_H,
+ 		.datasheet_name = "BATT_CHG_I",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_CURRENT,
+ 		.channel = 4,
+ 		.address = AXP20X_BATT_DISCHRG_I_H,
+ 		.datasheet_name = "BATT_DISCHRG_I",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_VOLTAGE,
+ 		.channel = 5,
+ 		.address = AXP20X_BATT_V_H,
+ 		.datasheet_name = "BATT_V",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	},
+ };
+ 
+@@ -151,9 +154,6 @@ static int axp288_adc_read_raw(struct iio_dev *indio_dev,
+ 						chan->address))
+ 			dev_err(&indio_dev->dev, "TS pin restore\n");
+ 		break;
+-	case IIO_CHAN_INFO_PROCESSED:
+-		ret = axp288_adc_read_channel(val, chan->address, info->regmap);
+-		break;
+ 	default:
+ 		ret = -EINVAL;
+ 	}
+diff --git a/drivers/iio/adc/cc10001_adc.c b/drivers/iio/adc/cc10001_adc.c
+index 51e2a83c9404..115f6e99a7fa 100644
+--- a/drivers/iio/adc/cc10001_adc.c
++++ b/drivers/iio/adc/cc10001_adc.c
+@@ -35,8 +35,9 @@
+ #define CC10001_ADC_EOC_SET		BIT(0)
+ 
+ #define CC10001_ADC_CHSEL_SAMPLED	0x0c
+-#define CC10001_ADC_POWER_UP		0x10
+-#define CC10001_ADC_POWER_UP_SET	BIT(0)
++#define CC10001_ADC_POWER_DOWN		0x10
++#define CC10001_ADC_POWER_DOWN_SET	BIT(0)
++
+ #define CC10001_ADC_DEBUG		0x14
+ #define CC10001_ADC_DATA_COUNT		0x20
+ 
+@@ -62,7 +63,6 @@ struct cc10001_adc_device {
+ 	u16 *buf;
+ 
+ 	struct mutex lock;
+-	unsigned long channel_map;
+ 	unsigned int start_delay_ns;
+ 	unsigned int eoc_delay_ns;
+ };
+@@ -79,6 +79,18 @@ static inline u32 cc10001_adc_read_reg(struct cc10001_adc_device *adc_dev,
+ 	return readl(adc_dev->reg_base + reg);
+ }
+ 
++static void cc10001_adc_power_up(struct cc10001_adc_device *adc_dev)
++{
++	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 0);
++	ndelay(adc_dev->start_delay_ns);
++}
++
++static void cc10001_adc_power_down(struct cc10001_adc_device *adc_dev)
++{
++	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN,
++			      CC10001_ADC_POWER_DOWN_SET);
++}
++
+ static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
+ 			      unsigned int channel)
+ {
+@@ -88,6 +100,7 @@ static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
+ 	val = (channel & CC10001_ADC_CH_MASK) | CC10001_ADC_MODE_SINGLE_CONV;
+ 	cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
+ 
++	udelay(1);
+ 	val = cc10001_adc_read_reg(adc_dev, CC10001_ADC_CONFIG);
+ 	val = val | CC10001_ADC_START_CONV;
+ 	cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
+@@ -129,6 +142,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 	struct iio_dev *indio_dev;
+ 	unsigned int delay_ns;
+ 	unsigned int channel;
++	unsigned int scan_idx;
+ 	bool sample_invalid;
+ 	u16 *data;
+ 	int i;
+@@ -139,20 +153,17 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 
+ 	mutex_lock(&adc_dev->lock);
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
+-			      CC10001_ADC_POWER_UP_SET);
+-
+-	/* Wait for 8 (6+2) clock cycles before activating START */
+-	ndelay(adc_dev->start_delay_ns);
++	cc10001_adc_power_up(adc_dev);
+ 
+ 	/* Calculate delay step for eoc and sampled data */
+ 	delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
+ 
+ 	i = 0;
+ 	sample_invalid = false;
+-	for_each_set_bit(channel, indio_dev->active_scan_mask,
++	for_each_set_bit(scan_idx, indio_dev->active_scan_mask,
+ 				  indio_dev->masklength) {
+ 
++		channel = indio_dev->channels[scan_idx].channel;
+ 		cc10001_adc_start(adc_dev, channel);
+ 
+ 		data[i] = cc10001_adc_poll_done(indio_dev, channel, delay_ns);
+@@ -166,7 +177,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 	}
+ 
+ done:
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
++	cc10001_adc_power_down(adc_dev);
+ 
+ 	mutex_unlock(&adc_dev->lock);
+ 
+@@ -185,11 +196,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
+ 	unsigned int delay_ns;
+ 	u16 val;
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
+-			      CC10001_ADC_POWER_UP_SET);
+-
+-	/* Wait for 8 (6+2) clock cycles before activating START */
+-	ndelay(adc_dev->start_delay_ns);
++	cc10001_adc_power_up(adc_dev);
+ 
+ 	/* Calculate delay step for eoc and sampled data */
+ 	delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
+@@ -198,7 +205,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
+ 
+ 	val = cc10001_adc_poll_done(indio_dev, chan->channel, delay_ns);
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
++	cc10001_adc_power_down(adc_dev);
+ 
+ 	return val;
+ }
+@@ -224,7 +231,7 @@ static int cc10001_adc_read_raw(struct iio_dev *indio_dev,
+ 
+ 	case IIO_CHAN_INFO_SCALE:
+ 		ret = regulator_get_voltage(adc_dev->reg);
+-		if (ret)
++		if (ret < 0)
+ 			return ret;
+ 
+ 		*val = ret / 1000;
+@@ -255,22 +262,22 @@ static const struct iio_info cc10001_adc_info = {
+ 	.update_scan_mode = &cc10001_update_scan_mode,
+ };
+ 
+-static int cc10001_adc_channel_init(struct iio_dev *indio_dev)
++static int cc10001_adc_channel_init(struct iio_dev *indio_dev,
++				    unsigned long channel_map)
+ {
+-	struct cc10001_adc_device *adc_dev = iio_priv(indio_dev);
+ 	struct iio_chan_spec *chan_array, *timestamp;
+ 	unsigned int bit, idx = 0;
+ 
+-	indio_dev->num_channels = bitmap_weight(&adc_dev->channel_map,
+-						CC10001_ADC_NUM_CHANNELS);
++	indio_dev->num_channels = bitmap_weight(&channel_map,
++						CC10001_ADC_NUM_CHANNELS) + 1;
+ 
+-	chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels + 1,
++	chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels,
+ 				  sizeof(struct iio_chan_spec),
+ 				  GFP_KERNEL);
+ 	if (!chan_array)
+ 		return -ENOMEM;
+ 
+-	for_each_set_bit(bit, &adc_dev->channel_map, CC10001_ADC_NUM_CHANNELS) {
++	for_each_set_bit(bit, &channel_map, CC10001_ADC_NUM_CHANNELS) {
+ 		struct iio_chan_spec *chan = &chan_array[idx];
+ 
+ 		chan->type = IIO_VOLTAGE;
+@@ -305,6 +312,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 	unsigned long adc_clk_rate;
+ 	struct resource *res;
+ 	struct iio_dev *indio_dev;
++	unsigned long channel_map;
+ 	int ret;
+ 
+ 	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc_dev));
+@@ -313,9 +321,9 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 
+ 	adc_dev = iio_priv(indio_dev);
+ 
+-	adc_dev->channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
++	channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
+ 	if (!of_property_read_u32(node, "adc-reserved-channels", &ret))
+-		adc_dev->channel_map &= ~ret;
++		channel_map &= ~ret;
+ 
+ 	adc_dev->reg = devm_regulator_get(&pdev->dev, "vref");
+ 	if (IS_ERR(adc_dev->reg))
+@@ -361,7 +369,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 	adc_dev->start_delay_ns = adc_dev->eoc_delay_ns * CC10001_WAIT_CYCLES;
+ 
+ 	/* Setup the ADC channels available on the device */
+-	ret = cc10001_adc_channel_init(indio_dev);
++	ret = cc10001_adc_channel_init(indio_dev, channel_map);
+ 	if (ret < 0)
+ 		goto err_disable_clk;
+ 
+diff --git a/drivers/iio/adc/qcom-spmi-vadc.c b/drivers/iio/adc/qcom-spmi-vadc.c
+index 3211729bcb0b..0c4618b4d515 100644
+--- a/drivers/iio/adc/qcom-spmi-vadc.c
++++ b/drivers/iio/adc/qcom-spmi-vadc.c
+@@ -18,6 +18,7 @@
+ #include <linux/iio/iio.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
++#include <linux/math64.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -471,11 +472,11 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
+ 			  const struct vadc_channel_prop *prop, u16 adc_code)
+ {
+ 	const struct vadc_prescale_ratio *prescale;
+-	s32 voltage;
++	s64 voltage;
+ 
+ 	voltage = adc_code - vadc->graph[prop->calibration].gnd;
+ 	voltage *= vadc->graph[prop->calibration].dx;
+-	voltage = voltage / vadc->graph[prop->calibration].dy;
++	voltage = div64_s64(voltage, vadc->graph[prop->calibration].dy);
+ 
+ 	if (prop->calibration == VADC_CALIB_ABSOLUTE)
+ 		voltage += vadc->graph[prop->calibration].dx;
+@@ -487,7 +488,7 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
+ 
+ 	voltage = voltage * prescale->den;
+ 
+-	return voltage / prescale->num;
++	return div64_s64(voltage, prescale->num);
+ }
+ 
+ static int vadc_decimation_from_dt(u32 value)
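
The vadc change is not only about headroom: on 32-bit kernels a plain '/'
with 64-bit operands makes the compiler emit a call to libgcc's __divdi3,
which the kernel does not provide, so the build breaks at link time.
<linux/math64.h> supplies explicit helpers instead; a sketch:

    #include <linux/math64.h>

    static s64 scale(s64 value, s64 num, s64 den)
    {
    	/* 64-by-64 signed divide that also links on 32-bit */
    	return div64_s64(value * num, den);
    }
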
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index a221f7329b79..ce93bd8e3f68 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -856,6 +856,7 @@ static int xadc_read_raw(struct iio_dev *indio_dev,
+ 			switch (chan->address) {
+ 			case XADC_REG_VCCINT:
+ 			case XADC_REG_VCCAUX:
++			case XADC_REG_VREFP:
+ 			case XADC_REG_VCCBRAM:
+ 			case XADC_REG_VCCPINT:
+ 			case XADC_REG_VCCPAUX:
+@@ -996,7 +997,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
+ 	.num_event_specs = (_alarm) ? ARRAY_SIZE(xadc_voltage_events) : 0, \
+ 	.scan_index = (_scan_index), \
+ 	.scan_type = { \
+-		.sign = 'u', \
++		.sign = ((_addr) == XADC_REG_VREFN) ? 's' : 'u', \
+ 		.realbits = 12, \
+ 		.storagebits = 16, \
+ 		.shift = 4, \
+@@ -1008,7 +1009,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
+ static const struct iio_chan_spec xadc_channels[] = {
+ 	XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP),
+ 	XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true),
+-	XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCINT, "vccaux", true),
++	XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCAUX, "vccaux", true),
+ 	XADC_CHAN_VOLTAGE(2, 14, XADC_REG_VCCBRAM, "vccbram", true),
+ 	XADC_CHAN_VOLTAGE(3, 5, XADC_REG_VCCPINT, "vccpint", true),
+ 	XADC_CHAN_VOLTAGE(4, 6, XADC_REG_VCCPAUX, "vccpaux", true),
+diff --git a/drivers/iio/adc/xilinx-xadc.h b/drivers/iio/adc/xilinx-xadc.h
+index c7487e8d7f80..54adc5087210 100644
+--- a/drivers/iio/adc/xilinx-xadc.h
++++ b/drivers/iio/adc/xilinx-xadc.h
+@@ -145,9 +145,9 @@ static inline int xadc_write_adc_reg(struct xadc *xadc, unsigned int reg,
+ #define XADC_REG_MAX_VCCPINT	0x28
+ #define XADC_REG_MAX_VCCPAUX	0x29
+ #define XADC_REG_MAX_VCCO_DDR	0x2a
+-#define XADC_REG_MIN_VCCPINT	0x2b
+-#define XADC_REG_MIN_VCCPAUX	0x2c
+-#define XADC_REG_MIN_VCCO_DDR	0x2d
++#define XADC_REG_MIN_VCCPINT	0x2c
++#define XADC_REG_MIN_VCCPAUX	0x2d
++#define XADC_REG_MIN_VCCO_DDR	0x2e
+ 
+ #define XADC_REG_CONF0		0x40
+ #define XADC_REG_CONF1		0x41
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index edd13d2b4121..8dd0477e201c 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -304,8 +304,6 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 	struct st_sensors_platform_data *of_pdata;
+ 	int err = 0;
+ 
+-	mutex_init(&sdata->tb.buf_lock);
+-
+ 	/* If OF/DT pdata exists, it will take precedence of anything else */
+ 	of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata);
+ 	if (of_pdata)
+diff --git a/drivers/iio/gyro/st_gyro_core.c b/drivers/iio/gyro/st_gyro_core.c
+index f07a2336f7dc..566f7d2df031 100644
+--- a/drivers/iio/gyro/st_gyro_core.c
++++ b/drivers/iio/gyro/st_gyro_core.c
+@@ -317,6 +317,7 @@ int st_gyro_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &gyro_info;
++	mutex_init(&gdata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index 3ecf79ed08ac..88f21bbe947c 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -43,8 +43,6 @@ struct prox_state {
+ static const struct iio_chan_spec prox_channels[] = {
+ 	{
+ 		.type = IIO_PROXIMITY,
+-		.modified = 1,
+-		.channel2 = IIO_NO_MOD,
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
+ 		BIT(IIO_CHAN_INFO_SCALE) |
+diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c
+index 8ade473f99fe..2e56f812a644 100644
+--- a/drivers/iio/magnetometer/st_magn_core.c
++++ b/drivers/iio/magnetometer/st_magn_core.c
+@@ -369,6 +369,7 @@ int st_magn_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &magn_info;
++	mutex_init(&mdata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/pressure/hid-sensor-press.c b/drivers/iio/pressure/hid-sensor-press.c
+index 1af314926ebd..476a7d03d2ce 100644
+--- a/drivers/iio/pressure/hid-sensor-press.c
++++ b/drivers/iio/pressure/hid-sensor-press.c
+@@ -47,8 +47,6 @@ struct press_state {
+ static const struct iio_chan_spec press_channels[] = {
+ 	{
+ 		.type = IIO_PRESSURE,
+-		.modified = 1,
+-		.channel2 = IIO_NO_MOD,
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
+ 		BIT(IIO_CHAN_INFO_SCALE) |
+diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
+index 97baf40d424b..e881fa6291e9 100644
+--- a/drivers/iio/pressure/st_pressure_core.c
++++ b/drivers/iio/pressure/st_pressure_core.c
+@@ -417,6 +417,7 @@ int st_press_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &press_info;
++	mutex_init(&press_data->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/infiniband/core/iwpm_msg.c b/drivers/infiniband/core/iwpm_msg.c
+index b85ddbc979e0..e5558b2660f2 100644
+--- a/drivers/infiniband/core/iwpm_msg.c
++++ b/drivers/infiniband/core/iwpm_msg.c
+@@ -33,7 +33,7 @@
+ 
+ #include "iwpm_util.h"
+ 
+-static const char iwpm_ulib_name[] = "iWarpPortMapperUser";
++static const char iwpm_ulib_name[IWPM_ULIBNAME_SIZE] = "iWarpPortMapperUser";
+ static int iwpm_ulib_version = 3;
+ static int iwpm_user_pid = IWPM_PID_UNDEFINED;
+ static atomic_t echo_nlmsg_seq;
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 991dc6b20a58..79363b687195 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -315,7 +315,7 @@ static void elantech_report_semi_mt_data(struct input_dev *dev,
+ 					 unsigned int x2, unsigned int y2)
+ {
+ 	elantech_set_slot(dev, 0, num_fingers != 0, x1, y1);
+-	elantech_set_slot(dev, 1, num_fingers == 2, x2, y2);
++	elantech_set_slot(dev, 1, num_fingers >= 2, x2, y2);
+ }
+ 
+ /*
+diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
+index 6d5a5c44453b..173e70dbf61b 100644
+--- a/drivers/iommu/amd_iommu_v2.c
++++ b/drivers/iommu/amd_iommu_v2.c
+@@ -266,6 +266,7 @@ static void put_pasid_state(struct pasid_state *pasid_state)
+ 
+ static void put_pasid_state_wait(struct pasid_state *pasid_state)
+ {
++	atomic_dec(&pasid_state->count);
+ 	wait_event(pasid_state->wq, !atomic_read(&pasid_state->count));
+ 	free_pasid_state(pasid_state);
+ }
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index a3adde6519f0..bd6252b01510 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -224,14 +224,7 @@
+ #define RESUME_TERMINATE		(1 << 0)
+ 
+ #define TTBCR2_SEP_SHIFT		15
+-#define TTBCR2_SEP_MASK			0x7
+-
+-#define TTBCR2_ADDR_32			0
+-#define TTBCR2_ADDR_36			1
+-#define TTBCR2_ADDR_40			2
+-#define TTBCR2_ADDR_42			3
+-#define TTBCR2_ADDR_44			4
+-#define TTBCR2_ADDR_48			5
++#define TTBCR2_SEP_UPSTREAM		(0x7 << TTBCR2_SEP_SHIFT)
+ 
+ #define TTBRn_HI_ASID_SHIFT            16
+ 
+@@ -783,26 +776,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
+ 		writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
+ 		if (smmu->version > ARM_SMMU_V1) {
+ 			reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32;
+-			switch (smmu->va_size) {
+-			case 32:
+-				reg |= (TTBCR2_ADDR_32 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 36:
+-				reg |= (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 40:
+-				reg |= (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 42:
+-				reg |= (TTBCR2_ADDR_42 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 44:
+-				reg |= (TTBCR2_ADDR_44 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 48:
+-				reg |= (TTBCR2_ADDR_48 << TTBCR2_SEP_SHIFT);
+-				break;
+-			}
++			reg |= TTBCR2_SEP_UPSTREAM;
+ 			writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR2);
+ 		}
+ 	} else {
+diff --git a/drivers/lguest/core.c b/drivers/lguest/core.c
+index 7dc93aa004c8..312ffd3d0017 100644
+--- a/drivers/lguest/core.c
++++ b/drivers/lguest/core.c
+@@ -173,7 +173,7 @@ static void unmap_switcher(void)
+ bool lguest_address_ok(const struct lguest *lg,
+ 		       unsigned long addr, unsigned long len)
+ {
+-	return (addr+len) / PAGE_SIZE < lg->pfn_limit && (addr+len >= addr);
++	return addr+len <= lg->pfn_limit * PAGE_SIZE && (addr+len >= addr);
+ }
+ 
+ /*
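
The corrected lguest_address_ok() is the canonical overflow-safe range
check: validate [addr, addr + len) against a byte limit while also catching
wraparound of addr + len. A hypothetical standalone version:

    #include <stdbool.h>
    #include <stdint.h>

    static bool range_ok(uint64_t addr, uint64_t len, uint64_t limit)
    {
    	return addr + len <= limit &&	/* stays inside the window */
    	       addr + len >= addr;	/* addr + len did not wrap */
    }
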
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 6554d9148927..757f1ba34c4d 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -823,6 +823,12 @@ void dm_consume_args(struct dm_arg_set *as, unsigned num_args)
+ }
+ EXPORT_SYMBOL(dm_consume_args);
+ 
++static bool __table_type_request_based(unsigned table_type)
++{
++	return (table_type == DM_TYPE_REQUEST_BASED ||
++		table_type == DM_TYPE_MQ_REQUEST_BASED);
++}
++
+ static int dm_table_set_type(struct dm_table *t)
+ {
+ 	unsigned i;
+@@ -855,8 +861,7 @@ static int dm_table_set_type(struct dm_table *t)
+ 		 * Determine the type from the live device.
+ 		 * Default to bio-based if device is new.
+ 		 */
+-		if (live_md_type == DM_TYPE_REQUEST_BASED ||
+-		    live_md_type == DM_TYPE_MQ_REQUEST_BASED)
++		if (__table_type_request_based(live_md_type))
+ 			request_based = 1;
+ 		else
+ 			bio_based = 1;
+@@ -906,7 +911,7 @@ static int dm_table_set_type(struct dm_table *t)
+ 			}
+ 		t->type = DM_TYPE_MQ_REQUEST_BASED;
+ 
+-	} else if (hybrid && list_empty(devices) && live_md_type != DM_TYPE_NONE) {
++	} else if (list_empty(devices) && __table_type_request_based(live_md_type)) {
+ 		/* inherit live MD type */
+ 		t->type = live_md_type;
+ 
+@@ -928,10 +933,7 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
+ 
+ bool dm_table_request_based(struct dm_table *t)
+ {
+-	unsigned table_type = dm_table_get_type(t);
+-
+-	return (table_type == DM_TYPE_REQUEST_BASED ||
+-		table_type == DM_TYPE_MQ_REQUEST_BASED);
++	return __table_type_request_based(dm_table_get_type(t));
+ }
+ 
+ bool dm_table_mq_request_based(struct dm_table *t)
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 8001fe9e3434..9b4e30a82e4a 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1642,8 +1642,7 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	struct mapped_device *md = q->queuedata;
+ 	struct dm_table *map = dm_get_live_table_fast(md);
+ 	struct dm_target *ti;
+-	sector_t max_sectors;
+-	int max_size = 0;
++	sector_t max_sectors, max_size = 0;
+ 
+ 	if (unlikely(!map))
+ 		goto out;
+@@ -1658,8 +1657,16 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	max_sectors = min(max_io_len(bvm->bi_sector, ti),
+ 			  (sector_t) queue_max_sectors(q));
+ 	max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
+-	if (unlikely(max_size < 0)) /* this shouldn't _ever_ happen */
+-		max_size = 0;
++
++	/*
++	 * FIXME: this stop-gap fix _must_ be cleaned up (by passing a sector_t
++	 * to the targets' merge function since it holds sectors not bytes).
++	 * Just doing this as an interim fix for stable@ because the more
++	 * comprehensive cleanup of switching to sector_t will impact every
++	 * DM target that implements a ->merge hook.
++	 */
++	if (max_size > INT_MAX)
++		max_size = INT_MAX;
+ 
+ 	/*
+ 	 * merge_bvec_fn() returns number of bytes
+@@ -1667,7 +1674,7 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	 * max is precomputed maximal io size
+ 	 */
+ 	if (max_size && ti->type->merge)
+-		max_size = ti->type->merge(ti, bvm, biovec, max_size);
++		max_size = ti->type->merge(ti, bvm, biovec, (int) max_size);
+ 	/*
+ 	 * If the target doesn't support merge method and some of the devices
+ 	 * provided their merge_bvec method (we know this by looking for the
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index e47d1dd046da..907534b7f40d 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4138,12 +4138,12 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 	if (!mddev->pers || !mddev->pers->sync_request)
+ 		return -EINVAL;
+ 
+-	if (cmd_match(page, "frozen"))
+-		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-	else
+-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 
+ 	if (cmd_match(page, "idle") || cmd_match(page, "frozen")) {
++		if (cmd_match(page, "frozen"))
++			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++		else
++			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		flush_workqueue(md_misc_wq);
+ 		if (mddev->sync_thread) {
+ 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+@@ -4156,16 +4156,17 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 		   test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
+ 		return -EBUSY;
+ 	else if (cmd_match(page, "resync"))
+-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 	else if (cmd_match(page, "recover")) {
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	} else if (cmd_match(page, "reshape")) {
+ 		int err;
+ 		if (mddev->pers->start_reshape == NULL)
+ 			return -EINVAL;
+ 		err = mddev_lock(mddev);
+ 		if (!err) {
++			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 			err = mddev->pers->start_reshape(mddev);
+ 			mddev_unlock(mddev);
+ 		}
+@@ -4177,6 +4178,7 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 		else if (!cmd_match(page, "repair"))
+ 			return -EINVAL;
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 	}
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 3b5d7f704aa3..903391ce9353 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -517,6 +517,9 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			 ? (sector & (chunk_sects-1))
+ 			 : sector_div(sector, chunk_sects));
+ 
++		/* Restore due to sector_div */
++		sector = bio->bi_iter.bi_sector;
++
+ 		if (sectors < bio_sectors(bio)) {
+ 			split = bio_split(bio, sectors, GFP_NOIO, fs_bio_set);
+ 			bio_chain(split, bio);
+@@ -524,7 +527,6 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			split = bio;
+ 		}
+ 
+-		sector = bio->bi_iter.bi_sector;
+ 		zone = find_zone(mddev->private, &sector);
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+ 		split->bi_bdev = tmp_dev->bdev;
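
The raid0 fix hinges on a subtlety of sector_div(): it divides its first
argument in place and returns the remainder, so once it has been used to
compute the offset inside a chunk, sector holds the quotient and must be
reloaded before the zone lookup. In outline:

    sector_t sector = bio->bi_iter.bi_sector;
    unsigned int offset;

    offset = sector_div(sector, chunk_sects);	/* sector is now the quotient */
    sector = bio->bi_iter.bi_sector;		/* restore before reusing it */
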
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cd2f96b2c572..007ab861eca0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -1933,7 +1933,8 @@ static int resize_stripes(struct r5conf *conf, int newsize)
+ 
+ 	conf->slab_cache = sc;
+ 	conf->active_name = 1-conf->active_name;
+-	conf->pool_size = newsize;
++	if (!err)
++		conf->pool_size = newsize;
+ 	return err;
+ }
+ 
+diff --git a/drivers/mfd/da9052-core.c b/drivers/mfd/da9052-core.c
+index ae498b53ee40..46e3840c7a37 100644
+--- a/drivers/mfd/da9052-core.c
++++ b/drivers/mfd/da9052-core.c
+@@ -433,6 +433,10 @@ EXPORT_SYMBOL_GPL(da9052_adc_read_temp);
+ static const struct mfd_cell da9052_subdev_info[] = {
+ 	{
+ 		.name = "da9052-regulator",
++		.id = 0,
++	},
++	{
++		.name = "da9052-regulator",
+ 		.id = 1,
+ 	},
+ 	{
+@@ -484,10 +488,6 @@ static const struct mfd_cell da9052_subdev_info[] = {
+ 		.id = 13,
+ 	},
+ 	{
+-		.name = "da9052-regulator",
+-		.id = 14,
+-	},
+-	{
+ 		.name = "da9052-onkey",
+ 	},
+ 	{
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 03d7c7521d97..9a39e0b7e583 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1304,7 +1304,7 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 
+ 	if (ios->clock) {
+ 		unsigned int clock_min = ~0U;
+-		u32 clkdiv;
++		int clkdiv;
+ 
+ 		spin_lock_bh(&host->lock);
+ 		if (!host->mode_reg) {
+@@ -1328,7 +1328,12 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		/* Calculate clock divider */
+ 		if (host->caps.has_odd_clk_div) {
+ 			clkdiv = DIV_ROUND_UP(host->bus_hz, clock_min) - 2;
+-			if (clkdiv > 511) {
++			if (clkdiv < 0) {
++				dev_warn(&mmc->class_dev,
++					 "clock %u too fast; using %lu\n",
++					 clock_min, host->bus_hz / 2);
++				clkdiv = 0;
++			} else if (clkdiv > 511) {
+ 				dev_warn(&mmc->class_dev,
+ 				         "clock %u too slow; using %lu\n",
+ 				         clock_min, host->bus_hz / (511 + 2));
+diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
+index db2c05b6fe7f..c9eb78f10a0d 100644
+--- a/drivers/mtd/ubi/block.c
++++ b/drivers/mtd/ubi/block.c
+@@ -310,6 +310,8 @@ static void ubiblock_do_work(struct work_struct *work)
+ 	blk_rq_map_sg(req->q, req, pdu->usgl.sg);
+ 
+ 	ret = ubiblock_read(pdu);
++	rq_flush_dcache_pages(req);
++
+ 	blk_mq_end_request(req, ret);
+ }
+ 
+diff --git a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+index 6262612dec45..7a3231d8b933 100644
+--- a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+@@ -512,11 +512,9 @@ static int brcmf_msgbuf_query_dcmd(struct brcmf_pub *drvr, int ifidx,
+ 				     msgbuf->rx_pktids,
+ 				     msgbuf->ioctl_resp_pktid);
+ 	if (msgbuf->ioctl_resp_ret_len != 0) {
+-		if (!skb) {
+-			brcmf_err("Invalid packet id idx recv'd %d\n",
+-				  msgbuf->ioctl_resp_pktid);
++		if (!skb)
+ 			return -EBADF;
+-		}
++
+ 		memcpy(buf, skb->data, (len < msgbuf->ioctl_resp_ret_len) ?
+ 				       len : msgbuf->ioctl_resp_ret_len);
+ 	}
+@@ -875,10 +873,8 @@ brcmf_msgbuf_process_txstatus(struct brcmf_msgbuf *msgbuf, void *buf)
+ 	flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ 	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
+ 				     msgbuf->tx_pktids, idx);
+-	if (!skb) {
+-		brcmf_err("Invalid packet id idx recv'd %d\n", idx);
++	if (!skb)
+ 		return;
+-	}
+ 
+ 	set_bit(flowid, msgbuf->txstatus_done_map);
+ 	commonring = msgbuf->flowrings[flowid];
+@@ -1157,6 +1153,8 @@ brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf)
+ 
+ 	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
+ 				     msgbuf->rx_pktids, idx);
++	if (!skb)
++		return;
+ 
+ 	if (data_offset)
+ 		skb_pull(skb, data_offset);
+diff --git a/drivers/net/wireless/iwlwifi/mvm/d3.c b/drivers/net/wireless/iwlwifi/mvm/d3.c
+index 14e8fd661889..fd5a0bb1493f 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/iwlwifi/mvm/d3.c
+@@ -1742,8 +1742,10 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ 	int i, j, n_matches, ret;
+ 
+ 	fw_status = iwl_mvm_get_wakeup_status(mvm, vif);
+-	if (!IS_ERR_OR_NULL(fw_status))
++	if (!IS_ERR_OR_NULL(fw_status)) {
+ 		reasons = le32_to_cpu(fw_status->wakeup_reasons);
++		kfree(fw_status);
++	}
+ 
+ 	if (reasons & IWL_WOWLAN_WAKEUP_BY_RFKILL_DEASSERTED)
+ 		wakeup.rfkill_release = true;
+@@ -1860,15 +1862,15 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 	/* get the BSS vif pointer again */
+ 	vif = iwl_mvm_get_bss_vif(mvm);
+ 	if (IS_ERR_OR_NULL(vif))
+-		goto out_unlock;
++		goto err;
+ 
+ 	ret = iwl_trans_d3_resume(mvm->trans, &d3_status, test);
+ 	if (ret)
+-		goto out_unlock;
++		goto err;
+ 
+ 	if (d3_status != IWL_D3_STATUS_ALIVE) {
+ 		IWL_INFO(mvm, "Device was reset during suspend\n");
+-		goto out_unlock;
++		goto err;
+ 	}
+ 
+ 	/* query SRAM first in case we want event logging */
+@@ -1886,7 +1888,8 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 	/* has unlocked the mutex, so skip that */
+ 	goto out;
+ 
+- out_unlock:
++err:
++	iwl_mvm_free_nd(mvm);
+ 	mutex_unlock(&mvm->mutex);
+ 
+  out:
+diff --git a/drivers/net/wireless/iwlwifi/pcie/trans.c b/drivers/net/wireless/iwlwifi/pcie/trans.c
+index 69935aa5a1b3..cb72edb3d16a 100644
+--- a/drivers/net/wireless/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/iwlwifi/pcie/trans.c
+@@ -5,8 +5,8 @@
+  *
+  * GPL LICENSE SUMMARY
+  *
+- * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2007 - 2015 Intel Corporation. All rights reserved.
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,8 +31,8 @@
+  *
+  * BSD LICENSE
+  *
+- * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2005 - 2015 Intel Corporation. All rights reserved.
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -104,7 +104,7 @@ static void iwl_pcie_free_fw_monitor(struct iwl_trans *trans)
+ static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	struct page *page;
++	struct page *page = NULL;
+ 	dma_addr_t phys;
+ 	u32 size;
+ 	u8 power;
+@@ -131,6 +131,7 @@ static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
+ 				    DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(trans->dev, phys)) {
+ 			__free_pages(page, order);
++			page = NULL;
+ 			continue;
+ 		}
+ 		IWL_INFO(trans,
+diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c
+index 8444313eabe2..8694dddcce9a 100644
+--- a/drivers/net/wireless/rt2x00/rt2800usb.c
++++ b/drivers/net/wireless/rt2x00/rt2800usb.c
+@@ -1040,6 +1040,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
+ 	{ USB_DEVICE(0x07d1, 0x3c17) },
+ 	{ USB_DEVICE(0x2001, 0x3317) },
+ 	{ USB_DEVICE(0x2001, 0x3c1b) },
++	{ USB_DEVICE(0x2001, 0x3c25) },
+ 	/* Draytek */
+ 	{ USB_DEVICE(0x07fa, 0x7712) },
+ 	/* DVICO */
+diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
+index 46ee956d0235..27cd6cabf6c5 100644
+--- a/drivers/net/wireless/rtlwifi/usb.c
++++ b/drivers/net/wireless/rtlwifi/usb.c
+@@ -126,7 +126,7 @@ static int _usbctrl_vendorreq_sync_read(struct usb_device *udev, u8 request,
+ 
+ 	do {
+ 		status = usb_control_msg(udev, pipe, request, reqtype, value,
+-					 index, pdata, len, 0); /*max. timeout*/
++					 index, pdata, len, 1000);
+ 		if (status < 0) {
+ 			/* firmware download is checksumed, don't retry */
+ 			if ((value >= FW_8192C_START_ADDRESS &&
+diff --git a/drivers/power/reset/at91-reset.c b/drivers/power/reset/at91-reset.c
+index 13584e24736a..4d7d60e593b8 100644
+--- a/drivers/power/reset/at91-reset.c
++++ b/drivers/power/reset/at91-reset.c
+@@ -212,9 +212,9 @@ static int at91_reset_platform_probe(struct platform_device *pdev)
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, idx + 1 );
+ 		at91_ramc_base[idx] = devm_ioremap(&pdev->dev, res->start,
+ 						   resource_size(res));
+-		if (IS_ERR(at91_ramc_base[idx])) {
++		if (!at91_ramc_base[idx]) {
+ 			dev_err(&pdev->dev, "Could not map ram controller address\n");
+-			return PTR_ERR(at91_ramc_base[idx]);
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 476171a768d6..8a029f9bc18c 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -16,6 +16,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pwm.h>
+ #include <linux/regmap.h>
+@@ -38,7 +39,22 @@
+ #define PERIP_PWM_PDM_CONTROL_CH_MASK		0x1
+ #define PERIP_PWM_PDM_CONTROL_CH_SHIFT(ch)	((ch) * 4)
+ 
+-#define MAX_TMBASE_STEPS			65536
++/*
++ * PWM period is specified with a timebase register,
++ * in number of step periods. The PWM duty cycle is also
++ * specified in step periods, in the [0, $timebase] range.
++ * In other words, the timebase imposes the duty cycle
++ * resolution. Therefore, let's constrain the timebase to
++ * a minimum value to allow a sane range of duty cycle values.
++ * Imposing a minimum timebase will impose a maximum PWM frequency.
++ *
++ * The value chosen is completely arbitrary.
++ */
++#define MIN_TMBASE_STEPS			16
++
++struct img_pwm_soc_data {
++	u32 max_timebase;
++};
+ 
+ struct img_pwm_chip {
+ 	struct device	*dev;
+@@ -47,6 +63,9 @@ struct img_pwm_chip {
+ 	struct clk	*sys_clk;
+ 	void __iomem	*base;
+ 	struct regmap	*periph_regs;
++	int		max_period_ns;
++	int		min_period_ns;
++	const struct img_pwm_soc_data   *data;
+ };
+ 
+ static inline struct img_pwm_chip *to_img_pwm_chip(struct pwm_chip *chip)
+@@ -72,24 +91,31 @@ static int img_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	u32 val, div, duty, timebase;
+ 	unsigned long mul, output_clk_hz, input_clk_hz;
+ 	struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
++	unsigned int max_timebase = pwm_chip->data->max_timebase;
++
++	if (period_ns < pwm_chip->min_period_ns ||
++	    period_ns > pwm_chip->max_period_ns) {
++		dev_err(chip->dev, "configured period not in range\n");
++		return -ERANGE;
++	}
+ 
+ 	input_clk_hz = clk_get_rate(pwm_chip->pwm_clk);
+ 	output_clk_hz = DIV_ROUND_UP(NSEC_PER_SEC, period_ns);
+ 
+ 	mul = DIV_ROUND_UP(input_clk_hz, output_clk_hz);
+-	if (mul <= MAX_TMBASE_STEPS) {
++	if (mul <= max_timebase) {
+ 		div = PWM_CTRL_CFG_NO_SUB_DIV;
+ 		timebase = DIV_ROUND_UP(mul, 1);
+-	} else if (mul <= MAX_TMBASE_STEPS * 8) {
++	} else if (mul <= max_timebase * 8) {
+ 		div = PWM_CTRL_CFG_SUB_DIV0;
+ 		timebase = DIV_ROUND_UP(mul, 8);
+-	} else if (mul <= MAX_TMBASE_STEPS * 64) {
++	} else if (mul <= max_timebase * 64) {
+ 		div = PWM_CTRL_CFG_SUB_DIV1;
+ 		timebase = DIV_ROUND_UP(mul, 64);
+-	} else if (mul <= MAX_TMBASE_STEPS * 512) {
++	} else if (mul <= max_timebase * 512) {
+ 		div = PWM_CTRL_CFG_SUB_DIV0_DIV1;
+ 		timebase = DIV_ROUND_UP(mul, 512);
+-	} else if (mul > MAX_TMBASE_STEPS * 512) {
++	} else if (mul > max_timebase * 512) {
+ 		dev_err(chip->dev,
+ 			"failed to configure timebase steps/divider value\n");
+ 		return -EINVAL;
+@@ -143,11 +169,27 @@ static const struct pwm_ops img_pwm_ops = {
+ 	.owner = THIS_MODULE,
+ };
+ 
++static const struct img_pwm_soc_data pistachio_pwm = {
++	.max_timebase = 255,
++};
++
++static const struct of_device_id img_pwm_of_match[] = {
++	{
++		.compatible = "img,pistachio-pwm",
++		.data = &pistachio_pwm,
++	},
++	{ }
++};
++MODULE_DEVICE_TABLE(of, img_pwm_of_match);
++
+ static int img_pwm_probe(struct platform_device *pdev)
+ {
+ 	int ret;
++	u64 val;
++	unsigned long clk_rate;
+ 	struct resource *res;
+ 	struct img_pwm_chip *pwm;
++	const struct of_device_id *of_dev_id;
+ 
+ 	pwm = devm_kzalloc(&pdev->dev, sizeof(*pwm), GFP_KERNEL);
+ 	if (!pwm)
+@@ -160,6 +202,11 @@ static int img_pwm_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pwm->base))
+ 		return PTR_ERR(pwm->base);
+ 
++	of_dev_id = of_match_device(img_pwm_of_match, &pdev->dev);
++	if (!of_dev_id)
++		return -ENODEV;
++	pwm->data = of_dev_id->data;
++
+ 	pwm->periph_regs = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ 							   "img,cr-periph");
+ 	if (IS_ERR(pwm->periph_regs))
+@@ -189,6 +236,17 @@ static int img_pwm_probe(struct platform_device *pdev)
+ 		goto disable_sysclk;
+ 	}
+ 
++	clk_rate = clk_get_rate(pwm->pwm_clk);
++
++	/* The maximum input clock divider is 512 */
++	val = (u64)NSEC_PER_SEC * 512 * pwm->data->max_timebase;
++	do_div(val, clk_rate);
++	pwm->max_period_ns = val;
++
++	val = (u64)NSEC_PER_SEC * MIN_TMBASE_STEPS;
++	do_div(val, clk_rate);
++	pwm->min_period_ns = val;
++
+ 	pwm->chip.dev = &pdev->dev;
+ 	pwm->chip.ops = &img_pwm_ops;
+ 	pwm->chip.base = -1;
+@@ -228,12 +286,6 @@ static int img_pwm_remove(struct platform_device *pdev)
+ 	return pwmchip_remove(&pwm_chip->chip);
+ }
+ 
+-static const struct of_device_id img_pwm_of_match[] = {
+-	{ .compatible = "img,pistachio-pwm", },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, img_pwm_of_match);
+-
+ static struct platform_driver img_pwm_driver = {
+ 	.driver = {
+ 		.name = "img-pwm",
+diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
+index 8a4df7a1f2ee..e628d4c2f2ae 100644
+--- a/drivers/regulator/da9052-regulator.c
++++ b/drivers/regulator/da9052-regulator.c
+@@ -394,6 +394,7 @@ static inline struct da9052_regulator_info *find_regulator_info(u8 chip_id,
+ 
+ static int da9052_regulator_probe(struct platform_device *pdev)
+ {
++	const struct mfd_cell *cell = mfd_get_cell(pdev);
+ 	struct regulator_config config = { };
+ 	struct da9052_regulator *regulator;
+ 	struct da9052 *da9052;
+@@ -409,7 +410,7 @@ static int da9052_regulator_probe(struct platform_device *pdev)
+ 	regulator->da9052 = da9052;
+ 
+ 	regulator->info = find_regulator_info(regulator->da9052->chip_id,
+-					      pdev->id);
++					      cell->id);
+ 	if (regulator->info == NULL) {
+ 		dev_err(&pdev->dev, "invalid regulator ID specified\n");
+ 		return -EINVAL;
+@@ -419,7 +420,7 @@ static int da9052_regulator_probe(struct platform_device *pdev)
+ 	config.driver_data = regulator;
+ 	config.regmap = da9052->regmap;
+ 	if (pdata && pdata->regulators) {
+-		config.init_data = pdata->regulators[pdev->id];
++		config.init_data = pdata->regulators[cell->id];
+ 	} else {
+ #ifdef CONFIG_OF
+ 		struct device_node *nproot = da9052->dev->of_node;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 3290a3ed5b31..a661d339adf7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1624,6 +1624,7 @@ static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
+ {
+ 	u64 start_lba = blk_rq_pos(scmd->request);
+ 	u64 end_lba = blk_rq_pos(scmd->request) + (scsi_bufflen(scmd) / 512);
++	u64 factor = scmd->device->sector_size / 512;
+ 	u64 bad_lba;
+ 	int info_valid;
+ 	/*
+@@ -1645,16 +1646,9 @@ static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
+ 	if (scsi_bufflen(scmd) <= scmd->device->sector_size)
+ 		return 0;
+ 
+-	if (scmd->device->sector_size < 512) {
+-		/* only legitimate sector_size here is 256 */
+-		start_lba <<= 1;
+-		end_lba <<= 1;
+-	} else {
+-		/* be careful ... don't want any overflows */
+-		unsigned int factor = scmd->device->sector_size / 512;
+-		do_div(start_lba, factor);
+-		do_div(end_lba, factor);
+-	}
++	/* be careful ... don't want any overflows */
++	do_div(start_lba, factor);
++	do_div(end_lba, factor);
+ 
+ 	/* The bad lba was reported incorrectly, we have no idea where
+ 	 * the error is.
+@@ -2212,8 +2206,7 @@ got_data:
+ 	if (sector_size != 512 &&
+ 	    sector_size != 1024 &&
+ 	    sector_size != 2048 &&
+-	    sector_size != 4096 &&
+-	    sector_size != 256) {
++	    sector_size != 4096) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Unsupported sector size %d.\n",
+ 			  sector_size);
+ 		/*
+@@ -2268,8 +2261,6 @@ got_data:
+ 		sdkp->capacity <<= 2;
+ 	else if (sector_size == 1024)
+ 		sdkp->capacity <<= 1;
+-	else if (sector_size == 256)
+-		sdkp->capacity >>= 1;
+ 
+ 	blk_queue_physical_block_size(sdp->request_queue,
+ 				      sdkp->physical_block_size);
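
The sd.c hunks above drop the 256-byte special case, so every remaining sector size (512, 1024, 2048, 4096) scales block-layer LBAs with a single division. A user-space sketch of that conversion, with plain 64-bit division standing in for the kernel's do_div():

    #include <stdint.h>
    #include <stdio.h>

    /* Convert an LBA counted in 512-byte block-layer units into
     * device-sector units; factor is 1, 2, 4 or 8 for the sizes
     * the driver now accepts. */
    static uint64_t to_device_lba(uint64_t lba_512, unsigned int sector_size)
    {
        uint64_t factor = sector_size / 512;

        return lba_512 / factor;   /* do_div(lba_512, factor) in the kernel */
    }

    int main(void)
    {
        /* 512-byte LBA 4096 on a 4096-byte-sector disk is sector 512 */
        printf("%llu\n", (unsigned long long)to_device_lba(4096, 4096));
        return 0;
    }
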
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index bf8c5c1e254e..75efaaeb0eca 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1565,8 +1565,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 		break;
+ 	default:
+ 		vm_srb->data_in = UNKNOWN_TYPE;
+-		vm_srb->win8_extension.srb_flags |= (SRB_FLAGS_DATA_IN |
+-						     SRB_FLAGS_DATA_OUT);
++		vm_srb->win8_extension.srb_flags |= SRB_FLAGS_NO_DATA_TRANSFER;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/staging/gdm724x/gdm_mux.c b/drivers/staging/gdm724x/gdm_mux.c
+index d1ab996b3305..a21a51efaad0 100644
+--- a/drivers/staging/gdm724x/gdm_mux.c
++++ b/drivers/staging/gdm724x/gdm_mux.c
+@@ -158,7 +158,7 @@ static int up_to_host(struct mux_rx *r)
+ 	unsigned int start_flag;
+ 	unsigned int payload_size;
+ 	unsigned short packet_type;
+-	int dummy_cnt;
++	int total_len;
+ 	u32 packet_size_sum = r->offset;
+ 	int index;
+ 	int ret = TO_HOST_INVALID_PACKET;
+@@ -176,10 +176,10 @@ static int up_to_host(struct mux_rx *r)
+ 			break;
+ 		}
+ 
+-		dummy_cnt = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
++		total_len = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
+ 
+ 		if (len - packet_size_sum <
+-			MUX_HEADER_SIZE + payload_size + dummy_cnt) {
++			total_len) {
+ 			pr_err("invalid payload : %d %d %04x\n",
+ 			       payload_size, len, packet_type);
+ 			break;
+@@ -202,7 +202,7 @@ static int up_to_host(struct mux_rx *r)
+ 			break;
+ 		}
+ 
+-		packet_size_sum += MUX_HEADER_SIZE + payload_size + dummy_cnt;
++		packet_size_sum += total_len;
+ 		if (len - packet_size_sum <= MUX_HEADER_SIZE + 2) {
+ 			ret = r->callback(NULL,
+ 					0,
+@@ -361,7 +361,6 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 	struct mux_pkt_header *mux_header;
+ 	struct mux_tx *t = NULL;
+ 	static u32 seq_num = 1;
+-	int dummy_cnt;
+ 	int total_len;
+ 	int ret;
+ 	unsigned long flags;
+@@ -374,9 +373,7 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 
+ 	spin_lock_irqsave(&mux_dev->write_lock, flags);
+ 
+-	dummy_cnt = ALIGN(MUX_HEADER_SIZE + len, 4);
+-
+-	total_len = len + MUX_HEADER_SIZE + dummy_cnt;
++	total_len = ALIGN(MUX_HEADER_SIZE + len, 4);
+ 
+ 	t = alloc_mux_tx(total_len);
+ 	if (!t) {
+@@ -392,7 +389,8 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 	mux_header->packet_type = __cpu_to_le16(packet_type[tty_index]);
+ 
+ 	memcpy(t->buf+MUX_HEADER_SIZE, data, len);
+-	memset(t->buf+MUX_HEADER_SIZE+len, 0, dummy_cnt);
++	memset(t->buf+MUX_HEADER_SIZE+len, 0, total_len - MUX_HEADER_SIZE -
++	       len);
+ 
+ 	t->len = total_len;
+ 	t->callback = cb;
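
The gdm_mux change above folds the old header+payload+pad arithmetic into one ALIGN() call, so total_len already includes the pad bytes and the memset covers exactly total_len - MUX_HEADER_SIZE - len. A self-contained sketch of the rounding, assuming the kernel's power-of-two ALIGN() semantics and an illustrative header size:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MUX_HEADER_SIZE 14                                /* illustrative */
    #define ALIGN(x, a)     (((x) + (a) - 1) & ~((size_t)(a) - 1))

    /* Build a frame padded to a 4-byte boundary; header bytes are
     * left for the caller, pad bytes are zeroed. */
    static size_t build_frame(uint8_t *buf, const uint8_t *data, size_t len)
    {
        size_t total_len = ALIGN(MUX_HEADER_SIZE + len, 4);

        memcpy(buf + MUX_HEADER_SIZE, data, len);
        memset(buf + MUX_HEADER_SIZE + len, 0,
               total_len - MUX_HEADER_SIZE - len);   /* pad bytes only */
        return total_len;
    }

    int main(void)
    {
        uint8_t buf[64];
        const uint8_t payload[3] = { 1, 2, 3 };

        /* 14 + 3 = 17, rounded up to 20 */
        printf("%zu\n", build_frame(buf, payload, sizeof(payload)));
        return 0;
    }
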
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index 03b2a90b9ac0..992236f605d8 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -911,7 +911,11 @@ static int vnt_int_report_rate(struct vnt_private *priv,
+ 
+ 	if (!(tsr1 & TSR1_TERR)) {
+ 		info->status.rates[0].idx = idx;
+-		info->flags |= IEEE80211_TX_STAT_ACK;
++
++		if (info->flags & IEEE80211_TX_CTL_NO_ACK)
++			info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++		else
++			info->flags |= IEEE80211_TX_STAT_ACK;
+ 	}
+ 
+ 	return 0;
+@@ -936,9 +940,6 @@ static int device_tx_srv(struct vnt_private *pDevice, unsigned int uIdx)
+ 		//Only the status of first TD in the chain is correct
+ 		if (pTD->m_td1TD1.byTCR & TCR_STP) {
+ 			if ((pTD->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) != 0) {
+-
+-				vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
+-
+ 				if (!(byTsr1 & TSR1_TERR)) {
+ 					if (byTsr0 != 0) {
+ 						pr_debug(" Tx[%d] OK but has error. tsr1[%02X] tsr0[%02X]\n",
+@@ -957,6 +958,9 @@ static int device_tx_srv(struct vnt_private *pDevice, unsigned int uIdx)
+ 						 (int)uIdx, byTsr1, byTsr0);
+ 				}
+ 			}
++
++			vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
++
+ 			device_free_tx_buf(pDevice, pTD);
+ 			pDevice->iTDUsed[uIdx]--;
+ 		}
+@@ -988,10 +992,8 @@ static void device_free_tx_buf(struct vnt_private *pDevice, PSTxDesc pDesc)
+ 				 PCI_DMA_TODEVICE);
+ 	}
+ 
+-	if (pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
++	if (skb)
+ 		ieee80211_tx_status_irqsafe(pDevice->hw, skb);
+-	else
+-		dev_kfree_skb_irq(skb);
+ 
+ 	pTDInfo->skb_dma = 0;
+ 	pTDInfo->skb = NULL;
+@@ -1201,14 +1203,6 @@ static int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 	if (dma_idx == TYPE_AC0DMA)
+ 		head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
+ 
+-	priv->iTDUsed[dma_idx]++;
+-
+-	/* Take ownership */
+-	wmb();
+-	head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
+-
+-	/* get Next */
+-	wmb();
+ 	priv->apCurrTD[dma_idx] = head_td->next;
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+@@ -1229,11 +1223,18 @@ static int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 
+ 	head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma);
+ 
++	/* Poll Transmit the adapter */
++	wmb();
++	head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
++	wmb(); /* second memory barrier */
++
+ 	if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
+ 		MACvTransmitAC0(priv->PortOffset);
+ 	else
+ 		MACvTransmit0(priv->PortOffset);
+ 
++	priv->iTDUsed[dma_idx]++;
++
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	return 0;
+@@ -1413,9 +1414,16 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 
+ 	priv->current_aid = conf->aid;
+ 
+-	if (changed & BSS_CHANGED_BSSID)
++	if (changed & BSS_CHANGED_BSSID) {
++		unsigned long flags;
++
++		spin_lock_irqsave(&priv->lock, flags);
++
+ 		MACvWriteBSSIDAddress(priv->PortOffset, (u8 *)conf->bssid);
+ 
++		spin_unlock_irqrestore(&priv->lock, flags);
++	}
++
+ 	if (changed & BSS_CHANGED_BASIC_RATES) {
+ 		priv->basic_rates = conf->basic_rates;
+ 
+diff --git a/drivers/staging/vt6656/rxtx.c b/drivers/staging/vt6656/rxtx.c
+index 33baf26de4b5..ee9ce165dcde 100644
+--- a/drivers/staging/vt6656/rxtx.c
++++ b/drivers/staging/vt6656/rxtx.c
+@@ -805,10 +805,18 @@ int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 		vnt_schedule_command(priv, WLAN_CMD_SETPOWER);
+ 	}
+ 
+-	if (current_rate > RATE_11M)
+-		pkt_type = priv->packet_type;
+-	else
++	if (current_rate > RATE_11M) {
++		if (info->band == IEEE80211_BAND_5GHZ) {
++			pkt_type = PK_TYPE_11A;
++		} else {
++			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
++				pkt_type = PK_TYPE_11GB;
++			else
++				pkt_type = PK_TYPE_11GA;
++		}
++	} else {
+ 		pkt_type = PK_TYPE_11B;
++	}
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index f6c954c4635f..4073869d2090 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -521,6 +521,7 @@ static int pscsi_configure_device(struct se_device *dev)
+ 					" pdv_host_id: %d\n", pdv->pdv_host_id);
+ 				return -EINVAL;
+ 			}
++			pdv->pdv_lld_host = sh;
+ 		}
+ 	} else {
+ 		if (phv->phv_mode == PHV_VIRTUAL_HOST_ID) {
+@@ -603,6 +604,8 @@ static void pscsi_free_device(struct se_device *dev)
+ 		if ((phv->phv_mode == PHV_LLD_SCSI_HOST_NO) &&
+ 		    (phv->phv_lld_host != NULL))
+ 			scsi_host_put(phv->phv_lld_host);
++		else if (pdv->pdv_lld_host)
++			scsi_host_put(pdv->pdv_lld_host);
+ 
+ 		if ((sd->type == TYPE_DISK) || (sd->type == TYPE_ROM))
+ 			scsi_device_put(sd);
+diff --git a/drivers/target/target_core_pscsi.h b/drivers/target/target_core_pscsi.h
+index 1bd757dff8ee..820d3052b775 100644
+--- a/drivers/target/target_core_pscsi.h
++++ b/drivers/target/target_core_pscsi.h
+@@ -45,6 +45,7 @@ struct pscsi_dev_virt {
+ 	int	pdv_lun_id;
+ 	struct block_device *pdv_bd;
+ 	struct scsi_device *pdv_sd;
++	struct Scsi_Host *pdv_lld_host;
+ } ____cacheline_aligned;
+ 
+ typedef enum phv_modes {
+diff --git a/drivers/thermal/armada_thermal.c b/drivers/thermal/armada_thermal.c
+index c2556cf5186b..01255fd65135 100644
+--- a/drivers/thermal/armada_thermal.c
++++ b/drivers/thermal/armada_thermal.c
+@@ -224,9 +224,9 @@ static const struct armada_thermal_data armada380_data = {
+ 	.is_valid_shift = 10,
+ 	.temp_shift = 0,
+ 	.temp_mask = 0x3ff,
+-	.coef_b = 1169498786UL,
+-	.coef_m = 2000000UL,
+-	.coef_div = 4289,
++	.coef_b = 2931108200UL,
++	.coef_m = 5000000UL,
++	.coef_div = 10502,
+ 	.inverted = true,
+ };
+ 
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 5bab1c684bb1..7a3d146a5f0e 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -289,7 +289,7 @@ static int xen_initial_domain_console_init(void)
+ 			return -ENOMEM;
+ 	}
+ 
+-	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0);
++	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
+ 	info->vtermno = HVC_COOKIE;
+ 
+ 	spin_lock(&xencons_lock);
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index c4343764cc5b..bce16e405d59 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -3170,7 +3170,7 @@ static int gsmtty_break_ctl(struct tty_struct *tty, int state)
+ 	return gsmtty_modem_update(dlci, encode);
+ }
+ 
+-static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty)
++static void gsmtty_cleanup(struct tty_struct *tty)
+ {
+ 	struct gsm_dlci *dlci = tty->driver_data;
+ 	struct gsm_mux *gsm = dlci->gsm;
+@@ -3178,7 +3178,6 @@ static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty)
+ 	dlci_put(dlci);
+ 	dlci_put(gsm->dlci[0]);
+ 	mux_put(gsm);
+-	driver->ttys[tty->index] = NULL;
+ }
+ 
+ /* Virtual ttys for the demux */
+@@ -3199,7 +3198,7 @@ static const struct tty_operations gsmtty_ops = {
+ 	.tiocmget		= gsmtty_tiocmget,
+ 	.tiocmset		= gsmtty_tiocmset,
+ 	.break_ctl		= gsmtty_break_ctl,
+-	.remove			= gsmtty_remove,
++	.cleanup		= gsmtty_cleanup,
+ };
+ 
+ 
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 644ddb841d9f..bbc4ce66c2c1 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -600,7 +600,7 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file,
+ 	add_wait_queue(&tty->read_wait, &wait);
+ 
+ 	for (;;) {
+-		if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
++		if (test_bit(TTY_OTHER_DONE, &tty->flags)) {
+ 			ret = -EIO;
+ 			break;
+ 		}
+@@ -828,7 +828,7 @@ static unsigned int n_hdlc_tty_poll(struct tty_struct *tty, struct file *filp,
+ 		/* set bits for operations that won't block */
+ 		if (n_hdlc->rx_buf_list.head)
+ 			mask |= POLLIN | POLLRDNORM;	/* readable */
+-		if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
++		if (test_bit(TTY_OTHER_DONE, &tty->flags))
+ 			mask |= POLLHUP;
+ 		if (tty_hung_up_p(filp))
+ 			mask |= POLLHUP;
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index cf6e0f2e1331..cc57a3a6b02b 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1949,6 +1949,18 @@ static inline int input_available_p(struct tty_struct *tty, int poll)
+ 		return ldata->commit_head - ldata->read_tail >= amt;
+ }
+ 
++static inline int check_other_done(struct tty_struct *tty)
++{
++	int done = test_bit(TTY_OTHER_DONE, &tty->flags);
++	if (done) {
++		/* paired with cmpxchg() in check_other_closed(); ensures
++		 * read buffer head index is not stale
++		 */
++		smp_mb__after_atomic();
++	}
++	return done;
++}
++
+ /**
+  *	copy_from_read_buf	-	copy read data directly
+  *	@tty: terminal device
+@@ -2167,7 +2179,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 	struct n_tty_data *ldata = tty->disc_data;
+ 	unsigned char __user *b = buf;
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	int c;
++	int c, done;
+ 	int minimum, time;
+ 	ssize_t retval = 0;
+ 	long timeout;
+@@ -2235,8 +2247,10 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 		    ((minimum - (b - buf)) >= 1))
+ 			ldata->minimum_to_wake = (minimum - (b - buf));
+ 
++		done = check_other_done(tty);
++
+ 		if (!input_available_p(tty, 0)) {
+-			if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
++			if (done) {
+ 				retval = -EIO;
+ 				break;
+ 			}
+@@ -2443,12 +2457,12 @@ static unsigned int n_tty_poll(struct tty_struct *tty, struct file *file,
+ 
+ 	poll_wait(file, &tty->read_wait, wait);
+ 	poll_wait(file, &tty->write_wait, wait);
++	if (check_other_done(tty))
++		mask |= POLLHUP;
+ 	if (input_available_p(tty, 1))
+ 		mask |= POLLIN | POLLRDNORM;
+ 	if (tty->packet && tty->link->ctrl_status)
+ 		mask |= POLLPRI | POLLIN | POLLRDNORM;
+-	if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+-		mask |= POLLHUP;
+ 	if (tty_hung_up_p(file))
+ 		mask |= POLLHUP;
+ 	if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index e72ee629cead..4d5e8409769c 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -53,9 +53,8 @@ static void pty_close(struct tty_struct *tty, struct file *filp)
+ 	/* Review - krefs on tty_link ?? */
+ 	if (!tty->link)
+ 		return;
+-	tty_flush_to_ldisc(tty->link);
+ 	set_bit(TTY_OTHER_CLOSED, &tty->link->flags);
+-	wake_up_interruptible(&tty->link->read_wait);
++	tty_flip_buffer_push(tty->link->port);
+ 	wake_up_interruptible(&tty->link->write_wait);
+ 	if (tty->driver->subtype == PTY_TYPE_MASTER) {
+ 		set_bit(TTY_OTHER_CLOSED, &tty->flags);
+@@ -243,7 +242,9 @@ static int pty_open(struct tty_struct *tty, struct file *filp)
+ 		goto out;
+ 
+ 	clear_bit(TTY_IO_ERROR, &tty->flags);
++	/* TTY_OTHER_CLOSED must be cleared before TTY_OTHER_DONE */
+ 	clear_bit(TTY_OTHER_CLOSED, &tty->link->flags);
++	clear_bit(TTY_OTHER_DONE, &tty->link->flags);
+ 	set_bit(TTY_THROTTLED, &tty->flags);
+ 	return 0;
+ 
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 75661641f5fe..2f78b77f0f81 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -37,6 +37,28 @@
+ 
+ #define TTY_BUFFER_PAGE	(((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
+ 
++/*
++ * If all tty flip buffers have been processed by flush_to_ldisc() or
++ * dropped by tty_buffer_flush(), check if the linked pty has been closed.
++ * If so, wake the reader/poll to process
++ */
++static inline void check_other_closed(struct tty_struct *tty)
++{
++	unsigned long flags, old;
++
++	/* transition from TTY_OTHER_CLOSED => TTY_OTHER_DONE must be atomic */
++	for (flags = ACCESS_ONCE(tty->flags);
++	     test_bit(TTY_OTHER_CLOSED, &flags);
++	     ) {
++		old = flags;
++		__set_bit(TTY_OTHER_DONE, &flags);
++		flags = cmpxchg(&tty->flags, old, flags);
++		if (old == flags) {
++			wake_up_interruptible(&tty->read_wait);
++			break;
++		}
++	}
++}
+ 
+ /**
+  *	tty_buffer_lock_exclusive	-	gain exclusive access to buffer
+@@ -229,6 +251,8 @@ void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld)
+ 	if (ld && ld->ops->flush_buffer)
+ 		ld->ops->flush_buffer(tty);
+ 
++	check_other_closed(tty);
++
+ 	atomic_dec(&buf->priority);
+ 	mutex_unlock(&buf->lock);
+ }
+@@ -471,8 +495,10 @@ static void flush_to_ldisc(struct work_struct *work)
+ 		smp_rmb();
+ 		count = head->commit - head->read;
+ 		if (!count) {
+-			if (next == NULL)
++			if (next == NULL) {
++				check_other_closed(tty);
+ 				break;
++			}
+ 			buf->head = next;
+ 			tty_buffer_free(port, head);
+ 			continue;
+@@ -489,19 +515,6 @@ static void flush_to_ldisc(struct work_struct *work)
+ }
+ 
+ /**
+- *	tty_flush_to_ldisc
+- *	@tty: tty to push
+- *
+- *	Push the terminal flip buffers to the line discipline.
+- *
+- *	Must not be called from IRQ context.
+- */
+-void tty_flush_to_ldisc(struct tty_struct *tty)
+-{
+-	flush_work(&tty->port->buf.work);
+-}
+-
+-/**
+  *	tty_flip_buffer_push	-	terminal
+  *	@port: tty port to push
+  *
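
check_other_closed() above is a lock-free state transition: TTY_OTHER_DONE may only be set while TTY_OTHER_CLOSED is still set, and exactly one CPU should win the race and issue the wakeup. A minimal sketch of that compare-and-swap retry loop using C11 atomics in place of the kernel's cmpxchg() (bit values are illustrative):

    #include <stdatomic.h>
    #include <stdio.h>

    #define OTHER_CLOSED (1UL << 0)
    #define OTHER_DONE   (1UL << 1)

    /* Atomically set OTHER_DONE, but only while OTHER_CLOSED is set.
     * Returns 1 if this caller performed the transition and should
     * wake the reader. */
    static int mark_done(atomic_ulong *flags)
    {
        unsigned long old = atomic_load(flags);

        while (old & OTHER_CLOSED) {
            /* on failure 'old' is refreshed and the loop retries */
            if (atomic_compare_exchange_weak(flags, &old,
                                             old | OTHER_DONE))
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        atomic_ulong flags = OTHER_CLOSED;

        printf("%d 0x%lx\n", mark_done(&flags),
               (unsigned long)atomic_load(&flags));
        return 0;
    }
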
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index c42765b3a060..0495c94a23d7 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1295,6 +1295,7 @@ static void purge_configs_funcs(struct gadget_info *gi)
+ 			}
+ 		}
+ 		c->next_interface_id = 0;
++		memset(c->interface, 0, sizeof(c->interface));
+ 		c->superspeed = 0;
+ 		c->highspeed = 0;
+ 		c->fullspeed = 0;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index eeedde8c435a..6994c99e58a6 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2026,8 +2026,13 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		break;
+ 	case COMP_DEV_ERR:
+ 	case COMP_STALL:
++		frame->status = -EPROTO;
++		skip_td = true;
++		break;
+ 	case COMP_TX_ERR:
+ 		frame->status = -EPROTO;
++		if (event_trb != td->last_trb)
++			return 0;
+ 		skip_td = true;
+ 		break;
+ 	case COMP_STOP:
+@@ -2640,7 +2645,7 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
+ 		xhci_halt(xhci);
+ hw_died:
+ 		spin_unlock(&xhci->lock);
+-		return -ESHUTDOWN;
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 8e421b89632d..ea75e8ccd3c1 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1267,7 +1267,7 @@ union xhci_trb {
+  * since the command ring is 64-byte aligned.
+  * It must also be greater than 16.
+  */
+-#define TRBS_PER_SEGMENT	64
++#define TRBS_PER_SEGMENT	256
+ /* Allow two commands + a link TRB, along with any reserved command TRBs */
+ #define MAX_RSVD_CMD_TRBS	(TRBS_PER_SEGMENT - 3)
+ #define TRB_SEGMENT_SIZE	(TRBS_PER_SEGMENT*16)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 84ce2d74894c..9031750e7404 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -127,6 +127,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+ 	{ USB_DEVICE(0x10C4, 0x8977) },	/* CEL MeshWorks DevKit Device */
++	{ USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 829604d11f3f..f5257af33ecf 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -61,7 +61,6 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(DCU10_VENDOR_ID, DCU10_PRODUCT_ID) },
+ 	{ USB_DEVICE(SITECOM_VENDOR_ID, SITECOM_PRODUCT_ID) },
+ 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_ID) },
+-	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_ID) },
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_SX1),
+ 		.driver_info = PL2303_QUIRK_UART_STATE_IDX0 },
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_X65),
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 71fd9da1d6e7..e3b7af8adfb7 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -62,10 +62,6 @@
+ #define ALCATEL_VENDOR_ID	0x11f7
+ #define ALCATEL_PRODUCT_ID	0x02df
+ 
+-/* Samsung I330 phone cradle */
+-#define SAMSUNG_VENDOR_ID	0x04e8
+-#define SAMSUNG_PRODUCT_ID	0x8001
+-
+ #define SIEMENS_VENDOR_ID	0x11f5
+ #define SIEMENS_PRODUCT_ID_SX1	0x0001
+ #define SIEMENS_PRODUCT_ID_X65	0x0003
+diff --git a/drivers/usb/serial/visor.c b/drivers/usb/serial/visor.c
+index bf2bd40e5f2a..60afb39eb73c 100644
+--- a/drivers/usb/serial/visor.c
++++ b/drivers/usb/serial/visor.c
+@@ -95,7 +95,7 @@ static const struct usb_device_id id_table[] = {
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+ 	{ USB_DEVICE(ACER_VENDOR_ID, ACER_S10_ID),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+-	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID),
++	{ USB_DEVICE_INTERFACE_CLASS(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID, 0xff),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+ 	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SPH_I500_ID),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index d684b4b8108f..caf188800c67 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -766,6 +766,13 @@ UNUSUAL_DEV(  0x059f, 0x0643, 0x0000, 0x0000,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_GO_SLOW ),
+ 
++/* Reported by Christian Schaller <cschalle@redhat.com> */
++UNUSUAL_DEV(  0x059f, 0x0651, 0x0000, 0x0000,
++		"LaCie",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_WP_DETECT ),
++
+ /* Submitted by Joel Bourquard <numlock@freesurf.ch>
+  * Some versions of this device need the SubClass and Protocol overrides
+  * while others don't.
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 2b8553bd8715..38387950490e 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -957,7 +957,7 @@ unsigned xen_evtchn_nr_channels(void)
+ }
+ EXPORT_SYMBOL_GPL(xen_evtchn_nr_channels);
+ 
+-int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
++int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu)
+ {
+ 	struct evtchn_bind_virq bind_virq;
+ 	int evtchn, irq, ret;
+@@ -971,8 +971,12 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
+ 		if (irq < 0)
+ 			goto out;
+ 
+-		irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
+-					      handle_percpu_irq, "virq");
++		if (percpu)
++			irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
++						      handle_percpu_irq, "virq");
++		else
++			irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,
++						      handle_edge_irq, "virq");
+ 
+ 		bind_virq.virq = virq;
+ 		bind_virq.vcpu = cpu;
+@@ -1062,7 +1066,7 @@ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ {
+ 	int irq, retval;
+ 
+-	irq = bind_virq_to_irq(virq, cpu);
++	irq = bind_virq_to_irq(virq, cpu, irqflags & IRQF_PERCPU);
+ 	if (irq < 0)
+ 		return irq;
+ 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index d925f55e4857..8081aba116a7 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -928,7 +928,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 			total_size = total_mapping_size(elf_phdata,
+ 							loc->elf_ex.e_phnum);
+ 			if (!total_size) {
+-				error = -EINVAL;
++				retval = -EINVAL;
+ 				goto out_free_dentry;
+ 			}
+ 		}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 0a795c969c78..8b33da6ec3dd 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8548,7 +8548,9 @@ int btrfs_set_block_group_ro(struct btrfs_root *root,
+ out:
+ 	if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
+ 		alloc_flags = update_block_group_flags(root, cache->flags);
++		lock_chunks(root->fs_info->chunk_root);
+ 		check_system_chunk(trans, root, alloc_flags);
++		unlock_chunks(root->fs_info->chunk_root);
+ 	}
+ 
+ 	btrfs_end_transaction(trans, root);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 8222f6f74147..44a7e0398d97 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4626,6 +4626,7 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ {
+ 	u64 chunk_offset;
+ 
++	ASSERT(mutex_is_locked(&extent_root->fs_info->chunk_mutex));
+ 	chunk_offset = find_next_chunk(extent_root->fs_info);
+ 	return __btrfs_alloc_chunk(trans, extent_root, chunk_offset, type);
+ }
+diff --git a/fs/dcache.c b/fs/dcache.c
+index c71e3732e53b..922f23ef6041 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -1205,13 +1205,13 @@ ascend:
+ 		/* might go back up the wrong parent if we have had a rename. */
+ 		if (need_seqretry(&rename_lock, seq))
+ 			goto rename_retry;
+-		next = child->d_child.next;
+-		while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED)) {
++		/* go into the first sibling still alive */
++		do {
++			next = child->d_child.next;
+ 			if (next == &this_parent->d_subdirs)
+ 				goto ascend;
+ 			child = list_entry(next, struct dentry, d_child);
+-			next = next->next;
+-		}
++		} while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
+ 		rcu_read_unlock();
+ 		goto resume;
+ 	}
+diff --git a/fs/exec.c b/fs/exec.c
+index 00400cf522dc..120244523647 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -659,6 +659,9 @@ int setup_arg_pages(struct linux_binprm *bprm,
+ 	if (stack_base > STACK_SIZE_MAX)
+ 		stack_base = STACK_SIZE_MAX;
+ 
++	/* Add space for stack randomization. */
++	stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++
+ 	/* Make sure we didn't let the argument array grow too large. */
+ 	if (vma->vm_end - vma->vm_start > stack_base)
+ 		return -ENOMEM;
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index 3445035c7e01..d41843181818 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -87,6 +87,12 @@ int __ext4_journal_stop(const char *where, unsigned int line, handle_t *handle)
+ 		ext4_put_nojournal(handle);
+ 		return 0;
+ 	}
++
++	if (!handle->h_transaction) {
++		err = jbd2_journal_stop(handle);
++		return handle->h_err ? handle->h_err : err;
++	}
++
+ 	sb = handle->h_transaction->t_journal->j_private;
+ 	err = handle->h_err;
+ 	rc = jbd2_journal_stop(handle);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 16f6365f65e7..ea4ee1732143 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -377,7 +377,7 @@ static int ext4_valid_extent(struct inode *inode, struct ext4_extent *ext)
+ 	ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
+ 	ext4_lblk_t last = lblock + len - 1;
+ 
+-	if (lblock > last)
++	if (len == 0 || lblock > last)
+ 		return 0;
+ 	return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 852cc521f327..1f252b4e0f51 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4233,7 +4233,7 @@ static void ext4_update_other_inodes_time(struct super_block *sb,
+ 	int inode_size = EXT4_INODE_SIZE(sb);
+ 
+ 	oi.orig_ino = orig_ino;
+-	ino = orig_ino & ~(inodes_per_block - 1);
++	ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
+ 	for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
+ 		if (ino == orig_ino)
+ 			continue;
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 999ff5c3cab0..d59712dfa3e7 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -195,8 +195,9 @@ static int handle_to_path(int mountdirfd, struct file_handle __user *ufh,
+ 		goto out_err;
+ 	}
+ 	/* copy the full handle */
+-	if (copy_from_user(handle, ufh,
+-			   sizeof(struct file_handle) +
++	*handle = f_handle;
++	if (copy_from_user(&handle->f_handle,
++			   &ufh->f_handle,
+ 			   f_handle.handle_bytes)) {
+ 		retval = -EFAULT;
+ 		goto out_handle;
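
The fhandle fix above closes a double-fetch: the header was already validated, so it is written into the kernel buffer directly and only the variable-length payload is fetched from user space, preventing a racing writer from changing handle_bytes between check and copy. A sketch of the copy-header-then-payload pattern, with memcpy standing in for copy_from_user() and a simplified handle layout:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct file_handle {
        uint32_t handle_bytes;
        int32_t  handle_type;
        unsigned char f_handle[];   /* variable-length payload */
    };

    /* 'hdr' was already fetched and bounds-checked; reuse it instead of
     * re-reading it from user memory, where it could have changed. */
    static struct file_handle *import_handle(const struct file_handle *hdr,
                                             const unsigned char *payload)
    {
        struct file_handle *h = malloc(sizeof(*h) + hdr->handle_bytes);

        if (!h)
            return NULL;
        *h = *hdr;                                        /* validated header */
        memcpy(h->f_handle, payload, hdr->handle_bytes);  /* payload only */
        return h;
    }

    int main(void)
    {
        struct file_handle hdr = { .handle_bytes = 4, .handle_type = 1 };
        unsigned char payload[4] = { 0xde, 0xad, 0xbe, 0xef };

        free(import_handle(&hdr, payload));
        return 0;
    }
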
+diff --git a/fs/fs_pin.c b/fs/fs_pin.c
+index b06c98796afb..611b5408f6ec 100644
+--- a/fs/fs_pin.c
++++ b/fs/fs_pin.c
+@@ -9,8 +9,8 @@ static DEFINE_SPINLOCK(pin_lock);
+ void pin_remove(struct fs_pin *pin)
+ {
+ 	spin_lock(&pin_lock);
+-	hlist_del(&pin->m_list);
+-	hlist_del(&pin->s_list);
++	hlist_del_init(&pin->m_list);
++	hlist_del_init(&pin->s_list);
+ 	spin_unlock(&pin_lock);
+ 	spin_lock_irq(&pin->wait.lock);
+ 	pin->done = 1;
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index b5128c6e63ad..a9079d035ae5 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -842,15 +842,23 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+ {
+ 	jbd2_journal_revoke_header_t *header;
+ 	int offset, max;
++	int csum_size = 0;
++	__u32 rcount;
+ 	int record_len = 4;
+ 
+ 	header = (jbd2_journal_revoke_header_t *) bh->b_data;
+ 	offset = sizeof(jbd2_journal_revoke_header_t);
+-	max = be32_to_cpu(header->r_count);
++	rcount = be32_to_cpu(header->r_count);
+ 
+ 	if (!jbd2_revoke_block_csum_verify(journal, header))
+ 		return -EINVAL;
+ 
++	if (jbd2_journal_has_csum_v2or3(journal))
++		csum_size = sizeof(struct jbd2_journal_revoke_tail);
++	if (rcount > journal->j_blocksize - csum_size)
++		return -EINVAL;
++	max = rcount;
++
+ 	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+ 		record_len = 8;
+ 
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index c6cbaef2bda1..14214da80eb8 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -577,7 +577,7 @@ static void write_one_revoke_record(journal_t *journal,
+ {
+ 	int csum_size = 0;
+ 	struct buffer_head *descriptor;
+-	int offset;
++	int sz, offset;
+ 	journal_header_t *header;
+ 
+ 	/* If we are already aborting, this all becomes a noop.  We
+@@ -594,9 +594,14 @@ static void write_one_revoke_record(journal_t *journal,
+ 	if (jbd2_journal_has_csum_v2or3(journal))
+ 		csum_size = sizeof(struct jbd2_journal_revoke_tail);
+ 
++	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
++		sz = 8;
++	else
++		sz = 4;
++
+ 	/* Make sure we have a descriptor with space left for the record */
+ 	if (descriptor) {
+-		if (offset >= journal->j_blocksize - csum_size) {
++		if (offset + sz > journal->j_blocksize - csum_size) {
+ 			flush_descriptor(journal, descriptor, offset, write_op);
+ 			descriptor = NULL;
+ 		}
+@@ -619,16 +624,13 @@ static void write_one_revoke_record(journal_t *journal,
+ 		*descriptorp = descriptor;
+ 	}
+ 
+-	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) {
++	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+ 		* ((__be64 *)(&descriptor->b_data[offset])) =
+ 			cpu_to_be64(record->blocknr);
+-		offset += 8;
+-
+-	} else {
++	else
+ 		* ((__be32 *)(&descriptor->b_data[offset])) =
+ 			cpu_to_be32(record->blocknr);
+-		offset += 4;
+-	}
++	offset += sz;
+ 
+ 	*offsetp = offset;
+ }
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 5f09370c90a8..ff2f2e6ad311 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -551,7 +551,6 @@ int jbd2_journal_extend(handle_t *handle, int nblocks)
+ 	int result;
+ 	int wanted;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -627,7 +626,6 @@ int jbd2__journal_restart(handle_t *handle, int nblocks, gfp_t gfp_mask)
+ 	tid_t		tid;
+ 	int		need_to_start, ret;
+ 
+-	WARN_ON(!transaction);
+ 	/* If we've had an abort of any type, don't even think about
+ 	 * actually doing the restart! */
+ 	if (is_handle_aborted(handle))
+@@ -785,7 +783,6 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
+ 	int need_copy = 0;
+ 	unsigned long start_lock, time_lock;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1051,7 +1048,6 @@ int jbd2_journal_get_create_access(handle_t *handle, struct buffer_head *bh)
+ 	int err;
+ 
+ 	jbd_debug(5, "journal_head %p\n", jh);
+-	WARN_ON(!transaction);
+ 	err = -EROFS;
+ 	if (is_handle_aborted(handle))
+ 		goto out;
+@@ -1266,7 +1262,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	struct journal_head *jh;
+ 	int ret = 0;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1397,7 +1392,6 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
+ 	int err = 0;
+ 	int was_modified = 0;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1530,8 +1524,22 @@ int jbd2_journal_stop(handle_t *handle)
+ 	tid_t tid;
+ 	pid_t pid;
+ 
+-	if (!transaction)
+-		goto free_and_exit;
++	if (!transaction) {
++		/*
++		 * Handle is already detached from the transaction so
++		 * there is nothing to do other than decrease a refcount,
++		 * or free the handle if refcount drops to zero
++		 */
++		if (--handle->h_ref > 0) {
++			jbd_debug(4, "h_ref %d -> %d\n", handle->h_ref + 1,
++							 handle->h_ref);
++			return err;
++		} else {
++			if (handle->h_rsv_handle)
++				jbd2_free_handle(handle->h_rsv_handle);
++			goto free_and_exit;
++		}
++	}
+ 	journal = transaction->t_journal;
+ 
+ 	J_ASSERT(journal_current_handle() == handle);
+@@ -2373,7 +2381,6 @@ int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode)
+ 	transaction_t *transaction = handle->h_transaction;
+ 	journal_t *journal;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 6acc9648f986..345b35fd329d 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -518,7 +518,14 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	if (!kn)
+ 		goto err_out1;
+ 
+-	ret = ida_simple_get(&root->ino_ida, 1, 0, GFP_KERNEL);
++	/*
++	 * If the ino of the sysfs entry created for a kmem cache gets
++	 * allocated from an ida layer, which is accounted to the memcg that
++	 * owns the cache, the memcg will get pinned forever. So do not account
++	 * ino ida allocations.
++	 */
++	ret = ida_simple_get(&root->ino_ida, 1, 0,
++			     GFP_KERNEL | __GFP_NOACCOUNT);
+ 	if (ret < 0)
+ 		goto err_out2;
+ 	kn->ino = ret;
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 38ed1e1bed41..13b0f7bfc096 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1709,8 +1709,11 @@ struct vfsmount *collect_mounts(struct path *path)
+ {
+ 	struct mount *tree;
+ 	namespace_lock();
+-	tree = copy_tree(real_mount(path->mnt), path->dentry,
+-			 CL_COPY_ALL | CL_PRIVATE);
++	if (!check_mnt(real_mount(path->mnt)))
++		tree = ERR_PTR(-EINVAL);
++	else
++		tree = copy_tree(real_mount(path->mnt), path->dentry,
++				 CL_COPY_ALL | CL_PRIVATE);
+ 	namespace_unlock();
+ 	if (IS_ERR(tree))
+ 		return ERR_CAST(tree);
+diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
+index 03d647bf195d..cdefaa331a07 100644
+--- a/fs/nfsd/blocklayout.c
++++ b/fs/nfsd/blocklayout.c
+@@ -181,6 +181,17 @@ nfsd4_block_proc_layoutcommit(struct inode *inode,
+ }
+ 
+ const struct nfsd4_layout_ops bl_layout_ops = {
++	/*
++	 * Pretend that we send notification to the client.  This is a blatant
++	 * lie to force recent Linux clients to cache our device IDs.
++	 * We rarely ever change the device ID, so the harm of leaking deviceids
++	 * for a while isn't too bad.  Unfortunately RFC5661 is a complete mess
++	 * in this regard, but I filed errata 4119 for this a while ago, and
++	 * hopefully the Linux client will eventually start caching deviceids
++	 * without this again.
++	 */
++	.notify_types		=
++			NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE,
+ 	.proc_getdeviceinfo	= nfsd4_block_proc_getdeviceinfo,
+ 	.encode_getdeviceinfo	= nfsd4_block_encode_getdeviceinfo,
+ 	.proc_layoutget		= nfsd4_block_proc_layoutget,
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ee1cccdb083a..b4541ede7cb8 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4386,10 +4386,17 @@ static __be32 check_stateid_generation(stateid_t *in, stateid_t *ref, bool has_s
+ 	return nfserr_old_stateid;
+ }
+ 
++static __be32 nfsd4_check_openowner_confirmed(struct nfs4_ol_stateid *ols)
++{
++	if (ols->st_stateowner->so_is_open_owner &&
++	    !(openowner(ols->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
++		return nfserr_bad_stateid;
++	return nfs_ok;
++}
++
+ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
+ {
+ 	struct nfs4_stid *s;
+-	struct nfs4_ol_stateid *ols;
+ 	__be32 status = nfserr_bad_stateid;
+ 
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid))
+@@ -4419,13 +4426,7 @@ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
+ 		break;
+ 	case NFS4_OPEN_STID:
+ 	case NFS4_LOCK_STID:
+-		ols = openlockstateid(s);
+-		if (ols->st_stateowner->so_is_open_owner
+-	    			&& !(openowner(ols->st_stateowner)->oo_flags
+-						& NFS4_OO_CONFIRMED))
+-			status = nfserr_bad_stateid;
+-		else
+-			status = nfs_ok;
++		status = nfsd4_check_openowner_confirmed(openlockstateid(s));
+ 		break;
+ 	default:
+ 		printk("unknown stateid type %x\n", s->sc_type);
+@@ -4517,8 +4518,8 @@ nfs4_preprocess_stateid_op(struct net *net, struct nfsd4_compound_state *cstate,
+ 		status = nfs4_check_fh(current_fh, stp);
+ 		if (status)
+ 			goto out;
+-		if (stp->st_stateowner->so_is_open_owner
+-		    && !(openowner(stp->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
++		status = nfsd4_check_openowner_confirmed(stp);
++		if (status)
+ 			goto out;
+ 		status = nfs4_check_openmode(stp, flags);
+ 		if (status)
+diff --git a/fs/omfs/inode.c b/fs/omfs/inode.c
+index 138321b0c6c2..454111a3308e 100644
+--- a/fs/omfs/inode.c
++++ b/fs/omfs/inode.c
+@@ -306,7 +306,8 @@ static const struct super_operations omfs_sops = {
+  */
+ static int omfs_get_imap(struct super_block *sb)
+ {
+-	unsigned int bitmap_size, count, array_size;
++	unsigned int bitmap_size, array_size;
++	int count;
+ 	struct omfs_sb_info *sbi = OMFS_SB(sb);
+ 	struct buffer_head *bh;
+ 	unsigned long **ptr;
+@@ -359,7 +360,7 @@ nomem:
+ }
+ 
+ enum {
+-	Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask
++	Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask, Opt_err
+ };
+ 
+ static const match_table_t tokens = {
+@@ -368,6 +369,7 @@ static const match_table_t tokens = {
+ 	{Opt_umask, "umask=%o"},
+ 	{Opt_dmask, "dmask=%o"},
+ 	{Opt_fmask, "fmask=%o"},
++	{Opt_err, NULL},
+ };
+ 
+ static int parse_options(char *options, struct omfs_sb_info *sbi)
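
The omfs hunk above terminates the mount-option table with an Opt_err sentinel; without it, the kernel's match_token() would walk past the end of the array. A user-space sketch of the sentinel-terminated table walk, with the pattern matching reduced to a prefix compare:

    #include <stdio.h>
    #include <string.h>

    enum { Opt_uid, Opt_gid, Opt_err };

    struct match_token {
        int token;
        const char *pattern;        /* NULL pattern terminates the table */
    };

    static const struct match_token tokens[] = {
        { Opt_uid, "uid=" },
        { Opt_gid, "gid=" },
        { Opt_err, NULL },          /* the sentinel the fix adds */
    };

    static int match_option(const char *opt)
    {
        const struct match_token *t;

        for (t = tokens; t->pattern; t++)
            if (!strncmp(opt, t->pattern, strlen(t->pattern)))
                return t->token;
        return Opt_err;
    }

    int main(void)
    {
        printf("%d\n", match_option("uid=1000"));   /* Opt_uid */
        printf("%d\n", match_option("bogus"));      /* Opt_err */
        return 0;
    }
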
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 24f640441bd9..84d693d37428 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -299,6 +299,9 @@ int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ 	struct cred *override_cred;
+ 	char *link = NULL;
+ 
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
+ 	ovl_path_upper(parent, &parentpath);
+ 	upperdir = parentpath.dentry;
+ 
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index d139405d2bfa..692ceda3bc21 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -222,6 +222,9 @@ static struct dentry *ovl_clear_empty(struct dentry *dentry,
+ 	struct kstat stat;
+ 	int err;
+ 
++	if (WARN_ON(!workdir))
++		return ERR_PTR(-EROFS);
++
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
+ 	if (err)
+ 		goto out;
+@@ -322,6 +325,9 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 	struct dentry *newdentry;
+ 	int err;
+ 
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
+ 	if (err)
+ 		goto out;
+@@ -506,11 +512,28 @@ static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir)
+ 	struct dentry *opaquedir = NULL;
+ 	int err;
+ 
+-	if (is_dir && OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
+-		opaquedir = ovl_check_empty_and_clear(dentry);
+-		err = PTR_ERR(opaquedir);
+-		if (IS_ERR(opaquedir))
+-			goto out;
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
++	if (is_dir) {
++		if (OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
++			opaquedir = ovl_check_empty_and_clear(dentry);
++			err = PTR_ERR(opaquedir);
++			if (IS_ERR(opaquedir))
++				goto out;
++		} else {
++			LIST_HEAD(list);
++
++			/*
++			 * When removing an empty opaque directory, then it
++			 * makes no sense to replace it with an exact replica of
++			 * itself.  But emptiness still needs to be checked.
++			 */
++			err = ovl_check_empty_dir(dentry, &list);
++			ovl_cache_free(&list);
++			if (err)
++				goto out;
++		}
+ 	}
+ 
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 5f0d1993e6e3..bf8537c7f455 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -529,7 +529,7 @@ static int ovl_remount(struct super_block *sb, int *flags, char *data)
+ {
+ 	struct ovl_fs *ufs = sb->s_fs_info;
+ 
+-	if (!(*flags & MS_RDONLY) && !ufs->upper_mnt)
++	if (!(*flags & MS_RDONLY) && (!ufs->upper_mnt || !ufs->workdir))
+ 		return -EROFS;
+ 
+ 	return 0;
+@@ -925,9 +925,10 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 		ufs->workdir = ovl_workdir_create(ufs->upper_mnt, workpath.dentry);
+ 		err = PTR_ERR(ufs->workdir);
+ 		if (IS_ERR(ufs->workdir)) {
+-			pr_err("overlayfs: failed to create directory %s/%s\n",
+-			       ufs->config.workdir, OVL_WORKDIR_NAME);
+-			goto out_put_upper_mnt;
++			pr_warn("overlayfs: failed to create directory %s/%s (errno: %i); mounting read-only\n",
++				ufs->config.workdir, OVL_WORKDIR_NAME, -err);
++			sb->s_flags |= MS_RDONLY;
++			ufs->workdir = NULL;
+ 		}
+ 	}
+ 
+@@ -997,7 +998,6 @@ out_put_lower_mnt:
+ 	kfree(ufs->lower_mnt);
+ out_put_workdir:
+ 	dput(ufs->workdir);
+-out_put_upper_mnt:
+ 	mntput(ufs->upper_mnt);
+ out_put_lowerpath:
+ 	for (i = 0; i < numlower; i++)
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 15105dbc9e28..0166e7e829a7 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -498,8 +498,8 @@ xfs_attr_shortform_add(xfs_da_args_t *args, int forkoff)
+  * After the last attribute is removed revert to original inode format,
+  * making all literal area available to the data fork once more.
+  */
+-STATIC void
+-xfs_attr_fork_reset(
++void
++xfs_attr_fork_remove(
+ 	struct xfs_inode	*ip,
+ 	struct xfs_trans	*tp)
+ {
+@@ -565,7 +565,7 @@ xfs_attr_shortform_remove(xfs_da_args_t *args)
+ 	    (mp->m_flags & XFS_MOUNT_ATTR2) &&
+ 	    (dp->i_d.di_format != XFS_DINODE_FMT_BTREE) &&
+ 	    !(args->op_flags & XFS_DA_OP_ADDNAME)) {
+-		xfs_attr_fork_reset(dp, args->trans);
++		xfs_attr_fork_remove(dp, args->trans);
+ 	} else {
+ 		xfs_idata_realloc(dp, -size, XFS_ATTR_FORK);
+ 		dp->i_d.di_forkoff = xfs_attr_shortform_bytesfit(dp, totsize);
+@@ -828,7 +828,7 @@ xfs_attr3_leaf_to_shortform(
+ 	if (forkoff == -1) {
+ 		ASSERT(dp->i_mount->m_flags & XFS_MOUNT_ATTR2);
+ 		ASSERT(dp->i_d.di_format != XFS_DINODE_FMT_BTREE);
+-		xfs_attr_fork_reset(dp, args->trans);
++		xfs_attr_fork_remove(dp, args->trans);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.h b/fs/xfs/libxfs/xfs_attr_leaf.h
+index e2929da7c3ba..4f3a60aa93d4 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.h
++++ b/fs/xfs/libxfs/xfs_attr_leaf.h
+@@ -53,7 +53,7 @@ int	xfs_attr_shortform_remove(struct xfs_da_args *args);
+ int	xfs_attr_shortform_list(struct xfs_attr_list_context *context);
+ int	xfs_attr_shortform_allfit(struct xfs_buf *bp, struct xfs_inode *dp);
+ int	xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes);
+-
++void	xfs_attr_fork_remove(struct xfs_inode *ip, struct xfs_trans *tp);
+ 
+ /*
+  * Internal routines when attribute fork size == XFS_LBSIZE(mp).
+diff --git a/fs/xfs/xfs_attr_inactive.c b/fs/xfs/xfs_attr_inactive.c
+index 83af4c149635..487c8374a1e0 100644
+--- a/fs/xfs/xfs_attr_inactive.c
++++ b/fs/xfs/xfs_attr_inactive.c
+@@ -379,23 +379,31 @@ xfs_attr3_root_inactive(
+ 	return error;
+ }
+ 
++/*
++ * xfs_attr_inactive kills all traces of an attribute fork on an inode. It
++ * removes both the on-disk and in-memory inode fork. Note that this also has to
++ * handle the condition of inodes without attributes but with an attribute fork
++ * configured, so we can't use xfs_inode_hasattr() here.
++ *
++ * The in-memory attribute fork is removed even on error.
++ */
+ int
+-xfs_attr_inactive(xfs_inode_t *dp)
++xfs_attr_inactive(
++	struct xfs_inode	*dp)
+ {
+-	xfs_trans_t *trans;
+-	xfs_mount_t *mp;
+-	int error;
++	struct xfs_trans	*trans;
++	struct xfs_mount	*mp;
++	int			cancel_flags = 0;
++	int			lock_mode = XFS_ILOCK_SHARED;
++	int			error = 0;
+ 
+ 	mp = dp->i_mount;
+ 	ASSERT(! XFS_NOT_DQATTACHED(mp, dp));
+ 
+-	xfs_ilock(dp, XFS_ILOCK_SHARED);
+-	if (!xfs_inode_hasattr(dp) ||
+-	    dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
+-		xfs_iunlock(dp, XFS_ILOCK_SHARED);
+-		return 0;
+-	}
+-	xfs_iunlock(dp, XFS_ILOCK_SHARED);
++	xfs_ilock(dp, lock_mode);
++	if (!XFS_IFORK_Q(dp))
++		goto out_destroy_fork;
++	xfs_iunlock(dp, lock_mode);
+ 
+ 	/*
+ 	 * Start our first transaction of the day.
+@@ -407,13 +415,18 @@ xfs_attr_inactive(xfs_inode_t *dp)
+ 	 * the inode in every transaction to let it float upward through
+ 	 * the log.
+ 	 */
++	lock_mode = 0;
+ 	trans = xfs_trans_alloc(mp, XFS_TRANS_ATTRINVAL);
+ 	error = xfs_trans_reserve(trans, &M_RES(mp)->tr_attrinval, 0, 0);
+-	if (error) {
+-		xfs_trans_cancel(trans, 0);
+-		return error;
+-	}
+-	xfs_ilock(dp, XFS_ILOCK_EXCL);
++	if (error)
++		goto out_cancel;
++
++	lock_mode = XFS_ILOCK_EXCL;
++	cancel_flags = XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT;
++	xfs_ilock(dp, lock_mode);
++
++	if (!XFS_IFORK_Q(dp))
++		goto out_cancel;
+ 
+ 	/*
+ 	 * No need to make quota reservations here. We expect to release some
+@@ -421,29 +434,31 @@ xfs_attr_inactive(xfs_inode_t *dp)
+ 	 */
+ 	xfs_trans_ijoin(trans, dp, 0);
+ 
+-	/*
+-	 * Decide on what work routines to call based on the inode size.
+-	 */
+-	if (!xfs_inode_hasattr(dp) ||
+-	    dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
+-		error = 0;
+-		goto out;
++	/* invalidate and truncate the attribute fork extents */
++	if (dp->i_d.di_aformat != XFS_DINODE_FMT_LOCAL) {
++		error = xfs_attr3_root_inactive(&trans, dp);
++		if (error)
++			goto out_cancel;
++
++		error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
++		if (error)
++			goto out_cancel;
+ 	}
+-	error = xfs_attr3_root_inactive(&trans, dp);
+-	if (error)
+-		goto out;
+ 
+-	error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
+-	if (error)
+-		goto out;
++	/* Reset the attribute fork - this also destroys the in-core fork */
++	xfs_attr_fork_remove(dp, trans);
+ 
+ 	error = xfs_trans_commit(trans, XFS_TRANS_RELEASE_LOG_RES);
+-	xfs_iunlock(dp, XFS_ILOCK_EXCL);
+-
++	xfs_iunlock(dp, lock_mode);
+ 	return error;
+ 
+-out:
+-	xfs_trans_cancel(trans, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT);
+-	xfs_iunlock(dp, XFS_ILOCK_EXCL);
++out_cancel:
++	xfs_trans_cancel(trans, cancel_flags);
++out_destroy_fork:
++	/* kill the in-core attr fork before we drop the inode lock */
++	if (dp->i_afp)
++		xfs_idestroy_fork(dp, XFS_ATTR_FORK);
++	if (lock_mode)
++		xfs_iunlock(dp, lock_mode);
+ 	return error;
+ }
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index a2e1cb8a568b..f3ba637a8ece 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -125,7 +125,7 @@ xfs_iozero(
+ 		status = 0;
+ 	} while (count);
+ 
+-	return (-status);
++	return status;
+ }
+ 
+ int
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 6163767aa856..b1edda7890f4 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1889,21 +1889,17 @@ xfs_inactive(
+ 	/*
+ 	 * If there are attributes associated with the file then blow them away
+ 	 * now.  The code calls a routine that recursively deconstructs the
+-	 * attribute fork.  We need to just commit the current transaction
+-	 * because we can't use it for xfs_attr_inactive().
++	 * attribute fork. It also blows away the in-core attribute fork.
+ 	 */
+-	if (ip->i_d.di_anextents > 0) {
+-		ASSERT(ip->i_d.di_forkoff != 0);
+-
++	if (XFS_IFORK_Q(ip)) {
+ 		error = xfs_attr_inactive(ip);
+ 		if (error)
+ 			return;
+ 	}
+ 
+-	if (ip->i_afp)
+-		xfs_idestroy_fork(ip, XFS_ATTR_FORK);
+-
++	ASSERT(!ip->i_afp);
+ 	ASSERT(ip->i_d.di_anextents == 0);
++	ASSERT(ip->i_d.di_forkoff == 0);
+ 
+ 	/*
+ 	 * Free the inode.
+diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h
+index 2dd405c9be78..45c39a37f924 100644
+--- a/include/drm/drm_pciids.h
++++ b/include/drm/drm_pciids.h
+@@ -186,6 +186,7 @@
+ 	{0x1002, 0x6658, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x665c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x665d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
++	{0x1002, 0x665f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6660, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6663, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6664, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+diff --git a/include/linux/fs_pin.h b/include/linux/fs_pin.h
+index 9dc4e0384bfb..3886b3bffd7f 100644
+--- a/include/linux/fs_pin.h
++++ b/include/linux/fs_pin.h
+@@ -13,6 +13,8 @@ struct vfsmount;
+ static inline void init_fs_pin(struct fs_pin *p, void (*kill)(struct fs_pin *))
+ {
+ 	init_waitqueue_head(&p->wait);
++	INIT_HLIST_NODE(&p->s_list);
++	INIT_HLIST_NODE(&p->m_list);
+ 	p->kill = kill;
+ }
+ 
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index 51bd1e72a917..eb6fafe66bec 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -30,6 +30,7 @@ struct vm_area_struct;
+ #define ___GFP_HARDWALL		0x20000u
+ #define ___GFP_THISNODE		0x40000u
+ #define ___GFP_RECLAIMABLE	0x80000u
++#define ___GFP_NOACCOUNT	0x100000u
+ #define ___GFP_NOTRACK		0x200000u
+ #define ___GFP_NO_KSWAPD	0x400000u
+ #define ___GFP_OTHER_NODE	0x800000u
+@@ -85,6 +86,7 @@ struct vm_area_struct;
+ #define __GFP_HARDWALL   ((__force gfp_t)___GFP_HARDWALL) /* Enforce hardwall cpuset memory allocs */
+ #define __GFP_THISNODE	((__force gfp_t)___GFP_THISNODE)/* No fallback, no policies */
+ #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) /* Page is reclaimable */
++#define __GFP_NOACCOUNT	((__force gfp_t)___GFP_NOACCOUNT) /* Don't account to kmemcg */
+ #define __GFP_NOTRACK	((__force gfp_t)___GFP_NOTRACK)  /* Don't track with kmemcheck */
+ 
+ #define __GFP_NO_KSWAPD	((__force gfp_t)___GFP_NO_KSWAPD)
+diff --git a/include/linux/ktime.h b/include/linux/ktime.h
+index 5fc3d1083071..2b6a204bd8d4 100644
+--- a/include/linux/ktime.h
++++ b/include/linux/ktime.h
+@@ -166,19 +166,34 @@ static inline bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)
+ }
+ 
+ #if BITS_PER_LONG < 64
+-extern u64 __ktime_divns(const ktime_t kt, s64 div);
+-static inline u64 ktime_divns(const ktime_t kt, s64 div)
++extern s64 __ktime_divns(const ktime_t kt, s64 div);
++static inline s64 ktime_divns(const ktime_t kt, s64 div)
+ {
++	/*
++	 * Negative divisors could cause an inf loop,
++	 * so bug out here.
++	 */
++	BUG_ON(div < 0);
+ 	if (__builtin_constant_p(div) && !(div >> 32)) {
+-		u64 ns = kt.tv64;
+-		do_div(ns, div);
+-		return ns;
++		s64 ns = kt.tv64;
++		u64 tmp = ns < 0 ? -ns : ns;
++
++		do_div(tmp, div);
++		return ns < 0 ? -tmp : tmp;
+ 	} else {
+ 		return __ktime_divns(kt, div);
+ 	}
+ }
+ #else /* BITS_PER_LONG < 64 */
+-# define ktime_divns(kt, div)		(u64)((kt).tv64 / (div))
++static inline s64 ktime_divns(const ktime_t kt, s64 div)
++{
++	/*
++	 * 32-bit implementation cannot handle negative divisors,
++	 * so catch them on 64bit as well.
++	 */
++	WARN_ON(div < 0);
++	return kt.tv64 / div;
++}
+ #endif
+ 
+ static inline s64 ktime_to_us(const ktime_t kt)
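
The 32-bit ktime_divns() path above works around do_div()'s unsigned-only contract by dividing the magnitude and restoring the sign afterwards. A user-space sketch of that pattern, with plain division modeling do_div() (assumes ns != INT64_MIN):

    #include <stdint.h>
    #include <stdio.h>

    /* Divide a signed 64-bit nanosecond count by a positive divisor
     * using only an unsigned division, as the 32-bit kernel path must. */
    static int64_t divns(int64_t ns, uint32_t div)
    {
        uint64_t tmp = ns < 0 ? (uint64_t)-ns : (uint64_t)ns;

        tmp /= div;                  /* do_div(tmp, div) in the kernel */
        return ns < 0 ? -(int64_t)tmp : (int64_t)tmp;
    }

    int main(void)
    {
        /* -1.5 s / 1 ms truncates toward zero: -1500 */
        printf("%lld\n", (long long)divns(-1500000000LL, 1000000));
        return 0;
    }
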
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 6b08cc106c21..f8994b4b122c 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -205,6 +205,7 @@ enum {
+ 	ATA_LFLAG_SW_ACTIVITY	= (1 << 7), /* keep activity stats */
+ 	ATA_LFLAG_NO_LPM	= (1 << 8), /* disable LPM on this link */
+ 	ATA_LFLAG_RST_ONCE	= (1 << 9), /* limit recovery to one reset */
++	ATA_LFLAG_CHANGED	= (1 << 10), /* LPM state changed on this link */
+ 
+ 	/* struct ata_port flags */
+ 	ATA_FLAG_SLAVE_POSS	= (1 << 0), /* host supports slave dev */
+@@ -310,6 +311,12 @@ enum {
+ 	 */
+ 	ATA_TMOUT_PMP_SRST_WAIT	= 5000,
+ 
++	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
++	 * be a spurious PHY event, so ignore the first PHY event that
++	 * occurs within 10s after the policy change.
++	 */
++	ATA_TMOUT_SPURIOUS_PHY	= 10000,
++
+ 	/* ATA bus states */
+ 	BUS_UNKNOWN		= 0,
+ 	BUS_DMA			= 1,
+@@ -789,6 +796,8 @@ struct ata_link {
+ 	struct ata_eh_context	eh_context;
+ 
+ 	struct ata_device	device[ATA_MAX_DEVICES];
++
++	unsigned long		last_lpm_change; /* when last LPM change happened */
+ };
+ #define ATA_LINK_CLEAR_BEGIN		offsetof(struct ata_link, active_tag)
+ #define ATA_LINK_CLEAR_END		offsetof(struct ata_link, device[0])
+@@ -1202,6 +1211,7 @@ extern struct ata_device *ata_dev_pair(struct ata_device *adev);
+ extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
+ extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
+ extern void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, struct list_head *eh_q);
++extern bool sata_lpm_ignore_phy_events(struct ata_link *link);
+ 
+ extern int ata_cable_40wire(struct ata_port *ap);
+ extern int ata_cable_80wire(struct ata_port *ap);
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 72dff5fb0d0c..6c8918114804 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -463,6 +463,8 @@ memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
+ 	if (!memcg_kmem_enabled())
+ 		return true;
+ 
++	if (gfp & __GFP_NOACCOUNT)
++		return true;
+ 	/*
+ 	 * __GFP_NOFAIL allocations will move on even if charging is not
+ 	 * possible. Therefore we don't even try, and have this allocation
+@@ -522,6 +524,8 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
+ {
+ 	if (!memcg_kmem_enabled())
+ 		return cachep;
++	if (gfp & __GFP_NOACCOUNT)
++		return cachep;
+ 	if (gfp & __GFP_NOFAIL)
+ 		return cachep;
+ 	if (in_interrupt() || (!current->mm) || (current->flags & PF_KTHREAD))
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 6341f5be6e24..a30b172df6e1 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -18,7 +18,7 @@ static inline int rt_task(struct task_struct *p)
+ #ifdef CONFIG_RT_MUTEXES
+ extern int rt_mutex_getprio(struct task_struct *p);
+ extern void rt_mutex_setprio(struct task_struct *p, int prio);
+-extern int rt_mutex_check_prio(struct task_struct *task, int newprio);
++extern int rt_mutex_get_effective_prio(struct task_struct *task, int newprio);
+ extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task);
+ extern void rt_mutex_adjust_pi(struct task_struct *p);
+ static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
+@@ -31,9 +31,10 @@ static inline int rt_mutex_getprio(struct task_struct *p)
+ 	return p->normal_prio;
+ }
+ 
+-static inline int rt_mutex_check_prio(struct task_struct *task, int newprio)
++static inline int rt_mutex_get_effective_prio(struct task_struct *task,
++					      int newprio)
+ {
+-	return 0;
++	return newprio;
+ }
+ 
+ static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index 358a337af598..790752ac074a 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -339,6 +339,7 @@ struct tty_file_private {
+ #define TTY_EXCLUSIVE 		3	/* Exclusive open mode */
+ #define TTY_DEBUG 		4	/* Debugging */
+ #define TTY_DO_WRITE_WAKEUP 	5	/* Call write_wakeup after queuing new */
++#define TTY_OTHER_DONE		6	/* Closed pty has completed input processing */
+ #define TTY_LDISC_OPEN	 	11	/* Line discipline is open */
+ #define TTY_PTY_LOCK 		16	/* pty private */
+ #define TTY_NO_WRITE_SPLIT 	17	/* Preserve write boundaries to driver */
+@@ -462,7 +463,6 @@ extern int tty_hung_up_p(struct file *filp);
+ extern void do_SAK(struct tty_struct *tty);
+ extern void __do_SAK(struct tty_struct *tty);
+ extern void no_tty(void);
+-extern void tty_flush_to_ldisc(struct tty_struct *tty);
+ extern void tty_buffer_free_all(struct tty_port *port);
+ extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
+ extern void tty_buffer_init(struct tty_port *port);
+diff --git a/include/xen/events.h b/include/xen/events.h
+index 5321cd9636e6..7d95fdf9cf3e 100644
+--- a/include/xen/events.h
++++ b/include/xen/events.h
+@@ -17,7 +17,7 @@ int bind_evtchn_to_irqhandler(unsigned int evtchn,
+ 			      irq_handler_t handler,
+ 			      unsigned long irqflags, const char *devname,
+ 			      void *dev_id);
+-int bind_virq_to_irq(unsigned int virq, unsigned int cpu);
++int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu);
+ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ 			    irq_handler_t handler,
+ 			    unsigned long irqflags, const char *devname,
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 6357265a31ad..ce9108c059fb 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -265,15 +265,17 @@ struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+ }
+ 
+ /*
+- * Called by sched_setscheduler() to check whether the priority change
+- * is overruled by a possible priority boosting.
++ * Called by sched_setscheduler() to get the priority which will be
++ * effective after the change.
+  */
+-int rt_mutex_check_prio(struct task_struct *task, int newprio)
++int rt_mutex_get_effective_prio(struct task_struct *task, int newprio)
+ {
+ 	if (!task_has_pi_waiters(task))
+-		return 0;
++		return newprio;
+ 
+-	return task_top_pi_waiter(task)->task->prio <= newprio;
++	if (task_top_pi_waiter(task)->task->prio <= newprio)
++		return task_top_pi_waiter(task)->task->prio;
++	return newprio;
+ }
+ 
+ /*
+diff --git a/kernel/module.c b/kernel/module.c
+index ec53f594e9c9..538794ce3cc7 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3366,6 +3366,9 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 	module_bug_cleanup(mod);
+ 	mutex_unlock(&module_mutex);
+ 
++	blocking_notifier_call_chain(&module_notify_list,
++				     MODULE_STATE_GOING, mod);
++
+ 	/* we can't deallocate the module until we clear memory protection */
+ 	unset_module_init_ro_nx(mod);
+ 	unset_module_core_ro_nx(mod);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 3d5f6f6d14c2..f4da2cbbfd7f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3295,15 +3295,18 @@ static void __setscheduler_params(struct task_struct *p,
+ 
+ /* Actually do priority change: must hold pi & rq lock. */
+ static void __setscheduler(struct rq *rq, struct task_struct *p,
+-			   const struct sched_attr *attr)
++			   const struct sched_attr *attr, bool keep_boost)
+ {
+ 	__setscheduler_params(p, attr);
+ 
+ 	/*
+-	 * If we get here, there was no pi waiters boosting the
+-	 * task. It is safe to use the normal prio.
++	 * Keep a potential priority boosting if called from
++	 * sched_setscheduler().
+ 	 */
+-	p->prio = normal_prio(p);
++	if (keep_boost)
++		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
++	else
++		p->prio = normal_prio(p);
+ 
+ 	if (dl_prio(p->prio))
+ 		p->sched_class = &dl_sched_class;
+@@ -3403,7 +3406,7 @@ static int __sched_setscheduler(struct task_struct *p,
+ 	int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
+ 		      MAX_RT_PRIO - 1 - attr->sched_priority;
+ 	int retval, oldprio, oldpolicy = -1, queued, running;
+-	int policy = attr->sched_policy;
++	int new_effective_prio, policy = attr->sched_policy;
+ 	unsigned long flags;
+ 	const struct sched_class *prev_class;
+ 	struct rq *rq;
+@@ -3585,15 +3588,14 @@ change:
+ 	oldprio = p->prio;
+ 
+ 	/*
+-	 * Special case for priority boosted tasks.
+-	 *
+-	 * If the new priority is lower or equal (user space view)
+-	 * than the current (boosted) priority, we just store the new
++	 * Take priority boosted tasks into account. If the new
++	 * effective priority is unchanged, we just store the new
+ 	 * normal parameters and do not touch the scheduler class and
+ 	 * the runqueue. This will be done when the task deboost
+ 	 * itself.
+ 	 */
+-	if (rt_mutex_check_prio(p, newprio)) {
++	new_effective_prio = rt_mutex_get_effective_prio(p, newprio);
++	if (new_effective_prio == oldprio) {
+ 		__setscheduler_params(p, attr);
+ 		task_rq_unlock(rq, p, &flags);
+ 		return 0;
+@@ -3607,7 +3609,7 @@ change:
+ 		put_prev_task(rq, p);
+ 
+ 	prev_class = p->sched_class;
+-	__setscheduler(rq, p, attr);
++	__setscheduler(rq, p, attr, true);
+ 
+ 	if (running)
+ 		p->sched_class->set_curr_task(rq);
+@@ -4382,10 +4384,7 @@ long __sched io_schedule_timeout(long timeout)
+ 	long ret;
+ 
+ 	current->in_iowait = 1;
+-	if (old_iowait)
+-		blk_schedule_flush_plug(current);
+-	else
+-		blk_flush_plug(current);
++	blk_schedule_flush_plug(current);
+ 
+ 	delayacct_blkio_start();
+ 	rq = raw_rq();
+@@ -7357,7 +7356,7 @@ static void normalize_task(struct rq *rq, struct task_struct *p)
+ 	queued = task_on_rq_queued(p);
+ 	if (queued)
+ 		dequeue_task(rq, p, 0);
+-	__setscheduler(rq, p, &attr);
++	__setscheduler(rq, p, &attr, false);
+ 	if (queued) {
+ 		enqueue_task(rq, p, 0);
+ 		resched_curr(rq);
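
The rtmutex/scheduler hunks above change the helper from a yes/no check ("is the requested priority overruled by a boost?") to returning the priority that will actually take effect. A minimal sketch of the new contract, illustrative only (remember that in the kernel a numerically lower prio is a stronger priority):

	/* Sketch of rt_mutex_get_effective_prio()'s contract. */
	static int effective_prio(int top_waiter_prio, int newprio)
	{
		/* A stronger boost from a PI waiter overrides the request. */
		if (top_waiter_prio <= newprio)
			return top_waiter_prio;
		return newprio;
	}

For example, a task boosted to prio 10 that is asked to move to prio 40 keeps running at 10; __sched_setscheduler() then only stores the new parameters for later, as the updated comment in the hunk says.
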
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index bee0c1f78091..38f586c076fe 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -266,21 +266,23 @@ lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
+ /*
+  * Divide a ktime value by a nanosecond value
+  */
+-u64 __ktime_divns(const ktime_t kt, s64 div)
++s64 __ktime_divns(const ktime_t kt, s64 div)
+ {
+-	u64 dclc;
+ 	int sft = 0;
++	s64 dclc;
++	u64 tmp;
+ 
+ 	dclc = ktime_to_ns(kt);
++	tmp = dclc < 0 ? -dclc : dclc;
++
+ 	/* Make sure the divisor is less than 2^32: */
+ 	while (div >> 32) {
+ 		sft++;
+ 		div >>= 1;
+ 	}
+-	dclc >>= sft;
+-	do_div(dclc, (unsigned long) div);
+-
+-	return dclc;
++	tmp >>= sft;
++	do_div(tmp, (unsigned long) div);
++	return dclc < 0 ? -tmp : tmp;
+ }
+ EXPORT_SYMBOL_GPL(__ktime_divns);
+ #endif /* BITS_PER_LONG >= 64 */
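
The __ktime_divns() hunk above fixes division of negative ktime values: do_div() is an unsigned 64-by-32 division, so a negative s64 reinterpreted as u64 divides a huge positive number and returns garbage. A self-contained userspace sketch of the fixed approach, with plain '/' standing in for do_div():

	#include <stdint.h>

	int64_t div_signed(int64_t dclc, uint32_t div)
	{
		/* Divide the magnitude, then restore the sign, as the
		 * patched __ktime_divns() does. */
		uint64_t tmp = dclc < 0 ? -(uint64_t)dclc : (uint64_t)dclc;

		tmp /= div;	/* userspace stand-in for do_div(tmp, div) */
		return dclc < 0 ? -(int64_t)tmp : (int64_t)tmp;
	}
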
+diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
+index a28df5206d95..11649615c505 100644
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -57,7 +57,8 @@ static inline long do_strnlen_user(const char __user *src, unsigned long count,
+ 			return res + find_zero(data) + 1 - align;
+ 		}
+ 		res += sizeof(unsigned long);
+-		if (unlikely(max < sizeof(unsigned long)))
++		/* We already handled 'unsigned long' bytes. Did we do it all ? */
++		if (unlikely(max <= sizeof(unsigned long)))
+ 			break;
+ 		max -= sizeof(unsigned long);
+ 		if (unlikely(__get_user(c,(unsigned long __user *)(src+res))))
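
To see the off-by-one that the new comment calls out, trace the boundary case where the word just scanned consumed the last allowed bytes (worked example assuming an 8-byte unsigned long):

	/* max == 8 after scanning a word that contained no NUL:
	 *   old: max < 8  is false, max -= 8 leaves 0, and the next
	 *        __get_user() reads one word past the permitted limit
	 *   new: max <= 8 is true, so we break before reading further
	 */
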
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 5405aff5a590..f0fe4f2c1fa7 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -115,7 +115,8 @@
+ #define BYTES_PER_POINTER	sizeof(void *)
+ 
+ /* GFP bitmask for kmemleak internal allocations */
+-#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
++#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC | \
++					   __GFP_NOACCOUNT)) | \
+ 				 __GFP_NORETRY | __GFP_NOMEMALLOC | \
+ 				 __GFP_NOWARN)
+ 
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index de5dc5e12691..0f7d73b3e4b1 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2517,7 +2517,7 @@ static void __init check_numabalancing_enable(void)
+ 	if (numabalancing_override)
+ 		set_numabalancing_state(numabalancing_override == 1);
+ 
+-	if (nr_node_ids > 1 && !numabalancing_override) {
++	if (num_online_nodes() > 1 && !numabalancing_override) {
+ 		pr_info("%s automatic NUMA balancing. "
+ 			"Configure with numa_balancing= or the "
+ 			"kernel.numa_balancing sysctl",
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 41a4abc7e98e..c4ec9239249a 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -1306,8 +1306,6 @@ static void __unregister_linger_request(struct ceph_osd_client *osdc,
+ 		if (list_empty(&req->r_osd_item))
+ 			req->r_osd = NULL;
+ 	}
+-
+-	list_del_init(&req->r_req_lru_item); /* can be on notarget */
+ 	ceph_osdc_put_request(req);
+ }
+ 
+@@ -2017,20 +2015,29 @@ static void kick_requests(struct ceph_osd_client *osdc, bool force_resend,
+ 		err = __map_request(osdc, req,
+ 				    force_resend || force_resend_writes);
+ 		dout("__map_request returned %d\n", err);
+-		if (err == 0)
+-			continue;  /* no change and no osd was specified */
+ 		if (err < 0)
+ 			continue;  /* hrm! */
+-		if (req->r_osd == NULL) {
+-			dout("tid %llu maps to no valid osd\n", req->r_tid);
+-			needmap++;  /* request a newer map */
+-			continue;
+-		}
++		if (req->r_osd == NULL || err > 0) {
++			if (req->r_osd == NULL) {
++				dout("lingering %p tid %llu maps to no osd\n",
++				     req, req->r_tid);
++				/*
++				 * A homeless lingering request makes
++				 * no sense, as its job is to keep
++				 * a particular OSD connection open.
++				 * Request a newer map and kick the
++				 * request, knowing that it won't be
++				 * resent until we actually get a map
++				 * that can tell us where to send it.
++				 */
++				needmap++;
++			}
+ 
+-		dout("kicking lingering %p tid %llu osd%d\n", req, req->r_tid,
+-		     req->r_osd ? req->r_osd->o_osd : -1);
+-		__register_request(osdc, req);
+-		__unregister_linger_request(osdc, req);
++			dout("kicking lingering %p tid %llu osd%d\n", req,
++			     req->r_tid, req->r_osd ? req->r_osd->o_osd : -1);
++			__register_request(osdc, req);
++			__unregister_linger_request(osdc, req);
++		}
+ 	}
+ 	reset_changed_osds(osdc);
+ 	mutex_unlock(&osdc->request_mutex);
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 8d53d65bd2ab..81e8dc5cb7f9 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -204,6 +204,8 @@ enum ieee80211_packet_rx_flags {
+  * @IEEE80211_RX_CMNTR: received on cooked monitor already
+  * @IEEE80211_RX_BEACON_REPORTED: This frame was already reported
+  *	to cfg80211_report_obss_beacon().
++ * @IEEE80211_RX_REORDER_TIMER: this frame is released by the
++ *	reorder buffer timeout timer, not the normal RX path
+  *
+  * These flags are used across handling multiple interfaces
+  * for a single frame.
+@@ -211,6 +213,7 @@ enum ieee80211_packet_rx_flags {
+ enum ieee80211_rx_flags {
+ 	IEEE80211_RX_CMNTR		= BIT(0),
+ 	IEEE80211_RX_BEACON_REPORTED	= BIT(1),
++	IEEE80211_RX_REORDER_TIMER	= BIT(2),
+ };
+ 
+ struct ieee80211_rx_data {
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1eb730bf8752..4c887d053333 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2106,7 +2106,8 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
+ 		/* deliver to local stack */
+ 		skb->protocol = eth_type_trans(skb, dev);
+ 		memset(skb->cb, 0, sizeof(skb->cb));
+-		if (rx->local->napi)
++		if (!(rx->flags & IEEE80211_RX_REORDER_TIMER) &&
++		    rx->local->napi)
+ 			napi_gro_receive(rx->local->napi, skb);
+ 		else
+ 			netif_receive_skb(skb);
+@@ -3215,7 +3216,7 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
+ 		/* This is OK -- must be QoS data frame */
+ 		.security_idx = tid,
+ 		.seqno_idx = tid,
+-		.flags = 0,
++		.flags = IEEE80211_RX_REORDER_TIMER,
+ 	};
+ 	struct tid_ampdu_rx *tid_agg_rx;
+ 
+diff --git a/net/mac80211/wep.c b/net/mac80211/wep.c
+index a4220e92f0cc..efa3f48f1ec5 100644
+--- a/net/mac80211/wep.c
++++ b/net/mac80211/wep.c
+@@ -98,8 +98,7 @@ static u8 *ieee80211_wep_add_iv(struct ieee80211_local *local,
+ 
+ 	hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ 
+-	if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN ||
+-		    skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
++	if (WARN_ON(skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
+ 		return NULL;
+ 
+ 	hdrlen = ieee80211_hdrlen(hdr->frame_control);
+@@ -167,6 +166,9 @@ int ieee80211_wep_encrypt(struct ieee80211_local *local,
+ 	size_t len;
+ 	u8 rc4key[3 + WLAN_KEY_LEN_WEP104];
+ 
++	if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN))
++		return -1;
++
+ 	iv = ieee80211_wep_add_iv(local, skb, keylen, keyidx);
+ 	if (!iv)
+ 		return -1;
+diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+index 1ec19f6f0c2b..eeeba5adee6d 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+@@ -793,20 +793,26 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ {
+ 	u32 value_follows;
+ 	int err;
++	struct page *scratch;
++
++	scratch = alloc_page(GFP_KERNEL);
++	if (!scratch)
++		return -ENOMEM;
++	xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE);
+ 
+ 	/* res->status */
+ 	err = gssx_dec_status(xdr, &res->status);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 
+ 	/* res->context_handle */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		err = gssx_dec_ctx(xdr, res->context_handle);
+ 		if (err)
+-			return err;
++			goto out_free;
+ 	} else {
+ 		res->context_handle = NULL;
+ 	}
+@@ -814,11 +820,11 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ 	/* res->output_token */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		err = gssx_dec_buffer(xdr, res->output_token);
+ 		if (err)
+-			return err;
++			goto out_free;
+ 	} else {
+ 		res->output_token = NULL;
+ 	}
+@@ -826,14 +832,17 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ 	/* res->delegated_cred_handle */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		/* we do not support upcall servers sending this data. */
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out_free;
+ 	}
+ 
+ 	/* res->options */
+ 	err = gssx_dec_option_array(xdr, &res->options);
+ 
++out_free:
++	__free_page(scratch);
+ 	return err;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index a8a1e14272a1..a002a6d1e6da 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2108,6 +2108,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0xaab0),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
++	{ PCI_DEVICE(0x1002, 0xaac8),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	/* VIA VT8251/VT8237A */
+ 	{ PCI_DEVICE(0x1106, 0x3288),
+ 	  .driver_data = AZX_DRIVER_VIA | AZX_DCAPS_POSFIX_VIA },
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index da67ea8645a6..e27298bdcd6d 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -973,6 +973,14 @@ static const struct hda_codec_preset snd_hda_preset_conexant[] = {
+ 	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f150b9, .name = "CX20665",
+ 	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f1, .name = "CX20721",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f2, .name = "CX20722",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f3, .name = "CX20723",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f4, .name = "CX20724",
++	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f1510f, .name = "CX20751/2",
+ 	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f15110, .name = "CX20751/2",
+@@ -1007,6 +1015,10 @@ MODULE_ALIAS("snd-hda-codec-id:14f150ab");
+ MODULE_ALIAS("snd-hda-codec-id:14f150ac");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b8");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b9");
++MODULE_ALIAS("snd-hda-codec-id:14f150f1");
++MODULE_ALIAS("snd-hda-codec-id:14f150f2");
++MODULE_ALIAS("snd-hda-codec-id:14f150f3");
++MODULE_ALIAS("snd-hda-codec-id:14f150f4");
+ MODULE_ALIAS("snd-hda-codec-id:14f1510f");
+ MODULE_ALIAS("snd-hda-codec-id:14f15110");
+ MODULE_ALIAS("snd-hda-codec-id:14f15111");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2fd490b1764b..93c78c3c4b95 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5027,6 +5027,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
++	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_BXBT2807_MIC),
+@@ -5056,6 +5057,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
++	SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+@@ -5246,6 +5248,13 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x17, 0x40000000},
+ 		{0x1d, 0x40700001},
+ 		{0x21, 0x02211050}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell Inspiron 5548", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC255_STANDARD_PINS,
++		{0x12, 0x90a60180},
++		{0x14, 0x90170130},
++		{0x17, 0x40000000},
++		{0x1d, 0x40700001},
++		{0x21, 0x02211040}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC256_STANDARD_PINS,
+ 		{0x13, 0x40000000}),
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index 2341fc334163..6ba0b5517c40 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -72,7 +72,6 @@ static void hda_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 		if (led_set_func(TPACPI_LED_MUTE, false) >= 0) {
+ 			old_vmaster_hook = spec->vmaster_mute.hook;
+ 			spec->vmaster_mute.hook = update_tpacpi_mute_led;
+-			spec->vmaster_mute_enum = 1;
+ 			removefunc = false;
+ 		}
+ 		if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+diff --git a/sound/soc/codecs/mc13783.c b/sound/soc/codecs/mc13783.c
+index 2ffb9a0570dc..3d44fc50e4d0 100644
+--- a/sound/soc/codecs/mc13783.c
++++ b/sound/soc/codecs/mc13783.c
+@@ -623,14 +623,14 @@ static int mc13783_probe(struct snd_soc_codec *codec)
+ 				AUDIO_SSI_SEL, 0);
+ 	else
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_CODEC,
+-				0, AUDIO_SSI_SEL);
++				AUDIO_SSI_SEL, AUDIO_SSI_SEL);
+ 
+ 	if (priv->dac_ssi_port == MC13783_SSI1_PORT)
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
+ 				AUDIO_SSI_SEL, 0);
+ 	else
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
+-				0, AUDIO_SSI_SEL);
++				AUDIO_SSI_SEL, AUDIO_SSI_SEL);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/uda1380.c b/sound/soc/codecs/uda1380.c
+index dc7778b6dd7f..c3c33bd0df1c 100644
+--- a/sound/soc/codecs/uda1380.c
++++ b/sound/soc/codecs/uda1380.c
+@@ -437,7 +437,7 @@ static int uda1380_set_dai_fmt_both(struct snd_soc_dai *codec_dai,
+ 	if ((fmt & SND_SOC_DAIFMT_MASTER_MASK) != SND_SOC_DAIFMT_CBS_CFS)
+ 		return -EINVAL;
+ 
+-	uda1380_write(codec, UDA1380_IFACE, iface);
++	uda1380_write_reg_cache(codec, UDA1380_IFACE, iface);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 3035d9856415..e97a7615df85 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -395,7 +395,7 @@ static const struct snd_soc_dapm_route audio_paths[] = {
+ 	{ "Right Input Mixer", "Boost Switch", "Right Boost Mixer", },
+ 	{ "Right Input Mixer", NULL, "RINPUT1", },  /* Really Boost Switch */
+ 	{ "Right Input Mixer", NULL, "RINPUT2" },
+-	{ "Right Input Mixer", NULL, "LINPUT3" },
++	{ "Right Input Mixer", NULL, "RINPUT3" },
+ 
+ 	{ "Left ADC", NULL, "Left Input Mixer" },
+ 	{ "Right ADC", NULL, "Right Input Mixer" },
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 4fbc7689339a..a1c04dab6684 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -2754,7 +2754,7 @@ static struct {
+ };
+ 
+ static int fs_ratios[] = {
+-	64, 128, 192, 256, 348, 512, 768, 1024, 1408, 1536
++	64, 128, 192, 256, 384, 512, 768, 1024, 1408, 1536
+ };
+ 
+ static int bclk_divs[] = {
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index b6f88202b8c9..e19a6765bd8a 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3074,11 +3074,16 @@ snd_soc_dapm_new_control(struct snd_soc_dapm_context *dapm,
+ 	}
+ 
+ 	prefix = soc_dapm_prefix(dapm);
+-	if (prefix)
++	if (prefix) {
+ 		w->name = kasprintf(GFP_KERNEL, "%s %s", prefix, widget->name);
+-	else
++		if (widget->sname)
++			w->sname = kasprintf(GFP_KERNEL, "%s %s", prefix,
++					     widget->sname);
++	} else {
+ 		w->name = kasprintf(GFP_KERNEL, "%s", widget->name);
+-
++		if (widget->sname)
++			w->sname = kasprintf(GFP_KERNEL, "%s", widget->sname);
++	}
+ 	if (w->name == NULL) {
+ 		kfree(w);
+ 		return NULL;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 32631a86078b..e21ec5abcc3a 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1117,6 +1117,8 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	switch (chip->usb_id) {
+ 	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
++	case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
++	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ 		return true;
+ 	}
+diff --git a/tools/vm/Makefile b/tools/vm/Makefile
+index ac884b65a072..93aadaf7ff63 100644
+--- a/tools/vm/Makefile
++++ b/tools/vm/Makefile
+@@ -3,7 +3,7 @@
+ TARGETS=page-types slabinfo page_owner_sort
+ 
+ LIB_DIR = ../lib/api
+-LIBS = $(LIB_DIR)/libapikfs.a
++LIBS = $(LIB_DIR)/libapi.a
+ 
+ CC = $(CROSS_COMPILE)gcc
+ CFLAGS = -Wall -Wextra -I../lib/



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-20 17:36 Mike Pagano
From: Mike Pagano @ 2015-06-20 17:36 UTC (permalink / raw
  To: gentoo-commits

commit:     9a98f7941dcc85687150d8fef5885931cc6f841a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 20 17:35:59 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 20 17:35:59 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9a98f794

Add check to saved_root_name for supported filesystem path naming.

 2900_dev-root-proc-mount-fix.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
index 6ea86e2..4cd558e 100644
--- a/2900_dev-root-proc-mount-fix.patch
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -18,7 +18,7 @@
  #ifdef CONFIG_BLOCK
 -	create_dev("/dev/root", ROOT_DEV);
 -	mount_block_root("/dev/root", root_mountflags);
-+	if (saved_root_name[0]) {
++	if (saved_root_name[0] == '/') {
 +		create_dev(saved_root_name, ROOT_DEV);
 +		mount_block_root(saved_root_name, root_mountflags);
 +	} else {



* [gentoo-commits] proj/linux-patches:master commit in: /
@ 2015-06-23 12:48 Mike Pagano
From: Mike Pagano @ 2015-06-23 12:48 UTC (permalink / raw
  To: gentoo-commits

commit:     f2dffc7244ec86ad41fde2ee164a4082c974ade5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 27 17:56:11 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Apr 27 17:56:11 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f2dffc72

Patch to select REGMAP_IRQ for rt5033 mfd driver. See bug #546938.

 0000_README                             |  6 +++++-
 2600_select-REGMAP_IRQ-for-rt5033.patch | 30 ++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index ca06e06..0cdee6d 100644
--- a/0000_README
+++ b/0000_README
@@ -49,7 +49,11 @@ Desc:   Support for namespace user.pax.* on tmpfs.
 
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
-Desc:   Enable link security restrictions by default
+Desc:   Enable link security restrictions by default.
+
+Patch:  2600_select-REGMAP_IRQ-for-rt5033.patch
+From:   http://git.kernel.org/
+Desc:   mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ. See bug #546938.
 
 Patch:  2700_ThinkPad-30-brightness-control-fix.patch
 From:   Seth Forshee <seth.forshee@canonical.com>

diff --git a/2600_select-REGMAP_IRQ-for-rt5033.patch b/2600_select-REGMAP_IRQ-for-rt5033.patch
new file mode 100644
index 0000000..92fb2e0
--- /dev/null
+++ b/2600_select-REGMAP_IRQ-for-rt5033.patch
@@ -0,0 +1,30 @@
+From 23a2a22a3f3f17de094f386a893f7047c10e44a0 Mon Sep 17 00:00:00 2001
+From: Artem Savkov <asavkov@redhat.com>
+Date: Thu, 5 Mar 2015 12:42:27 +0100
+Subject: mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ
+
+Since commit 0b2712585 (linux-next.git) this driver uses regmap_irq and so needs
+to select REGMAP_IRQ.
+
+This fixes the following compilation errors:
+ERROR: "regmap_irq_get_domain" [drivers/mfd/rt5033.ko] undefined!
+ERROR: "regmap_add_irq_chip" [drivers/mfd/rt5033.ko] undefined!
+
+Signed-off-by: Artem Savkov <asavkov@redhat.com>
+Signed-off-by: Lee Jones <lee.jones@linaro.org>
+
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index f8ef77d9a..f49f404 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -680,6 +680,7 @@ config MFD_RT5033
+ 	depends on I2C=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
++	select REGMAP_IRQ
+ 	help
+ 	  This driver provides for the Richtek RT5033 Power Management IC,
+ 	  which includes the I2C driver and the Core APIs. This driver provides
+-- 
+cgit v0.10.2
+
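
The two unresolved symbols in the quoted error log live in drivers/base/regmap/regmap-irq.c, which is only built when REGMAP_IRQ is set, hence the one-line 'select'. A hedged sketch of the kind of call in the rt5033 driver that creates the dependency; the flags and chip descriptor here are placeholders, not the driver's actual values:

	#include <linux/interrupt.h>
	#include <linux/regmap.h>

	static struct regmap_irq_chip_data *irq_data;

	/* regmap_add_irq_chip() is one of the symbols modpost reported. */
	static int example_setup_irq(struct regmap *map, int irq,
				     const struct regmap_irq_chip *chip)
	{
		return regmap_add_irq_chip(map, irq,
					   IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					   0, chip, &irq_data);
	}
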



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-23 14:01 Mike Pagano
From: Mike Pagano @ 2015-06-23 14:01 UTC (permalink / raw
  To: gentoo-commits

commit:     bac443972d6de3c565d4d103ca34dda24d258876
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 23 13:52:30 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 23 13:52:30 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bac44397

Linux patch 4.0.6

 0000_README            |    4 +
 1005_linux-4.0.6.patch | 3730 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3734 insertions(+)

diff --git a/0000_README b/0000_README
index 0f63559..8761846 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-4.0.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.5
 
+Patch:  1005_linux-4.0.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-4.0.6.patch b/1005_linux-4.0.6.patch
new file mode 100644
index 0000000..15519e7
--- /dev/null
+++ b/1005_linux-4.0.6.patch
@@ -0,0 +1,3730 @@
+diff --git a/Makefile b/Makefile
+index 1880cf77059b..af6da040b952 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/am335x-bone-common.dtsi b/arch/arm/boot/dts/am335x-bone-common.dtsi
+index c3255e0c90aa..dbb3f4d2bf84 100644
+--- a/arch/arm/boot/dts/am335x-bone-common.dtsi
++++ b/arch/arm/boot/dts/am335x-bone-common.dtsi
+@@ -223,6 +223,25 @@
+ /include/ "tps65217.dtsi"
+ 
+ &tps {
++	/*
++	 * Configure pmic to enter OFF-state instead of SLEEP-state ("RTC-only
++	 * mode") at poweroff.  Most BeagleBone versions do not support RTC-only
++	 * mode and risk hardware damage if this mode is entered.
++	 *
++	 * For details, see linux-omap mailing list May 2015 thread
++	 *	[PATCH] ARM: dts: am335x-bone* enable pmic-shutdown-controller
++	 * In particular, messages:
++	 *	http://www.spinics.net/lists/linux-omap/msg118585.html
++	 *	http://www.spinics.net/lists/linux-omap/msg118615.html
++	 *
++	 * You can override this later with
++	 *	&tps {  /delete-property/ ti,pmic-shutdown-controller;  }
++	 * if you want to use RTC-only mode and made sure you are not affected
++	 * by the hardware problems. (Tip: double-check by performing a current
++	 * measurement after shutdown: it should be less than 1 mA.)
++	 */
++	ti,pmic-shutdown-controller;
++
+ 	regulators {
+ 		dcdc1_reg: regulator@0 {
+ 			regulator-name = "vdds_dpr";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index 43d54017b779..d0ab012fa379 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -16,7 +16,8 @@
+ #include "mt8173.dtsi"
+ 
+ / {
+-	model = "mediatek,mt8173-evb";
++	model = "MediaTek MT8173 evaluation board";
++	compatible = "mediatek,mt8173-evb", "mediatek,mt8173";
+ 
+ 	aliases {
+ 		serial0 = &uart0;
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index d2bfbc2e8995..be15e52a47a0 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -109,7 +109,7 @@ void __init init_IRQ(void)
+ #endif
+ }
+ 
+-#ifdef DEBUG_STACKOVERFLOW
++#ifdef CONFIG_DEBUG_STACKOVERFLOW
+ static inline void check_stack_overflow(void)
+ {
+ 	unsigned long sp;
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index 838d3a6a5b7d..cea02968a908 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -2101,7 +2101,7 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
+ 		if (vcpu->mmio_needed == 2)
+ 			*gpr = *(int16_t *) run->mmio.data;
+ 		else
+-			*gpr = *(int16_t *) run->mmio.data;
++			*gpr = *(uint16_t *)run->mmio.data;
+ 
+ 		break;
+ 	case 1:
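
The single character class changed above matters because C widens a signed 16-bit value with sign extension and an unsigned one with zero extension; before the fix both load flavours sign-extended. A self-contained illustration:

	#include <stdint.h>

	void extend_example(void)
	{
		uint16_t mmio = 0x8000;	/* halfword with the top bit set */

		int64_t as_signed   = *(int16_t *)&mmio;  /* 0x...ffff8000 */
		int64_t as_unsigned = *(uint16_t *)&mmio; /* 0x...00008000 */

		(void)as_signed; (void)as_unsigned;
	}
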
+diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
+index e20b02e3ae28..e10d10b9e82a 100644
+--- a/arch/mips/ralink/ill_acc.c
++++ b/arch/mips/ralink/ill_acc.c
+@@ -41,7 +41,7 @@ static irqreturn_t ill_acc_irq_handler(int irq, void *_priv)
+ 		addr, (type >> ILL_ACC_OFF_S) & ILL_ACC_OFF_M,
+ 		type & ILL_ACC_LEN_M);
+ 
+-	rt_memc_w32(REG_ILL_ACC_TYPE, REG_ILL_ACC_TYPE);
++	rt_memc_w32(ILL_INT_STATUS, REG_ILL_ACC_TYPE);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
+index db257a58571f..e657b7ba3292 100644
+--- a/arch/x86/include/asm/segment.h
++++ b/arch/x86/include/asm/segment.h
+@@ -200,10 +200,21 @@
+ #define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)
+ 
+ #ifdef __KERNEL__
++
++/*
++ * early_idt_handler_array is an array of entry points referenced in the
++ * early IDT.  For simplicity, it's a real array with one entry point
++ * every nine bytes.  That leaves room for an optional 'push $0' if the
++ * vector has no error code (two bytes), a 'push $vector_number' (two
++ * bytes), and a jump to the common entry code (up to five bytes).
++ */
++#define EARLY_IDT_HANDLER_SIZE 9
++
+ #ifndef __ASSEMBLY__
+-extern const char early_idt_handlers[NUM_EXCEPTION_VECTORS][2+2+5];
++
++extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDLER_SIZE];
+ #ifdef CONFIG_TRACING
+-#define trace_early_idt_handlers early_idt_handlers
++# define trace_early_idt_handler_array early_idt_handler_array
+ #endif
+ 
+ /*
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index c4f8d4659070..b111ab5c4509 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -167,7 +167,7 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
+ 	clear_bss();
+ 
+ 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
+-		set_intr_gate(i, early_idt_handlers[i]);
++		set_intr_gate(i, early_idt_handler_array[i]);
+ 	load_idt((const struct desc_ptr *)&idt_descr);
+ 
+ 	copy_bootdata(__va(real_mode_data));
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index f36bd42d6f0c..30a2aa3782fa 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -477,21 +477,22 @@ is486:
+ __INIT
+ setup_once:
+ 	/*
+-	 * Set up a idt with 256 entries pointing to ignore_int,
+-	 * interrupt gates. It doesn't actually load idt - that needs
+-	 * to be done on each CPU. Interrupts are enabled elsewhere,
+-	 * when we can be relatively sure everything is ok.
++	 * Set up an idt with 256 interrupt gates that push zero if there
++	 * is no error code and then jump to early_idt_handler_common.
++	 * It doesn't actually load the idt - that needs to be done on
++	 * each CPU. Interrupts are enabled elsewhere, when we can be
++	 * relatively sure everything is ok.
+ 	 */
+ 
+ 	movl $idt_table,%edi
+-	movl $early_idt_handlers,%eax
++	movl $early_idt_handler_array,%eax
+ 	movl $NUM_EXCEPTION_VECTORS,%ecx
+ 1:
+ 	movl %eax,(%edi)
+ 	movl %eax,4(%edi)
+ 	/* interrupt gate, dpl=0, present */
+ 	movl $(0x8E000000 + __KERNEL_CS),2(%edi)
+-	addl $9,%eax
++	addl $EARLY_IDT_HANDLER_SIZE,%eax
+ 	addl $8,%edi
+ 	loop 1b
+ 
+@@ -523,26 +524,28 @@ setup_once:
+ 	andl $0,setup_once_ref	/* Once is enough, thanks */
+ 	ret
+ 
+-ENTRY(early_idt_handlers)
++ENTRY(early_idt_handler_array)
+ 	# 36(%esp) %eflags
+ 	# 32(%esp) %cs
+ 	# 28(%esp) %eip
+ 	# 24(%rsp) error code
+ 	i = 0
+ 	.rept NUM_EXCEPTION_VECTORS
+-	.if (EXCEPTION_ERRCODE_MASK >> i) & 1
+-	ASM_NOP2
+-	.else
++	.ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
+ 	pushl $0		# Dummy error code, to make stack frame uniform
+ 	.endif
+ 	pushl $i		# 20(%esp) Vector number
+-	jmp early_idt_handler
++	jmp early_idt_handler_common
+ 	i = i + 1
++	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
+ 	.endr
+-ENDPROC(early_idt_handlers)
++ENDPROC(early_idt_handler_array)
+ 	
+-	/* This is global to keep gas from relaxing the jumps */
+-ENTRY(early_idt_handler)
++early_idt_handler_common:
++	/*
++	 * The stack is the hardware frame, an error code or zero, and the
++	 * vector number.
++	 */
+ 	cld
+ 
+ 	cmpl $2,(%esp)		# X86_TRAP_NMI
+@@ -602,7 +605,7 @@ ex_entry:
+ is_nmi:
+ 	addl $8,%esp		/* drop vector number and error code */
+ 	iret
+-ENDPROC(early_idt_handler)
++ENDPROC(early_idt_handler_common)
+ 
+ /* This is the default interrupt "handler" :-) */
+ 	ALIGN
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 6fd514d9f69a..f8a8406033c3 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -321,26 +321,28 @@ bad_address:
+ 	jmp bad_address
+ 
+ 	__INIT
+-	.globl early_idt_handlers
+-early_idt_handlers:
++ENTRY(early_idt_handler_array)
+ 	# 104(%rsp) %rflags
+ 	#  96(%rsp) %cs
+ 	#  88(%rsp) %rip
+ 	#  80(%rsp) error code
+ 	i = 0
+ 	.rept NUM_EXCEPTION_VECTORS
+-	.if (EXCEPTION_ERRCODE_MASK >> i) & 1
+-	ASM_NOP2
+-	.else
++	.ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
+ 	pushq $0		# Dummy error code, to make stack frame uniform
+ 	.endif
+ 	pushq $i		# 72(%rsp) Vector number
+-	jmp early_idt_handler
++	jmp early_idt_handler_common
+ 	i = i + 1
++	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
+ 	.endr
++ENDPROC(early_idt_handler_array)
+ 
+-/* This is global to keep gas from relaxing the jumps */
+-ENTRY(early_idt_handler)
++early_idt_handler_common:
++	/*
++	 * The stack is the hardware frame, an error code or zero, and the
++	 * vector number.
++	 */
+ 	cld
+ 
+ 	cmpl $2,(%rsp)		# X86_TRAP_NMI
+@@ -412,7 +414,7 @@ ENTRY(early_idt_handler)
+ is_nmi:
+ 	addq $16,%rsp		# drop vector number and error code
+ 	INTERRUPT_RETURN
+-ENDPROC(early_idt_handler)
++ENDPROC(early_idt_handler_common)
+ 
+ 	__INITDATA
+ 
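
For reference, the byte budget behind EARLY_IDT_HANDLER_SIZE in the segment.h hunk, restated (slots for vectors with a CPU-pushed error code simply skip the first push):

	/*
	 *   push $0        2 bytes  - dummy error code, when needed
	 *   push $vector   2 bytes
	 *   jmp  common    up to 5 bytes; .fill pads every slot to 9
	 */
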
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 987514396c1e..ddeff4844a10 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -559,6 +559,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 				if (is_ereg(dst_reg))
+ 					EMIT1(0x41);
+ 				EMIT3(0xC1, add_1reg(0xC8, dst_reg), 8);
++
++				/* emit 'movzwl eax, ax' */
++				if (is_ereg(dst_reg))
++					EMIT3(0x45, 0x0F, 0xB7);
++				else
++					EMIT2(0x0F, 0xB7);
++				EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
+ 				break;
+ 			case 32:
+ 				/* emit 'bswap eax' to swap lower 4 bytes */
+@@ -577,6 +584,27 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			break;
+ 
+ 		case BPF_ALU | BPF_END | BPF_FROM_LE:
++			switch (imm32) {
++			case 16:
++				/* emit 'movzwl eax, ax' to zero extend 16-bit
++				 * into 64 bit
++				 */
++				if (is_ereg(dst_reg))
++					EMIT3(0x45, 0x0F, 0xB7);
++				else
++					EMIT2(0x0F, 0xB7);
++				EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
++				break;
++			case 32:
++				/* emit 'mov eax, eax' to clear upper 32-bits */
++				if (is_ereg(dst_reg))
++					EMIT1(0x45);
++				EMIT2(0x89, add_2reg(0xC0, dst_reg, dst_reg));
++				break;
++			case 64:
++				/* nop */
++				break;
++			}
+ 			break;
+ 
+ 			/* ST: *(u8*)(dst_reg + off) = imm */
+@@ -938,7 +966,12 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
+ 	}
+ 	ctx.cleanup_addr = proglen;
+ 
+-	for (pass = 0; pass < 10; pass++) {
++	/* JITed image shrinks with every pass and the loop iterates
++	 * until the image stops shrinking. Very large bpf programs
++	 * may converge on the last pass. In such case do one more
++	 * pass to emit the final image
++	 */
++	for (pass = 0; pass < 10 || image; pass++) {
+ 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+ 		if (proglen <= 0) {
+ 			image = NULL;
+diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
+index 7b9be9822724..8533c96bab13 100644
+--- a/arch/x86/vdso/Makefile
++++ b/arch/x86/vdso/Makefile
+@@ -51,7 +51,7 @@ VDSO_LDFLAGS_vdso.lds = -m64 -Wl,-soname=linux-vdso.so.1 \
+ $(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
+ 	$(call if_changed,vdso)
+ 
+-HOST_EXTRACFLAGS += -I$(srctree)/tools/include
++HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/x86/include/uapi
+ hostprogs-y			+= vdso2c
+ 
+ quiet_cmd_vdso2c = VDSO2C  $@
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 5c39703e644f..b2e73e1ef8a4 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1589,6 +1589,7 @@ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 	return NOTIFY_OK;
+ }
+ 
++/* hctx->ctxs will be freed in queue's release handler */
+ static void blk_mq_exit_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+@@ -1607,7 +1608,6 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ 
+ 	blk_mq_unregister_cpu_notifier(&hctx->cpu_notifier);
+ 	blk_free_flush_queue(hctx->fq);
+-	kfree(hctx->ctxs);
+ 	blk_mq_free_bitmap(&hctx->ctx_map);
+ }
+ 
+@@ -1873,8 +1873,12 @@ void blk_mq_release(struct request_queue *q)
+ 	unsigned int i;
+ 
+ 	/* hctx kobj stays in hctx */
+-	queue_for_each_hw_ctx(q, hctx, i)
++	queue_for_each_hw_ctx(q, hctx, i) {
++		if (!hctx)
++			continue;
++		kfree(hctx->ctxs);
+ 		kfree(hctx);
++	}
+ 
+ 	kfree(q->queue_hw_ctx);
+ 
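
The blk-mq hunks above cure a use-after-free by deferring kfree(hctx->ctxs) from the exit path to the queue's release handler. The general lifetime rule, sketched with placeholder types rather than blk-mq's real structures:

	#include <linux/kobject.h>
	#include <linux/slab.h>

	struct holder {
		struct kobject kobj;
		void *ctxs;	/* reachable until the last reference drops */
	};

	static void holder_release(struct kobject *kobj)
	{
		struct holder *h = container_of(kobj, struct holder, kobj);

		kfree(h->ctxs);	/* safe only here, after the final put */
		kfree(h);
	}
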
+diff --git a/block/genhd.c b/block/genhd.c
+index 0a536dc05f3b..ea982eadaf63 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -422,9 +422,9 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+ 	/* allocate ext devt */
+ 	idr_preload(GFP_KERNEL);
+ 
+-	spin_lock(&ext_devt_lock);
++	spin_lock_bh(&ext_devt_lock);
+ 	idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
+-	spin_unlock(&ext_devt_lock);
++	spin_unlock_bh(&ext_devt_lock);
+ 
+ 	idr_preload_end();
+ 	if (idx < 0)
+@@ -449,9 +449,9 @@ void blk_free_devt(dev_t devt)
+ 		return;
+ 
+ 	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
+-		spin_lock(&ext_devt_lock);
++		spin_lock_bh(&ext_devt_lock);
+ 		idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
+-		spin_unlock(&ext_devt_lock);
++		spin_unlock_bh(&ext_devt_lock);
+ 	}
+ }
+ 
+@@ -653,7 +653,6 @@ void del_gendisk(struct gendisk *disk)
+ 	disk->flags &= ~GENHD_FL_UP;
+ 
+ 	sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
+-	bdi_unregister(&disk->queue->backing_dev_info);
+ 	blk_unregister_queue(disk);
+ 	blk_unregister_region(disk_devt(disk), disk->minors);
+ 
+@@ -691,13 +690,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
+ 	} else {
+ 		struct hd_struct *part;
+ 
+-		spin_lock(&ext_devt_lock);
++		spin_lock_bh(&ext_devt_lock);
+ 		part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
+ 		if (part && get_disk(part_to_disk(part))) {
+ 			*partno = part->partno;
+ 			disk = part_to_disk(part);
+ 		}
+-		spin_unlock(&ext_devt_lock);
++		spin_unlock_bh(&ext_devt_lock);
+ 	}
+ 
+ 	return disk;
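
The genhd hunks switch ext_devt_lock to the _bh variants, the standard fix when a spinlock can also be taken from bottom-half context (the softirq-side user is not visible in this hunk): a softirq that fires while its own CPU holds the plain lock would spin on it forever. Sketch with invented names:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	void process_context_path(void)		/* cf. blk_alloc_devt() */
	{
		spin_lock_bh(&example_lock);	/* keep BHs off while held */
		/* ... idr_alloc()/idr_remove()/idr_find() ... */
		spin_unlock_bh(&example_lock);
	}

	void bottom_half_path(void)	/* already runs in BH context */
	{
		spin_lock(&example_lock);
		/* ... */
		spin_unlock(&example_lock);
	}
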
+diff --git a/drivers/ata/ahci_mvebu.c b/drivers/ata/ahci_mvebu.c
+index 23716dd8a7ec..5928d0746a27 100644
+--- a/drivers/ata/ahci_mvebu.c
++++ b/drivers/ata/ahci_mvebu.c
+@@ -45,7 +45,7 @@ static void ahci_mvebu_mbus_config(struct ahci_host_priv *hpriv,
+ 		writel((cs->mbus_attr << 8) |
+ 		       (dram->mbus_dram_target_id << 4) | 1,
+ 		       hpriv->mmio + AHCI_WINDOW_CTRL(i));
+-		writel(cs->base, hpriv->mmio + AHCI_WINDOW_BASE(i));
++		writel(cs->base >> 16, hpriv->mmio + AHCI_WINDOW_BASE(i));
+ 		writel(((cs->size - 1) & 0xffff0000),
+ 		       hpriv->mmio + AHCI_WINDOW_SIZE(i));
+ 	}
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index 80a80548ad0a..27245957eee3 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -1053,7 +1053,7 @@ static struct of_device_id octeon_cf_match[] = {
+ 	},
+ 	{},
+ };
+-MODULE_DEVICE_TABLE(of, octeon_i2c_match);
++MODULE_DEVICE_TABLE(of, octeon_cf_match);
+ 
+ static struct platform_driver octeon_cf_driver = {
+ 	.probe		= octeon_cf_probe,
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 9c2ba1c97c42..df0c66cb7ad3 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -179,7 +179,7 @@ static int detect_cache_attributes(unsigned int cpu)
+ {
+ 	int ret;
+ 
+-	if (init_cache_level(cpu))
++	if (init_cache_level(cpu) || !cache_leaves(cpu))
+ 		return -ENOENT;
+ 
+ 	per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
+diff --git a/drivers/bus/mvebu-mbus.c b/drivers/bus/mvebu-mbus.c
+index fb9ec6221730..6f047dcb94c2 100644
+--- a/drivers/bus/mvebu-mbus.c
++++ b/drivers/bus/mvebu-mbus.c
+@@ -58,7 +58,6 @@
+ #include <linux/debugfs.h>
+ #include <linux/log2.h>
+ #include <linux/syscore_ops.h>
+-#include <linux/memblock.h>
+ 
+ /*
+  * DDR target is the same on all platforms.
+@@ -70,6 +69,7 @@
+  */
+ #define WIN_CTRL_OFF		0x0000
+ #define   WIN_CTRL_ENABLE       BIT(0)
++/* Only on HW I/O coherency capable platforms */
+ #define   WIN_CTRL_SYNCBARRIER  BIT(1)
+ #define   WIN_CTRL_TGT_MASK     0xf0
+ #define   WIN_CTRL_TGT_SHIFT    4
+@@ -102,9 +102,7 @@
+ 
+ /* Relative to mbusbridge_base */
+ #define MBUS_BRIDGE_CTRL_OFF	0x0
+-#define  MBUS_BRIDGE_SIZE_MASK  0xffff0000
+ #define MBUS_BRIDGE_BASE_OFF	0x4
+-#define  MBUS_BRIDGE_BASE_MASK  0xffff0000
+ 
+ /* Maximum number of windows, for all known platforms */
+ #define MBUS_WINS_MAX           20
+@@ -323,8 +321,9 @@ static int mvebu_mbus_setup_window(struct mvebu_mbus_state *mbus,
+ 	ctrl = ((size - 1) & WIN_CTRL_SIZE_MASK) |
+ 		(attr << WIN_CTRL_ATTR_SHIFT)    |
+ 		(target << WIN_CTRL_TGT_SHIFT)   |
+-		WIN_CTRL_SYNCBARRIER             |
+ 		WIN_CTRL_ENABLE;
++	if (mbus->hw_io_coherency)
++		ctrl |= WIN_CTRL_SYNCBARRIER;
+ 
+ 	writel(base & WIN_BASE_LOW, addr + WIN_BASE_OFF);
+ 	writel(ctrl, addr + WIN_CTRL_OFF);
+@@ -577,106 +576,36 @@ static unsigned int armada_xp_mbus_win_remap_offset(int win)
+ 		return MVEBU_MBUS_NO_REMAP;
+ }
+ 
+-/*
+- * Use the memblock information to find the MBus bridge hole in the
+- * physical address space.
+- */
+-static void __init
+-mvebu_mbus_find_bridge_hole(uint64_t *start, uint64_t *end)
+-{
+-	struct memblock_region *r;
+-	uint64_t s = 0;
+-
+-	for_each_memblock(memory, r) {
+-		/*
+-		 * This part of the memory is above 4 GB, so we don't
+-		 * care for the MBus bridge hole.
+-		 */
+-		if (r->base >= 0x100000000)
+-			continue;
+-
+-		/*
+-		 * The MBus bridge hole is at the end of the RAM under
+-		 * the 4 GB limit.
+-		 */
+-		if (r->base + r->size > s)
+-			s = r->base + r->size;
+-	}
+-
+-	*start = s;
+-	*end = 0x100000000;
+-}
+-
+ static void __init
+ mvebu_mbus_default_setup_cpu_target(struct mvebu_mbus_state *mbus)
+ {
+ 	int i;
+ 	int cs;
+-	uint64_t mbus_bridge_base, mbus_bridge_end;
+ 
+ 	mvebu_mbus_dram_info.mbus_dram_target_id = TARGET_DDR;
+ 
+-	mvebu_mbus_find_bridge_hole(&mbus_bridge_base, &mbus_bridge_end);
+-
+ 	for (i = 0, cs = 0; i < 4; i++) {
+-		u64 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
+-		u64 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
+-		u64 end;
+-		struct mbus_dram_window *w;
+-
+-		/* Ignore entries that are not enabled */
+-		if (!(size & DDR_SIZE_ENABLED))
+-			continue;
+-
+-		/*
+-		 * Ignore entries whose base address is above 2^32,
+-		 * since devices cannot DMA to such high addresses
+-		 */
+-		if (base & DDR_BASE_CS_HIGH_MASK)
+-			continue;
+-
+-		base = base & DDR_BASE_CS_LOW_MASK;
+-		size = (size | ~DDR_SIZE_MASK) + 1;
+-		end = base + size;
+-
+-		/*
+-		 * Adjust base/size of the current CS to make sure it
+-		 * doesn't overlap with the MBus bridge hole. This is
+-		 * particularly important for devices that do DMA from
+-		 * DRAM to a SRAM mapped in a MBus window, such as the
+-		 * CESA cryptographic engine.
+-		 */
++		u32 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
++		u32 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
+ 
+ 		/*
+-		 * The CS is fully enclosed inside the MBus bridge
+-		 * area, so ignore it.
++		 * We only take care of entries for which the chip
++		 * select is enabled, and that don't have high base
++		 * address bits set (devices can only access the first
++		 * 32 bits of the memory).
+ 		 */
+-		if (base >= mbus_bridge_base && end <= mbus_bridge_end)
+-			continue;
++		if ((size & DDR_SIZE_ENABLED) &&
++		    !(base & DDR_BASE_CS_HIGH_MASK)) {
++			struct mbus_dram_window *w;
+ 
+-		/*
+-		 * Beginning of CS overlaps with end of MBus, raise CS
+-		 * base address, and shrink its size.
+-		 */
+-		if (base >= mbus_bridge_base && end > mbus_bridge_end) {
+-			size -= mbus_bridge_end - base;
+-			base = mbus_bridge_end;
++			w = &mvebu_mbus_dram_info.cs[cs++];
++			w->cs_index = i;
++			w->mbus_attr = 0xf & ~(1 << i);
++			if (mbus->hw_io_coherency)
++				w->mbus_attr |= ATTR_HW_COHERENCY;
++			w->base = base & DDR_BASE_CS_LOW_MASK;
++			w->size = (size | ~DDR_SIZE_MASK) + 1;
+ 		}
+-
+-		/*
+-		 * End of CS overlaps with beginning of MBus, shrink
+-		 * CS size.
+-		 */
+-		if (base < mbus_bridge_base && end > mbus_bridge_base)
+-			size -= end - mbus_bridge_base;
+-
+-		w = &mvebu_mbus_dram_info.cs[cs++];
+-		w->cs_index = i;
+-		w->mbus_attr = 0xf & ~(1 << i);
+-		if (mbus->hw_io_coherency)
+-			w->mbus_attr |= ATTR_HW_COHERENCY;
+-		w->base = base;
+-		w->size = size;
+ 	}
+ 	mvebu_mbus_dram_info.num_cs = cs;
+ }
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index d9891d3461f6..7992164ea9ec 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -174,6 +174,8 @@
+ #define AT_XDMAC_MBR_UBC_NDV3		(0x3 << 27)	/* Next Descriptor View 3 */
+ 
+ #define AT_XDMAC_MAX_CHAN	0x20
++#define AT_XDMAC_MAX_CSIZE	16	/* 16 data */
++#define AT_XDMAC_MAX_DWIDTH	8	/* 64 bits */
+ 
+ #define AT_XDMAC_DMA_BUSWIDTHS\
+ 	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
+@@ -192,20 +194,17 @@ struct at_xdmac_chan {
+ 	struct dma_chan			chan;
+ 	void __iomem			*ch_regs;
+ 	u32				mask;		/* Channel Mask */
+-	u32				cfg[2];		/* Channel Configuration Register */
+-	#define	AT_XDMAC_DEV_TO_MEM_CFG	0		/* Predifined dev to mem channel conf */
+-	#define	AT_XDMAC_MEM_TO_DEV_CFG	1		/* Predifined mem to dev channel conf */
++	u32				cfg;		/* Channel Configuration Register */
+ 	u8				perid;		/* Peripheral ID */
+ 	u8				perif;		/* Peripheral Interface */
+ 	u8				memif;		/* Memory Interface */
+-	u32				per_src_addr;
+-	u32				per_dst_addr;
+ 	u32				save_cc;
+ 	u32				save_cim;
+ 	u32				save_cnda;
+ 	u32				save_cndc;
+ 	unsigned long			status;
+ 	struct tasklet_struct		tasklet;
++	struct dma_slave_config		sconfig;
+ 
+ 	spinlock_t			lock;
+ 
+@@ -415,8 +414,9 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	struct at_xdmac_desc	*desc = txd_to_at_desc(tx);
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(tx->chan);
+ 	dma_cookie_t		cookie;
++	unsigned long		irqflags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, irqflags);
+ 	cookie = dma_cookie_assign(tx);
+ 
+ 	dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
+@@ -425,7 +425,7 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	if (list_is_singular(&atchan->xfers_list))
+ 		at_xdmac_start_xfer(atchan, desc);
+ 
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 	return cookie;
+ }
+ 
+@@ -494,61 +494,94 @@ static struct dma_chan *at_xdmac_xlate(struct of_phandle_args *dma_spec,
+ 	return chan;
+ }
+ 
++static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
++				      enum dma_transfer_direction direction)
++{
++	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
++	int			csize, dwidth;
++
++	if (direction == DMA_DEV_TO_MEM) {
++		atchan->cfg =
++			AT91_XDMAC_DT_PERID(atchan->perid)
++			| AT_XDMAC_CC_DAM_INCREMENTED_AM
++			| AT_XDMAC_CC_SAM_FIXED_AM
++			| AT_XDMAC_CC_DIF(atchan->memif)
++			| AT_XDMAC_CC_SIF(atchan->perif)
++			| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
++			| AT_XDMAC_CC_DSYNC_PER2MEM
++			| AT_XDMAC_CC_MBSIZE_SIXTEEN
++			| AT_XDMAC_CC_TYPE_PER_TRAN;
++		csize = ffs(atchan->sconfig.src_maxburst) - 1;
++		if (csize < 0) {
++			dev_err(chan2dev(chan), "invalid src maxburst value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
++		dwidth = ffs(atchan->sconfig.src_addr_width) - 1;
++		if (dwidth < 0) {
++			dev_err(chan2dev(chan), "invalid src addr width value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
++	} else if (direction == DMA_MEM_TO_DEV) {
++		atchan->cfg =
++			AT91_XDMAC_DT_PERID(atchan->perid)
++			| AT_XDMAC_CC_DAM_FIXED_AM
++			| AT_XDMAC_CC_SAM_INCREMENTED_AM
++			| AT_XDMAC_CC_DIF(atchan->perif)
++			| AT_XDMAC_CC_SIF(atchan->memif)
++			| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
++			| AT_XDMAC_CC_DSYNC_MEM2PER
++			| AT_XDMAC_CC_MBSIZE_SIXTEEN
++			| AT_XDMAC_CC_TYPE_PER_TRAN;
++		csize = ffs(atchan->sconfig.dst_maxburst) - 1;
++		if (csize < 0) {
++			dev_err(chan2dev(chan), "invalid dst maxburst value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
++		dwidth = ffs(atchan->sconfig.dst_addr_width) - 1;
++		if (dwidth < 0) {
++			dev_err(chan2dev(chan), "invalid dst addr width value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
++	}
++
++	dev_dbg(chan2dev(chan),	"%s: cfg=0x%08x\n", __func__, atchan->cfg);
++
++	return 0;
++}
++
++/*
++ * Only check that maxburst and addr width values are supported by the
++ * controller but not that the configuration is good to perform the
++ * transfer since we don't know the direction at this stage.
++ */
++static int at_xdmac_check_slave_config(struct dma_slave_config *sconfig)
++{
++	if ((sconfig->src_maxburst > AT_XDMAC_MAX_CSIZE)
++	    || (sconfig->dst_maxburst > AT_XDMAC_MAX_CSIZE))
++		return -EINVAL;
++
++	if ((sconfig->src_addr_width > AT_XDMAC_MAX_DWIDTH)
++	    || (sconfig->dst_addr_width > AT_XDMAC_MAX_DWIDTH))
++		return -EINVAL;
++
++	return 0;
++}
++
+ static int at_xdmac_set_slave_config(struct dma_chan *chan,
+ 				      struct dma_slave_config *sconfig)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+-	u8 dwidth;
+-	int csize;
+ 
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] =
+-		AT91_XDMAC_DT_PERID(atchan->perid)
+-		| AT_XDMAC_CC_DAM_INCREMENTED_AM
+-		| AT_XDMAC_CC_SAM_FIXED_AM
+-		| AT_XDMAC_CC_DIF(atchan->memif)
+-		| AT_XDMAC_CC_SIF(atchan->perif)
+-		| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+-		| AT_XDMAC_CC_DSYNC_PER2MEM
+-		| AT_XDMAC_CC_MBSIZE_SIXTEEN
+-		| AT_XDMAC_CC_TYPE_PER_TRAN;
+-	csize = at_xdmac_csize(sconfig->src_maxburst);
+-	if (csize < 0) {
+-		dev_err(chan2dev(chan), "invalid src maxburst value\n");
++	if (at_xdmac_check_slave_config(sconfig)) {
++		dev_err(chan2dev(chan), "invalid slave configuration\n");
+ 		return -EINVAL;
+ 	}
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_CSIZE(csize);
+-	dwidth = ffs(sconfig->src_addr_width) - 1;
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
+-
+-
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] =
+-		AT91_XDMAC_DT_PERID(atchan->perid)
+-		| AT_XDMAC_CC_DAM_FIXED_AM
+-		| AT_XDMAC_CC_SAM_INCREMENTED_AM
+-		| AT_XDMAC_CC_DIF(atchan->perif)
+-		| AT_XDMAC_CC_SIF(atchan->memif)
+-		| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+-		| AT_XDMAC_CC_DSYNC_MEM2PER
+-		| AT_XDMAC_CC_MBSIZE_SIXTEEN
+-		| AT_XDMAC_CC_TYPE_PER_TRAN;
+-	csize = at_xdmac_csize(sconfig->dst_maxburst);
+-	if (csize < 0) {
+-		dev_err(chan2dev(chan), "invalid src maxburst value\n");
+-		return -EINVAL;
+-	}
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_CSIZE(csize);
+-	dwidth = ffs(sconfig->dst_addr_width) - 1;
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
+-
+-	/* Src and dst addr are needed to configure the link list descriptor. */
+-	atchan->per_src_addr = sconfig->src_addr;
+-	atchan->per_dst_addr = sconfig->dst_addr;
+ 
+-	dev_dbg(chan2dev(chan),
+-		"%s: cfg[dev2mem]=0x%08x, cfg[mem2dev]=0x%08x, per_src_addr=0x%08x, per_dst_addr=0x%08x\n",
+-		__func__, atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG],
+-		atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG],
+-		atchan->per_src_addr, atchan->per_dst_addr);
++	memcpy(&atchan->sconfig, sconfig, sizeof(atchan->sconfig));
+ 
+ 	return 0;
+ }
+@@ -563,6 +596,8 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 	struct scatterlist	*sg;
+ 	int			i;
+ 	unsigned int		xfer_size = 0;
++	unsigned long		irqflags;
++	struct dma_async_tx_descriptor	*ret = NULL;
+ 
+ 	if (!sgl)
+ 		return NULL;
+@@ -578,7 +613,10 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		 flags);
+ 
+ 	/* Protect dma_sconfig field that can be modified by set_slave_conf. */
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, irqflags);
++
++	if (at_xdmac_compute_chan_conf(chan, direction))
++		goto spin_unlock;
+ 
+ 	/* Prepare descriptors. */
+ 	for_each_sg(sgl, sg, sg_len, i) {
+@@ -589,8 +627,7 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		mem = sg_dma_address(sg);
+ 		if (unlikely(!len)) {
+ 			dev_err(chan2dev(chan), "sg data length is zero\n");
+-			spin_unlock_bh(&atchan->lock);
+-			return NULL;
++			goto spin_unlock;
+ 		}
+ 		dev_dbg(chan2dev(chan), "%s: * sg%d len=%u, mem=0x%08x\n",
+ 			 __func__, i, len, mem);
+@@ -600,20 +637,18 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+ 				list_splice_init(&first->descs_list, &atchan->free_descs_list);
+-			spin_unlock_bh(&atchan->lock);
+-			return NULL;
++			goto spin_unlock;
+ 		}
+ 
+ 		/* Linked list descriptor setup. */
+ 		if (direction == DMA_DEV_TO_MEM) {
+-			desc->lld.mbr_sa = atchan->per_src_addr;
++			desc->lld.mbr_sa = atchan->sconfig.src_addr;
+ 			desc->lld.mbr_da = mem;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
+ 		} else {
+ 			desc->lld.mbr_sa = mem;
+-			desc->lld.mbr_da = atchan->per_dst_addr;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
++			desc->lld.mbr_da = atchan->sconfig.dst_addr;
+ 		}
++		desc->lld.mbr_cfg = atchan->cfg;
+ 		dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
+ 		fixed_dwidth = IS_ALIGNED(len, 1 << dwidth)
+ 			       ? at_xdmac_get_dwidth(desc->lld.mbr_cfg)
+@@ -645,13 +680,15 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		xfer_size += len;
+ 	}
+ 
+-	spin_unlock_bh(&atchan->lock);
+ 
+ 	first->tx_dma_desc.flags = flags;
+ 	first->xfer_size = xfer_size;
+ 	first->direction = direction;
++	ret = &first->tx_dma_desc;
+ 
+-	return &first->tx_dma_desc;
++spin_unlock:
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
++	return ret;
+ }
+ 
+ static struct dma_async_tx_descriptor *
+@@ -664,6 +701,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 	struct at_xdmac_desc	*first = NULL, *prev = NULL;
+ 	unsigned int		periods = buf_len / period_len;
+ 	int			i;
++	unsigned long		irqflags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s: buf_addr=%pad, buf_len=%zd, period_len=%zd, dir=%s, flags=0x%lx\n",
+ 		__func__, &buf_addr, buf_len, period_len,
+@@ -679,32 +717,34 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 		return NULL;
+ 	}
+ 
++	if (at_xdmac_compute_chan_conf(chan, direction))
++		return NULL;
++
+ 	for (i = 0; i < periods; i++) {
+ 		struct at_xdmac_desc	*desc = NULL;
+ 
+-		spin_lock_bh(&atchan->lock);
++		spin_lock_irqsave(&atchan->lock, irqflags);
+ 		desc = at_xdmac_get_desc(atchan);
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+ 				list_splice_init(&first->descs_list, &atchan->free_descs_list);
+-			spin_unlock_bh(&atchan->lock);
++			spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 			return NULL;
+ 		}
+-		spin_unlock_bh(&atchan->lock);
++		spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 		dev_dbg(chan2dev(chan),
+ 			"%s: desc=0x%p, tx_dma_desc.phys=%pad\n",
+ 			__func__, desc, &desc->tx_dma_desc.phys);
+ 
+ 		if (direction == DMA_DEV_TO_MEM) {
+-			desc->lld.mbr_sa = atchan->per_src_addr;
++			desc->lld.mbr_sa = atchan->sconfig.src_addr;
+ 			desc->lld.mbr_da = buf_addr + i * period_len;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
+ 		} else {
+ 			desc->lld.mbr_sa = buf_addr + i * period_len;
+-			desc->lld.mbr_da = atchan->per_dst_addr;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
++			desc->lld.mbr_da = atchan->sconfig.dst_addr;
+ 		}
++		desc->lld.mbr_cfg = atchan->cfg;
+ 		desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
+ 			| AT_XDMAC_MBR_UBC_NDEN
+ 			| AT_XDMAC_MBR_UBC_NSEN
+@@ -766,6 +806,7 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 					| AT_XDMAC_CC_SIF(0)
+ 					| AT_XDMAC_CC_MBSIZE_SIXTEEN
+ 					| AT_XDMAC_CC_TYPE_MEM_TRAN;
++	unsigned long		irqflags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, len=%zd, flags=0x%lx\n",
+ 		__func__, &src, &dest, len, flags);
+@@ -798,9 +839,9 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 
+ 		dev_dbg(chan2dev(chan), "%s: remaining_size=%zu\n", __func__, remaining_size);
+ 
+-		spin_lock_bh(&atchan->lock);
++		spin_lock_irqsave(&atchan->lock, irqflags);
+ 		desc = at_xdmac_get_desc(atchan);
+-		spin_unlock_bh(&atchan->lock);
++		spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+@@ -886,6 +927,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	int			residue;
+ 	u32			cur_nda, mask, value;
+ 	u8			dwidth = 0;
++	unsigned long		flags;
+ 
+ 	ret = dma_cookie_status(chan, cookie, txstate);
+ 	if (ret == DMA_COMPLETE)
+@@ -894,7 +936,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	if (!txstate)
+ 		return ret;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc, xfer_node);
+ 
+@@ -904,8 +946,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	 */
+ 	if (!desc->active_xfer) {
+ 		dma_set_residue(txstate, desc->xfer_size);
+-		spin_unlock_bh(&atchan->lock);
+-		return ret;
++		goto spin_unlock;
+ 	}
+ 
+ 	residue = desc->xfer_size;
+@@ -936,14 +977,14 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	}
+ 	residue += at_xdmac_chan_read(atchan, AT_XDMAC_CUBC) << dwidth;
+ 
+-	spin_unlock_bh(&atchan->lock);
+-
+ 	dma_set_residue(txstate, residue);
+ 
+ 	dev_dbg(chan2dev(chan),
+ 		 "%s: desc=0x%p, tx_dma_desc.phys=%pad, tx_status=%d, cookie=%d, residue=%d\n",
+ 		 __func__, desc, &desc->tx_dma_desc.phys, ret, cookie, residue);
+ 
++spin_unlock:
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 	return ret;
+ }
+ 
+@@ -964,8 +1005,9 @@ static void at_xdmac_remove_xfer(struct at_xdmac_chan *atchan,
+ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
+ {
+ 	struct at_xdmac_desc	*desc;
++	unsigned long		flags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	/*
+ 	 * If channel is enabled, do nothing, advance_work will be triggered
+@@ -980,7 +1022,7 @@ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
+ 			at_xdmac_start_xfer(atchan, desc);
+ 	}
+ 
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
+@@ -1116,12 +1158,13 @@ static int at_xdmac_device_config(struct dma_chan *chan,
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	int ret;
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	ret = at_xdmac_set_slave_config(chan, config);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return ret;
+ }
+@@ -1130,18 +1173,19 @@ static int at_xdmac_device_pause(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+ 	if (test_and_set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status))
+ 		return 0;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
+ 	while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
+ 	       & (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
+ 		cpu_relax();
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1150,16 +1194,19 @@ static int at_xdmac_device_resume(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
+-	if (!at_xdmac_chan_is_paused(atchan))
++	spin_lock_irqsave(&atchan->lock, flags);
++	if (!at_xdmac_chan_is_paused(atchan)) {
++		spin_unlock_irqrestore(&atchan->lock, flags);
+ 		return 0;
++	}
+ 
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
+ 	clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1169,10 +1216,11 @@ static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+ 	struct at_xdmac_desc	*desc, *_desc;
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
+ 	while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
+ 		cpu_relax();
+@@ -1182,7 +1230,7 @@ static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+ 		at_xdmac_remove_xfer(atchan, desc);
+ 
+ 	clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1192,8 +1240,9 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac_desc	*desc;
+ 	int			i;
++	unsigned long		flags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	if (at_xdmac_chan_is_enabled(atchan)) {
+ 		dev_err(chan2dev(chan),
+@@ -1224,7 +1273,7 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
+ 	dev_dbg(chan2dev(chan), "%s: allocated %d descriptors\n", __func__, i);
+ 
+ spin_unlock:
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 	return i;
+ }
+ 
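Note: every at_xdmac hunk above makes the same conversion, from spin_lock_bh() to
spin_lock_irqsave(), so the channel lock can also be taken from the controller's
hard-IRQ paths without deadlocking. A minimal kernel-style sketch of the idiom
("mychan" is a placeholder, not part of the patch):

	static void mychan_do_work(struct mychan *chan)
	{
		unsigned long flags;

		/* Disables local interrupts and saves their prior state,
		 * so the same lock may safely be taken in IRQ context. */
		spin_lock_irqsave(&chan->lock, flags);
		/* ... critical section ... */
		spin_unlock_irqrestore(&chan->lock, flags);
	}
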
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index ac336a961dea..8e70e580c98a 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -505,7 +505,11 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
+ 	caps->directions = device->directions;
+ 	caps->residue_granularity = device->residue_granularity;
+ 
+-	caps->cmd_pause = !!device->device_pause;
++	/*
++	 * Some devices implement only pause (e.g. to read the residue) but
++	 * no resume. However, cmd_pause is advertised as pause AND resume.
++	 */
++	caps->cmd_pause = !!(device->device_pause && device->device_resume);
+ 	caps->cmd_terminate = !!device->device_terminate_all;
+ 
+ 	return 0;
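Note: the dmaengine hunk above stops advertising cmd_pause unless the driver
provides both a pause and a resume callback, since clients treat the flag as
"pause and resume both work". A hedged consumer-side sketch
(dmaengine_pause()/dmaengine_resume() are the standard client wrappers; "chan"
is assumed to be an already-requested channel):

	struct dma_slave_caps caps;

	if (!dma_get_slave_caps(chan, &caps) && caps.cmd_pause) {
		dmaengine_pause(chan);	/* safe: resume is known to exist */
		/* ... read the residue, etc. ... */
		dmaengine_resume(chan);
	}
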
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 0e1f56772855..a2771a8d4377 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2127,6 +2127,7 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 	struct pl330_dmac *pl330 = pch->dmac;
+ 	LIST_HEAD(list);
+ 
++	pm_runtime_get_sync(pl330->ddma.dev);
+ 	spin_lock_irqsave(&pch->lock, flags);
+ 	spin_lock(&pl330->lock);
+ 	_stop(pch->thread);
+@@ -2151,6 +2152,8 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 	list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
+ 	list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
+ 	spin_unlock_irqrestore(&pch->lock, flags);
++	pm_runtime_mark_last_busy(pl330->ddma.dev);
++	pm_runtime_put_autosuspend(pl330->ddma.dev);
+ 
+ 	return 0;
+ }
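Note: the pl330 hunk brackets terminate_all with a runtime-PM reference so the
controller is guaranteed awake while its registers are poked, then re-arms
autosuspend on the way out. The general idiom, as a kernel-style sketch ("dev"
is a placeholder struct device pointer):

	pm_runtime_get_sync(dev);	/* resume the device if suspended */
	/* ... access hardware registers ... */
	pm_runtime_mark_last_busy(dev);	/* restart the autosuspend timer */
	pm_runtime_put_autosuspend(dev);	/* drop the reference */
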
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 406624a0b201..340e21918f33 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -684,8 +684,6 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 			dev->node_props.cpu_core_id_base);
+ 	sysfs_show_32bit_prop(buffer, "simd_id_base",
+ 			dev->node_props.simd_id_base);
+-	sysfs_show_32bit_prop(buffer, "capability",
+-			dev->node_props.capability);
+ 	sysfs_show_32bit_prop(buffer, "max_waves_per_simd",
+ 			dev->node_props.max_waves_per_simd);
+ 	sysfs_show_32bit_prop(buffer, "lds_size_in_kb",
+@@ -735,6 +733,8 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 				kfd2kgd->get_fw_version(
+ 						dev->gpu->kgd,
+ 						KGD_ENGINE_MEC1));
++		sysfs_show_32bit_prop(buffer, "capability",
++				dev->node_props.capability);
+ 	}
+ 
+ 	return sysfs_show_32bit_prop(buffer, "max_engine_clk_ccompute",
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 27ea6bdebce7..7a628e4cb27a 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2732,9 +2732,6 @@ void i915_gem_reset(struct drm_device *dev)
+ void
+ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
+ {
+-	if (list_empty(&ring->request_list))
+-		return;
+-
+ 	WARN_ON(i915_verify_lists(ring->dev));
+ 
+ 	/* Retire requests first as we use it above for the early return.
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 88b36a9173c9..336e8b63ca08 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -881,10 +881,8 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
+ 				      DP_AUX_CH_CTL_RECEIVE_ERROR))
+ 				continue;
+ 			if (status & DP_AUX_CH_CTL_DONE)
+-				break;
++				goto done;
+ 		}
+-		if (status & DP_AUX_CH_CTL_DONE)
+-			break;
+ 	}
+ 
+ 	if ((status & DP_AUX_CH_CTL_DONE) == 0) {
+@@ -893,6 +891,7 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
+ 		goto out;
+ 	}
+ 
++done:
+ 	/* Check for timeout or receive error.
+ 	 * Timeouts occur when the sink is not connected
+ 	 */
+diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
+index 56e437e31580..ae628001fd97 100644
+--- a/drivers/gpu/drm/i915/intel_i2c.c
++++ b/drivers/gpu/drm/i915/intel_i2c.c
+@@ -435,7 +435,7 @@ gmbus_xfer(struct i2c_adapter *adapter,
+ 					       struct intel_gmbus,
+ 					       adapter);
+ 	struct drm_i915_private *dev_priv = bus->dev_priv;
+-	int i, reg_offset;
++	int i = 0, inc, try = 0, reg_offset;
+ 	int ret = 0;
+ 
+ 	intel_aux_display_runtime_get(dev_priv);
+@@ -448,12 +448,14 @@ gmbus_xfer(struct i2c_adapter *adapter,
+ 
+ 	reg_offset = dev_priv->gpio_mmio_base;
+ 
++retry:
+ 	I915_WRITE(GMBUS0 + reg_offset, bus->reg0);
+ 
+-	for (i = 0; i < num; i++) {
++	for (; i < num; i += inc) {
++		inc = 1;
+ 		if (gmbus_is_index_read(msgs, i, num)) {
+ 			ret = gmbus_xfer_index_read(dev_priv, &msgs[i]);
+-			i += 1;  /* set i to the index of the read xfer */
++			inc = 2; /* an index read is two msgs */
+ 		} else if (msgs[i].flags & I2C_M_RD) {
+ 			ret = gmbus_xfer_read(dev_priv, &msgs[i], 0);
+ 		} else {
+@@ -525,6 +527,18 @@ clear_err:
+ 			 adapter->name, msgs[i].addr,
+ 			 (msgs[i].flags & I2C_M_RD) ? 'r' : 'w', msgs[i].len);
+ 
++	/*
++	 * Passive adapters sometimes NAK the first probe. Retry the first
++	 * message once on -ENXIO for GMBUS transfers; the bit banging algorithm
++	 * has retries internally. See also the retry loop in
++	 * drm_do_probe_ddc_edid, which bails out on the first -ENXIO.
++	 */
++	if (ret == -ENXIO && i == 0 && try++ == 0) {
++		DRM_DEBUG_KMS("GMBUS [%s] NAK on first message, retry\n",
++			      adapter->name);
++		goto retry;
++	}
++
+ 	goto out;
+ 
+ timeout:
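Note: besides folding the doubled DONE test into a single "goto done", the
gmbus hunk adds a retry-once policy: if the very first message of a transfer is
NAKed (-ENXIO), the whole transfer is restarted exactly once. Reduced to its
skeleton (plain C; do_xfer() is a placeholder for the message loop, and ENXIO
comes from <errno.h>):

	static int xfer_with_one_retry(void)
	{
		int i = 0, try = 0, ret;

	retry:
		ret = do_xfer(&i);	/* advances i as messages complete */
		if (ret == -ENXIO && i == 0 && try++ == 0)
			goto retry;	/* NAK on the very first message */
		return ret;
	}
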
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 965a45619f6b..9bd56116fd5a 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -580,9 +580,6 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
+ 		else
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
+ 
+-		/* if there is no audio, set MINM_OVER_MAXP  */
+-		if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
+-			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		if (rdev->family < CHIP_RV770)
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		/* use frac fb div on APUs */
+@@ -1789,9 +1786,7 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
+ 			if ((crtc->mode.clock == test_crtc->mode.clock) &&
+ 			    (adjusted_clock == test_adjusted_clock) &&
+ 			    (radeon_crtc->ss_enabled == test_radeon_crtc->ss_enabled) &&
+-			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID) &&
+-			    (drm_detect_monitor_audio(radeon_connector_edid(test_radeon_crtc->connector)) ==
+-			     drm_detect_monitor_audio(radeon_connector_edid(radeon_crtc->connector))))
++			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID))
+ 				return test_radeon_crtc->pll_id;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/radeon/dce3_1_afmt.c b/drivers/gpu/drm/radeon/dce3_1_afmt.c
+index f04205170b8a..cfa3a84a2af0 100644
+--- a/drivers/gpu/drm/radeon/dce3_1_afmt.c
++++ b/drivers/gpu/drm/radeon/dce3_1_afmt.c
+@@ -173,7 +173,7 @@ void dce3_2_hdmi_update_acr(struct drm_encoder *encoder, long offset,
+ 	struct drm_device *dev = encoder->dev;
+ 	struct radeon_device *rdev = dev->dev_private;
+ 
+-	WREG32(HDMI0_ACR_PACKET_CONTROL + offset,
++	WREG32(DCE3_HDMI0_ACR_PACKET_CONTROL + offset,
+ 		HDMI0_ACR_SOURCE |		/* select SW CTS value */
+ 		HDMI0_ACR_AUTO_SEND);	/* allow hw to sent ACR packets when required */
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index bd7519fdd3f4..aa232fd25992 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1458,6 +1458,21 @@ int radeon_device_init(struct radeon_device *rdev,
+ 	if (r)
+ 		DRM_ERROR("ib ring test failed (%d).\n", r);
+ 
++	/*
++	 * Turks/Thames GPUs will freeze the whole laptop if DPM is not
++	 * restarted after the CP ring has chewed on at least one packet.
++	 * Hence we stop and restart DPM after radeon_ib_ring_tests().
++	 */
++	if (rdev->pm.dpm_enabled &&
++	    (rdev->pm.pm_method == PM_METHOD_DPM) &&
++	    (rdev->family == CHIP_TURKS) &&
++	    (rdev->flags & RADEON_IS_MOBILITY)) {
++		mutex_lock(&rdev->pm.mutex);
++		radeon_dpm_disable(rdev);
++		radeon_dpm_enable(rdev);
++		mutex_unlock(&rdev->pm.mutex);
++	}
++
+ 	if ((radeon_testing & 1)) {
+ 		if (rdev->accel_working)
+ 			radeon_test_moves(rdev);
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index de42fc4a22b8..9c3377ca17b7 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -458,14 +458,16 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		/* make sure object fit at this offset */
+ 		eoffset = soffset + size;
+ 		if (soffset >= eoffset) {
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 
+ 		last_pfn = eoffset / RADEON_GPU_PAGE_SIZE;
+ 		if (last_pfn > rdev->vm_manager.max_pfn) {
+ 			dev_err(rdev->dev, "va above limit (0x%08X > 0x%08X)\n",
+ 				last_pfn, rdev->vm_manager.max_pfn);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 
+ 	} else {
+@@ -486,7 +488,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
+ 				soffset, tmp->bo, tmp->it.start, tmp->it.last);
+ 			mutex_unlock(&vm->mutex);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 	}
+ 
+@@ -497,7 +500,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 			tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
+ 			if (!tmp) {
+ 				mutex_unlock(&vm->mutex);
+-				return -ENOMEM;
++				r = -ENOMEM;
++				goto error_unreserve;
+ 			}
+ 			tmp->it.start = bo_va->it.start;
+ 			tmp->it.last = bo_va->it.last;
+@@ -555,7 +559,6 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		r = radeon_vm_clear_bo(rdev, pt);
+ 		if (r) {
+ 			radeon_bo_unref(&pt);
+-			radeon_bo_reserve(bo_va->bo, false);
+ 			return r;
+ 		}
+ 
+@@ -575,6 +578,10 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 
+ 	mutex_unlock(&vm->mutex);
+ 	return 0;
++
++error_unreserve:
++	radeon_bo_unreserve(bo_va->bo);
++	return r;
+ }
+ 
+ /**
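Note: the radeon_vm hunk converts several early "return -EINVAL/-ENOMEM" paths
into jumps to a single error_unreserve label, so the buffer object reserved by
the caller is always unreserved on failure. This is the kernel's standard
single-exit unwind idiom; a sketch with placeholder acquire()/validate()/
release() helpers:

	static int do_op(struct resource *res)
	{
		int r = 0;

		acquire(res);		/* e.g. radeon_bo_reserve() */
		if (!validate(res)) {
			r = -EINVAL;
			goto out;	/* never return with res held */
		}
		/* ... actual work ... */
	out:
		release(res);		/* e.g. radeon_bo_unreserve() */
		return r;
	}
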
+diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
+index 8fe78d08e01c..7c6966434ee7 100644
+--- a/drivers/i2c/busses/i2c-hix5hd2.c
++++ b/drivers/i2c/busses/i2c-hix5hd2.c
+@@ -554,4 +554,4 @@ module_platform_driver(hix5hd2_i2c_driver);
+ MODULE_DESCRIPTION("Hix5hd2 I2C Bus driver");
+ MODULE_AUTHOR("Wei Yan <sledge.yanwei@huawei.com>");
+ MODULE_LICENSE("GPL");
+-MODULE_ALIAS("platform:i2c-hix5hd2");
++MODULE_ALIAS("platform:hix5hd2-i2c");
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 958c8db4ec30..297e9c9ac943 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -1143,6 +1143,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	i2c->quirks = s3c24xx_get_device_quirks(pdev);
++	i2c->sysreg = ERR_PTR(-ENOENT);
+ 	if (pdata)
+ 		memcpy(i2c->pdata, pdata, sizeof(*pdata));
+ 	else
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index 89d8aa1d2818..df12c57e6ce0 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -1001,7 +1001,7 @@ static struct platform_driver twl6030_gpadc_driver = {
+ 
+ module_platform_driver(twl6030_gpadc_driver);
+ 
+-MODULE_ALIAS("platform: " DRIVER_NAME);
++MODULE_ALIAS("platform:" DRIVER_NAME);
+ MODULE_AUTHOR("Balaji T K <balajitk@ti.com>");
+ MODULE_AUTHOR("Graeme Gregory <gg@slimlogic.co.uk>");
+ MODULE_AUTHOR("Oleksandr Kozaruk <oleksandr.kozaruk@ti.com");
+diff --git a/drivers/iio/imu/adis16400.h b/drivers/iio/imu/adis16400.h
+index 0916bf6b6c31..73b189c1c0fb 100644
+--- a/drivers/iio/imu/adis16400.h
++++ b/drivers/iio/imu/adis16400.h
+@@ -139,6 +139,7 @@
+ #define ADIS16400_NO_BURST		BIT(1)
+ #define ADIS16400_HAS_SLOW_MODE		BIT(2)
+ #define ADIS16400_HAS_SERIAL_NUMBER	BIT(3)
++#define ADIS16400_BURST_DIAG_STAT	BIT(4)
+ 
+ struct adis16400_state;
+ 
+@@ -165,6 +166,7 @@ struct adis16400_state {
+ 	int				filt_int;
+ 
+ 	struct adis adis;
++	unsigned long avail_scan_mask[2];
+ };
+ 
+ /* At the moment triggers are only used for ring buffer
+diff --git a/drivers/iio/imu/adis16400_buffer.c b/drivers/iio/imu/adis16400_buffer.c
+index 6e727ffe5262..90c24a23c679 100644
+--- a/drivers/iio/imu/adis16400_buffer.c
++++ b/drivers/iio/imu/adis16400_buffer.c
+@@ -18,7 +18,8 @@ int adis16400_update_scan_mode(struct iio_dev *indio_dev,
+ {
+ 	struct adis16400_state *st = iio_priv(indio_dev);
+ 	struct adis *adis = &st->adis;
+-	uint16_t *tx;
++	unsigned int burst_length;
++	u8 *tx;
+ 
+ 	if (st->variant->flags & ADIS16400_NO_BURST)
+ 		return adis_update_scan_mode(indio_dev, scan_mask);
+@@ -26,26 +27,29 @@ int adis16400_update_scan_mode(struct iio_dev *indio_dev,
+ 	kfree(adis->xfer);
+ 	kfree(adis->buffer);
+ 
++	/* All but the timestamp channel */
++	burst_length = (indio_dev->num_channels - 1) * sizeof(u16);
++	if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
++		burst_length += sizeof(u16);
++
+ 	adis->xfer = kcalloc(2, sizeof(*adis->xfer), GFP_KERNEL);
+ 	if (!adis->xfer)
+ 		return -ENOMEM;
+ 
+-	adis->buffer = kzalloc(indio_dev->scan_bytes + sizeof(u16),
+-		GFP_KERNEL);
++	adis->buffer = kzalloc(burst_length + sizeof(u16), GFP_KERNEL);
+ 	if (!adis->buffer)
+ 		return -ENOMEM;
+ 
+-	tx = adis->buffer + indio_dev->scan_bytes;
+-
++	tx = adis->buffer + burst_length;
+ 	tx[0] = ADIS_READ_REG(ADIS16400_GLOB_CMD);
+ 	tx[1] = 0;
+ 
+ 	adis->xfer[0].tx_buf = tx;
+ 	adis->xfer[0].bits_per_word = 8;
+ 	adis->xfer[0].len = 2;
+-	adis->xfer[1].tx_buf = tx;
++	adis->xfer[1].rx_buf = adis->buffer;
+ 	adis->xfer[1].bits_per_word = 8;
+-	adis->xfer[1].len = indio_dev->scan_bytes;
++	adis->xfer[1].len = burst_length;
+ 
+ 	spi_message_init(&adis->msg);
+ 	spi_message_add_tail(&adis->xfer[0], &adis->msg);
+@@ -61,6 +65,7 @@ irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 	struct adis16400_state *st = iio_priv(indio_dev);
+ 	struct adis *adis = &st->adis;
+ 	u32 old_speed_hz = st->adis.spi->max_speed_hz;
++	void *buffer;
+ 	int ret;
+ 
+ 	if (!adis->buffer)
+@@ -81,7 +86,12 @@ irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 		spi_setup(st->adis.spi);
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, adis->buffer,
++	if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
++		buffer = adis->buffer + sizeof(u16);
++	else
++		buffer = adis->buffer;
++
++	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
+ 		pf->timestamp);
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
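Note: the buffer hunks size the burst transfer from the channel list instead of
scan_bytes, and skip a leading DIAG_STAT word on parts that prepend one (the
new ADIS16400_BURST_DIAG_STAT flag). The layout arithmetic, sketched in
kernel-style C (variable names here are assumptions for illustration):

	size_t burst = (num_channels - 1) * sizeof(u16);	/* minus timestamp */
	if (has_diag_stat)
		burst += sizeof(u16);	/* leading DIAG_STAT word */
	samples = has_diag_stat ? buf + sizeof(u16) : buf;
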
+diff --git a/drivers/iio/imu/adis16400_core.c b/drivers/iio/imu/adis16400_core.c
+index fa795dcd5f75..2fd68f2219a7 100644
+--- a/drivers/iio/imu/adis16400_core.c
++++ b/drivers/iio/imu/adis16400_core.c
+@@ -405,6 +405,11 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ 			*val = st->variant->temp_scale_nano / 1000000;
+ 			*val2 = (st->variant->temp_scale_nano % 1000000);
+ 			return IIO_VAL_INT_PLUS_MICRO;
++		case IIO_PRESSURE:
++			/* 20 uBar = 0.002 kPa */
++			*val = 0;
++			*val2 = 2000;
++			return IIO_VAL_INT_PLUS_MICRO;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -454,10 +459,10 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ 	}
+ }
+ 
+-#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si) { \
++#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si, chn) { \
+ 	.type = IIO_VOLTAGE, \
+ 	.indexed = 1, \
+-	.channel = 0, \
++	.channel = chn, \
+ 	.extend_name = name, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
+ 		BIT(IIO_CHAN_INFO_SCALE), \
+@@ -474,10 +479,10 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ }
+ 
+ #define ADIS16400_SUPPLY_CHAN(addr, bits) \
+-	ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY)
++	ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY, 0)
+ 
+ #define ADIS16400_AUX_ADC_CHAN(addr, bits) \
+-	ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC)
++	ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC, 1)
+ 
+ #define ADIS16400_GYRO_CHAN(mod, addr, bits) { \
+ 	.type = IIO_ANGL_VEL, \
+@@ -773,7 +778,8 @@ static struct adis16400_chip_info adis16400_chips[] = {
+ 		.channels = adis16448_channels,
+ 		.num_channels = ARRAY_SIZE(adis16448_channels),
+ 		.flags = ADIS16400_HAS_PROD_ID |
+-				ADIS16400_HAS_SERIAL_NUMBER,
++				ADIS16400_HAS_SERIAL_NUMBER |
++				ADIS16400_BURST_DIAG_STAT,
+ 		.gyro_scale_micro = IIO_DEGREE_TO_RAD(10000), /* 0.01 deg/s */
+ 		.accel_scale_micro = IIO_G_TO_M_S_2(833), /* 1/1200 g */
+ 		.temp_scale_nano = 73860000, /* 0.07386 C */
+@@ -791,11 +797,6 @@ static const struct iio_info adis16400_info = {
+ 	.debugfs_reg_access = adis_debugfs_reg_access,
+ };
+ 
+-static const unsigned long adis16400_burst_scan_mask[] = {
+-	~0UL,
+-	0,
+-};
+-
+ static const char * const adis16400_status_error_msgs[] = {
+ 	[ADIS16400_DIAG_STAT_ZACCL_FAIL] = "Z-axis accelerometer self-test failure",
+ 	[ADIS16400_DIAG_STAT_YACCL_FAIL] = "Y-axis accelerometer self-test failure",
+@@ -843,6 +844,20 @@ static const struct adis_data adis16400_data = {
+ 		BIT(ADIS16400_DIAG_STAT_POWER_LOW),
+ };
+ 
++static void adis16400_setup_chan_mask(struct adis16400_state *st)
++{
++	const struct adis16400_chip_info *chip_info = st->variant;
++	unsigned i;
++
++	for (i = 0; i < chip_info->num_channels; i++) {
++		const struct iio_chan_spec *ch = &chip_info->channels[i];
++
++		if (ch->scan_index >= 0 &&
++		    ch->scan_index != ADIS16400_SCAN_TIMESTAMP)
++			st->avail_scan_mask[0] |= BIT(ch->scan_index);
++	}
++}
++
+ static int adis16400_probe(struct spi_device *spi)
+ {
+ 	struct adis16400_state *st;
+@@ -866,8 +881,10 @@ static int adis16400_probe(struct spi_device *spi)
+ 	indio_dev->info = &adis16400_info;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 
+-	if (!(st->variant->flags & ADIS16400_NO_BURST))
+-		indio_dev->available_scan_masks = adis16400_burst_scan_mask;
++	if (!(st->variant->flags & ADIS16400_NO_BURST)) {
++		adis16400_setup_chan_mask(st);
++		indio_dev->available_scan_masks = st->avail_scan_mask;
++	}
+ 
+ 	ret = adis_init(&st->adis, indio_dev, spi, &adis16400_data);
+ 	if (ret)
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index ea6cb64dfb28..d5335e664240 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1042,9 +1042,8 @@ static void alps_process_trackstick_packet_v7(struct psmouse *psmouse)
+ 	right = (packet[1] & 0x02) >> 1;
+ 	middle = (packet[1] & 0x04) >> 2;
+ 
+-	/* Divide 2 since trackpoint's speed is too fast */
+-	input_report_rel(dev2, REL_X, (char)x / 2);
+-	input_report_rel(dev2, REL_Y, -((char)y / 2));
++	input_report_rel(dev2, REL_X, (char)x);
++	input_report_rel(dev2, REL_Y, -((char)y));
+ 
+ 	input_report_key(dev2, BTN_LEFT, left);
+ 	input_report_key(dev2, BTN_RIGHT, right);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 79363b687195..ce3d40004458 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1376,10 +1376,11 @@ static bool elantech_is_signature_valid(const unsigned char *param)
+ 		return true;
+ 
+ 	/*
+-	 * Some models have a revision higher then 20. Meaning param[2] may
+-	 * be 10 or 20, skip the rates check for these.
++	 * Some hw_version >= 4 models have a revision higher than 20, meaning
++	 * that param[2] may be 10 or 20; skip the rates check for these.
+ 	 */
+-	if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
++	if ((param[0] & 0x0f) >= 0x06 && (param[1] & 0xaf) == 0x0f &&
++	    param[2] < 40)
+ 		return true;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(rates); i++)
+@@ -1555,6 +1556,7 @@ static int elantech_set_properties(struct elantech_data *etd)
+ 		case 9:
+ 		case 10:
+ 		case 13:
++		case 14:
+ 			etd->hw_version = 4;
+ 			break;
+ 		default:
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 3b06c8a360b6..907ac9bdd763 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -148,6 +148,11 @@ static const struct min_max_quirk min_max_pnpid_table[] = {
+ 		1024, 5112, 2024, 4832
+ 	},
+ 	{
++		(const char * const []){"LEN2000", NULL},
++		{ANY_BOARD_ID, ANY_BOARD_ID},
++		1024, 5113, 2021, 4832
++	},
++	{
+ 		(const char * const []){"LEN2001", NULL},
+ 		{ANY_BOARD_ID, ANY_BOARD_ID},
+ 		1024, 5022, 2508, 4832
+@@ -188,7 +193,7 @@ static const char * const topbuttonpad_pnp_ids[] = {
+ 	"LEN0045",
+ 	"LEN0047",
+ 	"LEN0049",
+-	"LEN2000",
++	"LEN2000", /* S540 */
+ 	"LEN2001", /* Edge E431 */
+ 	"LEN2002", /* Edge E531 */
+ 	"LEN2003",
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 2d1e05bdbb53..272149d66f5b 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -50,6 +50,7 @@
+ #define CONTEXT_SIZE		VTD_PAGE_SIZE
+ 
+ #define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
++#define IS_USB_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_SERIAL_USB)
+ #define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
+ #define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e)
+ 
+@@ -672,6 +673,11 @@ static void domain_update_iommu_cap(struct dmar_domain *domain)
+ 	domain->iommu_superpage = domain_update_iommu_superpage(NULL);
+ }
+ 
++static int iommu_dummy(struct device *dev)
++{
++	return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
++}
++
+ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
+ {
+ 	struct dmar_drhd_unit *drhd = NULL;
+@@ -681,6 +687,9 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
+ 	u16 segment = 0;
+ 	int i;
+ 
++	if (iommu_dummy(dev))
++		return NULL;
++
+ 	if (dev_is_pci(dev)) {
+ 		pdev = to_pci_dev(dev);
+ 		segment = pci_domain_nr(pdev->bus);
+@@ -2554,6 +2563,10 @@ static bool device_has_rmrr(struct device *dev)
+  * In both cases we assume that PCI USB devices with RMRRs have them largely
+  * for historical reasons and that the RMRR space is not actively used post
+  * boot.  This exclusion may change if vendors begin to abuse it.
++ *
++ * The same exception is made for graphics devices, with the requirement that
++ * any use of the RMRR regions will be torn down before assigning the device
++ * to a guest.
+  */
+ static bool device_is_rmrr_locked(struct device *dev)
+ {
+@@ -2563,7 +2576,7 @@ static bool device_is_rmrr_locked(struct device *dev)
+ 	if (dev_is_pci(dev)) {
+ 		struct pci_dev *pdev = to_pci_dev(dev);
+ 
+-		if ((pdev->class >> 8) == PCI_CLASS_SERIAL_USB)
++		if (IS_USB_DEVICE(pdev) || IS_GFX_DEVICE(pdev))
+ 			return false;
+ 	}
+ 
+@@ -2969,11 +2982,6 @@ static inline struct dmar_domain *get_valid_domain_for_dev(struct device *dev)
+ 	return __get_valid_domain_for_dev(dev);
+ }
+ 
+-static int iommu_dummy(struct device *dev)
+-{
+-	return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
+-}
+-
+ /* Check if the dev needs to go through non-identity map and unmap process.*/
+ static int iommu_no_mapping(struct device *dev)
+ {
+diff --git a/drivers/irqchip/irq-sunxi-nmi.c b/drivers/irqchip/irq-sunxi-nmi.c
+index 4a9ce5b50c5b..6b2b582433bd 100644
+--- a/drivers/irqchip/irq-sunxi-nmi.c
++++ b/drivers/irqchip/irq-sunxi-nmi.c
+@@ -104,7 +104,7 @@ static int sunxi_sc_nmi_set_type(struct irq_data *data, unsigned int flow_type)
+ 	irqd_set_trigger_type(data, flow_type);
+ 	irq_setup_alt_chip(data, flow_type);
+ 
+-	for (i = 0; i <= gc->num_ct; i++, ct++)
++	for (i = 0; i < gc->num_ct; i++, ct++)
+ 		if (ct->type & flow_type)
+ 			ctrl_off = ct->regs.type;
+ 
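Note: the sunxi-nmi fix is a plain fencepost error; "i <= gc->num_ct" walked
one chip-type entry past the end of the array. The bug in miniature
(standalone C):

	#include <stdio.h>

	int main(void)
	{
		int regs[4] = { 1, 2, 3, 4 };
		int i, sum = 0;

		for (i = 0; i < 4; i++)	/* "<= 4" would read regs[4] */
			sum += regs[i];
		printf("%d\n", sum);	/* prints 10 */
		return 0;
	}
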
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 907534b7f40d..b7bf8ee857fa 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3765,7 +3765,7 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
+ 				err = -EBUSY;
+ 		}
+ 		spin_unlock(&mddev->lock);
+-		return err;
++		return err ?: len;
+ 	}
+ 	err = mddev_lock(mddev);
+ 	if (err)
+@@ -4144,13 +4144,14 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		else
+ 			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-		flush_workqueue(md_misc_wq);
+-		if (mddev->sync_thread) {
+-			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+-			if (mddev_lock(mddev) == 0) {
++		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
++		    mddev_lock(mddev) == 0) {
++			flush_workqueue(md_misc_wq);
++			if (mddev->sync_thread) {
++				set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+ 				md_reap_sync_thread(mddev);
+-				mddev_unlock(mddev);
+ 			}
++			mddev_unlock(mddev);
+ 		}
+ 	} else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
+ 		   test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
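Note: a sysfs ->store() method must return the number of bytes consumed on
success, not 0, or userspace sees a short write; hence "return err ?: len".
The "?:" with its middle operand omitted is a GCC extension that yields the
first operand if non-zero (evaluating it only once), otherwise the second:

	#include <stdio.h>

	int main(void)
	{
		int err = 0, len = 42;

		printf("%d\n", err ?: len);	/* 42: success path */
		err = -16;			/* -EBUSY */
		printf("%d\n", err ?: len);	/* -16: error path */
		return 0;
	}
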
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 4df28943d222..e8d3c1d35453 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -624,7 +624,7 @@ int __bond_opt_set(struct bonding *bond,
+ out:
+ 	if (ret)
+ 		bond_opt_error_interpret(bond, opt, ret, val);
+-	else
++	else if (bond->dev->reg_state == NETREG_REGISTERED)
+ 		call_netdevice_notifiers(NETDEV_CHANGEINFODATA, bond->dev);
+ 
+ 	return ret;
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 7f05f309e935..da36bcf32404 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -1773,9 +1773,9 @@ int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
+ 	total_size = buf_len;
+ 
+ 	get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60*1024;
+-	get_fat_cmd.va = pci_alloc_consistent(adapter->pdev,
+-					      get_fat_cmd.size,
+-					      &get_fat_cmd.dma);
++	get_fat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					     get_fat_cmd.size,
++					     &get_fat_cmd.dma, GFP_ATOMIC);
+ 	if (!get_fat_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+ 			"Memory allocation failure while reading FAT data\n");
+@@ -1820,8 +1820,8 @@ int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
+ 		log_offset += buf_size;
+ 	}
+ err:
+-	pci_free_consistent(adapter->pdev, get_fat_cmd.size,
+-			    get_fat_cmd.va, get_fat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size,
++			  get_fat_cmd.va, get_fat_cmd.dma);
+ 	spin_unlock_bh(&adapter->mcc_lock);
+ 	return status;
+ }
+@@ -2272,12 +2272,12 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ 		return -EINVAL;
+ 
+ 	cmd.size = sizeof(struct be_cmd_resp_port_type);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failed\n");
+ 		return -ENOMEM;
+ 	}
+-	memset(cmd.va, 0, cmd.size);
+ 
+ 	spin_lock_bh(&adapter->mcc_lock);
+ 
+@@ -2302,7 +2302,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ 	}
+ err:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+-	pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	return status;
+ }
+ 
+@@ -2777,7 +2777,8 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ 		goto err;
+ 	}
+ 	cmd.size = sizeof(struct be_cmd_req_get_phy_info);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
+ 		status = -ENOMEM;
+@@ -2811,7 +2812,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ 				BE_SUPPORTED_SPEED_1GBPS;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ err:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+ 	return status;
+@@ -2862,8 +2863,9 @@ int be_cmd_get_cntl_attributes(struct be_adapter *adapter)
+ 
+ 	memset(&attribs_cmd, 0, sizeof(struct be_dma_mem));
+ 	attribs_cmd.size = sizeof(struct be_cmd_resp_cntl_attribs);
+-	attribs_cmd.va = pci_alloc_consistent(adapter->pdev, attribs_cmd.size,
+-					      &attribs_cmd.dma);
++	attribs_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					     attribs_cmd.size,
++					     &attribs_cmd.dma, GFP_ATOMIC);
+ 	if (!attribs_cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
+ 		status = -ENOMEM;
+@@ -2890,8 +2892,8 @@ int be_cmd_get_cntl_attributes(struct be_adapter *adapter)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (attribs_cmd.va)
+-		pci_free_consistent(adapter->pdev, attribs_cmd.size,
+-				    attribs_cmd.va, attribs_cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, attribs_cmd.size,
++				  attribs_cmd.va, attribs_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3029,9 +3031,10 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ 
+ 	memset(&get_mac_list_cmd, 0, sizeof(struct be_dma_mem));
+ 	get_mac_list_cmd.size = sizeof(struct be_cmd_resp_get_mac_list);
+-	get_mac_list_cmd.va = pci_alloc_consistent(adapter->pdev,
+-						   get_mac_list_cmd.size,
+-						   &get_mac_list_cmd.dma);
++	get_mac_list_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++						  get_mac_list_cmd.size,
++						  &get_mac_list_cmd.dma,
++						  GFP_ATOMIC);
+ 
+ 	if (!get_mac_list_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -3104,8 +3107,8 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ 
+ out:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+-	pci_free_consistent(adapter->pdev, get_mac_list_cmd.size,
+-			    get_mac_list_cmd.va, get_mac_list_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size,
++			  get_mac_list_cmd.va, get_mac_list_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3158,8 +3161,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_req_set_mac_list);
+-	cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size,
+-				    &cmd.dma, GFP_KERNEL);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_KERNEL);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3348,7 +3351,8 @@ int be_cmd_get_acpi_wol_cap(struct be_adapter *adapter)
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_acpi_wol_magic_config_v1);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
+ 		status = -ENOMEM;
+@@ -3383,7 +3387,8 @@ int be_cmd_get_acpi_wol_cap(struct be_adapter *adapter)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ 
+ }
+@@ -3397,8 +3402,9 @@ int be_cmd_set_fw_log_level(struct be_adapter *adapter, u32 level)
+ 
+ 	memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
+ 	extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
+-	extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
+-					     &extfat_cmd.dma);
++	extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    extfat_cmd.size, &extfat_cmd.dma,
++					    GFP_ATOMIC);
+ 	if (!extfat_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3420,8 +3426,8 @@ int be_cmd_set_fw_log_level(struct be_adapter *adapter, u32 level)
+ 
+ 	status = be_cmd_set_ext_fat_capabilites(adapter, &extfat_cmd, cfgs);
+ err:
+-	pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
+-			    extfat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
++			  extfat_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3434,8 +3440,9 @@ int be_cmd_get_fw_log_level(struct be_adapter *adapter)
+ 
+ 	memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
+ 	extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
+-	extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
+-					     &extfat_cmd.dma);
++	extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    extfat_cmd.size, &extfat_cmd.dma,
++					    GFP_ATOMIC);
+ 
+ 	if (!extfat_cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "%s: Memory allocation failure\n",
+@@ -3453,8 +3460,8 @@ int be_cmd_get_fw_log_level(struct be_adapter *adapter)
+ 				level = cfgs->module[0].trace_lvl[j].dbg_lvl;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
+-			    extfat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
++			  extfat_cmd.dma);
+ err:
+ 	return level;
+ }
+@@ -3652,7 +3659,8 @@ int be_cmd_get_func_config(struct be_adapter *adapter, struct be_resources *res)
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_get_func_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
+ 		status = -ENOMEM;
+@@ -3692,7 +3700,8 @@ int be_cmd_get_func_config(struct be_adapter *adapter, struct be_resources *res)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3713,7 +3722,8 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_get_profile_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3752,7 +3762,8 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 		res->vf_if_cap_flags = vf_res->cap_flags;
+ err:
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3767,7 +3778,8 @@ static int be_cmd_set_profile_config(struct be_adapter *adapter, void *desc,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_req_set_profile_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3783,7 +3795,8 @@ static int be_cmd_set_profile_config(struct be_adapter *adapter, void *desc,
+ 	status = be_cmd_notify_wait(adapter, &wrb);
+ 
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 4d2de4700769..22ffcd81a6b5 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -264,8 +264,8 @@ static int lancer_cmd_read_file(struct be_adapter *adapter, u8 *file_name,
+ 	int status = 0;
+ 
+ 	read_cmd.size = LANCER_READ_FILE_CHUNK;
+-	read_cmd.va = pci_alloc_consistent(adapter->pdev, read_cmd.size,
+-					   &read_cmd.dma);
++	read_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, read_cmd.size,
++					  &read_cmd.dma, GFP_ATOMIC);
+ 
+ 	if (!read_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -289,8 +289,8 @@ static int lancer_cmd_read_file(struct be_adapter *adapter, u8 *file_name,
+ 			break;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, read_cmd.size, read_cmd.va,
+-			    read_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, read_cmd.size, read_cmd.va,
++			  read_cmd.dma);
+ 
+ 	return status;
+ }
+@@ -818,8 +818,9 @@ static int be_test_ddr_dma(struct be_adapter *adapter)
+ 	};
+ 
+ 	ddrdma_cmd.size = sizeof(struct be_cmd_req_ddrdma_test);
+-	ddrdma_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, ddrdma_cmd.size,
+-					   &ddrdma_cmd.dma, GFP_KERNEL);
++	ddrdma_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    ddrdma_cmd.size, &ddrdma_cmd.dma,
++					    GFP_KERNEL);
+ 	if (!ddrdma_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -941,8 +942,9 @@ static int be_read_eeprom(struct net_device *netdev,
+ 
+ 	memset(&eeprom_cmd, 0, sizeof(struct be_dma_mem));
+ 	eeprom_cmd.size = sizeof(struct be_cmd_req_seeprom_read);
+-	eeprom_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, eeprom_cmd.size,
+-					   &eeprom_cmd.dma, GFP_KERNEL);
++	eeprom_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    eeprom_cmd.size, &eeprom_cmd.dma,
++					    GFP_KERNEL);
+ 
+ 	if (!eeprom_cmd.va)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index e6b790f0d9dc..893753f18098 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -4392,8 +4392,8 @@ static int lancer_fw_download(struct be_adapter *adapter,
+ 
+ 	flash_cmd.size = sizeof(struct lancer_cmd_req_write_object)
+ 				+ LANCER_FW_DOWNLOAD_CHUNK;
+-	flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size,
+-					  &flash_cmd.dma, GFP_KERNEL);
++	flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size,
++					   &flash_cmd.dma, GFP_KERNEL);
+ 	if (!flash_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -4526,8 +4526,8 @@ static int be_fw_download(struct be_adapter *adapter, const struct firmware* fw)
+ 	}
+ 
+ 	flash_cmd.size = sizeof(struct be_cmd_write_flashrom);
+-	flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
+-					  GFP_KERNEL);
++	flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
++					   GFP_KERNEL);
+ 	if (!flash_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -4941,10 +4941,10 @@ static int be_ctrl_init(struct be_adapter *adapter)
+ 		goto done;
+ 
+ 	mbox_mem_alloc->size = sizeof(struct be_mcc_mailbox) + 16;
+-	mbox_mem_alloc->va = dma_alloc_coherent(&adapter->pdev->dev,
+-						mbox_mem_alloc->size,
+-						&mbox_mem_alloc->dma,
+-						GFP_KERNEL);
++	mbox_mem_alloc->va = dma_zalloc_coherent(&adapter->pdev->dev,
++						 mbox_mem_alloc->size,
++						 &mbox_mem_alloc->dma,
++						 GFP_KERNEL);
+ 	if (!mbox_mem_alloc->va) {
+ 		status = -ENOMEM;
+ 		goto unmap_pci_bars;
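Note: all of the benet hunks make one conversion, from the legacy
pci_alloc_consistent()/pci_free_consistent() pair to the generic DMA API's
zeroing variant, which makes the allocation context explicit: GFP_ATOMIC where
the call sits under the adapter's mcc_lock (a BH spinlock), GFP_KERNEL where
sleeping is allowed. A kernel-style sketch of the converted shape (4.0-era
API; dma_zalloc_coherent() returns zeroed coherent DMA memory):

	cmd.va = dma_zalloc_coherent(&pdev->dev, cmd.size, &cmd.dma,
				     GFP_ATOMIC);	/* caller holds a BH lock */
	if (!cmd.va)
		return -ENOMEM;
	/* ... issue the command ... */
	dma_free_coherent(&pdev->dev, cmd.size, cmd.va, cmd.dma);
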
+diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
+index e22e602beef3..c5789cdf7778 100644
+--- a/drivers/net/phy/dp83640.c
++++ b/drivers/net/phy/dp83640.c
+@@ -47,7 +47,7 @@
+ #define PSF_TX		0x1000
+ #define EXT_EVENT	1
+ #define CAL_EVENT	7
+-#define CAL_TRIGGER	7
++#define CAL_TRIGGER	1
+ #define DP83640_N_PINS	12
+ 
+ #define MII_DP83640_MICR 0x11
+@@ -495,7 +495,9 @@ static int ptp_dp83640_enable(struct ptp_clock_info *ptp,
+ 			else
+ 				evnt |= EVNT_RISE;
+ 		}
++		mutex_lock(&clock->extreg_lock);
+ 		ext_write(0, phydev, PAGE5, PTP_EVNT, evnt);
++		mutex_unlock(&clock->extreg_lock);
+ 		return 0;
+ 
+ 	case PTP_CLK_REQ_PEROUT:
+@@ -531,6 +533,8 @@ static u8 status_frame_src[6] = { 0x08, 0x00, 0x17, 0x0B, 0x6B, 0x0F };
+ 
+ static void enable_status_frames(struct phy_device *phydev, bool on)
+ {
++	struct dp83640_private *dp83640 = phydev->priv;
++	struct dp83640_clock *clock = dp83640->clock;
+ 	u16 cfg0 = 0, ver;
+ 
+ 	if (on)
+@@ -538,9 +542,13 @@ static void enable_status_frames(struct phy_device *phydev, bool on)
+ 
+ 	ver = (PSF_PTPVER & VERSIONPTP_MASK) << VERSIONPTP_SHIFT;
+ 
++	mutex_lock(&clock->extreg_lock);
++
+ 	ext_write(0, phydev, PAGE5, PSF_CFG0, cfg0);
+ 	ext_write(0, phydev, PAGE6, PSF_CFG1, ver);
+ 
++	mutex_unlock(&clock->extreg_lock);
++
+ 	if (!phydev->attached_dev) {
+ 		pr_warn("expected to find an attached netdevice\n");
+ 		return;
+@@ -837,7 +845,7 @@ static void decode_rxts(struct dp83640_private *dp83640,
+ 	list_del_init(&rxts->list);
+ 	phy2rxts(phy_rxts, rxts);
+ 
+-	spin_lock_irqsave(&dp83640->rx_queue.lock, flags);
++	spin_lock(&dp83640->rx_queue.lock);
+ 	skb_queue_walk(&dp83640->rx_queue, skb) {
+ 		struct dp83640_skb_info *skb_info;
+ 
+@@ -852,7 +860,7 @@ static void decode_rxts(struct dp83640_private *dp83640,
+ 			break;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&dp83640->rx_queue.lock, flags);
++	spin_unlock(&dp83640->rx_queue.lock);
+ 
+ 	if (!shhwtstamps)
+ 		list_add_tail(&rxts->list, &dp83640->rxts);
+@@ -1172,11 +1180,18 @@ static int dp83640_config_init(struct phy_device *phydev)
+ 
+ 	if (clock->chosen && !list_empty(&clock->phylist))
+ 		recalibrate(clock);
+-	else
++	else {
++		mutex_lock(&clock->extreg_lock);
+ 		enable_broadcast(phydev, clock->page, 1);
++		mutex_unlock(&clock->extreg_lock);
++	}
+ 
+ 	enable_status_frames(phydev, true);
++
++	mutex_lock(&clock->extreg_lock);
+ 	ext_write(0, phydev, PAGE4, PTP_CTL, PTP_ENABLE);
++	mutex_unlock(&clock->extreg_lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 52cd8db2c57d..757f28a4284c 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1053,13 +1053,14 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable)
+ {
+ 	/* According to 802.3az,the EEE is supported only in full duplex-mode.
+ 	 * Also EEE feature is active when core is operating with MII, GMII
+-	 * or RGMII. Internal PHYs are also allowed to proceed and should
+-	 * return an error if they do not support EEE.
++	 * or RGMII (all kinds). Internal PHYs are also allowed to proceed and
++	 * should return an error if they do not support EEE.
+ 	 */
+ 	if ((phydev->duplex == DUPLEX_FULL) &&
+ 	    ((phydev->interface == PHY_INTERFACE_MODE_MII) ||
+ 	    (phydev->interface == PHY_INTERFACE_MODE_GMII) ||
+-	    (phydev->interface == PHY_INTERFACE_MODE_RGMII) ||
++	    (phydev->interface >= PHY_INTERFACE_MODE_RGMII &&
++	     phydev->interface <= PHY_INTERFACE_MODE_RGMII_TXID) ||
+ 	     phy_is_internal(phydev))) {
+ 		int eee_lp, eee_cap, eee_adv;
+ 		u32 lp, cap, adv;
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index c3e4da9e79ca..8067b8fbb0ee 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1182,7 +1182,7 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 	 * payload data instead.
+ 	 */
+ 	usbnet_set_skb_tx_stats(skb_out, n,
+-				ctx->tx_curr_frame_payload - skb_out->len);
++				(long)ctx->tx_curr_frame_payload - skb_out->len);
+ 
+ 	return skb_out;
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index 794204e34fba..152131a10047 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -34,6 +34,8 @@ struct backend_info {
+ 	enum xenbus_state frontend_state;
+ 	struct xenbus_watch hotplug_status_watch;
+ 	u8 have_hotplug_status_watch:1;
++
++	const char *hotplug_script;
+ };
+ 
+ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+@@ -236,6 +238,7 @@ static int netback_remove(struct xenbus_device *dev)
+ 		xenvif_free(be->vif);
+ 		be->vif = NULL;
+ 	}
++	kfree(be->hotplug_script);
+ 	kfree(be);
+ 	dev_set_drvdata(&dev->dev, NULL);
+ 	return 0;
+@@ -253,6 +256,7 @@ static int netback_probe(struct xenbus_device *dev,
+ 	struct xenbus_transaction xbt;
+ 	int err;
+ 	int sg;
++	const char *script;
+ 	struct backend_info *be = kzalloc(sizeof(struct backend_info),
+ 					  GFP_KERNEL);
+ 	if (!be) {
+@@ -345,6 +349,15 @@ static int netback_probe(struct xenbus_device *dev,
+ 	if (err)
+ 		pr_debug("Error writing multi-queue-max-queues\n");
+ 
++	script = xenbus_read(XBT_NIL, dev->nodename, "script", NULL);
++	if (IS_ERR(script)) {
++		err = PTR_ERR(script);
++		xenbus_dev_fatal(dev, err, "reading script");
++		goto fail;
++	}
++
++	be->hotplug_script = script;
++
+ 	err = xenbus_switch_state(dev, XenbusStateInitWait);
+ 	if (err)
+ 		goto fail;
+@@ -377,22 +390,14 @@ static int netback_uevent(struct xenbus_device *xdev,
+ 			  struct kobj_uevent_env *env)
+ {
+ 	struct backend_info *be = dev_get_drvdata(&xdev->dev);
+-	char *val;
+ 
+-	val = xenbus_read(XBT_NIL, xdev->nodename, "script", NULL);
+-	if (IS_ERR(val)) {
+-		int err = PTR_ERR(val);
+-		xenbus_dev_fatal(xdev, err, "reading script");
+-		return err;
+-	} else {
+-		if (add_uevent_var(env, "script=%s", val)) {
+-			kfree(val);
+-			return -ENOMEM;
+-		}
+-		kfree(val);
+-	}
++	if (!be)
++		return 0;
++
++	if (add_uevent_var(env, "script=%s", be->hotplug_script))
++		return -ENOMEM;
+ 
+-	if (!be || !be->vif)
++	if (!be->vif)
+ 		return 0;
+ 
+ 	return add_uevent_var(env, "vif=%s", be->vif->dev->name);
+@@ -736,6 +741,7 @@ static void connect(struct backend_info *be)
+ 			goto err;
+ 		}
+ 
++		queue->credit_bytes = credit_bytes;
+ 		queue->remaining_credit = credit_bytes;
+ 		queue->credit_usec = credit_usec;
+ 
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index 3351ef408125..53826b84e0ec 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -225,7 +225,7 @@ void __of_attach_node(struct device_node *np)
+ 	phandle = __of_get_property(np, "phandle", &sz);
+ 	if (!phandle)
+ 		phandle = __of_get_property(np, "linux,phandle", &sz);
+-	if (IS_ENABLED(PPC_PSERIES) && !phandle)
++	if (IS_ENABLED(CONFIG_PPC_PSERIES) && !phandle)
+ 		phandle = __of_get_property(np, "ibm,phandle", &sz);
+ 	np->phandle = (phandle && (sz >= 4)) ? be32_to_cpup(phandle) : 0;
+ 
+diff --git a/drivers/staging/ozwpan/ozhcd.c b/drivers/staging/ozwpan/ozhcd.c
+index 8543bb29a138..9737a979b8db 100644
+--- a/drivers/staging/ozwpan/ozhcd.c
++++ b/drivers/staging/ozwpan/ozhcd.c
+@@ -743,8 +743,8 @@ void oz_hcd_pd_reset(void *hpd, void *hport)
+ /*
+  * Context: softirq
+  */
+-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
+-			int length, int offset, int total_size)
++void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status, const u8 *desc,
++			u8 length, u16 offset, u16 total_size)
+ {
+ 	struct oz_port *port = hport;
+ 	struct urb *urb;
+@@ -756,8 +756,8 @@ void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
+ 	if (!urb)
+ 		return;
+ 	if (status == 0) {
+-		int copy_len;
+-		int required_size = urb->transfer_buffer_length;
++		unsigned int copy_len;
++		unsigned int required_size = urb->transfer_buffer_length;
+ 
+ 		if (required_size > total_size)
+ 			required_size = total_size;
+diff --git a/drivers/staging/ozwpan/ozusbif.h b/drivers/staging/ozwpan/ozusbif.h
+index 4249fa374012..d2a6085345be 100644
+--- a/drivers/staging/ozwpan/ozusbif.h
++++ b/drivers/staging/ozwpan/ozusbif.h
+@@ -29,8 +29,8 @@ void oz_usb_request_heartbeat(void *hpd);
+ 
+ /* Confirmation functions.
+  */
+-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status,
+-	const u8 *desc, int length, int offset, int total_size);
++void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status,
++	const u8 *desc, u8 length, u16 offset, u16 total_size);
+ void oz_hcd_control_cnf(void *hport, u8 req_id, u8 rcode,
+ 	const u8 *data, int data_len);
+ 
+diff --git a/drivers/staging/ozwpan/ozusbsvc1.c b/drivers/staging/ozwpan/ozusbsvc1.c
+index d434d8c6fff6..f660bb198c65 100644
+--- a/drivers/staging/ozwpan/ozusbsvc1.c
++++ b/drivers/staging/ozwpan/ozusbsvc1.c
+@@ -326,7 +326,11 @@ static void oz_usb_handle_ep_data(struct oz_usb_ctx *usb_ctx,
+ 			struct oz_multiple_fixed *body =
+ 				(struct oz_multiple_fixed *)data_hdr;
+ 			u8 *data = body->data;
+-			int n = (len - sizeof(struct oz_multiple_fixed)+1)
++			unsigned int n;
++			if (!body->unit_size ||
++				len < sizeof(struct oz_multiple_fixed) - 1)
++				break;
++			n = (len - (sizeof(struct oz_multiple_fixed) - 1))
+ 				/ body->unit_size;
+ 			while (n--) {
+ 				oz_hcd_data_ind(usb_ctx->hport, body->endpoint,
+@@ -390,10 +394,15 @@ void oz_usb_rx(struct oz_pd *pd, struct oz_elt *elt)
+ 	case OZ_GET_DESC_RSP: {
+ 			struct oz_get_desc_rsp *body =
+ 				(struct oz_get_desc_rsp *)usb_hdr;
+-			int data_len = elt->length -
+-					sizeof(struct oz_get_desc_rsp) + 1;
+-			u16 offs = le16_to_cpu(get_unaligned(&body->offset));
+-			u16 total_size =
++			u16 offs, total_size;
++			u8 data_len;
++
++			if (elt->length < sizeof(struct oz_get_desc_rsp) - 1)
++				break;
++			data_len = elt->length -
++					(sizeof(struct oz_get_desc_rsp) - 1);
++			offs = le16_to_cpu(get_unaligned(&body->offset));
++			total_size =
+ 				le16_to_cpu(get_unaligned(&body->total_size));
+ 			oz_dbg(ON, "USB_REQ_GET_DESCRIPTOR - cnf\n");
+ 			oz_hcd_get_desc_cnf(usb_ctx->hport, body->req_id,
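Note: both ozwpan hunks close the same hole: "len - sizeof(hdr) + 1" is
computed in a wider unsigned context, so a runt element wraps the count to a
huge value; the fix validates len before subtracting (and rejects a zero
unit_size before dividing by it). The trap in miniature (standalone C):

	#include <stdio.h>
	#include <stddef.h>

	int main(void)
	{
		size_t hdr = 8, len = 4;	/* runt: shorter than header */

		if (len < hdr - 1) {		/* validate before subtracting */
			puts("runt element, drop");
			return 0;
		}
		printf("payload = %zu\n", len - (hdr - 1));
		return 0;
	}
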
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index cc57a3a6b02b..eee40b5cb025 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -162,6 +162,17 @@ static inline int tty_put_user(struct tty_struct *tty, unsigned char x,
+ 	return put_user(x, ptr);
+ }
+ 
++static inline int tty_copy_to_user(struct tty_struct *tty,
++					void __user *to,
++					const void *from,
++					unsigned long n)
++{
++	struct n_tty_data *ldata = tty->disc_data;
++
++	tty_audit_add_data(tty, to, n, ldata->icanon);
++	return copy_to_user(to, from, n);
++}
++
+ /**
+  *	n_tty_kick_worker - start input worker (if required)
+  *	@tty: terminal
+@@ -2084,12 +2095,12 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ 		    __func__, eol, found, n, c, size, more);
+ 
+ 	if (n > size) {
+-		ret = copy_to_user(*b, read_buf_addr(ldata, tail), size);
++		ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), size);
+ 		if (ret)
+ 			return -EFAULT;
+-		ret = copy_to_user(*b + size, ldata->read_buf, n - size);
++		ret = tty_copy_to_user(tty, *b + size, ldata->read_buf, n - size);
+ 	} else
+-		ret = copy_to_user(*b, read_buf_addr(ldata, tail), n);
++		ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), n);
+ 
+ 	if (ret)
+ 		return -EFAULT;
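
The n_tty change funnels every canonical-mode copy through one helper so the
audit hook runs on each code path instead of only some of them. The shape of
that fix, as a hedged userspace sketch (the logging call merely stands in
for tty_audit_add_data()):

#include <stdio.h>
#include <string.h>

static void audit_record(const void *buf, size_t n)
{
	fprintf(stderr, "audit: %zu bytes\n", n);	/* illustrative hook */
}

/* All copies go through this wrapper, so a future caller cannot
 * forget the bookkeeping the way a bare copy call site could. */
static void copy_out(void *dst, const void *src, size_t n)
{
	audit_record(src, n);
	memcpy(dst, src, n);
}

int main(void)
{
	char line[] = "hello\n", out[8];

	copy_out(out, line, sizeof(line));
	return 0;
}
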
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 23061918b0e4..f74f400fcb57 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -959,6 +959,14 @@ static void dma_rx_callback(void *data)
+ 
+ 	status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
+ 	count = RX_BUF_SIZE - state.residue;
++
++	if (readl(sport->port.membase + USR2) & USR2_IDLE) {
++		/* In condition [3] the SDMA counted up too early */
++		count--;
++
++		writel(USR2_IDLE, sport->port.membase + USR2);
++	}
++
+ 	dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
+ 
+ 	if (count) {
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index d201910b892f..f176941a92dd 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -339,7 +339,7 @@
+ #define DWC3_DGCMD_SET_ENDPOINT_NRDY	0x0c
+ #define DWC3_DGCMD_RUN_SOC_BUS_LOOPBACK	0x10
+ 
+-#define DWC3_DGCMD_STATUS(n)		(((n) >> 15) & 1)
++#define DWC3_DGCMD_STATUS(n)		(((n) >> 12) & 0x0F)
+ #define DWC3_DGCMD_CMDACT		(1 << 10)
+ #define DWC3_DGCMD_CMDIOC		(1 << 8)
+ 
+@@ -355,7 +355,7 @@
+ #define DWC3_DEPCMD_PARAM_SHIFT		16
+ #define DWC3_DEPCMD_PARAM(x)		((x) << DWC3_DEPCMD_PARAM_SHIFT)
+ #define DWC3_DEPCMD_GET_RSC_IDX(x)	(((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
+-#define DWC3_DEPCMD_STATUS(x)		(((x) >> 15) & 1)
++#define DWC3_DEPCMD_STATUS(x)		(((x) >> 12) & 0x0F)
+ #define DWC3_DEPCMD_HIPRI_FORCERM	(1 << 11)
+ #define DWC3_DEPCMD_CMDACT		(1 << 10)
+ #define DWC3_DEPCMD_CMDIOC		(1 << 8)
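
Both macro changes widen the status field from a single bit at position 15
to a four-bit field in bits 15:12, matching the controller's register
layout. Shift-and-mask extraction in isolation (the register value below is
made up for the demo):

#include <stdio.h>

#define CMD_STATUS(n)	(((n) >> 12) & 0x0F)	/* bits 15:12 */
#define CMD_ACT(n)	(((n) >> 10) & 1)	/* bit 10 */

int main(void)
{
	unsigned int reg = 0x3400;	/* status 0x3, ACT set */

	printf("status=%u act=%u\n", CMD_STATUS(reg), CMD_ACT(reg));
	return 0;
}
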
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index ec8ac1674854..36bf089b708f 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3682,18 +3682,21 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 	unsigned long flags;
+-	int ret;
++	int ret, slot_id;
+ 	struct xhci_command *command;
+ 
+ 	command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
+ 	if (!command)
+ 		return 0;
+ 
++	/* xhci->slot_id and xhci->addr_dev are not thread-safe */
++	mutex_lock(&xhci->mutex);
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 	command->completion = &xhci->addr_dev;
+ 	ret = xhci_queue_slot_control(xhci, command, TRB_ENABLE_SLOT, 0);
+ 	if (ret) {
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
++		mutex_unlock(&xhci->mutex);
+ 		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+ 		kfree(command);
+ 		return 0;
+@@ -3702,8 +3705,10 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+ 
+ 	wait_for_completion(command->completion);
++	slot_id = xhci->slot_id;
++	mutex_unlock(&xhci->mutex);
+ 
+-	if (!xhci->slot_id || command->status != COMP_SUCCESS) {
++	if (!slot_id || command->status != COMP_SUCCESS) {
+ 		xhci_err(xhci, "Error while assigning device slot ID\n");
+ 		xhci_err(xhci, "Max number of devices this xHCI host supports is %u.\n",
+ 				HCS_MAX_SLOTS(
+@@ -3728,11 +3733,11 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	 * xhci_discover_or_reset_device(), which may be called as part of
+ 	 * mass storage driver error handling.
+ 	 */
+-	if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) {
++	if (!xhci_alloc_virt_device(xhci, slot_id, udev, GFP_NOIO)) {
+ 		xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
+ 		goto disable_slot;
+ 	}
+-	udev->slot_id = xhci->slot_id;
++	udev->slot_id = slot_id;
+ 
+ #ifndef CONFIG_USB_DEFAULT_PERSIST
+ 	/*
+@@ -3778,12 +3783,15 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	struct xhci_slot_ctx *slot_ctx;
+ 	struct xhci_input_control_ctx *ctrl_ctx;
+ 	u64 temp_64;
+-	struct xhci_command *command;
++	struct xhci_command *command = NULL;
++
++	mutex_lock(&xhci->mutex);
+ 
+ 	if (!udev->slot_id) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 				"Bad Slot ID %d", udev->slot_id);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	virt_dev = xhci->devs[udev->slot_id];
+@@ -3796,7 +3804,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		 */
+ 		xhci_warn(xhci, "Virt dev invalid for slot_id 0x%x!\n",
+ 			udev->slot_id);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	if (setup == SETUP_CONTEXT_ONLY) {
+@@ -3804,13 +3813,15 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) ==
+ 		    SLOT_STATE_DEFAULT) {
+ 			xhci_dbg(xhci, "Slot already in default state\n");
+-			return 0;
++			goto out;
+ 		}
+ 	}
+ 
+ 	command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
+-	if (!command)
+-		return -ENOMEM;
++	if (!command) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 
+ 	command->in_ctx = virt_dev->in_ctx;
+ 	command->completion = &xhci->addr_dev;
+@@ -3820,8 +3831,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	if (!ctrl_ctx) {
+ 		xhci_warn(xhci, "%s: Could not get input context, bad type.\n",
+ 				__func__);
+-		kfree(command);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	/*
+ 	 * If this is the first Set Address since device plug-in or
+@@ -3848,8 +3859,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 				"FIXME: allocate a command ring segment");
+-		kfree(command);
+-		return ret;
++		goto out;
+ 	}
+ 	xhci_ring_cmd_db(xhci);
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -3896,10 +3906,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	if (ret) {
+-		kfree(command);
+-		return ret;
+-	}
++	if (ret)
++		goto out;
+ 	temp_64 = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 			"Op regs DCBAA ptr = %#016llx", temp_64);
+@@ -3932,8 +3940,10 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 		       "Internal device address = %d",
+ 		       le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK);
++out:
++	mutex_unlock(&xhci->mutex);
+ 	kfree(command);
+-	return 0;
++	return ret;
+ }
+ 
+ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
+@@ -4855,6 +4865,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ 		return 0;
+ 	}
+ 
++	mutex_init(&xhci->mutex);
+ 	xhci->cap_regs = hcd->regs;
+ 	xhci->op_regs = hcd->regs +
+ 		HC_LENGTH(readl(&xhci->cap_regs->hc_capbase));
+@@ -5011,4 +5022,12 @@ static int __init xhci_hcd_init(void)
+ 	BUILD_BUG_ON(sizeof(struct xhci_run_regs) != (8+8*128)*32/8);
+ 	return 0;
+ }
++
++/*
++ * If an init function is provided, an exit function must also be provided
++ * to allow module unload.
++ */
++static void __exit xhci_hcd_fini(void) { }
++
+ module_init(xhci_hcd_init);
++module_exit(xhci_hcd_fini);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ea75e8ccd3c1..6977f8491fa7 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1497,6 +1497,8 @@ struct xhci_hcd {
+ 	struct list_head	lpm_failed_devs;
+ 
+ 	/* slot enabling and address device helpers */
++	/* these are not thread safe so use mutex */
++	struct mutex mutex;
+ 	struct completion	addr_dev;
+ 	int slot_id;
+ 	/* For USB 3.0 LPM enable/disable. */
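
The xhci fix is a textbook use of a mutex around shared scratch state:
xhci->addr_dev and xhci->slot_id are written on command completion, so each
caller holds the mutex for the whole command and snapshots the result into a
local before unlocking. The same shape in a runnable pthread sketch
(illustrative names, heavily simplified):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_slot_id;	/* scratch result, not thread-safe */

static void run_command(int id)	/* stands in for queue + wait */
{
	shared_slot_id = id;
}

static int alloc_slot(int want)
{
	int slot;

	pthread_mutex_lock(&cmd_lock);	/* one command at a time */
	run_command(want);
	slot = shared_slot_id;		/* snapshot while still locked */
	pthread_mutex_unlock(&cmd_lock);

	return slot;	/* safe: a local copy, not the shared field */
}

int main(void)
{
	printf("slot %d\n", alloc_slot(7));
	return 0;
}
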
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 9031750e7404..ffd739e31bfc 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -128,6 +128,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+ 	{ USB_DEVICE(0x10C4, 0x8977) },	/* CEL MeshWorks DevKit Device */
+ 	{ USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
++	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 8eb68a31cab6..4c8b3b82103d 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -699,6 +699,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) },
++	{ USB_DEVICE(XSENS_VID, XSENS_MTDEVBOARD_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_MTW_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
+ 	{ USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 4e4f46f3c89c..792e054126de 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -155,6 +155,7 @@
+ #define XSENS_AWINDA_STATION_PID 0x0101
+ #define XSENS_AWINDA_DONGLE_PID 0x0102
+ #define XSENS_MTW_PID		0x0200	/* Xsens MTw */
++#define XSENS_MTDEVBOARD_PID	0x0300	/* Motion Tracker Development Board */
+ #define XSENS_CONVERTER_PID	0xD00D	/* Xsens USB-serial converter */
+ 
+ /* Xsens devices using FTDI VID */
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index e894eb278d83..eba1b7ac7294 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -423,6 +423,7 @@ int vp_set_vq_affinity(struct virtqueue *vq, int cpu)
+ 		if (cpu == -1)
+ 			irq_set_affinity_hint(irq, NULL);
+ 		else {
++			cpumask_clear(mask);
+ 			cpumask_set_cpu(cpu, mask);
+ 			irq_set_affinity_hint(irq, mask);
+ 		}
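
The one-line virtio fix matters because the cpumask is reused across calls:
without clearing it first, each call ORs another CPU into the stale mask and
the affinity hint keeps growing. In miniature:

#include <stdio.h>

int main(void)
{
	unsigned long mask = 0;
	int cpus[] = { 1, 3 };

	for (int i = 0; i < 2; i++) {
		mask = 0;			/* cpumask_clear(): drop CPU 1
						 * before hinting CPU 3 */
		mask |= 1UL << cpus[i];		/* cpumask_set_cpu() */
		printf("hint %#lx\n", mask);	/* 0x2, then 0x8 */
	}
	return 0;
}
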
+diff --git a/fs/aio.c b/fs/aio.c
+index a793f7023755..a1736e98c278 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -77,6 +77,11 @@ struct kioctx_cpu {
+ 	unsigned		reqs_available;
+ };
+ 
++struct ctx_rq_wait {
++	struct completion comp;
++	atomic_t count;
++};
++
+ struct kioctx {
+ 	struct percpu_ref	users;
+ 	atomic_t		dead;
+@@ -115,7 +120,7 @@ struct kioctx {
+ 	/*
+ 	 * signals when all in-flight requests are done
+ 	 */
+-	struct completion *requests_done;
++	struct ctx_rq_wait	*rq_wait;
+ 
+ 	struct {
+ 		/*
+@@ -539,8 +544,8 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
+ 	struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
+ 
+ 	/* At this point we know that there are no any in-flight requests */
+-	if (ctx->requests_done)
+-		complete(ctx->requests_done);
++	if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
++		complete(&ctx->rq_wait->comp);
+ 
+ 	INIT_WORK(&ctx->free_work, free_ioctx);
+ 	schedule_work(&ctx->free_work);
+@@ -751,7 +756,7 @@ err:
+  *	the rapid destruction of the kioctx.
+  */
+ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+-		struct completion *requests_done)
++		      struct ctx_rq_wait *wait)
+ {
+ 	struct kioctx_table *table;
+ 
+@@ -781,7 +786,7 @@ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+ 	if (ctx->mmap_size)
+ 		vm_munmap(ctx->mmap_base, ctx->mmap_size);
+ 
+-	ctx->requests_done = requests_done;
++	ctx->rq_wait = wait;
+ 	percpu_ref_kill(&ctx->users);
+ 	return 0;
+ }
+@@ -813,18 +818,24 @@ EXPORT_SYMBOL(wait_on_sync_kiocb);
+ void exit_aio(struct mm_struct *mm)
+ {
+ 	struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
+-	int i;
++	struct ctx_rq_wait wait;
++	int i, skipped;
+ 
+ 	if (!table)
+ 		return;
+ 
++	atomic_set(&wait.count, table->nr);
++	init_completion(&wait.comp);
++
++	skipped = 0;
+ 	for (i = 0; i < table->nr; ++i) {
+ 		struct kioctx *ctx = table->table[i];
+-		struct completion requests_done =
+-			COMPLETION_INITIALIZER_ONSTACK(requests_done);
+ 
+-		if (!ctx)
++		if (!ctx) {
++			skipped++;
+ 			continue;
++		}
++
+ 		/*
+ 		 * We don't need to bother with munmap() here - exit_mmap(mm)
+ 		 * is coming and it'll unmap everything. And we simply can't,
+@@ -833,10 +844,12 @@ void exit_aio(struct mm_struct *mm)
+ 		 * that it needs to unmap the area, just set it to 0.
+ 		 */
+ 		ctx->mmap_size = 0;
+-		kill_ioctx(mm, ctx, &requests_done);
++		kill_ioctx(mm, ctx, &wait);
++	}
+ 
++	if (!atomic_sub_and_test(skipped, &wait.count)) {
+ 		/* Wait until all IO for the context are done. */
+-		wait_for_completion(&requests_done);
++		wait_for_completion(&wait.comp);
+ 	}
+ 
+ 	RCU_INIT_POINTER(mm->ioctx_table, NULL);
+@@ -1321,15 +1334,17 @@ SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
+ {
+ 	struct kioctx *ioctx = lookup_ioctx(ctx);
+ 	if (likely(NULL != ioctx)) {
+-		struct completion requests_done =
+-			COMPLETION_INITIALIZER_ONSTACK(requests_done);
++		struct ctx_rq_wait wait;
+ 		int ret;
+ 
++		init_completion(&wait.comp);
++		atomic_set(&wait.count, 1);
++
+ 		/* Pass requests_done to kill_ioctx() where it can be set
+ 		 * in a thread-safe way. If we try to set it here then we have
+ 		 * a race condition if two io_destroy() called simultaneously.
+ 		 */
+-		ret = kill_ioctx(current->mm, ioctx, &requests_done);
++		ret = kill_ioctx(current->mm, ioctx, &wait);
+ 		percpu_ref_put(&ioctx->users);
+ 
+ 		/* Wait until all IO for the context are done. Otherwise kernel
+@@ -1337,7 +1352,7 @@ SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
+ 		 * is destroyed.
+ 		 */
+ 		if (!ret)
+-			wait_for_completion(&requests_done);
++			wait_for_completion(&wait.comp);
+ 
+ 		return ret;
+ 	}
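
The aio rework replaces one on-stack completion per context (waited on
inside the loop, so teardown was serialized) with a single completion plus
an atomic countdown; NULL table slots are subtracted in one go, and whoever
decrements the counter to zero fires the completion. A runnable userspace
analog with C11 atomics (names are illustrative):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int remaining;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int done;

static void *ctx_teardown(void *arg)	/* one per live context */
{
	/* ... in-flight requests drain here ... */
	if (atomic_fetch_sub(&remaining, 1) == 1) {	/* last one */
		pthread_mutex_lock(&m);
		done = 1;
		pthread_cond_signal(&c);
		pthread_mutex_unlock(&m);
	}
	return NULL;
}

int main(void)
{
	enum { SLOTS = 4 };
	pthread_t t[SLOTS];
	int live = 3, skipped = SLOTS - live, i;

	atomic_store(&remaining, SLOTS);
	for (i = 0; i < live; i++)
		pthread_create(&t[i], NULL, ctx_teardown, NULL);

	/* Account for empty slots up front, like atomic_sub_and_test();
	 * wait only if that subtraction did not reach zero itself. */
	if (atomic_fetch_sub(&remaining, skipped) != skipped) {
		pthread_mutex_lock(&m);
		while (!done)
			pthread_cond_wait(&c, &m);
		pthread_mutex_unlock(&m);
	}
	for (i = 0; i < live; i++)
		pthread_join(t[i], NULL);
	puts("all contexts drained");
	return 0;
}
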
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b33da6ec3dd..63be2a96ed6a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8535,6 +8535,24 @@ int btrfs_set_block_group_ro(struct btrfs_root *root,
+ 	trans = btrfs_join_transaction(root);
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
++	/*
++	 * if we are changing raid levels, try to allocate a corresponding
++	 * block group with the new raid level.
++	 */
++	alloc_flags = update_block_group_flags(root, cache->flags);
++	if (alloc_flags != cache->flags) {
++		ret = do_chunk_alloc(trans, root, alloc_flags,
++				     CHUNK_ALLOC_FORCE);
++		/*
++		 * ENOSPC is allowed here, we may have enough space
++		 * already allocated at the new raid level to
++		 * carry on
++		 */
++		if (ret == -ENOSPC)
++			ret = 0;
++		if (ret < 0)
++			goto out;
++	}
+ 
+ 	ret = set_block_group_ro(cache, 0);
+ 	if (!ret)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index d688cfe5d496..782f3bc4651d 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4514,8 +4514,11 @@ int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		}
+ 		ret = fiemap_fill_next_extent(fieinfo, em_start, disko,
+ 					      em_len, flags);
+-		if (ret)
++		if (ret) {
++			if (ret == 1)
++				ret = 0;
+ 			goto out_free;
++		}
+ 	}
+ out_free:
+ 	free_extent_map(em);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 2b4c5423672d..64e8fb639f72 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3206,6 +3206,8 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
+ 	key.offset = off;
+ 
+ 	while (1) {
++		u64 next_key_min_offset = key.offset + 1;
++
+ 		/*
+ 		 * note the key will change type as we walk through the
+ 		 * tree.
+@@ -3286,7 +3288,7 @@ process_slot:
+ 			} else if (key.offset >= off + len) {
+ 				break;
+ 			}
+-
++			next_key_min_offset = key.offset + datal;
+ 			size = btrfs_item_size_nr(leaf, slot);
+ 			read_extent_buffer(leaf, buf,
+ 					   btrfs_item_ptr_offset(leaf, slot),
+@@ -3501,7 +3503,7 @@ process_slot:
+ 				break;
+ 		}
+ 		btrfs_release_path(path);
+-		key.offset++;
++		key.offset = next_key_min_offset;
+ 	}
+ 	ret = 0;
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index d6033f540cc7..571de5a08fe7 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5852,19 +5852,20 @@ long btrfs_ioctl_send(struct file *mnt_file, void __user *arg_)
+ 				ret = PTR_ERR(clone_root);
+ 				goto out;
+ 			}
+-			clone_sources_to_rollback = i + 1;
+ 			spin_lock(&clone_root->root_item_lock);
+-			clone_root->send_in_progress++;
+-			if (!btrfs_root_readonly(clone_root)) {
++			if (!btrfs_root_readonly(clone_root) ||
++			    btrfs_root_dead(clone_root)) {
+ 				spin_unlock(&clone_root->root_item_lock);
+ 				srcu_read_unlock(&fs_info->subvol_srcu, index);
+ 				ret = -EPERM;
+ 				goto out;
+ 			}
++			clone_root->send_in_progress++;
+ 			spin_unlock(&clone_root->root_item_lock);
+ 			srcu_read_unlock(&fs_info->subvol_srcu, index);
+ 
+ 			sctx->clone_roots[i].root = clone_root;
++			clone_sources_to_rollback = i + 1;
+ 		}
+ 		vfree(clone_sources_tmp);
+ 		clone_sources_tmp = NULL;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 05fef198ff94..e477ed67a49a 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -901,6 +901,15 @@ find_root:
+ 	if (IS_ERR(new_root))
+ 		return ERR_CAST(new_root);
+ 
++	if (!(sb->s_flags & MS_RDONLY)) {
++		int ret;
++		down_read(&fs_info->cleanup_work_sem);
++		ret = btrfs_orphan_cleanup(new_root);
++		up_read(&fs_info->cleanup_work_sem);
++		if (ret)
++			return ERR_PTR(ret);
++	}
++
+ 	dir_id = btrfs_root_dirid(&new_root->root_item);
+ setup_root:
+ 	location.objectid = dir_id;
+diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
+index aff923ae8c4b..d87d8eced064 100644
+--- a/include/linux/backing-dev.h
++++ b/include/linux/backing-dev.h
+@@ -116,7 +116,6 @@ __printf(3, 4)
+ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
+ 		const char *fmt, ...);
+ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
+-void bdi_unregister(struct backing_dev_info *bdi);
+ int __must_check bdi_setup_and_register(struct backing_dev_info *, char *);
+ void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages,
+ 			enum wb_reason reason);
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 5976bdecf58b..9fe865ccc3f3 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -98,7 +98,8 @@ struct inet_connection_sock {
+ 	const struct tcp_congestion_ops *icsk_ca_ops;
+ 	const struct inet_connection_sock_af_ops *icsk_af_ops;
+ 	unsigned int		  (*icsk_sync_mss)(struct sock *sk, u32 pmtu);
+-	__u8			  icsk_ca_state:7,
++	__u8			  icsk_ca_state:6,
++				  icsk_ca_setsockopt:1,
+ 				  icsk_ca_dst_locked:1;
+ 	__u8			  icsk_retransmits;
+ 	__u8			  icsk_pending;
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 856f01cb51dd..230775f5952a 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -571,11 +571,14 @@ static inline void sctp_v6_map_v4(union sctp_addr *addr)
+ /* Map v4 address to v4-mapped v6 address */
+ static inline void sctp_v4_map_v6(union sctp_addr *addr)
+ {
++	__be16 port;
++
++	port = addr->v4.sin_port;
++	addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
++	addr->v6.sin6_port = port;
+ 	addr->v6.sin6_family = AF_INET6;
+ 	addr->v6.sin6_flowinfo = 0;
+ 	addr->v6.sin6_scope_id = 0;
+-	addr->v6.sin6_port = addr->v4.sin_port;
+-	addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
+ 	addr->v6.sin6_addr.s6_addr32[0] = 0;
+ 	addr->v6.sin6_addr.s6_addr32[1] = 0;
+ 	addr->v6.sin6_addr.s6_addr32[2] = htonl(0x0000ffff);
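
The sctp change is about converting an address in place through a union:
sin6_flowinfo shares its storage with the v4 address bytes, so zeroing it
before copying the address destroyed the address. The rule is to read every
overlapping source field before writing any destination field. A compact
demonstration with a made-up two-view union (not the real sockaddr layout):

#include <stdint.h>
#include <stdio.h>

union addr {
	struct { uint16_t family; uint16_t port; uint32_t ip; } v4;
	struct { uint16_t family; uint16_t port; uint32_t flowinfo;
		 uint32_t ip6[4]; } v6;
};

/* In-place v4 -> v6 conversion. v6.flowinfo overlaps v4.ip, so the
 * source fields are saved to locals before anything is overwritten. */
static void map_v4_to_v6(union addr *a)
{
	uint32_t ip   = a->v4.ip;	/* save overlapping fields first */
	uint16_t port = a->v4.port;

	a->v6.ip6[3]   = ip;
	a->v6.port     = port;
	a->v6.family   = 6;
	a->v6.flowinfo = 0;		/* safe now: source already saved */
	a->v6.ip6[0] = a->v6.ip6[1] = 0;
	a->v6.ip6[2] = 0xffff0000;	/* v4-mapped marker, illustrative */
}

int main(void)
{
	union addr a = { .v4 = { 4, 80, 0x0100007f } };

	map_v4_to_v6(&a);
	printf("port=%u ip=%#x\n", (unsigned)a.v6.port,
	       (unsigned)a.v6.ip6[3]);
	return 0;
}
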
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index 5a14ead59696..885d3a380451 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -233,7 +233,6 @@ DEFINE_EVENT(writeback_class, name, \
+ DEFINE_WRITEBACK_EVENT(writeback_nowork);
+ DEFINE_WRITEBACK_EVENT(writeback_wake_background);
+ DEFINE_WRITEBACK_EVENT(writeback_bdi_register);
+-DEFINE_WRITEBACK_EVENT(writeback_bdi_unregister);
+ 
+ DECLARE_EVENT_CLASS(wbc_class,
+ 	TP_PROTO(struct writeback_control *wbc, struct backing_dev_info *bdi),
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 241213be507c..486d00c408b0 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -2166,7 +2166,7 @@ void task_numa_work(struct callback_head *work)
+ 	}
+ 	for (; vma; vma = vma->vm_next) {
+ 		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+-			is_vm_hugetlb_page(vma)) {
++			is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
+ 			continue;
+ 		}
+ 
+diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c
+index 13d945c0d03f..1b28df2d9104 100644
+--- a/kernel/trace/ring_buffer_benchmark.c
++++ b/kernel/trace/ring_buffer_benchmark.c
+@@ -450,7 +450,7 @@ static int __init ring_buffer_benchmark_init(void)
+ 
+ 	if (producer_fifo >= 0) {
+ 		struct sched_param param = {
+-			.sched_priority = consumer_fifo
++			.sched_priority = producer_fifo
+ 		};
+ 		sched_setscheduler(producer, SCHED_FIFO, &param);
+ 	} else
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 6dc4580df2af..000e7b3b9896 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -359,23 +359,6 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+ 	flush_delayed_work(&bdi->wb.dwork);
+ }
+ 
+-/*
+- * Called when the device behind @bdi has been removed or ejected.
+- *
+- * We can't really do much here except for reducing the dirty ratio at
+- * the moment.  In the future we should be able to set a flag so that
+- * the filesystem can handle errors at mark_inode_dirty time instead
+- * of only at writeback time.
+- */
+-void bdi_unregister(struct backing_dev_info *bdi)
+-{
+-	if (WARN_ON_ONCE(!bdi->dev))
+-		return;
+-
+-	bdi_set_min_ratio(bdi, 0);
+-}
+-EXPORT_SYMBOL(bdi_unregister);
+-
+ static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+ {
+ 	memset(wb, 0, sizeof(*wb));
+@@ -443,6 +426,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
+ 	int i;
+ 
+ 	bdi_wb_shutdown(bdi);
++	bdi_set_min_ratio(bdi, 0);
+ 
+ 	WARN_ON(!list_empty(&bdi->work_list));
+ 	WARN_ON(delayed_work_pending(&bdi->wb.dwork));
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 65842d688b7c..93caba791cde 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1978,8 +1978,10 @@ void try_offline_node(int nid)
+ 		 * wait_table may be allocated from boot memory,
+ 		 * here only free if it's allocated by vmalloc.
+ 		 */
+-		if (is_vmalloc_addr(zone->wait_table))
++		if (is_vmalloc_addr(zone->wait_table)) {
+ 			vfree(zone->wait_table);
++			zone->wait_table = NULL;
++		}
+ 	}
+ }
+ EXPORT_SYMBOL(try_offline_node);
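
Memory hot-remove can visit the same zone again, so the fix NULLs the
pointer after vfree(); a second pass then sees NULL (and vfree(NULL) is a
no-op) instead of a dangling pointer. The generic free-then-poison pattern:

#include <stdlib.h>

struct zone { long *wait_table; };

static void release_wait_table(struct zone *z)
{
	free(z->wait_table);	/* free(NULL) is a defined no-op */
	z->wait_table = NULL;	/* a repeat call is now harmless, and a
				 * stray read faults instead of touching
				 * freed memory */
}

int main(void)
{
	struct zone z = { malloc(64 * sizeof(long)) };

	release_wait_table(&z);
	release_wait_table(&z);	/* double call: safe, not a double free */
	return 0;
}
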
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index e0670d7054f9..659fb96672e4 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -796,9 +796,11 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge_port *p,
+ 	int err = 0;
+ 
+ 	if (ndm->ndm_flags & NTF_USE) {
++		local_bh_disable();
+ 		rcu_read_lock();
+ 		br_fdb_update(p->br, p, addr, vid, true);
+ 		rcu_read_unlock();
++		local_bh_enable();
+ 	} else {
+ 		spin_lock_bh(&p->br->hash_lock);
+ 		err = fdb_add_entry(p, addr, ndm->ndm_state,
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index c465876c7861..b0aee78dba41 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1071,7 +1071,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ 
+ 		err = br_ip6_multicast_add_group(br, port, &grec->grec_mca,
+ 						 vid);
+-		if (!err)
++		if (err)
+ 			break;
+ 	}
+ 
+@@ -1821,7 +1821,7 @@ static void br_multicast_query_expired(struct net_bridge *br,
+ 	if (query->startup_sent < br->multicast_startup_query_count)
+ 		query->startup_sent++;
+ 
+-	RCU_INIT_POINTER(querier, NULL);
++	RCU_INIT_POINTER(querier->port, NULL);
+ 	br_multicast_send_query(br, NULL, query);
+ 	spin_unlock(&br->multicast_lock);
+ }
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index a6e2da0bc718..982101c12258 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -330,6 +330,10 @@ static long caif_stream_data_wait(struct sock *sk, long timeo)
+ 		release_sock(sk);
+ 		timeo = schedule_timeout(timeo);
+ 		lock_sock(sk);
++
++		if (sock_flag(sk, SOCK_DEAD))
++			break;
++
+ 		clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
+ 	}
+ 
+@@ -374,6 +378,10 @@ static int caif_stream_recvmsg(struct kiocb *iocb, struct socket *sock,
+ 		struct sk_buff *skb;
+ 
+ 		lock_sock(sk);
++		if (sock_flag(sk, SOCK_DEAD)) {
++			err = -ECONNRESET;
++			goto unlock;
++		}
+ 		skb = skb_dequeue(&sk->sk_receive_queue);
+ 		caif_check_flow_release(sk);
+ 
+diff --git a/net/ceph/crush/mapper.c b/net/ceph/crush/mapper.c
+index a1ef53c04415..b1f2d1f44d37 100644
+--- a/net/ceph/crush/mapper.c
++++ b/net/ceph/crush/mapper.c
+@@ -290,6 +290,7 @@ static int is_out(const struct crush_map *map,
+  * @type: the type of item to choose
+  * @out: pointer to output vector
+  * @outpos: our position in that vector
++ * @out_size: size of the out vector
+  * @tries: number of attempts to make
+  * @recurse_tries: number of attempts to have recursive chooseleaf make
+  * @local_retries: localized retries
+@@ -304,6 +305,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 			       const __u32 *weight, int weight_max,
+ 			       int x, int numrep, int type,
+ 			       int *out, int outpos,
++			       int out_size,
+ 			       unsigned int tries,
+ 			       unsigned int recurse_tries,
+ 			       unsigned int local_retries,
+@@ -322,6 +324,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 	int item = 0;
+ 	int itemtype;
+ 	int collide, reject;
++	int count = out_size;
+ 
+ 	dprintk("CHOOSE%s bucket %d x %d outpos %d numrep %d tries %d recurse_tries %d local_retries %d local_fallback_retries %d parent_r %d\n",
+ 		recurse_to_leaf ? "_LEAF" : "",
+@@ -329,7 +332,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 		tries, recurse_tries, local_retries, local_fallback_retries,
+ 		parent_r);
+ 
+-	for (rep = outpos; rep < numrep; rep++) {
++	for (rep = outpos; rep < numrep && count > 0 ; rep++) {
+ 		/* keep trying until we get a non-out, non-colliding item */
+ 		ftotal = 0;
+ 		skip_rep = 0;
+@@ -403,7 +406,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 							 map->buckets[-1-item],
+ 							 weight, weight_max,
+ 							 x, outpos+1, 0,
+-							 out2, outpos,
++							 out2, outpos, count,
+ 							 recurse_tries, 0,
+ 							 local_retries,
+ 							 local_fallback_retries,
+@@ -463,6 +466,7 @@ reject:
+ 		dprintk("CHOOSE got %d\n", item);
+ 		out[outpos] = item;
+ 		outpos++;
++		count--;
+ 	}
+ 
+ 	dprintk("CHOOSE returns %d\n", outpos);
+@@ -654,6 +658,7 @@ int crush_do_rule(const struct crush_map *map,
+ 	__u32 step;
+ 	int i, j;
+ 	int numrep;
++	int out_size;
+ 	/*
+ 	 * the original choose_total_tries value was off by one (it
+ 	 * counted "retries" and not "tries").  add one.
+@@ -761,6 +766,7 @@ int crush_do_rule(const struct crush_map *map,
+ 						x, numrep,
+ 						curstep->arg2,
+ 						o+osize, j,
++						result_max-osize,
+ 						choose_tries,
+ 						recurse_tries,
+ 						choose_local_retries,
+@@ -770,11 +776,13 @@ int crush_do_rule(const struct crush_map *map,
+ 						c+osize,
+ 						0);
+ 				} else {
++					out_size = ((numrep < (result_max-osize)) ?
++                                                    numrep : (result_max-osize));
+ 					crush_choose_indep(
+ 						map,
+ 						map->buckets[-1-w[i]],
+ 						weight, weight_max,
+-						x, numrep, numrep,
++						x, out_size, numrep,
+ 						curstep->arg2,
+ 						o+osize, j,
+ 						choose_tries,
+@@ -783,7 +791,7 @@ int crush_do_rule(const struct crush_map *map,
+ 						recurse_to_leaf,
+ 						c+osize,
+ 						0);
+-					osize += numrep;
++					osize += out_size;
+ 				}
+ 			}
+ 
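
The crush changes thread an output-buffer limit through the recursion: the
firstn path now stops when count runs out, and the indep path clamps numrep
to the space left, min(numrep, result_max - osize), so neither can write
past the caller's result array. Reduced to its core:

#include <stdio.h>

/* Fill out[] with at most out_size picks, even if numrep asks for more. */
static int choose(int *out, int out_size, int numrep)
{
	int rep, count = out_size;

	for (rep = 0; rep < numrep && count > 0; rep++) {
		out[rep] = rep * 10;	/* stand-in for a real selection */
		count--;
	}
	return rep;			/* how many were produced */
}

int main(void)
{
	int buf[3];
	int result_max = 3, osize = 1;	/* one slot already used */
	int out_size = (4 < result_max - osize) ? 4 : result_max - osize;

	printf("wrote %d entries\n", choose(buf + osize, out_size, 4));
	return 0;
}
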
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 22a53acdb5bb..e977e15c2ac0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5170,7 +5170,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
+ 	if (__netdev_find_adj(upper_dev, dev, &upper_dev->all_adj_list.upper))
+ 		return -EBUSY;
+ 
+-	if (__netdev_find_adj(dev, upper_dev, &dev->all_adj_list.upper))
++	if (__netdev_find_adj(dev, upper_dev, &dev->adj_list.upper))
+ 		return -EEXIST;
+ 
+ 	if (master && netdev_master_upper_dev_get(dev))
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 7ebed55b5f7d..a2b90e1fc115 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2337,6 +2337,9 @@ void rtmsg_ifinfo(int type, struct net_device *dev, unsigned int change,
+ {
+ 	struct sk_buff *skb;
+ 
++	if (dev->reg_state != NETREG_REGISTERED)
++		return;
++
+ 	skb = rtmsg_ifinfo_build_skb(type, dev, change, flags);
+ 	if (skb)
+ 		rtmsg_ifinfo_send(skb, dev, flags);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 20fc0202cbbe..e262a087050b 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -903,6 +903,10 @@ static int ip_error(struct sk_buff *skb)
+ 	bool send;
+ 	int code;
+ 
++	/* IP on this device is disabled. */
++	if (!in_dev)
++		goto out;
++
+ 	net = dev_net(rt->dst.dev);
+ 	if (!IN_DEV_FORWARD(in_dev)) {
+ 		switch (rt->dst.error) {
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 62856e185a93..9d2fbd88df93 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -187,6 +187,7 @@ static void tcp_reinit_congestion_control(struct sock *sk,
+ 
+ 	tcp_cleanup_congestion_control(sk);
+ 	icsk->icsk_ca_ops = ca;
++	icsk->icsk_ca_setsockopt = 1;
+ 
+ 	if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
+ 		icsk->icsk_ca_ops->init(sk);
+@@ -335,8 +336,10 @@ int tcp_set_congestion_control(struct sock *sk, const char *name)
+ 	rcu_read_lock();
+ 	ca = __tcp_ca_find_autoload(name);
+ 	/* No change asking for existing value */
+-	if (ca == icsk->icsk_ca_ops)
++	if (ca == icsk->icsk_ca_ops) {
++		icsk->icsk_ca_setsockopt = 1;
+ 		goto out;
++	}
+ 	if (!ca)
+ 		err = -ENOENT;
+ 	else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) ||
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index dd11ac7798c6..50277af92485 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -316,7 +316,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
+ 			tw->tw_v6_daddr = sk->sk_v6_daddr;
+ 			tw->tw_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+ 			tw->tw_tclass = np->tclass;
+-			tw->tw_flowlabel = np->flow_label >> 12;
++			tw->tw_flowlabel = be32_to_cpu(np->flow_label & IPV6_FLOWLABEL_MASK);
+ 			tw->tw_ipv6only = sk->sk_ipv6only;
+ 		}
+ #endif
+@@ -437,7 +437,10 @@ void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst)
+ 		rcu_read_unlock();
+ 	}
+ 
+-	if (!ca_got_dst && !try_module_get(icsk->icsk_ca_ops->owner))
++	/* If no valid choice made yet, assign current system default ca. */
++	if (!ca_got_dst &&
++	    (!icsk->icsk_ca_setsockopt ||
++	     !try_module_get(icsk->icsk_ca_ops->owner)))
+ 		tcp_assign_congestion_control(sk);
+ 
+ 	tcp_set_ca_state(sk, TCP_CA_Open);
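
The timewait hunk above is a byte-order fix: np->flow_label holds the
flowinfo word in network byte order, and ">> 12" treats it as a host-order
value, which garbles the label on little-endian machines. The fix masks in
network order and converts with be32_to_cpu() (with cpu_to_be32() on the
send side, as in the tcp_ipv6.c hunk below). Assuming a little-endian host,
the difference looks like this:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define FLOWLABEL_MASK htonl(0x000FFFFF)	/* low 20 bits, wire order */

int main(void)
{
	uint32_t wire = htonl(0x000ABCDE);	/* flowinfo as received */

	/* Wrong: shifts the raw wire bytes as if they were host order. */
	uint32_t bad = wire >> 12;

	/* Right: mask in wire order, then convert to host order. */
	uint32_t label = ntohl(wire & FLOWLABEL_MASK);

	printf("bad=%#x good=%#x\n", (unsigned)bad, (unsigned)label);
	return 0;					/* good = 0xabcde */
}
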
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 97ef1f8b7be8..51f17454bd7b 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -90,6 +90,7 @@
+ #include <linux/socket.h>
+ #include <linux/sockios.h>
+ #include <linux/igmp.h>
++#include <linux/inetdevice.h>
+ #include <linux/in.h>
+ #include <linux/errno.h>
+ #include <linux/timer.h>
+@@ -1348,10 +1349,8 @@ csum_copy_err:
+ 	}
+ 	unlock_sock_fast(sk, slow);
+ 
+-	if (noblock)
+-		return -EAGAIN;
+-
+-	/* starting over for a new packet */
++	/* starting over for a new packet, but check if we need to yield */
++	cond_resched();
+ 	msg->msg_flags &= ~MSG_TRUNC;
+ 	goto try_again;
+ }
+@@ -1968,6 +1967,7 @@ void udp_v4_early_demux(struct sk_buff *skb)
+ 	struct sock *sk;
+ 	struct dst_entry *dst;
+ 	int dif = skb->dev->ifindex;
++	int ours;
+ 
+ 	/* validate the packet */
+ 	if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
+@@ -1977,14 +1977,24 @@ void udp_v4_early_demux(struct sk_buff *skb)
+ 	uh = udp_hdr(skb);
+ 
+ 	if (skb->pkt_type == PACKET_BROADCAST ||
+-	    skb->pkt_type == PACKET_MULTICAST)
++	    skb->pkt_type == PACKET_MULTICAST) {
++		struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
++
++		if (!in_dev)
++			return;
++
++		ours = ip_check_mc_rcu(in_dev, iph->daddr, iph->saddr,
++				       iph->protocol);
++		if (!ours)
++			return;
+ 		sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
+ 						   uh->source, iph->saddr, dif);
+-	else if (skb->pkt_type == PACKET_HOST)
++	} else if (skb->pkt_type == PACKET_HOST) {
+ 		sk = __udp4_lib_demux_lookup(net, uh->dest, iph->daddr,
+ 					     uh->source, iph->saddr, dif);
+-	else
++	} else {
+ 		return;
++	}
+ 
+ 	if (!sk)
+ 		return;
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 1f5e62229aaa..5ca3bc880fef 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -975,7 +975,7 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ 			tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
+ 			tcp_time_stamp + tcptw->tw_ts_offset,
+ 			tcptw->tw_ts_recent, tw->tw_bound_dev_if, tcp_twsk_md5_key(tcptw),
+-			tw->tw_tclass, (tw->tw_flowlabel << 12));
++			tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel));
+ 
+ 	inet_twsk_put(tw);
+ }
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index d048d46779fc..1c9512aba77e 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -528,10 +528,8 @@ csum_copy_err:
+ 	}
+ 	unlock_sock_fast(sk, slow);
+ 
+-	if (noblock)
+-		return -EAGAIN;
+-
+-	/* starting over for a new packet */
++	/* starting over for a new packet, but check if we need to yield */
++	cond_resched();
+ 	msg->msg_flags &= ~MSG_TRUNC;
+ 	goto try_again;
+ }
+@@ -734,7 +732,9 @@ static bool __udp_v6_is_mcast_sock(struct net *net, struct sock *sk,
+ 	    (inet->inet_dport && inet->inet_dport != rmt_port) ||
+ 	    (!ipv6_addr_any(&sk->sk_v6_daddr) &&
+ 		    !ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr)) ||
+-	    (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif))
++	    (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif) ||
++	    (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
++		    !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr)))
+ 		return false;
+ 	if (!inet6_mc_check(sk, loc_addr, rmt_addr))
+ 		return false;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index d1d7a8166f46..0e9c28dc86b7 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1052,7 +1052,7 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 	struct netlink_table *table = &nl_table[sk->sk_protocol];
+ 	int err;
+ 
+-	lock_sock(sk);
++	mutex_lock(&table->hash.mutex);
+ 
+ 	err = -EBUSY;
+ 	if (nlk_sk(sk)->portid)
+@@ -1069,11 +1069,12 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 	err = 0;
+ 	if (!__netlink_insert(table, sk)) {
+ 		err = -EADDRINUSE;
++		nlk_sk(sk)->portid = 0;
+ 		sock_put(sk);
+ 	}
+ 
+ err:
+-	release_sock(sk);
++	mutex_unlock(&table->hash.mutex);
+ 	return err;
+ }
+ 
+@@ -1082,10 +1083,12 @@ static void netlink_remove(struct sock *sk)
+ 	struct netlink_table *table;
+ 
+ 	table = &nl_table[sk->sk_protocol];
++	mutex_lock(&table->hash.mutex);
+ 	if (rhashtable_remove(&table->hash, &nlk_sk(sk)->node)) {
+ 		WARN_ON(atomic_read(&sk->sk_refcnt) == 1);
+ 		__sock_put(sk);
+ 	}
++	mutex_unlock(&table->hash.mutex);
+ 
+ 	netlink_table_grab();
+ 	if (nlk_sk(sk)->subscriptions) {
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index baef987fe2c0..d3328a19f5b2 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -81,6 +81,11 @@ int unregister_tcf_proto_ops(struct tcf_proto_ops *ops)
+ 	struct tcf_proto_ops *t;
+ 	int rc = -ENOENT;
+ 
++	/* Wait for outstanding call_rcu()s, if any, from a
++	 * tcf_proto_ops's destroy() handler.
++	 */
++	rcu_barrier();
++
+ 	write_lock(&cls_mod_lock);
+ 	list_for_each_entry(t, &tcf_proto_base, head) {
+ 		if (t == ops) {
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 243b7d169d61..d9c2ee6d2959 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -815,10 +815,8 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 		if (dev->flags & IFF_UP)
+ 			dev_deactivate(dev);
+ 
+-		if (new && new->ops->attach) {
+-			new->ops->attach(new);
+-			num_q = 0;
+-		}
++		if (new && new->ops->attach)
++			goto skip;
+ 
+ 		for (i = 0; i < num_q; i++) {
+ 			struct netdev_queue *dev_queue = dev_ingress_queue(dev);
+@@ -834,12 +832,16 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 				qdisc_destroy(old);
+ 		}
+ 
++skip:
+ 		if (!ingress) {
+ 			notify_and_destroy(net, skb, n, classid,
+ 					   dev->qdisc, new);
+ 			if (new && !new->ops->attach)
+ 				atomic_inc(&new->refcnt);
+ 			dev->qdisc = new ? : &noop_qdisc;
++
++			if (new && new->ops->attach)
++				new->ops->attach(new);
+ 		} else {
+ 			notify_and_destroy(net, skb, n, classid, old, new);
+ 		}
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 526b6edab018..146881f068e2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1887,6 +1887,10 @@ static long unix_stream_data_wait(struct sock *sk, long timeo,
+ 		unix_state_unlock(sk);
+ 		timeo = freezable_schedule_timeout(timeo);
+ 		unix_state_lock(sk);
++
++		if (sock_flag(sk, SOCK_DEAD))
++			break;
++
+ 		clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
+ 	}
+ 
+@@ -1947,6 +1951,10 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock,
+ 		struct sk_buff *skb, *last;
+ 
+ 		unix_state_lock(sk);
++		if (sock_flag(sk, SOCK_DEAD)) {
++			err = -ECONNRESET;
++			goto unlock;
++		}
+ 		last = skb = skb_peek(&sk->sk_receive_queue);
+ again:
+ 		if (skb == NULL) {
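
The caif and af_unix hunks add the identical guard: after sleeping and
re-taking the socket lock, re-check whether the peer tore the socket down
while we slept, before touching its receive queue. That is the standard
condition-loop discipline, shown here with pthreads:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static int have_data, sock_dead;

static int recv_wait(void)
{
	int err = 0;

	pthread_mutex_lock(&lock);
	while (!have_data) {
		if (sock_dead) {	/* re-check after every wakeup: */
			err = -1;	/* the peer may have died while */
			break;		/* we slept                     */
		}
		pthread_cond_wait(&cv, &lock);
	}
	pthread_mutex_unlock(&lock);
	return err;
}

static void *killer(void *arg)
{
	pthread_mutex_lock(&lock);
	sock_dead = 1;			/* models socket teardown */
	pthread_cond_broadcast(&cv);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, killer, NULL);
	printf("recv -> %d\n", recv_wait());
	pthread_join(t, NULL);
	return 0;
}
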
+diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
+index 5b24d39d7903..318026617b57 100644
+--- a/net/wireless/wext-compat.c
++++ b/net/wireless/wext-compat.c
+@@ -1333,6 +1333,8 @@ static struct iw_statistics *cfg80211_wireless_stats(struct net_device *dev)
+ 	memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
+ 	wdev_unlock(wdev);
+ 
++	memset(&sinfo, 0, sizeof(sinfo));
++
+ 	if (rdev_get_station(rdev, dev, bssid, &sinfo))
+ 		return NULL;
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 93c78c3c4b95..a556d63564e6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2167,6 +2167,7 @@ static const struct hda_fixup alc882_fixups[] = {
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
++	SND_PCI_QUIRK(0x1025, 0x0107, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x010a, "Acer Ferrari 5000", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0110, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0112, "Acer Aspire 9303", ALC883_FIXUP_ACER_EAPD),
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 3e2ef61c627b..8b7e391dd0b8 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -918,6 +918,7 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 	case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
+ 	case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
+ 	case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
++	case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+ 	case USB_ID(0x046d, 0x0991):
+ 	/* Most audio usb devices lie about volume resolution.
+ 	 * Most Logitech webcams have res = 384.
+@@ -1582,12 +1583,6 @@ static int parse_audio_mixer_unit(struct mixer_build *state, int unitid,
+ 			      unitid);
+ 		return -EINVAL;
+ 	}
+-	/* no bmControls field (e.g. Maya44) -> ignore */
+-	if (desc->bLength <= 10 + input_pins) {
+-		usb_audio_dbg(state->chip, "MU %d has no bmControls field\n",
+-			      unitid);
+-		return 0;
+-	}
+ 
+ 	num_ins = 0;
+ 	ich = 0;
+@@ -1595,6 +1590,9 @@ static int parse_audio_mixer_unit(struct mixer_build *state, int unitid,
+ 		err = parse_audio_unit(state, desc->baSourceID[pin]);
+ 		if (err < 0)
+ 			continue;
++		/* no bmControls field (e.g. Maya44) -> ignore */
++		if (desc->bLength <= 10 + input_pins)
++			continue;
+ 		err = check_input_term(state, desc->baSourceID[pin], &iterm);
+ 		if (err < 0)
+ 			return err;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index b703cb3cda19..e5000da9e9d7 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -437,6 +437,11 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.map = ebox44_map,
+ 	},
+ 	{
++		/* MAYA44 USB+ */
++		.id = USB_ID(0x2573, 0x0008),
++		.map = maya44_map,
++	},
++	{
+ 		/* KEF X300A */
+ 		.id = USB_ID(0x27ac, 0x1000),
+ 		.map = scms_usb3318_map,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index e21ec5abcc3a..2a408c60114b 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1120,6 +1120,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
+ 	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
++	case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
+ 		return true;
+ 	}
+ 	return false;
+@@ -1266,8 +1267,9 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 		if (fp->altsetting == 2)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;
+-	/* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
+-	case USB_ID(0x20b1, 0x2009):
++
++	case USB_ID(0x20b1, 0x2009): /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
++	case USB_ID(0x20b1, 0x2023): /* JLsounds I2SoverUSB */
+ 		if (fp->altsetting == 3)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-23 15:38 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-06-23 15:38 UTC (permalink / raw
  To: gentoo-commits

commit:     f488c7966fddc9cf7870591f92b24781158ee7bd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 23 15:38:54 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 23 15:38:54 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f488c796

Re-add 4.0 patchset

 0000_README                                        |   72 +
 1000_linux-4.0.1.patch                             |  479 ++
 1001_linux-4.0.2.patch                             | 8587 ++++++++++++++++++++
 1002_linux-4.0.3.patch                             | 2827 +++++++
 1003_linux-4.0.4.patch                             | 2713 +++++++
 1004_linux-4.0.5.patch                             | 4937 +++++++++++
 1500_XATTR_USER_PREFIX.patch                       |   54 +
 ...ble-link-security-restrictions-by-default.patch |   22 +
 2600_select-REGMAP_IRQ-for-rt5033.patch            |   30 +
 2700_ThinkPad-30-brightness-control-fix.patch      |   67 +
 2900_dev-root-proc-mount-fix.patch                 |   30 +
 2905_2disk-resume-image-fix.patch                  |   24 +
 2910_lz4-compression-fix.patch                     |   30 +
 4200_fbcondecor-3.19.patch                         | 2119 +++++
 ...able-additional-cpu-optimizations-for-gcc.patch |  327 +
 ...roups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch |  104 +
 ...introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1 | 6966 ++++++++++++++++
 ...rly-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch | 1222 +++
 ...-additional-cpu-optimizations-for-gcc-4.9.patch |  402 +
 19 files changed, 31012 insertions(+)

diff --git a/0000_README b/0000_README
index 9018993..0f63559 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,78 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.0.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.1
+
+Patch:  1001_linux-4.0.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.2
+
+Patch:  1002_linux-4.0.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.3
+
+Patch:  1003_linux-4.0.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.4
+
+Patch:  1004_linux-4.0.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.5
+
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2600_select-REGMAP_IRQ-for-rt5033.patch
+From:   http://git.kernel.org/
+Desc:   mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ. See bug #546938.
+
+Patch:  2700_ThinkPad-30-brightness-control-fix.patch
+From:   Seth Forshee <seth.forshee@canonical.com>
+Desc:   ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads.
+
+Patch:  2900_dev-root-proc-mount-fix.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc:   Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.

+
+Patch:  2905_s2disk-resume-image-fix.patch
+From:   Al Viro <viro <at> ZenIV.linux.org.uk>
+Desc:   Do not lock when UMH is waiting on current thread spawned by linuxrc. (bug #481344)
+
+Patch:  2910_lz4-compression-fix.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=546422
+Desc:   Fix for lz4 compression regression. Thanks to Christian Xia. See bug #546422.
+
+Patch:  4200_fbcondecor-3.19.patch
+From:   http://www.mepiscommunity.org/fbcondecor
+Desc:   Bootsplash ported by Marco. (Bug #539616)
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch that enables gcc < v4.9 optimizations for additional CPUs.
+
+Patch:  5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 1 for 4.0: Build, cgroups and kconfig bits
+
+Patch:  5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 2 for 4.0: BFQ Scheduler
+
+Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
+From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc:   BFQ v7r7 patch 3 for 4.0: Early Queue Merge (EQM)
+
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch that enables gcc >= v4.9 optimizations for additional CPUs.

diff --git a/1000_linux-4.0.1.patch b/1000_linux-4.0.1.patch
new file mode 100644
index 0000000..ac58552
--- /dev/null
+++ b/1000_linux-4.0.1.patch
@@ -0,0 +1,479 @@
+diff --git a/Makefile b/Makefile
+index fbd43bfe4445..f499cd2f5738 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index 4085c4b31047..355d5fea5be9 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -531,20 +531,8 @@ struct bnx2x_fastpath {
+ 	struct napi_struct	napi;
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-	unsigned int state;
+-#define BNX2X_FP_STATE_IDLE		      0
+-#define BNX2X_FP_STATE_NAPI		(1 << 0)    /* NAPI owns this FP */
+-#define BNX2X_FP_STATE_POLL		(1 << 1)    /* poll owns this FP */
+-#define BNX2X_FP_STATE_DISABLED		(1 << 2)
+-#define BNX2X_FP_STATE_NAPI_YIELD	(1 << 3)    /* NAPI yielded this FP */
+-#define BNX2X_FP_STATE_POLL_YIELD	(1 << 4)    /* poll yielded this FP */
+-#define BNX2X_FP_OWNED	(BNX2X_FP_STATE_NAPI | BNX2X_FP_STATE_POLL)
+-#define BNX2X_FP_YIELD	(BNX2X_FP_STATE_NAPI_YIELD | BNX2X_FP_STATE_POLL_YIELD)
+-#define BNX2X_FP_LOCKED	(BNX2X_FP_OWNED | BNX2X_FP_STATE_DISABLED)
+-#define BNX2X_FP_USER_PEND (BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_POLL_YIELD)
+-	/* protect state */
+-	spinlock_t lock;
+-#endif /* CONFIG_NET_RX_BUSY_POLL */
++	unsigned long		busy_poll_state;
++#endif
+ 
+ 	union host_hc_status_block	status_blk;
+ 	/* chip independent shortcuts into sb structure */
+@@ -619,104 +607,83 @@ struct bnx2x_fastpath {
+ #define bnx2x_fp_qstats(bp, fp)	(&((bp)->fp_stats[(fp)->index].eth_q_stats))
+ 
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+-static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp)
++
++enum bnx2x_fp_state {
++	BNX2X_STATE_FP_NAPI	= BIT(0), /* NAPI handler owns the queue */
++
++	BNX2X_STATE_FP_NAPI_REQ_BIT = 1, /* NAPI would like to own the queue */
++	BNX2X_STATE_FP_NAPI_REQ = BIT(1),
++
++	BNX2X_STATE_FP_POLL_BIT = 2,
++	BNX2X_STATE_FP_POLL     = BIT(2), /* busy_poll owns the queue */
++
++	BNX2X_STATE_FP_DISABLE_BIT = 3, /* queue is dismantled */
++};
++
++static inline void bnx2x_fp_busy_poll_init(struct bnx2x_fastpath *fp)
+ {
+-	spin_lock_init(&fp->lock);
+-	fp->state = BNX2X_FP_STATE_IDLE;
++	WRITE_ONCE(fp->busy_poll_state, 0);
+ }
+ 
+ /* called from the device poll routine to get ownership of a FP */
+ static inline bool bnx2x_fp_lock_napi(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if (fp->state & BNX2X_FP_LOCKED) {
+-		WARN_ON(fp->state & BNX2X_FP_STATE_NAPI);
+-		fp->state |= BNX2X_FP_STATE_NAPI_YIELD;
+-		rc = false;
+-	} else {
+-		/* we don't care if someone yielded */
+-		fp->state = BNX2X_FP_STATE_NAPI;
++	unsigned long prev, old = READ_ONCE(fp->busy_poll_state);
++
++	while (1) {
++		switch (old) {
++		case BNX2X_STATE_FP_POLL:
++			/* make sure bnx2x_fp_lock_poll() wont starve us */
++			set_bit(BNX2X_STATE_FP_NAPI_REQ_BIT,
++				&fp->busy_poll_state);
++			/* fallthrough */
++		case BNX2X_STATE_FP_POLL | BNX2X_STATE_FP_NAPI_REQ:
++			return false;
++		default:
++			break;
++		}
++		prev = cmpxchg(&fp->busy_poll_state, old, BNX2X_STATE_FP_NAPI);
++		if (unlikely(prev != old)) {
++			old = prev;
++			continue;
++		}
++		return true;
+ 	}
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
+ }
+ 
+-/* returns true is someone tried to get the FP while napi had it */
+-static inline bool bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = false;
+-
+-	spin_lock_bh(&fp->lock);
+-	WARN_ON(fp->state &
+-		(BNX2X_FP_STATE_POLL | BNX2X_FP_STATE_NAPI_YIELD));
+-
+-	if (fp->state & BNX2X_FP_STATE_POLL_YIELD)
+-		rc = true;
+-
+-	/* state ==> idle, unless currently disabled */
+-	fp->state &= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	smp_wmb();
++	fp->busy_poll_state = 0;
+ }
+ 
+ /* called from bnx2x_low_latency_poll() */
+ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if ((fp->state & BNX2X_FP_LOCKED)) {
+-		fp->state |= BNX2X_FP_STATE_POLL_YIELD;
+-		rc = false;
+-	} else {
+-		/* preserve yield marks */
+-		fp->state |= BNX2X_FP_STATE_POLL;
+-	}
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	return cmpxchg(&fp->busy_poll_state, 0, BNX2X_STATE_FP_POLL) == 0;
+ }
+ 
+-/* returns true if someone tried to get the FP while it was locked */
+-static inline bool bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
+ {
+-	bool rc = false;
+-
+-	spin_lock_bh(&fp->lock);
+-	WARN_ON(fp->state & BNX2X_FP_STATE_NAPI);
+-
+-	if (fp->state & BNX2X_FP_STATE_POLL_YIELD)
+-		rc = true;
+-
+-	/* state ==> idle, unless currently disabled */
+-	fp->state &= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
+-	return rc;
++	smp_mb__before_atomic();
++	clear_bit(BNX2X_STATE_FP_POLL_BIT, &fp->busy_poll_state);
+ }
+ 
+-/* true if a socket is polling, even if it did not get the lock */
++/* true if a socket is polling */
+ static inline bool bnx2x_fp_ll_polling(struct bnx2x_fastpath *fp)
+ {
+-	WARN_ON(!(fp->state & BNX2X_FP_OWNED));
+-	return fp->state & BNX2X_FP_USER_PEND;
++	return READ_ONCE(fp->busy_poll_state) & BNX2X_STATE_FP_POLL;
+ }
+ 
+ /* false if fp is currently owned */
+ static inline bool bnx2x_fp_ll_disable(struct bnx2x_fastpath *fp)
+ {
+-	int rc = true;
+-
+-	spin_lock_bh(&fp->lock);
+-	if (fp->state & BNX2X_FP_OWNED)
+-		rc = false;
+-	fp->state |= BNX2X_FP_STATE_DISABLED;
+-	spin_unlock_bh(&fp->lock);
++	set_bit(BNX2X_STATE_FP_DISABLE_BIT, &fp->busy_poll_state);
++	return !bnx2x_fp_ll_polling(fp);
+ 
+-	return rc;
+ }
+ #else
+-static inline void bnx2x_fp_init_lock(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_busy_poll_init(struct bnx2x_fastpath *fp)
+ {
+ }
+ 
+@@ -725,9 +692,8 @@ static inline bool bnx2x_fp_lock_napi(struct bnx2x_fastpath *fp)
+ 	return true;
+ }
+ 
+-static inline bool bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_napi(struct bnx2x_fastpath *fp)
+ {
+-	return false;
+ }
+ 
+ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+@@ -735,9 +701,8 @@ static inline bool bnx2x_fp_lock_poll(struct bnx2x_fastpath *fp)
+ 	return false;
+ }
+ 
+-static inline bool bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
++static inline void bnx2x_fp_unlock_poll(struct bnx2x_fastpath *fp)
+ {
+-	return false;
+ }
+ 
+ static inline bool bnx2x_fp_ll_polling(struct bnx2x_fastpath *fp)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 0a9faa134a9a..2f63467bce46 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -1849,7 +1849,7 @@ static void bnx2x_napi_enable_cnic(struct bnx2x *bp)
+ 	int i;
+ 
+ 	for_each_rx_queue_cnic(bp, i) {
+-		bnx2x_fp_init_lock(&bp->fp[i]);
++		bnx2x_fp_busy_poll_init(&bp->fp[i]);
+ 		napi_enable(&bnx2x_fp(bp, i, napi));
+ 	}
+ }
+@@ -1859,7 +1859,7 @@ static void bnx2x_napi_enable(struct bnx2x *bp)
+ 	int i;
+ 
+ 	for_each_eth_queue(bp, i) {
+-		bnx2x_fp_init_lock(&bp->fp[i]);
++		bnx2x_fp_busy_poll_init(&bp->fp[i]);
+ 		napi_enable(&bnx2x_fp(bp, i, napi));
+ 	}
+ }
+@@ -3191,9 +3191,10 @@ static int bnx2x_poll(struct napi_struct *napi, int budget)
+ 			}
+ 		}
+ 
++		bnx2x_fp_unlock_napi(fp);
++
+ 		/* Fall out from the NAPI loop if needed */
+-		if (!bnx2x_fp_unlock_napi(fp) &&
+-		    !(bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
++		if (!(bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
+ 
+ 			/* No need to update SB for FCoE L2 ring as long as
+ 			 * it's connected to the default SB and the SB
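
The bnx2x rework drops the per-queue spinlock in favour of a lock-free state
word: readers load the state, decide, and publish the transition with
cmpxchg(), retrying if another CPU moved the state first. The core of that
idiom with C11 atomics (two owners only, for brevity):

#include <stdatomic.h>
#include <stdio.h>

#define ST_IDLE 0u
#define ST_NAPI (1u << 0)
#define ST_POLL (1u << 2)

static atomic_uint state = ST_IDLE;

/* Take NAPI ownership only if the queue is idle. */
static int lock_napi(void)
{
	unsigned int old = atomic_load(&state);

	for (;;) {
		if (old != ST_IDLE)
			return 0;	/* someone else owns the queue */
		/* Publish the transition; on failure 'old' is reloaded
		 * with the value that beat us and we re-decide. */
		if (atomic_compare_exchange_weak(&state, &old, ST_NAPI))
			return 1;
	}
}

/* Like the cmpxchg(&state, 0, POLL) == 0 form in the patch. */
static int lock_poll(void)
{
	unsigned int expect = ST_IDLE;

	return atomic_compare_exchange_strong(&state, &expect, ST_POLL);
}

int main(void)
{
	printf("napi: %d\n", lock_napi());	/* 1: got it       */
	printf("poll: %d\n", lock_poll());	/* 0: NAPI owns it */
	atomic_store(&state, ST_IDLE);		/* unlock          */
	printf("poll: %d\n", lock_poll());	/* 1               */
	return 0;
}
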
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index f8528a4cf54f..fceb637efd6b 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1713,12 +1713,6 @@ static int vxlan6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	skb = iptunnel_handle_offloads(skb, udp_sum, type);
+-	if (IS_ERR(skb)) {
+-		err = -EINVAL;
+-		goto err;
+-	}
+-
+ 	skb_scrub_packet(skb, xnet);
+ 
+ 	min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+@@ -1738,6 +1732,12 @@ static int vxlan6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ 		goto err;
+ 	}
+ 
++	skb = iptunnel_handle_offloads(skb, udp_sum, type);
++	if (IS_ERR(skb)) {
++		err = -EINVAL;
++		goto err;
++	}
++
+ 	vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
+ 	vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ 	vxh->vx_vni = md->vni;
+@@ -1798,10 +1798,6 @@ int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ 		}
+ 	}
+ 
+-	skb = iptunnel_handle_offloads(skb, udp_sum, type);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+ 	min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ 			+ VXLAN_HLEN + sizeof(struct iphdr)
+ 			+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
+@@ -1817,6 +1813,10 @@ int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ 	if (WARN_ON(!skb))
+ 		return -ENOMEM;
+ 
++	skb = iptunnel_handle_offloads(skb, udp_sum, type);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
+ 	vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
+ 	vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ 	vxh->vx_vni = md->vni;
+diff --git a/fs/exec.c b/fs/exec.c
+index c7f9b733406d..00400cf522dc 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1265,6 +1265,53 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ 	spin_unlock(&p->fs->lock);
+ }
+ 
++static void bprm_fill_uid(struct linux_binprm *bprm)
++{
++	struct inode *inode;
++	unsigned int mode;
++	kuid_t uid;
++	kgid_t gid;
++
++	/* clear any previous set[ug]id data from a previous binary */
++	bprm->cred->euid = current_euid();
++	bprm->cred->egid = current_egid();
++
++	if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)
++		return;
++
++	if (task_no_new_privs(current))
++		return;
++
++	inode = file_inode(bprm->file);
++	mode = READ_ONCE(inode->i_mode);
++	if (!(mode & (S_ISUID|S_ISGID)))
++		return;
++
++	/* Be careful if suid/sgid is set */
++	mutex_lock(&inode->i_mutex);
++
++	/* reload atomically mode/uid/gid now that lock held */
++	mode = inode->i_mode;
++	uid = inode->i_uid;
++	gid = inode->i_gid;
++	mutex_unlock(&inode->i_mutex);
++
++	/* We ignore suid/sgid if there are no mappings for them in the ns */
++	if (!kuid_has_mapping(bprm->cred->user_ns, uid) ||
++		 !kgid_has_mapping(bprm->cred->user_ns, gid))
++		return;
++
++	if (mode & S_ISUID) {
++		bprm->per_clear |= PER_CLEAR_ON_SETID;
++		bprm->cred->euid = uid;
++	}
++
++	if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
++		bprm->per_clear |= PER_CLEAR_ON_SETID;
++		bprm->cred->egid = gid;
++	}
++}
++
+ /*
+  * Fill the binprm structure from the inode.
+  * Check permissions, then read the first 128 (BINPRM_BUF_SIZE) bytes
+@@ -1273,36 +1320,9 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+  */
+ int prepare_binprm(struct linux_binprm *bprm)
+ {
+-	struct inode *inode = file_inode(bprm->file);
+-	umode_t mode = inode->i_mode;
+ 	int retval;
+ 
+-
+-	/* clear any previous set[ug]id data from a previous binary */
+-	bprm->cred->euid = current_euid();
+-	bprm->cred->egid = current_egid();
+-
+-	if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) &&
+-	    !task_no_new_privs(current) &&
+-	    kuid_has_mapping(bprm->cred->user_ns, inode->i_uid) &&
+-	    kgid_has_mapping(bprm->cred->user_ns, inode->i_gid)) {
+-		/* Set-uid? */
+-		if (mode & S_ISUID) {
+-			bprm->per_clear |= PER_CLEAR_ON_SETID;
+-			bprm->cred->euid = inode->i_uid;
+-		}
+-
+-		/* Set-gid? */
+-		/*
+-		 * If setgid is set but no group execute bit then this
+-		 * is a candidate for mandatory locking, not a setgid
+-		 * executable.
+-		 */
+-		if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+-			bprm->per_clear |= PER_CLEAR_ON_SETID;
+-			bprm->cred->egid = inode->i_gid;
+-		}
+-	}
++	bprm_fill_uid(bprm);
+ 
+ 	/* fill in binprm security blob */
+ 	retval = security_bprm_set_creds(bprm);
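
[Note] The new bprm_fill_uid() closes a race: the old code read i_mode, i_uid and i_gid without the inode lock, so a concurrent chown() or chmod() could be observed half-applied while a setuid binary was being exec'd. The helper now reloads all three as one snapshot under i_mutex before granting credentials. A userspace sketch of that snapshot-under-lock pattern (types and field names illustrative; build with cc -pthread):

#include <pthread.h>
#include <stdio.h>

struct inode_like {
        pthread_mutex_t lock;
        unsigned int mode;
        unsigned int uid, gid;
};

struct cred_snapshot { unsigned int mode, uid, gid; };

static struct cred_snapshot snapshot(struct inode_like *i)
{
        struct cred_snapshot s;

        pthread_mutex_lock(&i->lock);
        s.mode = i->mode;       /* reloaded together: no torn view */
        s.uid = i->uid;
        s.gid = i->gid;
        pthread_mutex_unlock(&i->lock);
        return s;
}

int main(void)
{
        struct inode_like i = { PTHREAD_MUTEX_INITIALIZER, 04755, 0, 0 };
        struct cred_snapshot s = snapshot(&i);

        if (s.mode & 04000)     /* S_ISUID */
                printf("setuid binary owned by uid %u\n", s.uid);
        return 0;
}
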
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a28e09c7825d..36508e69e92a 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1380,7 +1380,8 @@ peek_stack:
+ 			/* tell verifier to check for equivalent states
+ 			 * after every call and jump
+ 			 */
+-			env->explored_states[t + 1] = STATE_LIST_MARK;
++			if (t + 1 < insn_cnt)
++				env->explored_states[t + 1] = STATE_LIST_MARK;
+ 		} else {
+ 			/* conditional jump with two edges */
+ 			ret = push_insn(t, t + 1, FALLTHROUGH, env);
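
[Note] The verifier hunk is a plain bounds fix: when the call or unconditional jump sits at the last instruction, t + 1 indexes one past the end of explored_states[], so the mark is now guarded. A trivial runnable illustration:

#include <stdio.h>

#define INSN_CNT 4

int main(void)
{
        int explored[INSN_CNT] = { 0 };
        int t = INSN_CNT - 1;           /* last instruction */

        if (t + 1 < INSN_CNT)           /* the added bounds check */
                explored[t + 1] = 1;
        else
                printf("t+1 == %d is out of range, skipping mark\n", t + 1);
        return 0;
}
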
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 8e4ac97c8477..98d45fe72f51 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4169,19 +4169,21 @@ EXPORT_SYMBOL(skb_try_coalesce);
+  */
+ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
+ {
+-	if (xnet)
+-		skb_orphan(skb);
+ 	skb->tstamp.tv64 = 0;
+ 	skb->pkt_type = PACKET_HOST;
+ 	skb->skb_iif = 0;
+ 	skb->ignore_df = 0;
+ 	skb_dst_drop(skb);
+-	skb->mark = 0;
+ 	skb_sender_cpu_clear(skb);
+-	skb_init_secmark(skb);
+ 	secpath_reset(skb);
+ 	nf_reset(skb);
+ 	nf_reset_trace(skb);
++
++	if (!xnet)
++		return;
++
++	skb_orphan(skb);
++	skb->mark = 0;
+ }
+ EXPORT_SYMBOL_GPL(skb_scrub_packet);
+ 
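
[Note] skb_scrub_packet() is restructured so the unconditional scrubbing runs first and a single early return fences off the steps that only apply when the packet crosses network namespaces: skb->mark now survives a same-namespace scrub, and the skb_init_secmark() call is dropped. A small standalone sketch of the new shape (fields illustrative):

#include <stdbool.h>
#include <stdio.h>

struct packet { long tstamp; unsigned int mark; bool owned; };

static void scrub(struct packet *p, bool crossed_netns)
{
        p->tstamp = 0;                  /* always scrubbed */

        if (!crossed_netns)
                return;                 /* same namespace: keep mark/owner */

        p->owned = false;               /* orphan */
        p->mark = 0;
}

int main(void)
{
        struct packet p = { .tstamp = 42, .mark = 7, .owned = true };

        scrub(&p, false);
        printf("mark after same-ns scrub: %u\n", p.mark);   /* still 7 */
        return 0;
}
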
+diff --git a/net/ipv4/geneve.c b/net/ipv4/geneve.c
+index 5a4828ba05ad..a566a2e4715b 100644
+--- a/net/ipv4/geneve.c
++++ b/net/ipv4/geneve.c
+@@ -113,10 +113,6 @@ int geneve_xmit_skb(struct geneve_sock *gs, struct rtable *rt,
+ 	int min_headroom;
+ 	int err;
+ 
+-	skb = udp_tunnel_handle_offloads(skb, csum);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+ 	min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ 			+ GENEVE_BASE_HLEN + opt_len + sizeof(struct iphdr)
+ 			+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
+@@ -131,6 +127,10 @@ int geneve_xmit_skb(struct geneve_sock *gs, struct rtable *rt,
+ 	if (unlikely(!skb))
+ 		return -ENOMEM;
+ 
++	skb = udp_tunnel_handle_offloads(skb, csum);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
+ 	gnvh = (struct genevehdr *)__skb_push(skb, sizeof(*gnvh) + opt_len);
+ 	geneve_build_header(gnvh, tun_flags, vni, opt_len, opt);
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 1db253e36045..d520492ba698 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2929,6 +2929,8 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst,
+ 	}
+ #endif
+ 
++	/* Do not fool tcpdump (if any), clean our debris */
++	skb->tstamp.tv64 = 0;
+ 	return skb;
+ }
+ EXPORT_SYMBOL(tcp_make_synack);
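
[Note] The one-line tcp_make_synack() addition zeroes skb->tstamp on the outgoing SYN-ACK so packet captures do not show a stale timestamp inherited from the buffer's earlier use, exactly as the in-line comment says.
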

diff --git a/1001_linux-4.0.2.patch b/1001_linux-4.0.2.patch
new file mode 100644
index 0000000..38a75b2
--- /dev/null
+++ b/1001_linux-4.0.2.patch
@@ -0,0 +1,8587 @@
+diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
+index 99ca40e..5c204df 100644
+--- a/Documentation/networking/scaling.txt
++++ b/Documentation/networking/scaling.txt
+@@ -282,7 +282,7 @@ following is true:
+ 
+ - The current CPU's queue head counter >= the recorded tail counter
+   value in rps_dev_flow[i]
+-- The current CPU is unset (equal to RPS_NO_CPU)
++- The current CPU is unset (>= nr_cpu_ids)
+ - The current CPU is offline
+ 
+ After this check, the packet is sent to the (possibly updated) current
+diff --git a/Documentation/virtual/kvm/devices/s390_flic.txt b/Documentation/virtual/kvm/devices/s390_flic.txt
+index 4ceef53..d1ad9d5 100644
+--- a/Documentation/virtual/kvm/devices/s390_flic.txt
++++ b/Documentation/virtual/kvm/devices/s390_flic.txt
+@@ -27,6 +27,9 @@ Groups:
+     Copies all floating interrupts into a buffer provided by userspace.
+     When the buffer is too small it returns -ENOMEM, which is the indication
+     for userspace to try again with a bigger buffer.
++    -ENOBUFS is returned when the allocation of a kernelspace buffer has
++    failed.
++    -EFAULT is returned when copying data to userspace failed.
+     All interrupts remain pending, i.e. are not deleted from the list of
+     currently pending interrupts.
+     attr->addr contains the userspace address of the buffer into which all
+diff --git a/Makefile b/Makefile
+index f499cd2..0649a60 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index fec1fca..6c4bc53 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -167,7 +167,13 @@
+ 
+ 			macb1: ethernet@f802c000 {
+ 				phy-mode = "rmii";
++				#address-cells = <1>;
++				#size-cells = <0>;
+ 				status = "okay";
++
++				ethernet-phy@1 {
++					reg = <0x1>;
++				};
+ 			};
+ 
+ 			dbgu: serial@ffffee00 {
+diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
+index a5441d5..3cc8b83 100644
+--- a/arch/arm/boot/dts/dove.dtsi
++++ b/arch/arm/boot/dts/dove.dtsi
+@@ -154,7 +154,7 @@
+ 
+ 			uart2: serial@12200 {
+ 				compatible = "ns16550a";
+-				reg = <0x12000 0x100>;
++				reg = <0x12200 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <9>;
+ 				clocks = <&core_clk 0>;
+@@ -163,7 +163,7 @@
+ 
+ 			uart3: serial@12300 {
+ 				compatible = "ns16550a";
+-				reg = <0x12100 0x100>;
++				reg = <0x12300 0x100>;
+ 				reg-shift = <2>;
+ 				interrupts = <10>;
+ 				clocks = <&core_clk 0>;
+diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
+index f027754..c41600e 100644
+--- a/arch/arm/boot/dts/exynos5250-spring.dts
++++ b/arch/arm/boot/dts/exynos5250-spring.dts
+@@ -429,7 +429,6 @@
+ &mmc_0 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -437,11 +436,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_cd &sd0_bus4 &sd0_bus8>;
+-
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <8>;
+-	};
++	bus-width = <8>;
++	cap-mmc-highspeed;
+ };
+ 
+ /*
+@@ -451,7 +447,6 @@
+ &mmc_1 {
+ 	status = "okay";
+ 	num-slots = <1>;
+-	supports-highspeed;
+ 	broken-cd;
+ 	card-detect-delay = <200>;
+ 	samsung,dw-mshc-ciu-div = <3>;
+@@ -459,11 +454,8 @@
+ 	samsung,dw-mshc-ddr-timing = <1 2>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd1_clk &sd1_cmd &sd1_cd &sd1_bus4>;
+-
+-	slot@0 {
+-		reg = <0>;
+-		bus-width = <4>;
+-	};
++	bus-width = <4>;
++	cap-sd-highspeed;
+ };
+ 
+ &pinctrl_0 {
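
[Note] The exynos5250-spring changes track the dw_mmc binding update: the deprecated supports-highspeed property and the per-slot slot@0 subnodes give way to top-level bus-width plus cap-mmc-highspeed/cap-sd-highspeed properties.
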
+diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
+index afb9caf..674d03f 100644
+--- a/arch/arm/include/asm/elf.h
++++ b/arch/arm/include/asm/elf.h
+@@ -115,7 +115,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
+    the loader.  We need to make sure that it is out of the way of the program
+    that it will "exec", and that there is sufficient room for the brk.  */
+ 
+-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
++#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
+ 
+ /* When the program starts, a1 contains a pointer to a function to be 
+    registered with atexit, as per the SVR4 ABI.  A value of 0 means we 
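
[Note] The ELF_ET_DYN_BASE change looks cosmetic but is an integer-overflow fix: TASK_SIZE on 32-bit ARM sits near the top of unsigned long, so 2 * TASK_SIZE wraps before the division, while dividing first keeps every intermediate value in range. A runnable demonstration with a representative 0xC0000000 split:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t task_size = 0xC0000000u;       /* typical 32-bit ARM split */

        uint32_t wrapped = 2 * task_size / 3;   /* 2*TASK_SIZE overflows first */
        uint32_t correct = task_size / 3 * 2;   /* divide first: stays in range */

        printf("2*T/3 = 0x%08x (wrapped)\n", wrapped);   /* 0x2aaaaaaa */
        printf("T/3*2 = 0x%08x (intended)\n", correct);  /* 0x80000000 */
        return 0;
}
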
+diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
+index 0db25bc..3a42ac6 100644
+--- a/arch/arm/include/uapi/asm/kvm.h
++++ b/arch/arm/include/uapi/asm/kvm.h
+@@ -195,8 +195,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
+ 
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
+ 
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
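
[Note] Wrapping KVM_ARM_IRQ_GIC_MAX in #ifndef __KERNEL__ keeps the obsolete constant visible to userspace built against the UAPI header while ensuring no kernel code can quietly keep depending on it; the kvm/arm.c hunk below drops the corresponding upper-bound check accordingly.
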
+diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c
+index c4cc50e..cfb354f 100644
+--- a/arch/arm/kernel/hibernate.c
++++ b/arch/arm/kernel/hibernate.c
+@@ -22,6 +22,7 @@
+ #include <asm/suspend.h>
+ #include <asm/memory.h>
+ #include <asm/sections.h>
++#include "reboot.h"
+ 
+ int pfn_is_nosave(unsigned long pfn)
+ {
+@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused)
+ 
+ 	ret = swsusp_save();
+ 	if (ret == 0)
+-		soft_restart(virt_to_phys(cpu_resume));
++		_soft_restart(virt_to_phys(cpu_resume), false);
+ 	return ret;
+ }
+ 
+@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused)
+ 	for (pbe = restore_pblist; pbe; pbe = pbe->next)
+ 		copy_page(pbe->orig_address, pbe->address);
+ 
+-	soft_restart(virt_to_phys(cpu_resume));
++	_soft_restart(virt_to_phys(cpu_resume), false);
+ }
+ 
+ static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index fdfa3a7..2bf1a16 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -41,6 +41,7 @@
+ #include <asm/system_misc.h>
+ #include <asm/mach/time.h>
+ #include <asm/tls.h>
++#include "reboot.h"
+ 
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ #include <linux/stackprotector.h>
+@@ -95,7 +96,7 @@ static void __soft_restart(void *addr)
+ 	BUG();
+ }
+ 
+-void soft_restart(unsigned long addr)
++void _soft_restart(unsigned long addr, bool disable_l2)
+ {
+ 	u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
+ 
+@@ -104,7 +105,7 @@ void soft_restart(unsigned long addr)
+ 	local_fiq_disable();
+ 
+ 	/* Disable the L2 if we're the last man standing. */
+-	if (num_online_cpus() == 1)
++	if (disable_l2)
+ 		outer_disable();
+ 
+ 	/* Change to the new stack and continue with the reset. */
+@@ -114,6 +115,11 @@ void soft_restart(unsigned long addr)
+ 	BUG();
+ }
+ 
++void soft_restart(unsigned long addr)
++{
++	_soft_restart(addr, num_online_cpus() == 1);
++}
++
+ /*
+  * Function pointers to optional machine specific functions
+  */
+diff --git a/arch/arm/kernel/reboot.h b/arch/arm/kernel/reboot.h
+new file mode 100644
+index 0000000..c87f058
+--- /dev/null
++++ b/arch/arm/kernel/reboot.h
+@@ -0,0 +1,6 @@
++#ifndef REBOOT_H
++#define REBOOT_H
++
++extern void _soft_restart(unsigned long addr, bool disable_l2);
++
++#endif
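
[Note] The process.c/reboot.h pair splits the policy out of soft_restart(): hibernation must jump to the resume code with the L2 cache left enabled even when it is the only online CPU, so arch_save_image() and arch_restore_image() call the new _soft_restart(addr, false) directly. A standalone sketch of the wrapper split (names and behaviour illustrative):

#include <stdbool.h>
#include <stdio.h>

static int num_online_cpus(void) { return 1; }          /* stub */

/* Worker, analogous to _soft_restart(): policy passed in. */
static void soft_restart_worker(unsigned long addr, bool disable_l2)
{
        if (disable_l2)
                printf("disabling L2 cache\n");
        printf("restarting at %#lx\n", addr);
}

/* Default entry point keeps the old behaviour. */
static void soft_restart_demo(unsigned long addr)
{
        soft_restart_worker(addr, num_online_cpus() == 1);
}

int main(void)
{
        soft_restart_demo(0x8000UL);                    /* normal reboot path */
        soft_restart_worker(0x8000UL, false);           /* hibernate-style path */
        return 0;
}
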
+diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
+index 5560f74..b652af5 100644
+--- a/arch/arm/kvm/arm.c
++++ b/arch/arm/kvm/arm.c
+@@ -651,8 +651,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
+ 		if (!irqchip_in_kernel(kvm))
+ 			return -ENXIO;
+ 
+-		if (irq_num < VGIC_NR_PRIVATE_IRQS ||
+-		    irq_num > KVM_ARM_IRQ_GIC_MAX)
++		if (irq_num < VGIC_NR_PRIVATE_IRQS)
+ 			return -EINVAL;
+ 
+ 		return kvm_vgic_inject_irq(kvm, 0, irq_num, level);
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 8b9f5e2..4f4e222 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -415,6 +415,9 @@ static __init int armada_38x_cpuidle_init(void)
+ 	void __iomem *mpsoc_base;
+ 	u32 reg;
+ 
++	pr_warn("CPU idle is currently broken on Armada 38x: disabling");
++	return 0;
++
+ 	np = of_find_compatible_node(NULL, NULL,
+ 				     "marvell,armada-380-coherency-fabric");
+ 	if (!np)
+@@ -476,6 +479,16 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return 0;
+ 	of_node_put(np);
+ 
++	/*
++	 * Currently the CPU idle support for Armada 38x is broken, as
++	 * the CPU hotplug uses some of the CPU idle functions it is
++	 * broken too, so let's disable it
++	 */
++	if (of_machine_is_compatible("marvell,armada380")) {
++		cpu_hotplug_disable();
++		pr_warn("CPU hotplug support is currently broken on Armada 38x: disabling");
++	}
++
+ 	if (of_machine_is_compatible("marvell,armadaxp"))
+ 		ret = armada_xp_cpuidle_init();
+ 	else if (of_machine_is_compatible("marvell,armada370"))
+@@ -489,7 +502,8 @@ static int __init mvebu_v7_cpu_pm_init(void)
+ 		return ret;
+ 
+ 	mvebu_v7_pmsu_enable_l2_powerdown_onidle();
+-	platform_device_register(&mvebu_v7_cpuidle_device);
++	if (mvebu_v7_cpuidle_device.name)
++		platform_device_register(&mvebu_v7_cpuidle_device);
+ 	cpu_pm_register_notifier(&mvebu_v7_cpu_pm_notifier);
+ 
+ 	return 0;
+diff --git a/arch/arm/mach-s3c64xx/crag6410.h b/arch/arm/mach-s3c64xx/crag6410.h
+index 7bc6668..dcbe17f 100644
+--- a/arch/arm/mach-s3c64xx/crag6410.h
++++ b/arch/arm/mach-s3c64xx/crag6410.h
+@@ -14,6 +14,7 @@
+ #include <mach/gpio-samsung.h>
+ 
+ #define GLENFARCLAS_PMIC_IRQ_BASE	IRQ_BOARD_START
++#define BANFF_PMIC_IRQ_BASE		(IRQ_BOARD_START + 64)
+ 
+ #define PCA935X_GPIO_BASE		GPIO_BOARD_START
+ #define CODEC_GPIO_BASE			(GPIO_BOARD_START + 8)
+diff --git a/arch/arm/mach-s3c64xx/mach-crag6410.c b/arch/arm/mach-s3c64xx/mach-crag6410.c
+index 10b913b..65c426b 100644
+--- a/arch/arm/mach-s3c64xx/mach-crag6410.c
++++ b/arch/arm/mach-s3c64xx/mach-crag6410.c
+@@ -554,6 +554,7 @@ static struct wm831x_touch_pdata touch_pdata = {
+ 
+ static struct wm831x_pdata crag_pmic_pdata = {
+ 	.wm831x_num = 1,
++	.irq_base = BANFF_PMIC_IRQ_BASE,
+ 	.gpio_base = BANFF_PMIC_GPIO_BASE,
+ 	.soft_shutdown = true,
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 1b8e973..a6186c2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -361,6 +361,27 @@ config ARM64_ERRATUM_832075
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_845719
++	bool "Cortex-A53: 845719: a load might read incorrect data"
++	depends on COMPAT
++	default y
++	help
++	  This option adds an alternative code sequence to work around ARM
++	  erratum 845719 on Cortex-A53 parts up to r0p4.
++
++	  When running a compat (AArch32) userspace on an affected Cortex-A53
++	  part, a load at EL0 from a virtual address that matches the bottom 32
++	  bits of the virtual address used by a recent load at (AArch64) EL1
++	  might return incorrect data.
++
++	  The workaround is to write the contextidr_el1 register on exception
++	  return to a 32-bit task.
++	  Please note that this does not necessarily enable the workaround,
++	  as it depends on the alternative framework, which will only patch
++	  the kernel if an affected CPU is detected.
++
++	  If unsure, say Y.
++
+ endmenu
+ 
+ 
+@@ -470,6 +491,10 @@ config HOTPLUG_CPU
+ 
+ source kernel/Kconfig.preempt
+ 
++config UP_LATE_INIT
++       def_bool y
++       depends on !SMP
++
+ config HZ
+ 	int
+ 	default 100
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 69ceedc..4d2a925 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -48,7 +48,7 @@ core-$(CONFIG_KVM) += arch/arm64/kvm/
+ core-$(CONFIG_XEN) += arch/arm64/xen/
+ core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
+ libs-y		:= arch/arm64/lib/ $(libs-y)
+-libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
++core-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+ 
+ # Default target when executing plain make
+ KBUILD_IMAGE	:= Image.gz
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index b6c16d5..3f0c53c 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -23,8 +23,9 @@
+ 
+ #define ARM64_WORKAROUND_CLEAN_CACHE		0
+ #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
++#define ARM64_WORKAROUND_845719			2
+ 
+-#define ARM64_NCAPS				2
++#define ARM64_NCAPS				3
+ 
+ #ifndef __ASSEMBLY__
+ 
+diff --git a/arch/arm64/include/asm/smp_plat.h b/arch/arm64/include/asm/smp_plat.h
+index 59e2823..8dcd61e 100644
+--- a/arch/arm64/include/asm/smp_plat.h
++++ b/arch/arm64/include/asm/smp_plat.h
+@@ -40,4 +40,6 @@ static inline u32 mpidr_hash_size(void)
+ extern u64 __cpu_logical_map[NR_CPUS];
+ #define cpu_logical_map(cpu)    __cpu_logical_map[cpu]
+ 
++void __init do_post_cpus_up_work(void);
++
+ #endif /* __ASM_SMP_PLAT_H */
+diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
+index 3ef77a4..bc49a18 100644
+--- a/arch/arm64/include/uapi/asm/kvm.h
++++ b/arch/arm64/include/uapi/asm/kvm.h
+@@ -188,8 +188,14 @@ struct kvm_arch_memory_slot {
+ #define KVM_ARM_IRQ_CPU_IRQ		0
+ #define KVM_ARM_IRQ_CPU_FIQ		1
+ 
+-/* Highest supported SPI, from VGIC_NR_IRQS */
++/*
++ * This used to hold the highest supported SPI, but it is now obsolete
++ * and only here to provide source code level compatibility with older
++ * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
++ */
++#ifndef __KERNEL__
+ #define KVM_ARM_IRQ_GIC_MAX		127
++#endif
+ 
+ /* PSCI interface */
+ #define KVM_PSCI_FN_BASE		0x95c1ba5e
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index fa62637..ad6d523 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -88,7 +88,16 @@ struct arm64_cpu_capabilities arm64_errata[] = {
+ 	/* Cortex-A57 r0p0 - r1p2 */
+ 		.desc = "ARM erratum 832075",
+ 		.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
+-		MIDR_RANGE(MIDR_CORTEX_A57, 0x00, 0x12),
++		MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
++			   (1 << MIDR_VARIANT_SHIFT) | 2),
++	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	{
++	/* Cortex-A53 r0p[01234] */
++		.desc = "ARM erratum 845719",
++		.capability = ARM64_WORKAROUND_845719,
++		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
+ 	},
+ #endif
+ 	{
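
[Note] Two things happen in cpu_errata.c: the Cortex-A57 832075 range gains a correctly encoded upper bound, and a new Cortex-A53 845719 entry is added. The bound is matched against the variant and revision fields of MIDR_EL1, so "r1p2" must be written as variant 1 in the variant field plus revision 2 in the low bits, not as the literal 0x12. A small runnable illustration (MIDR_VARIANT_SHIFT taken to be 20 here, as in the 4.0-era arm64 headers; treat that as an assumption):

#include <stdio.h>

#define MIDR_VARIANT_SHIFT 20

int main(void)
{
        unsigned int r1p2 = (1u << MIDR_VARIANT_SHIFT) | 2;

        printf("r1p2 encodes as %#x (variant=%u, revision=%u)\n",
               r1p2, r1p2 >> MIDR_VARIANT_SHIFT, r1p2 & 0xf);
        return 0;
}
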
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index cf21bb3..959fe87 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -21,8 +21,10 @@
+ #include <linux/init.h>
+ #include <linux/linkage.h>
+ 
++#include <asm/alternative-asm.h>
+ #include <asm/assembler.h>
+ #include <asm/asm-offsets.h>
++#include <asm/cpufeature.h>
+ #include <asm/errno.h>
+ #include <asm/esr.h>
+ #include <asm/thread_info.h>
+@@ -120,6 +122,24 @@
+ 	ct_user_enter
+ 	ldr	x23, [sp, #S_SP]		// load return stack pointer
+ 	msr	sp_el0, x23
++
++#ifdef CONFIG_ARM64_ERRATUM_845719
++	alternative_insn						\
++	"nop",								\
++	"tbz x22, #4, 1f",						\
++	ARM64_WORKAROUND_845719
++#ifdef CONFIG_PID_IN_CONTEXTIDR
++	alternative_insn						\
++	"nop; nop",							\
++	"mrs x29, contextidr_el1; msr contextidr_el1, x29; 1:",		\
++	ARM64_WORKAROUND_845719
++#else
++	alternative_insn						\
++	"nop",								\
++	"msr contextidr_el1, xzr; 1:",					\
++	ARM64_WORKAROUND_845719
++#endif
++#endif
+ 	.endif
+ 	msr	elr_el1, x21			// set up the return data
+ 	msr	spsr_el1, x22
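
[Note] A detail worth noting in the entry.S workaround: alternative_insn patches a fixed-size window, so each branch supplies exactly as many nops as its replacement has instructions, which is why the CONFIG_PID_IN_CONTEXTIDR variant pads with "nop; nop" while the other needs a single nop. The tbz on bit 4 of the saved SPSR in x22 skips the contextidr_el1 write unless the exception return is to 32-bit state.
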
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 07f9305..c237ffb 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -426,6 +426,7 @@ __create_page_tables:
+ 	 */
+ 	mov	x0, x25
+ 	add	x1, x26, #SWAPPER_DIR_SIZE
++	dmb	sy
+ 	bl	__inval_cache_range
+ 
+ 	mov	lr, x27
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index e8420f6..781f469 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -207,6 +207,18 @@ static void __init smp_build_mpidr_hash(void)
+ }
+ #endif
+ 
++void __init do_post_cpus_up_work(void)
++{
++	apply_alternatives_all();
++}
++
++#ifdef CONFIG_UP_LATE_INIT
++void __init up_late_init(void)
++{
++	do_post_cpus_up_work();
++}
++#endif /* CONFIG_UP_LATE_INIT */
++
+ static void __init setup_processor(void)
+ {
+ 	struct cpu_info *cpu_info;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 328b8ce..4257369 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -309,7 +309,7 @@ void cpu_die(void)
+ void __init smp_cpus_done(unsigned int max_cpus)
+ {
+ 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
+-	apply_alternatives_all();
++	do_post_cpus_up_work();
+ }
+ 
+ void __init smp_prepare_boot_cpu(void)
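
[Note] The setup.c/smp.c pair routes alternatives patching through one helper: on SMP kernels do_post_cpus_up_work() still runs from smp_cpus_done(), while the new CONFIG_UP_LATE_INIT hook gives uniprocessor builds an equivalent late call site, so errata alternatives such as 845719 above are applied in either configuration.
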
+diff --git a/arch/c6x/kernel/time.c b/arch/c6x/kernel/time.c
+index 356ee84..04845aa 100644
+--- a/arch/c6x/kernel/time.c
++++ b/arch/c6x/kernel/time.c
+@@ -49,7 +49,7 @@ u64 sched_clock(void)
+ 	return (tsc * sched_clock_multiplier) >> SCHED_CLOCK_SHIFT;
+ }
+ 
+-void time_init(void)
++void __init time_init(void)
+ {
+ 	u64 tmp = (u64)NSEC_PER_SEC << SCHED_CLOCK_SHIFT;
+ 
+diff --git a/arch/mips/include/asm/asm-eva.h b/arch/mips/include/asm/asm-eva.h
+index e41c56e..1e38f0e 100644
+--- a/arch/mips/include/asm/asm-eva.h
++++ b/arch/mips/include/asm/asm-eva.h
+@@ -11,6 +11,36 @@
+ #define __ASM_ASM_EVA_H
+ 
+ #ifndef __ASSEMBLY__
++
++/* Kernel variants */
++
++#define kernel_cache(op, base)		"cache " op ", " base "\n"
++#define kernel_ll(reg, addr)		"ll " reg ", " addr "\n"
++#define kernel_sc(reg, addr)		"sc " reg ", " addr "\n"
++#define kernel_lw(reg, addr)		"lw " reg ", " addr "\n"
++#define kernel_lwl(reg, addr)		"lwl " reg ", " addr "\n"
++#define kernel_lwr(reg, addr)		"lwr " reg ", " addr "\n"
++#define kernel_lh(reg, addr)		"lh " reg ", " addr "\n"
++#define kernel_lb(reg, addr)		"lb " reg ", " addr "\n"
++#define kernel_lbu(reg, addr)		"lbu " reg ", " addr "\n"
++#define kernel_sw(reg, addr)		"sw " reg ", " addr "\n"
++#define kernel_swl(reg, addr)		"swl " reg ", " addr "\n"
++#define kernel_swr(reg, addr)		"swr " reg ", " addr "\n"
++#define kernel_sh(reg, addr)		"sh " reg ", " addr "\n"
++#define kernel_sb(reg, addr)		"sb " reg ", " addr "\n"
++
++#ifdef CONFIG_32BIT
++/*
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
++ */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		"sd " reg", " addr "\n"
++#define kernel_ld(reg, addr)		"ld " reg", " addr "\n"
++#endif /* CONFIG_32BIT */
++
+ #ifdef CONFIG_EVA
+ 
+ #define __BUILD_EVA_INSN(insn, reg, addr)				\
+@@ -41,37 +71,60 @@
+ 
+ #else
+ 
+-#define user_cache(op, base)		"cache " op ", " base "\n"
+-#define user_ll(reg, addr)		"ll " reg ", " addr "\n"
+-#define user_sc(reg, addr)		"sc " reg ", " addr "\n"
+-#define user_lw(reg, addr)		"lw " reg ", " addr "\n"
+-#define user_lwl(reg, addr)		"lwl " reg ", " addr "\n"
+-#define user_lwr(reg, addr)		"lwr " reg ", " addr "\n"
+-#define user_lh(reg, addr)		"lh " reg ", " addr "\n"
+-#define user_lb(reg, addr)		"lb " reg ", " addr "\n"
+-#define user_lbu(reg, addr)		"lbu " reg ", " addr "\n"
+-#define user_sw(reg, addr)		"sw " reg ", " addr "\n"
+-#define user_swl(reg, addr)		"swl " reg ", " addr "\n"
+-#define user_swr(reg, addr)		"swr " reg ", " addr "\n"
+-#define user_sh(reg, addr)		"sh " reg ", " addr "\n"
+-#define user_sb(reg, addr)		"sb " reg ", " addr "\n"
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
+ 
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		"sd " reg", " addr "\n"
+-#define user_ld(reg, addr)		"ld " reg", " addr "\n"
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_ld(reg, addr)
+ #endif /* CONFIG_32BIT */
+ 
+ #endif /* CONFIG_EVA */
+ 
+ #else /* __ASSEMBLY__ */
+ 
++#define kernel_cache(op, base)		cache op, base
++#define kernel_ll(reg, addr)		ll reg, addr
++#define kernel_sc(reg, addr)		sc reg, addr
++#define kernel_lw(reg, addr)		lw reg, addr
++#define kernel_lwl(reg, addr)		lwl reg, addr
++#define kernel_lwr(reg, addr)		lwr reg, addr
++#define kernel_lh(reg, addr)		lh reg, addr
++#define kernel_lb(reg, addr)		lb reg, addr
++#define kernel_lbu(reg, addr)		lbu reg, addr
++#define kernel_sw(reg, addr)		sw reg, addr
++#define kernel_swl(reg, addr)		swl reg, addr
++#define kernel_swr(reg, addr)		swr reg, addr
++#define kernel_sh(reg, addr)		sh reg, addr
++#define kernel_sb(reg, addr)		sb reg, addr
++
++#ifdef CONFIG_32BIT
++/*
++ * No 'sd' or 'ld' instructions in 32-bit but the code will
++ * do the correct thing
++ */
++#define kernel_sd(reg, addr)		user_sw(reg, addr)
++#define kernel_ld(reg, addr)		user_lw(reg, addr)
++#else
++#define kernel_sd(reg, addr)		sd reg, addr
++#define kernel_ld(reg, addr)		ld reg, addr
++#endif /* CONFIG_32BIT */
++
+ #ifdef CONFIG_EVA
+ 
+ #define __BUILD_EVA_INSN(insn, reg, addr)			\
+@@ -101,31 +154,27 @@
+ #define user_sd(reg, addr)		user_sw(reg, addr)
+ #else
+ 
+-#define user_cache(op, base)		cache op, base
+-#define user_ll(reg, addr)		ll reg, addr
+-#define user_sc(reg, addr)		sc reg, addr
+-#define user_lw(reg, addr)		lw reg, addr
+-#define user_lwl(reg, addr)		lwl reg, addr
+-#define user_lwr(reg, addr)		lwr reg, addr
+-#define user_lh(reg, addr)		lh reg, addr
+-#define user_lb(reg, addr)		lb reg, addr
+-#define user_lbu(reg, addr)		lbu reg, addr
+-#define user_sw(reg, addr)		sw reg, addr
+-#define user_swl(reg, addr)		swl reg, addr
+-#define user_swr(reg, addr)		swr reg, addr
+-#define user_sh(reg, addr)		sh reg, addr
+-#define user_sb(reg, addr)		sb reg, addr
++#define user_cache(op, base)		kernel_cache(op, base)
++#define user_ll(reg, addr)		kernel_ll(reg, addr)
++#define user_sc(reg, addr)		kernel_sc(reg, addr)
++#define user_lw(reg, addr)		kernel_lw(reg, addr)
++#define user_lwl(reg, addr)		kernel_lwl(reg, addr)
++#define user_lwr(reg, addr)		kernel_lwr(reg, addr)
++#define user_lh(reg, addr)		kernel_lh(reg, addr)
++#define user_lb(reg, addr)		kernel_lb(reg, addr)
++#define user_lbu(reg, addr)		kernel_lbu(reg, addr)
++#define user_sw(reg, addr)		kernel_sw(reg, addr)
++#define user_swl(reg, addr)		kernel_swl(reg, addr)
++#define user_swr(reg, addr)		kernel_swr(reg, addr)
++#define user_sh(reg, addr)		kernel_sh(reg, addr)
++#define user_sb(reg, addr)		kernel_sb(reg, addr)
+ 
+ #ifdef CONFIG_32BIT
+-/*
+- * No 'sd' or 'ld' instructions in 32-bit but the code will
+- * do the correct thing
+- */
+-#define user_sd(reg, addr)		user_sw(reg, addr)
+-#define user_ld(reg, addr)		user_lw(reg, addr)
++#define user_sd(reg, addr)		kernel_sw(reg, addr)
++#define user_ld(reg, addr)		kernel_lw(reg, addr)
+ #else
+-#define user_sd(reg, addr)		sd reg, addr
+-#define user_ld(reg, addr)		ld reg, addr
++#define user_sd(reg, addr)		kernel_sd(reg, addr)
++#define user_ld(reg, addr)		kernel_sd(reg, addr)
+ #endif /* CONFIG_32BIT */
+ 
+ #endif /* CONFIG_EVA */
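
[Note] The asm-eva.h rework introduces kernel_* accessors alongside the user_* ones and, outside EVA, makes the user_* names forward to them, so later code can choose a family with a type parameter and ## token pasting. One mapping in the assembly branch looks suspect as printed: user_ld() is defined to kernel_sd() (a store) where the C branch pairs it with kernel_ld(); that reads like a typo, flagged here purely as an observation on this copy of the patch. A tiny standalone illustration of the token-pasting dispatch, with the accessor macros simplified to plain C assignments:

#include <stdio.h>

#define kernel_lw(dst, src)     (*(dst) = *(src))
#define user_lw(dst, src)       kernel_lw(dst, src)    /* non-EVA: same op */

#define LoadW(dst, src, type)   type##_lw(dst, src)    /* pick a family */

int main(void)
{
        int v = 0, mem = 123;

        LoadW(&v, &mem, kernel);
        printf("loaded %d via kernel variant\n", v);
        LoadW(&v, &mem, user);
        printf("loaded %d via user variant\n", v);
        return 0;
}
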
+diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h
+index dd083e9..9f26b07 100644
+--- a/arch/mips/include/asm/fpu.h
++++ b/arch/mips/include/asm/fpu.h
+@@ -170,6 +170,7 @@ static inline void lose_fpu(int save)
+ 		}
+ 		disable_msa();
+ 		clear_thread_flag(TIF_USEDMSA);
++		__disable_fpu();
+ 	} else if (is_fpu_owner()) {
+ 		if (save)
+ 			_save_fp(current);
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index ac4fc71..f722b05 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -322,6 +322,7 @@ enum mips_mmu_types {
+ #define T_TRAP			13	/* Trap instruction */
+ #define T_VCEI			14	/* Virtual coherency exception */
+ #define T_FPE			15	/* Floating point exception */
++#define T_MSADIS		21	/* MSA disabled exception */
+ #define T_WATCH			23	/* Watch address reference */
+ #define T_VCED			31	/* Virtual coherency data */
+ 
+@@ -578,6 +579,7 @@ struct kvm_mips_callbacks {
+ 	int (*handle_syscall)(struct kvm_vcpu *vcpu);
+ 	int (*handle_res_inst)(struct kvm_vcpu *vcpu);
+ 	int (*handle_break)(struct kvm_vcpu *vcpu);
++	int (*handle_msa_disabled)(struct kvm_vcpu *vcpu);
+ 	int (*vm_init)(struct kvm *kvm);
+ 	int (*vcpu_init)(struct kvm_vcpu *vcpu);
+ 	int (*vcpu_setup)(struct kvm_vcpu *vcpu);
+diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
+index bbb6969..7659da2 100644
+--- a/arch/mips/kernel/unaligned.c
++++ b/arch/mips/kernel/unaligned.c
+@@ -109,10 +109,11 @@ static u32 unaligned_action;
+ extern void show_registers(struct pt_regs *regs);
+ 
+ #ifdef __BIG_ENDIAN
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "0(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "0(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -127,13 +128,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"   \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -146,21 +149,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "0(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "0(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -178,14 +184,17 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "0(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "1(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "0(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "1(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -201,13 +210,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "(%2)")"\n"    \
+-			"2:\t"user_lwr("%0", "3(%2)")"\n\t" \
++			"1:\t"type##_lwl("%0", "(%2)")"\n"  \
++			"2:\t"type##_lwr("%0", "3(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -222,9 +233,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, (%2)\n"               \
+ 			"2:\tldr\t%0, 7(%2)\n\t"            \
+@@ -240,21 +253,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has not lwl and ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "0(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "0(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "3(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "3(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -272,9 +288,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -319,16 +337,19 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "1(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "1(%2)")"\n"  \
+ 			"srl\t$1, %1, 0x8\n"                \
+-			"2:\t"user_sb("$1", "0(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "0(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -342,13 +363,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "(%2)")"\n"    \
+-			"2:\t"user_swr("%1", "3(%2)")"\n\t" \
++			"1:\t"type##_swl("%1", "(%2)")"\n"  \
++			"2:\t"type##_swr("%1", "3(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -361,9 +384,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1,(%2)\n"                \
+ 			"2:\tsdr\t%1, 7(%2)\n\t"            \
+@@ -379,20 +404,23 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
++
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "3(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "3(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "2(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "1(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "0(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "0(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -409,9 +437,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
+ 
+ #define     StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -451,15 +481,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ #else /* __BIG_ENDIAN */
+ 
+-#define     LoadHW(addr, value, res)  \
++#define     _LoadHW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (".set\tnoat\n"        \
+-			"1:\t"user_lb("%0", "1(%2)")"\n"    \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lb("%0", "1(%2)")"\n"  \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -474,13 +507,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadW(addr, value, res)   \
++#define     _LoadW(addr, value, res, type)   \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"li\t%1, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -493,21 +528,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has no lwl instruction */
+-#define     LoadW(addr, value, res) \
++#define     _LoadW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n"			    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lb("%0", "3(%2)")"\n\t"    \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lb("%0", "3(%2)")"\n\t"  \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -525,15 +563,18 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+ 
+-#define     LoadHWU(addr, value, res) \
++#define     _LoadHWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_lbu("%0", "1(%2)")"\n"   \
+-			"2:\t"user_lbu("$1", "0(%2)")"\n\t" \
++			"1:\t"type##_lbu("%0", "1(%2)")"\n" \
++			"2:\t"type##_lbu("$1", "0(%2)")"\n\t"\
+ 			"sll\t%0, 0x8\n\t"                  \
+ 			"or\t%0, $1\n\t"                    \
+ 			"li\t%1, 0\n"                       \
+@@ -549,13 +590,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     LoadWU(addr, value, res)  \
++#define     _LoadWU(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_lwl("%0", "3(%2)")"\n"   \
+-			"2:\t"user_lwr("%0", "(%2)")"\n\t"  \
++			"1:\t"type##_lwl("%0", "3(%2)")"\n" \
++			"2:\t"type##_lwr("%0", "(%2)")"\n\t"\
+ 			"dsll\t%0, %0, 32\n\t"              \
+ 			"dsrl\t%0, %0, 32\n\t"              \
+ 			"li\t%1, 0\n"                       \
+@@ -570,9 +613,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tldl\t%0, 7(%2)\n"              \
+ 			"2:\tldr\t%0, (%2)\n\t"             \
+@@ -588,21 +633,24 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=&r" (value), "=r" (res)         \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
++
+ #else
+ /* MIPSR6 has not lwl and ldl instructions */
+-#define	    LoadWU(addr, value, res) \
++#define	    _LoadWU(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_lbu("%0", "3(%2)")"\n\t"   \
+-			"2:"user_lbu("$1", "2(%2)")"\n\t"   \
++			"1:"type##_lbu("%0", "3(%2)")"\n\t" \
++			"2:"type##_lbu("$1", "2(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"3:"user_lbu("$1", "1(%2)")"\n\t"   \
++			"3:"type##_lbu("$1", "1(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+-			"4:"user_lbu("$1", "0(%2)")"\n\t"   \
++			"4:"type##_lbu("$1", "0(%2)")"\n\t" \
+ 			"sll\t%0, 0x8\n\t"		    \
+ 			"or\t%0, $1\n\t"		    \
+ 			"li\t%1, 0\n"			    \
+@@ -620,9 +668,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t4b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ 
+-#define     LoadDW(addr, value, res)  \
++#define     _LoadDW(addr, value, res)  \
++do {                                                        \
+ 		__asm__ __volatile__ (			    \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -667,15 +717,17 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t8b, 11b\n\t"		    \
+ 			".previous"			    \
+ 			: "=&r" (value), "=r" (res)	    \
+-			: "r" (addr), "i" (-EFAULT));
++			: "r" (addr), "i" (-EFAULT));       \
++} while(0)
+ #endif /* CONFIG_CPU_MIPSR6 */
+ 
+-#define     StoreHW(addr, value, res) \
++#define     _StoreHW(addr, value, res, type) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tnoat\n"                      \
+-			"1:\t"user_sb("%1", "0(%2)")"\n"    \
++			"1:\t"type##_sb("%1", "0(%2)")"\n"  \
+ 			"srl\t$1,%1, 0x8\n"                 \
+-			"2:\t"user_sb("$1", "1(%2)")"\n"    \
++			"2:\t"type##_sb("$1", "1(%2)")"\n"  \
+ 			".set\tat\n\t"                      \
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+@@ -689,12 +741,15 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 			: "=r" (res)                        \
+-			: "r" (value), "r" (addr), "i" (-EFAULT));
++			: "r" (value), "r" (addr), "i" (-EFAULT));\
++} while(0)
++
+ #ifndef CONFIG_CPU_MIPSR6
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+-			"1:\t"user_swl("%1", "3(%2)")"\n"   \
+-			"2:\t"user_swr("%1", "(%2)")"\n\t"  \
++			"1:\t"type##_swl("%1", "3(%2)")"\n" \
++			"2:\t"type##_swr("%1", "(%2)")"\n\t"\
+ 			"li\t%0, 0\n"                       \
+ 			"3:\n\t"                            \
+ 			".insn\n\t"                         \
+@@ -707,9 +762,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			"1:\tsdl\t%1, 7(%2)\n"              \
+ 			"2:\tsdr\t%1, (%2)\n\t"             \
+@@ -725,20 +782,23 @@ extern void show_registers(struct pt_regs *regs);
+ 			STR(PTR)"\t2b, 4b\n\t"              \
+ 			".previous"                         \
+ 		: "=r" (res)                                \
+-		: "r" (value), "r" (addr), "i" (-EFAULT));
++		: "r" (value), "r" (addr), "i" (-EFAULT));  \
++} while(0)
++
+ #else
+ /* MIPSR6 has no swl and sdl instructions */
+-#define     StoreW(addr, value, res)  \
++#define     _StoreW(addr, value, res, type)  \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+-			"1:"user_sb("%1", "0(%2)")"\n\t"    \
++			"1:"type##_sb("%1", "0(%2)")"\n\t"  \
+ 			"srl\t$1, %1, 0x8\n\t"		    \
+-			"2:"user_sb("$1", "1(%2)")"\n\t"    \
++			"2:"type##_sb("$1", "1(%2)")"\n\t"  \
+ 			"srl\t$1, $1,  0x8\n\t"		    \
+-			"3:"user_sb("$1", "2(%2)")"\n\t"    \
++			"3:"type##_sb("$1", "2(%2)")"\n\t"  \
+ 			"srl\t$1, $1, 0x8\n\t"		    \
+-			"4:"user_sb("$1", "3(%2)")"\n\t"    \
++			"4:"type##_sb("$1", "3(%2)")"\n\t"  \
+ 			".set\tpop\n\t"			    \
+ 			"li\t%0, 0\n"			    \
+ 			"10:\n\t"			    \
+@@ -755,9 +815,11 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
+ 
+-#define     StoreDW(addr, value, res) \
++#define     _StoreDW(addr, value, res) \
++do {                                                        \
+ 		__asm__ __volatile__ (                      \
+ 			".set\tpush\n\t"		    \
+ 			".set\tnoat\n\t"		    \
+@@ -797,10 +859,28 @@ extern void show_registers(struct pt_regs *regs);
+ 			".previous"			    \
+ 		: "=&r" (res)			    	    \
+ 		: "r" (value), "r" (addr), "i" (-EFAULT)    \
+-		: "memory");
++		: "memory");                                \
++} while(0)
++
+ #endif /* CONFIG_CPU_MIPSR6 */
+ #endif
+ 
++#define LoadHWU(addr, value, res)	_LoadHWU(addr, value, res, kernel)
++#define LoadHWUE(addr, value, res)	_LoadHWU(addr, value, res, user)
++#define LoadWU(addr, value, res)	_LoadWU(addr, value, res, kernel)
++#define LoadWUE(addr, value, res)	_LoadWU(addr, value, res, user)
++#define LoadHW(addr, value, res)	_LoadHW(addr, value, res, kernel)
++#define LoadHWE(addr, value, res)	_LoadHW(addr, value, res, user)
++#define LoadW(addr, value, res)		_LoadW(addr, value, res, kernel)
++#define LoadWE(addr, value, res)	_LoadW(addr, value, res, user)
++#define LoadDW(addr, value, res)	_LoadDW(addr, value, res)
++
++#define StoreHW(addr, value, res)	_StoreHW(addr, value, res, kernel)
++#define StoreHWE(addr, value, res)	_StoreHW(addr, value, res, user)
++#define StoreW(addr, value, res)	_StoreW(addr, value, res, kernel)
++#define StoreWE(addr, value, res)	_StoreW(addr, value, res, user)
++#define StoreDW(addr, value, res)	_StoreDW(addr, value, res)
++
+ static void emulate_load_store_insn(struct pt_regs *regs,
+ 	void __user *addr, unsigned int __user *pc)
+ {
+@@ -872,7 +952,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHW(addr, value, res);
++			LoadHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -885,7 +965,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-				LoadW(addr, value, res);
++				LoadWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -898,7 +978,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 				set_fs(seg);
+ 				goto sigbus;
+ 			}
+-			LoadHWU(addr, value, res);
++			LoadHWUE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -913,7 +993,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreHW(addr, value, res);
++			StoreHWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -926,7 +1006,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 			}
+ 			compute_return_epc(regs);
+ 			value = regs->regs[insn.spec3_format.rt];
+-			StoreW(addr, value, res);
++			StoreWE(addr, value, res);
+ 			if (res) {
+ 				set_fs(seg);
+ 				goto fault;
+@@ -943,7 +1023,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHW(addr, value, res);
++			else
++				LoadHWE(addr, value, res);
++		} else {
++			LoadHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -954,7 +1042,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 4))
+ 			goto sigbus;
+ 
+-		LoadW(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadW(addr, value, res);
++			else
++				LoadWE(addr, value, res);
++		} else {
++			LoadW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -965,7 +1061,15 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 		if (!access_ok(VERIFY_READ, addr, 2))
+ 			goto sigbus;
+ 
+-		LoadHWU(addr, value, res);
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				LoadHWU(addr, value, res);
++			else
++				LoadHWUE(addr, value, res);
++		} else {
++			LoadHWU(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		compute_return_epc(regs);
+@@ -1024,7 +1128,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreHW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreHW(addr, value, res);
++			else
++				StoreHWE(addr, value, res);
++		} else {
++			StoreHW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
+@@ -1035,7 +1148,16 @@ static void emulate_load_store_insn(struct pt_regs *regs,
+ 
+ 		compute_return_epc(regs);
+ 		value = regs->regs[insn.i_format.rt];
+-		StoreW(addr, value, res);
++
++		if (config_enabled(CONFIG_EVA)) {
++			if (segment_eq(get_fs(), get_ds()))
++				StoreW(addr, value, res);
++			else
++				StoreWE(addr, value, res);
++		} else {
++			StoreW(addr, value, res);
++		}
++
+ 		if (res)
+ 			goto fault;
+ 		break;
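
[Note] unaligned.c then consumes that split: with EVA enabled, the emulator picks the kernel_* or user_* accessors depending on whether the faulting access ran with a kernel address limit. A standalone sketch of the dispatch, where the two flags stand in for config_enabled(CONFIG_EVA) and segment_eq(get_fs(), get_ds()) and the load functions are illustrative:

#include <stdbool.h>
#include <stdio.h>

static bool eva_enabled = true;         /* config_enabled(CONFIG_EVA) */
static bool fs_is_kernel = false;       /* segment_eq(get_fs(), get_ds()) */

static void load_word_kernel(void) { printf("kernel_lw path\n"); }
static void load_word_user(void)   { printf("user (EVA) lw path\n"); }

int main(void)
{
        if (eva_enabled) {
                if (fs_is_kernel)
                        load_word_kernel();
                else
                        load_word_user();
        } else {
                load_word_kernel();     /* non-EVA: one family only */
        }
        return 0;
}
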
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index fb3e8df..838d3a6 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -2176,6 +2176,7 @@ enum emulation_result kvm_mips_check_privilege(unsigned long cause,
+ 		case T_SYSCALL:
+ 		case T_BREAK:
+ 		case T_RES_INST:
++		case T_MSADIS:
+ 			break;
+ 
+ 		case T_COP_UNUSABLE:
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index c9eccf5..f5e7dda 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -1119,6 +1119,10 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+ 		ret = kvm_mips_callbacks->handle_break(vcpu);
+ 		break;
+ 
++	case T_MSADIS:
++		ret = kvm_mips_callbacks->handle_msa_disabled(vcpu);
++		break;
++
+ 	default:
+ 		kvm_err("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
+ 			exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
+diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
+index fd7257b..4372cc8 100644
+--- a/arch/mips/kvm/trap_emul.c
++++ b/arch/mips/kvm/trap_emul.c
+@@ -330,6 +330,33 @@ static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
+ 	return ret;
+ }
+ 
++static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
++{
++	struct kvm_run *run = vcpu->run;
++	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
++	unsigned long cause = vcpu->arch.host_cp0_cause;
++	enum emulation_result er = EMULATE_DONE;
++	int ret = RESUME_GUEST;
++
++	/* No MSA supported in guest, guest reserved instruction exception */
++	er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
++
++	switch (er) {
++	case EMULATE_DONE:
++		ret = RESUME_GUEST;
++		break;
++
++	case EMULATE_FAIL:
++		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
++		ret = RESUME_HOST;
++		break;
++
++	default:
++		BUG();
++	}
++	return ret;
++}
++
+ static int kvm_trap_emul_vm_init(struct kvm *kvm)
+ {
+ 	return 0;
+@@ -470,6 +497,7 @@ static struct kvm_mips_callbacks kvm_trap_emul_callbacks = {
+ 	.handle_syscall = kvm_trap_emul_handle_syscall,
+ 	.handle_res_inst = kvm_trap_emul_handle_res_inst,
+ 	.handle_break = kvm_trap_emul_handle_break,
++	.handle_msa_disabled = kvm_trap_emul_handle_msa_disabled,
+ 
+ 	.vm_init = kvm_trap_emul_vm_init,
+ 	.vcpu_init = kvm_trap_emul_vcpu_init,
+diff --git a/arch/mips/loongson/loongson-3/irq.c b/arch/mips/loongson/loongson-3/irq.c
+index 21221ed..0f75b6b 100644
+--- a/arch/mips/loongson/loongson-3/irq.c
++++ b/arch/mips/loongson/loongson-3/irq.c
+@@ -44,6 +44,7 @@ void mach_irq_dispatch(unsigned int pending)
+ 
+ static struct irqaction cascade_irqaction = {
+ 	.handler = no_action,
++	.flags = IRQF_NO_SUSPEND,
+ 	.name = "cascade",
+ };
+ 
+diff --git a/arch/mips/mti-malta/malta-memory.c b/arch/mips/mti-malta/malta-memory.c
+index 8fddd2cd..efe366d 100644
+--- a/arch/mips/mti-malta/malta-memory.c
++++ b/arch/mips/mti-malta/malta-memory.c
+@@ -53,6 +53,12 @@ fw_memblock_t * __init fw_getmdesc(int eva)
+ 		pr_warn("memsize not set in YAMON, set to default (32Mb)\n");
+ 		physical_memsize = 0x02000000;
+ 	} else {
++		if (memsize > (256 << 20)) { /* memsize should be capped to 256M */
++			pr_warn("Unsupported memsize value (0x%lx) detected! "
++				"Using 0x10000000 (256M) instead\n",
++				memsize);
++			memsize = 256 << 20;
++		}
+ 		/* If ememsize is set, then set physical_memsize to that */
+ 		physical_memsize = ememsize ? : memsize;
+ 	}
+diff --git a/arch/mips/power/hibernate.S b/arch/mips/power/hibernate.S
+index 32a7c82..e7567c8 100644
+--- a/arch/mips/power/hibernate.S
++++ b/arch/mips/power/hibernate.S
+@@ -30,6 +30,8 @@ LEAF(swsusp_arch_suspend)
+ END(swsusp_arch_suspend)
+ 
+ LEAF(swsusp_arch_resume)
++	/* Avoid TLB mismatch during and after kernel resume */
++	jal local_flush_tlb_all
+ 	PTR_L t0, restore_pblist
+ 0:
+ 	PTR_L t1, PBE_ADDRESS(t0)   /* source */
+@@ -43,7 +45,6 @@ LEAF(swsusp_arch_resume)
+ 	bne t1, t3, 1b
+ 	PTR_L t0, PBE_NEXT(t0)
+ 	bnez t0, 0b
+-	jal local_flush_tlb_all /* Avoid TLB mismatch after kernel resume */
+ 	PTR_LA t0, saved_regs
+ 	PTR_L ra, PT_R31(t0)
+ 	PTR_L sp, PT_R29(t0)
+diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
+index ae77b7e..c641983 100644
+--- a/arch/powerpc/kernel/cacheinfo.c
++++ b/arch/powerpc/kernel/cacheinfo.c
+@@ -61,12 +61,22 @@ struct cache_type_info {
+ };
+ 
+ /* These are used to index the cache_type_info array. */
+-#define CACHE_TYPE_UNIFIED     0
+-#define CACHE_TYPE_INSTRUCTION 1
+-#define CACHE_TYPE_DATA        2
++#define CACHE_TYPE_UNIFIED     0 /* cache-size, cache-block-size, etc. */
++#define CACHE_TYPE_UNIFIED_D   1 /* d-cache-size, d-cache-block-size, etc */
++#define CACHE_TYPE_INSTRUCTION 2
++#define CACHE_TYPE_DATA        3
+ 
+ static const struct cache_type_info cache_type_info[] = {
+ 	{
++		/* Embedded systems that use cache-size, cache-block-size,
++		 * etc. for the Unified (typically L2) cache. */
++		.name            = "Unified",
++		.size_prop       = "cache-size",
++		.line_size_props = { "cache-line-size",
++				     "cache-block-size", },
++		.nr_sets_prop    = "cache-sets",
++	},
++	{
+ 		/* PowerPC Processor binding says the [di]-cache-*
+ 		 * must be equal on unified caches, so just use
+ 		 * d-cache properties. */
+@@ -293,7 +303,8 @@ static struct cache *cache_find_first_sibling(struct cache *cache)
+ {
+ 	struct cache *iter;
+ 
+-	if (cache->type == CACHE_TYPE_UNIFIED)
++	if (cache->type == CACHE_TYPE_UNIFIED ||
++	    cache->type == CACHE_TYPE_UNIFIED_D)
+ 		return cache;
+ 
+ 	list_for_each_entry(iter, &cache_list, list)
+@@ -324,16 +335,29 @@ static bool cache_node_is_unified(const struct device_node *np)
+ 	return of_get_property(np, "cache-unified", NULL);
+ }
+ 
+-static struct cache *cache_do_one_devnode_unified(struct device_node *node,
+-						  int level)
++/*
++ * Unified caches can have two different sets of tags.  Most embedded
++ * systems use cache-size, etc. for the unified cache size, but open
++ * firmware systems use d-cache-size, etc.  Check on initialization which
++ * type we have, and return the appropriate structure type.  Assume it's
++ * embedded if it isn't open firmware.  If a third type ever appears,
++ * entries will be missing from /sys/devices/system/cpu/cpu0/cache/index2/,
++ * and this code will need to be extended further.
++ */
++static int cache_is_unified_d(const struct device_node *np)
+ {
+-	struct cache *cache;
++	return of_get_property(np,
++		cache_type_info[CACHE_TYPE_UNIFIED_D].size_prop, NULL) ?
++		CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
++}
+ 
++/* Create the cache object for a unified node, using the property
++ * flavour detected by cache_is_unified_d() above. */
++static struct cache *cache_do_one_devnode_unified(struct device_node *node, int level)
++{
+ 	pr_debug("creating L%d ucache for %s\n", level, node->full_name);
+ 
+-	cache = new_cache(CACHE_TYPE_UNIFIED, level, node);
+-
+-	return cache;
++	return new_cache(cache_is_unified_d(node), level, node);
+ }
+ 
+ static struct cache *cache_do_one_devnode_split(struct device_node *node,
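
The new cache_is_unified_d() helper above picks the structure type by probing which device-tree property spelling is present. A self-contained model of that decision, with of_get_property() replaced by a string lookup over a fake node (all names here are invented for the sketch):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

enum { CACHE_TYPE_UNIFIED, CACHE_TYPE_UNIFIED_D };

struct fake_node { const char *props[4]; };

static const void *fake_of_get_property(const struct fake_node *np, const char *name)
{
	for (size_t i = 0; i < 4 && np->props[i]; i++)
		if (strcmp(np->props[i], name) == 0)
			return np->props[i];
	return NULL;
}

/* Prefer the open-firmware "d-cache-size" flavour when it exists. */
static int cache_is_unified_d(const struct fake_node *np)
{
	return fake_of_get_property(np, "d-cache-size") ?
		CACHE_TYPE_UNIFIED_D : CACHE_TYPE_UNIFIED;
}

int main(void)
{
	struct fake_node of_node  = { { "d-cache-size", "d-cache-block-size" } };
	struct fake_node embedded = { { "cache-size", "cache-block-size" } };

	printf("OF node -> %d, embedded node -> %d\n",
	       cache_is_unified_d(&of_node), cache_is_unified_d(&embedded));
	return 0;
}
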
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 7e408bf..cecbe00 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -581,6 +581,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+ 	pmd = pmd_offset(pud, start);
+ 	pud_clear(pud);
+ 	pmd_free_tlb(tlb, pmd, start);
++	mm_dec_nr_pmds(tlb->mm);
+ }
+ 
+ static void hugetlb_free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
+diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
+index 2396dda..ead5535 100644
+--- a/arch/powerpc/perf/callchain.c
++++ b/arch/powerpc/perf/callchain.c
+@@ -243,7 +243,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
+ 	sp = regs->gpr[1];
+ 	perf_callchain_store(entry, next_ip);
+ 
+-	for (;;) {
++	while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ 		fp = (unsigned long __user *) sp;
+ 		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+ 			return;
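
The callchain fix above bounds the unwind loop at PERF_MAX_STACK_DEPTH, so a corrupt or self-referencing user stack can no longer spin the sampler or overrun the entry buffer. A toy model of the bounded walk; MAX_DEPTH is a stand-in value, not the kernel constant:

#include <stdio.h>

#define MAX_DEPTH 127   /* illustrative cap, plays the role of PERF_MAX_STACK_DEPTH */

struct frame { const struct frame *next; };

static int walk(const struct frame *fp)
{
	int nr = 0;

	while (nr < MAX_DEPTH && fp) {  /* the added bound is what terminates us */
		nr++;
		fp = fp->next;
	}
	return nr;
}

int main(void)
{
	struct frame loop;

	loop.next = &loop;              /* a corrupt, self-referencing "stack" */
	printf("frames recorded: %d\n", walk(&loop));
	return 0;
}
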
+diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
+index 4c11421..3af8324 100644
+--- a/arch/powerpc/platforms/cell/interrupt.c
++++ b/arch/powerpc/platforms/cell/interrupt.c
+@@ -163,7 +163,7 @@ static unsigned int iic_get_irq(void)
+ 
+ void iic_setup_cpu(void)
+ {
+-	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
++	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
+ }
+ 
+ u8 iic_get_target_id(int cpu)
+diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
+index c7c8720..63db1b0 100644
+--- a/arch/powerpc/platforms/cell/iommu.c
++++ b/arch/powerpc/platforms/cell/iommu.c
+@@ -197,7 +197,7 @@ static int tce_build_cell(struct iommu_table *tbl, long index, long npages,
+ 
+ 	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);
+ 
+-	for (i = 0; i < npages; i++, uaddr += tbl->it_page_shift)
++	for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift))
+ 		io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask);
+ 
+ 	mb();
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 6c9ff2b..1d9369e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -1777,7 +1777,8 @@ static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
+ 				region.start += phb->ioda.io_segsize;
+ 				index++;
+ 			}
+-		} else if (res->flags & IORESOURCE_MEM) {
++		} else if ((res->flags & IORESOURCE_MEM) &&
++			   !pnv_pci_is_mem_pref_64(res->flags)) {
+ 			region.start = res->start -
+ 				       hose->mem_offset[0] -
+ 				       phb->ioda.m32_pci_base;
+diff --git a/arch/s390/kernel/suspend.c b/arch/s390/kernel/suspend.c
+index 1c4c5ac..d3236c9 100644
+--- a/arch/s390/kernel/suspend.c
++++ b/arch/s390/kernel/suspend.c
+@@ -138,6 +138,8 @@ int pfn_is_nosave(unsigned long pfn)
+ {
+ 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
+ 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
++	unsigned long eshared_pfn = PFN_DOWN(__pa(&_eshared)) - 1;
++	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
+ 
+ 	/* Always save lowcore pages (LC protection might be enabled). */
+ 	if (pfn <= LC_PAGES)
+@@ -145,6 +147,8 @@ int pfn_is_nosave(unsigned long pfn)
+ 	if (pfn >= nosave_begin_pfn && pfn < nosave_end_pfn)
+ 		return 1;
+ 	/* Skip memory holes and read-only pages (NSS, DCSS, ...). */
++	if (pfn >= stext_pfn && pfn <= eshared_pfn)
++		return ipl_info.type == IPL_TYPE_NSS ? 1 : 0;
+ 	if (tprot(PFN_PHYS(pfn)))
+ 		return 1;
+ 	return 0;
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 073b5f3..e7bc2fd 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -17,6 +17,7 @@
+ #include <linux/signal.h>
+ #include <linux/slab.h>
+ #include <linux/bitmap.h>
++#include <linux/vmalloc.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/uaccess.h>
+ #include <asm/sclp.h>
+@@ -1332,10 +1333,10 @@ int kvm_s390_inject_vm(struct kvm *kvm,
+ 	return rc;
+ }
+ 
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
++int kvm_s390_reinject_io_int(struct kvm *kvm,
+ 			      struct kvm_s390_interrupt_info *inti)
+ {
+-	__inject_vm(kvm, inti);
++	return __inject_vm(kvm, inti);
+ }
+ 
+ int s390int_to_s390irq(struct kvm_s390_interrupt *s390int,
+@@ -1455,61 +1456,66 @@ void kvm_s390_clear_float_irqs(struct kvm *kvm)
+ 	spin_unlock(&fi->lock);
+ }
+ 
+-static inline int copy_irq_to_user(struct kvm_s390_interrupt_info *inti,
+-				   u8 *addr)
++static void inti_to_irq(struct kvm_s390_interrupt_info *inti,
++		       struct kvm_s390_irq *irq)
+ {
+-	struct kvm_s390_irq __user *uptr = (struct kvm_s390_irq __user *) addr;
+-	struct kvm_s390_irq irq = {0};
+-
+-	irq.type = inti->type;
++	irq->type = inti->type;
+ 	switch (inti->type) {
+ 	case KVM_S390_INT_PFAULT_INIT:
+ 	case KVM_S390_INT_PFAULT_DONE:
+ 	case KVM_S390_INT_VIRTIO:
+ 	case KVM_S390_INT_SERVICE:
+-		irq.u.ext = inti->ext;
++		irq->u.ext = inti->ext;
+ 		break;
+ 	case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX:
+-		irq.u.io = inti->io;
++		irq->u.io = inti->io;
+ 		break;
+ 	case KVM_S390_MCHK:
+-		irq.u.mchk = inti->mchk;
++		irq->u.mchk = inti->mchk;
+ 		break;
+-	default:
+-		return -EINVAL;
+ 	}
+-
+-	if (copy_to_user(uptr, &irq, sizeof(irq)))
+-		return -EFAULT;
+-
+-	return 0;
+ }
+ 
+-static int get_all_floating_irqs(struct kvm *kvm, __u8 *buf, __u64 len)
++static int get_all_floating_irqs(struct kvm *kvm, u8 __user *usrbuf, u64 len)
+ {
+ 	struct kvm_s390_interrupt_info *inti;
+ 	struct kvm_s390_float_interrupt *fi;
++	struct kvm_s390_irq *buf;
++	int max_irqs;
+ 	int ret = 0;
+ 	int n = 0;
+ 
++	if (len > KVM_S390_FLIC_MAX_BUFFER || len == 0)
++		return -EINVAL;
++
++	/*
++	 * We are already using -ENOMEM to signal
++	 * userspace that it may retry with a bigger buffer,
++	 * so we need to use something else for this case.
++	 */
++	buf = vzalloc(len);
++	if (!buf)
++		return -ENOBUFS;
++
++	max_irqs = len / sizeof(struct kvm_s390_irq);
++
+ 	fi = &kvm->arch.float_int;
+ 	spin_lock(&fi->lock);
+-
+ 	list_for_each_entry(inti, &fi->list, list) {
+-		if (len < sizeof(struct kvm_s390_irq)) {
++		if (n == max_irqs) {
+ 			/* signal userspace to try again */
+ 			ret = -ENOMEM;
+ 			break;
+ 		}
+-		ret = copy_irq_to_user(inti, buf);
+-		if (ret)
+-			break;
+-		buf += sizeof(struct kvm_s390_irq);
+-		len -= sizeof(struct kvm_s390_irq);
++		inti_to_irq(inti, &buf[n]);
+ 		n++;
+ 	}
+-
+ 	spin_unlock(&fi->lock);
++	if (!ret && n > 0) {
++		if (copy_to_user(usrbuf, buf, sizeof(struct kvm_s390_irq) * n))
++			ret = -EFAULT;
++	}
++	vfree(buf);
+ 
+ 	return ret < 0 ? ret : n;
+ }
+@@ -1520,7 +1526,7 @@ static int flic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
+ 
+ 	switch (attr->group) {
+ 	case KVM_DEV_FLIC_GET_ALL_IRQS:
+-		r = get_all_floating_irqs(dev->kvm, (u8 *) attr->addr,
++		r = get_all_floating_irqs(dev->kvm, (u8 __user *) attr->addr,
+ 					  attr->attr);
+ 		break;
+ 	default:
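
The reworked get_all_floating_irqs() shows a classic pattern: snapshot the list into a temporary kernel buffer while holding the spinlock, then do the possibly-faulting copy_to_user() after unlocking; -ENOBUFS is used for allocation failure because -ENOMEM already means "retry with a bigger buffer". A userspace analogue under those assumptions, with a pthread mutex and memcpy standing in for the kernel primitives:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct irq { int type; };

static pthread_mutex_t fi_lock = PTHREAD_MUTEX_INITIALIZER;
static struct irq pending[3] = { {1}, {2}, {3} };   /* the "floating" list */

/* Returns the number of irqs copied, -1 ("retry bigger") if the caller's
 * buffer is too small, -2 if the scratch allocation failed. */
static int get_all_irqs(struct irq *usrbuf, size_t len)
{
	size_t max_irqs = len / sizeof(struct irq);
	struct irq *buf;
	size_t n = 0;
	int ret = 0;

	if (max_irqs == 0)
		return -1;
	buf = calloc(max_irqs, sizeof(*buf));    /* plays the role of vzalloc() */
	if (!buf)
		return -2;

	pthread_mutex_lock(&fi_lock);            /* only cheap copies under the lock */
	for (size_t i = 0; i < 3; i++) {
		if (n == max_irqs) {
			ret = -1;
			break;
		}
		buf[n++] = pending[i];
	}
	pthread_mutex_unlock(&fi_lock);

	if (ret == 0)
		memcpy(usrbuf, buf, n * sizeof(*buf));  /* copy_to_user(), lock dropped */
	free(buf);
	return ret < 0 ? ret : (int)n;
}

int main(void)
{
	struct irq out[3];

	printf("copied %d irqs\n", get_all_irqs(out, sizeof(out)));
	return 0;
}
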
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index c34109a..6995a30 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -151,8 +151,8 @@ int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
+ int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
+ struct kvm_s390_interrupt_info *kvm_s390_get_io_int(struct kvm *kvm,
+ 						    u64 cr6, u64 schid);
+-void kvm_s390_reinject_io_int(struct kvm *kvm,
+-			      struct kvm_s390_interrupt_info *inti);
++int kvm_s390_reinject_io_int(struct kvm *kvm,
++			     struct kvm_s390_interrupt_info *inti);
+ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked);
+ 
+ /* implemented in intercept.c */
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 3511169..b982fbc 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -229,18 +229,19 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 	struct kvm_s390_interrupt_info *inti;
+ 	unsigned long len;
+ 	u32 tpi_data[3];
+-	int cc, rc;
++	int rc;
+ 	u64 addr;
+ 
+-	rc = 0;
+ 	addr = kvm_s390_get_base_disp_s(vcpu);
+ 	if (addr & 3)
+ 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+-	cc = 0;
++
+ 	inti = kvm_s390_get_io_int(vcpu->kvm, vcpu->arch.sie_block->gcr[6], 0);
+-	if (!inti)
+-		goto no_interrupt;
+-	cc = 1;
++	if (!inti) {
++		kvm_s390_set_psw_cc(vcpu, 0);
++		return 0;
++	}
++
+ 	tpi_data[0] = inti->io.subchannel_id << 16 | inti->io.subchannel_nr;
+ 	tpi_data[1] = inti->io.io_int_parm;
+ 	tpi_data[2] = inti->io.io_int_word;
+@@ -251,30 +252,38 @@ static int handle_tpi(struct kvm_vcpu *vcpu)
+ 		 */
+ 		len = sizeof(tpi_data) - 4;
+ 		rc = write_guest(vcpu, addr, &tpi_data, len);
+-		if (rc)
+-			return kvm_s390_inject_prog_cond(vcpu, rc);
++		if (rc) {
++			rc = kvm_s390_inject_prog_cond(vcpu, rc);
++			goto reinject_interrupt;
++		}
+ 	} else {
+ 		/*
+ 		 * Store the three-word I/O interruption code into
+ 		 * the appropriate lowcore area.
+ 		 */
+ 		len = sizeof(tpi_data);
+-		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len))
++		if (write_guest_lc(vcpu, __LC_SUBCHANNEL_ID, &tpi_data, len)) {
++			/* failed writes to the low core are not recoverable */
+ 			rc = -EFAULT;
++			goto reinject_interrupt;
++		}
+ 	}
++
++	/* irq was successfully handed to the guest */
++	kfree(inti);
++	kvm_s390_set_psw_cc(vcpu, 1);
++	return 0;
++reinject_interrupt:
+ 	/*
+ 	 * If we encounter a problem storing the interruption code, the
+ 	 * instruction is suppressed from the guest's view: reinject the
+ 	 * interrupt.
+ 	 */
+-	if (!rc)
++	if (kvm_s390_reinject_io_int(vcpu->kvm, inti)) {
+ 		kfree(inti);
+-	else
+-		kvm_s390_reinject_io_int(vcpu->kvm, inti);
+-no_interrupt:
+-	/* Set condition code and we're done. */
+-	if (!rc)
+-		kvm_s390_set_psw_cc(vcpu, cc);
++		rc = -EFAULT;
++	}
++	/* don't set the cc, a pgm irq was injected or we drop to user space */
+ 	return rc ? -EFAULT : 0;
+ }
+ 
+@@ -467,6 +476,7 @@ static void handle_stsi_3_2_2(struct kvm_vcpu *vcpu, struct sysinfo_3_2_2 *mem)
+ 	for (n = mem->count - 1; n > 0 ; n--)
+ 		memcpy(&mem->vm[n], &mem->vm[n - 1], sizeof(mem->vm[0]));
+ 
++	memset(&mem->vm[0], 0, sizeof(mem->vm[0]));
+ 	mem->vm[0].cpus_total = cpus;
+ 	mem->vm[0].cpus_configured = cpus;
+ 	mem->vm[0].cpus_standby = 0;
+diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
+index 47f29b1..e7814b7 100644
+--- a/arch/x86/include/asm/insn.h
++++ b/arch/x86/include/asm/insn.h
+@@ -69,7 +69,7 @@ struct insn {
+ 	const insn_byte_t *next_byte;
+ };
+ 
+-#define MAX_INSN_SIZE	16
++#define MAX_INSN_SIZE	15
+ 
+ #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
+ #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index a1410db..653dfa7 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -30,6 +30,14 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ 		     :: "a" (eax), "c" (ecx));
+ }
+ 
++static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
++{
++	trace_hardirqs_on();
++	/* "mwait %eax, %ecx;" */
++	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
++		     :: "a" (eax), "c" (ecx));
++}
++
+ /*
+  * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
+  * which can obviate IPI to trigger checking of need_resched.
+diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
+index d6b078e..25b1cc0 100644
+--- a/arch/x86/include/asm/pvclock.h
++++ b/arch/x86/include/asm/pvclock.h
+@@ -95,6 +95,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
+ 
+ struct pvclock_vsyscall_time_info {
+ 	struct pvclock_vcpu_time_info pvti;
++	u32 migrate_count;
+ } __attribute__((__aligned__(SMP_CACHE_BYTES)));
+ 
+ #define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+index 0739833..666bcf1 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+@@ -557,6 +557,8 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -564,6 +566,8 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c0, 0x1), /* INST_RETIRED.ANY */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -587,6 +591,8 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -602,6 +608,8 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
++	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
++	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 046e2d6..a388bb8 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -24,6 +24,7 @@
+ #include <asm/syscalls.h>
+ #include <asm/idle.h>
+ #include <asm/uaccess.h>
++#include <asm/mwait.h>
+ #include <asm/i387.h>
+ #include <asm/fpu-internal.h>
+ #include <asm/debugreg.h>
+@@ -399,6 +400,53 @@ static void amd_e400_idle(void)
+ 		default_idle();
+ }
+ 
++/*
++ * Intel Core2 and older machines prefer MWAIT over HALT for C1.
++ * We can't rely on cpuidle installing MWAIT, because it will not load
++ * on systems that support only C1 -- so the boot default must be MWAIT.
++ *
++ * Some AMD machines are the opposite: they depend on using HALT.
++ *
++ * So for default C1, which is used during boot until cpuidle loads,
++ * use MWAIT-C1 on Intel HW that has it, else use HALT.
++ */
++static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
++{
++	if (c->x86_vendor != X86_VENDOR_INTEL)
++		return 0;
++
++	if (!cpu_has(c, X86_FEATURE_MWAIT))
++		return 0;
++
++	return 1;
++}
++
++/*
++ * MONITOR/MWAIT with no hints, used for the default C1 state.
++ * This invokes MWAIT with interrupts enabled and no flags,
++ * which is backwards compatible with the original MWAIT implementation.
++ */
++
++static void mwait_idle(void)
++{
++	if (!current_set_polling_and_test()) {
++		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
++			smp_mb(); /* quirk */
++			clflush((void *)&current_thread_info()->flags);
++			smp_mb(); /* quirk */
++		}
++
++		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		if (!need_resched())
++			__sti_mwait(0, 0);
++		else
++			local_irq_enable();
++	} else {
++		local_irq_enable();
++	}
++	__current_clr_polling();
++}
++
+ void select_idle_routine(const struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+@@ -412,6 +460,9 @@ void select_idle_routine(const struct cpuinfo_x86 *c)
+ 		/* E400: APIC timer interrupt does not wake up CPU from C1e */
+ 		pr_info("using AMD E400 aware idle routine\n");
+ 		x86_idle = amd_e400_idle;
++	} else if (prefer_mwait_c1_over_halt(c)) {
++		pr_info("using mwait in idle threads\n");
++		x86_idle = mwait_idle;
+ 	} else
+ 		x86_idle = default_idle;
+ }
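
select_idle_routine() now picks mwait_idle() exactly when the CPU is Intel and advertises MWAIT, per the comment above. A trivial model of the two gates (the vendor strings are hard-coded purely for illustration):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cpuinfo { const char *vendor; bool has_mwait; };

/* Same two checks as prefer_mwait_c1_over_halt() above. */
static bool prefer_mwait_c1(const struct cpuinfo *c)
{
	return strcmp(c->vendor, "GenuineIntel") == 0 && c->has_mwait;
}

int main(void)
{
	struct cpuinfo core2 = { "GenuineIntel", true };
	struct cpuinfo amd   = { "AuthenticAMD", true };

	printf("Core2 -> %s\n", prefer_mwait_c1(&core2) ? "mwait_idle" : "default_idle");
	printf("AMD   -> %s\n", prefer_mwait_c1(&amd)   ? "mwait_idle" : "default_idle");
	return 0;
}
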
+diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
+index 2f355d2..e5ecd20 100644
+--- a/arch/x86/kernel/pvclock.c
++++ b/arch/x86/kernel/pvclock.c
+@@ -141,7 +141,46 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
+ 	set_normalized_timespec(ts, now.tv_sec, now.tv_nsec);
+ }
+ 
++static struct pvclock_vsyscall_time_info *pvclock_vdso_info;
++
++static struct pvclock_vsyscall_time_info *
++pvclock_get_vsyscall_user_time_info(int cpu)
++{
++	if (!pvclock_vdso_info) {
++		BUG();
++		return NULL;
++	}
++
++	return &pvclock_vdso_info[cpu];
++}
++
++struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu)
++{
++	return &pvclock_get_vsyscall_user_time_info(cpu)->pvti;
++}
++
+ #ifdef CONFIG_X86_64
++static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l,
++			        void *v)
++{
++	struct task_migration_notifier *mn = v;
++	struct pvclock_vsyscall_time_info *pvti;
++
++	pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu);
++
++	/* this is NULL when pvclock vsyscall is not initialized */
++	if (unlikely(pvti == NULL))
++		return NOTIFY_DONE;
++
++	pvti->migrate_count++;
++
++	return NOTIFY_DONE;
++}
++
++static struct notifier_block pvclock_migrate = {
++	.notifier_call = pvclock_task_migrate,
++};
++
+ /*
+  * Initialize the generic pvclock vsyscall state.  This will allocate
+  * a/some page(s) for the per-vcpu pvclock information, set up a
+@@ -155,12 +194,17 @@ int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
+ 
+ 	WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE);
+ 
++	pvclock_vdso_info = i;
++
+ 	for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) {
+ 		__set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
+ 			     __pa(i) + (idx*PAGE_SIZE),
+ 			     PAGE_KERNEL_VVAR);
+ 	}
+ 
++
++	register_task_migration_notifier(&pvclock_migrate);
++
+ 	return 0;
+ }
+ #endif
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index ae4f6d3..a60bd3a 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3621,8 +3621,16 @@ static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ 
+ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+-	unsigned long hw_cr4 = cr4 | (to_vmx(vcpu)->rmode.vm86_active ?
+-		    KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
++	/*
++	 * Pass through host's Machine Check Enable value to hw_cr4, which
++	 * is in force while we are in guest mode.  Do not let guests control
++	 * this bit, even if host CR4.MCE == 0.
++	 */
++	unsigned long hw_cr4 =
++		(cr4_read_shadow() & X86_CR4_MCE) |
++		(cr4 & ~X86_CR4_MCE) |
++		(to_vmx(vcpu)->rmode.vm86_active ?
++		 KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
+ 
+ 	if (cr4 & X86_CR4_VMXE) {
+ 		/*
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 32bf19e..e222ba5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5775,7 +5775,6 @@ int kvm_arch_init(void *opaque)
+ 	kvm_set_mmio_spte_mask();
+ 
+ 	kvm_x86_ops = ops;
+-	kvm_init_msr_list();
+ 
+ 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
+ 			PT_DIRTY_MASK, PT64_NX_MASK, 0);
+@@ -7209,7 +7208,14 @@ void kvm_arch_hardware_disable(void)
+ 
+ int kvm_arch_hardware_setup(void)
+ {
+-	return kvm_x86_ops->hardware_setup();
++	int r;
++
++	r = kvm_x86_ops->hardware_setup();
++	if (r != 0)
++		return r;
++
++	kvm_init_msr_list();
++	return 0;
+ }
+ 
+ void kvm_arch_hardware_unsetup(void)
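
The x86.c hunks move kvm_init_msr_list() out of kvm_arch_init() and into kvm_arch_hardware_setup(), after the vendor hardware_setup() has succeeded, so the MSR list is only built once the probe's results are known. The shape of that reordering, as a hedged sketch with stub functions:

#include <stdio.h>

static int probe_ok = 1;   /* flip to 0 to simulate a failed hardware probe */

static int hardware_setup(void) { return probe_ok ? 0 : -1; }
static void init_msr_list(void) { puts("MSR list built after successful setup"); }

/* Derived state is built only after the prerequisite step succeeded. */
static int arch_hardware_setup(void)
{
	int r = hardware_setup();

	if (r != 0)
		return r;
	init_msr_list();
	return 0;
}

int main(void)
{
	return arch_hardware_setup() ? 1 : 0;
}
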
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index 1313ae6..85994f5 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -52,6 +52,13 @@
+  */
+ void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
+ {
++	/*
++	 * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid
++	 * even if the input buffer is long enough to hold them.
++	 */
++	if (buf_len > MAX_INSN_SIZE)
++		buf_len = MAX_INSN_SIZE;
++
+ 	memset(insn, 0, sizeof(*insn));
+ 	insn->kaddr = kaddr;
+ 	insn->end_kaddr = kaddr + buf_len;
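
The insn_init() clamp caps the decode window at MAX_INSN_SIZE: no valid x86 instruction exceeds 15 bytes however long the caller's buffer is, and the companion insn.h hunk fixes the constant to match. A condensed, compilable restatement that drops the real decoder's other fields:

#include <stdio.h>
#include <string.h>

#define MAX_INSN_SIZE 15

struct insn { const unsigned char *kaddr; const unsigned char *end_kaddr; };

static void insn_init(struct insn *insn, const unsigned char *kaddr, int buf_len)
{
	if (buf_len > MAX_INSN_SIZE)  /* the clamp this patch adds */
		buf_len = MAX_INSN_SIZE;
	memset(insn, 0, sizeof(*insn));
	insn->kaddr = kaddr;
	insn->end_kaddr = kaddr + buf_len;
}

int main(void)
{
	unsigned char page[64] = { 0 };
	struct insn in;

	insn_init(&in, page, sizeof(page));
	printf("decoder window: %td bytes\n", in.end_kaddr - in.kaddr);
	return 0;
}
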
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 1f33b3d..0a42327 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -82,7 +82,7 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
+ 	clac();
+ 
+ 	/* If the destination is a kernel buffer, we always clear the end */
+-	if ((unsigned long)to >= TASK_SIZE_MAX)
++	if (!__addr_ok(to))
+ 		memset(to, 0, len);
+ 	return len;
+ }
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 9793322..40d2473 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -82,18 +82,15 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 	cycle_t ret;
+ 	u64 last;
+ 	u32 version;
++	u32 migrate_count;
+ 	u8 flags;
+ 	unsigned cpu, cpu1;
+ 
+ 
+ 	/*
+-	 * Note: hypervisor must guarantee that:
+-	 * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
+-	 * 2. that per-CPU pvclock time info is updated if the
+-	 *    underlying CPU changes.
+-	 * 3. that version is increased whenever underlying CPU
+-	 *    changes.
+-	 *
++	 * When looping to get a consistent (time-info, tsc) pair, we
++	 * also need to deal with the possibility that we switch vcpus,
++	 * so make sure we always re-fetch time-info for the current vcpu.
+ 	 */
+ 	do {
+ 		cpu = __getcpu() & VGETCPU_CPU_MASK;
+@@ -102,20 +99,27 @@ static notrace cycle_t vread_pvclock(int *mode)
+ 		 * __getcpu() calls (Gleb).
+ 		 */
+ 
+-		pvti = get_pvti(cpu);
++		/* Make sure migrate_count will change if we leave the VCPU. */
++		do {
++			pvti = get_pvti(cpu);
++			migrate_count = pvti->migrate_count;
++
++			cpu1 = cpu;
++			cpu = __getcpu() & VGETCPU_CPU_MASK;
++		} while (unlikely(cpu != cpu1));
+ 
+ 		version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
+ 
+ 		/*
+ 		 * Test we're still on the cpu as well as the version.
+-		 * We could have been migrated just after the first
+-		 * vgetcpu but before fetching the version, so we
+-		 * wouldn't notice a version change.
++		 * - We must read TSC of pvti's VCPU.
++		 * - KVM doesn't follow the versioning protocol, so data could
++		 *   change before version if we left the VCPU.
+ 		 */
+-		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
+-	} while (unlikely(cpu != cpu1 ||
+-			  (pvti->pvti.version & 1) ||
+-			  pvti->pvti.version != version));
++		smp_rmb();
++	} while (unlikely((pvti->pvti.version & 1) ||
++			  pvti->pvti.version != version ||
++			  pvti->migrate_count != migrate_count));
+ 
+ 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+ 		*mode = VCLOCK_NONE;
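
The vread_pvclock() rework pairs the usual version check (odd while an update is in flight, changed if the payload moved under us) with the new migrate_count so that switching vCPUs also forces a retry. A simplified userspace model of that retry contract; it omits the cpu-number re-fetch and is not a faithful memory-model treatment:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct pvti {
	atomic_uint version;        /* odd while the host updates the record */
	atomic_uint migrate_count;  /* bumped when the task leaves this vcpu */
	uint64_t tsc;               /* the sampled payload */
};

static uint64_t read_clock(struct pvti *p)
{
	unsigned int v, m;
	uint64_t t;

	do {
		m = atomic_load(&p->migrate_count);
		v = atomic_load(&p->version);
		t = p->tsc;
		atomic_thread_fence(memory_order_acquire);
	} while ((v & 1) ||                             /* update in flight */
		 v != atomic_load(&p->version) ||       /* payload changed under us */
		 m != atomic_load(&p->migrate_count));  /* we moved to another vcpu */
	return t;
}

int main(void)
{
	struct pvti p = { .tsc = 42 };

	printf("tsc sample: %llu\n", (unsigned long long)read_clock(&p));
	return 0;
}
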
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index e31d494..87be10e 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -428,6 +428,36 @@ config DEFAULT_MEM_SIZE
+ 
+ 	  If unsure, leave the default value here.
+ 
++config XTFPGA_LCD
++	bool "Enable XTFPGA LCD driver"
++	depends on XTENSA_PLATFORM_XTFPGA
++	default n
++	help
++	  There's a 2x16 LCD on most XTFPGA boards; the kernel may output
++	  progress messages there during bootup/shutdown. It may be useful
++	  during board bringup.
++
++	  If unsure, say N.
++
++config XTFPGA_LCD_BASE_ADDR
++	hex "XTFPGA LCD base address"
++	depends on XTFPGA_LCD
++	default "0x0d0c0000"
++	help
++	  Base address of the LCD controller inside the KIO region.
++	  Different boards from the XTFPGA family have the LCD controller at
++	  different addresses. Please consult the prototyping user guide for
++	  your board for the correct address; a wrong address may lock up the hardware.
++
++config XTFPGA_LCD_8BIT_ACCESS
++	bool "Use 8-bit access to XTFPGA LCD"
++	depends on XTFPGA_LCD
++	default n
++	help
++	  The LCD may be connected with a 4- or 8-bit interface; 8-bit access
++	  may only be used with an 8-bit interface. Please consult the
++	  prototyping user guide for your board for the correct interface width.
++
+ endmenu
+ 
+ menu "Executable file formats"
+diff --git a/arch/xtensa/include/uapi/asm/unistd.h b/arch/xtensa/include/uapi/asm/unistd.h
+index db5bb72..62d8465 100644
+--- a/arch/xtensa/include/uapi/asm/unistd.h
++++ b/arch/xtensa/include/uapi/asm/unistd.h
+@@ -715,7 +715,7 @@ __SYSCALL(323, sys_process_vm_writev, 6)
+ __SYSCALL(324, sys_name_to_handle_at, 5)
+ #define __NR_open_by_handle_at			325
+ __SYSCALL(325, sys_open_by_handle_at, 3)
+-#define __NR_sync_file_range			326
++#define __NR_sync_file_range2			326
+ __SYSCALL(326, sys_sync_file_range2, 6)
+ #define __NR_perf_event_open			327
+ __SYSCALL(327, sys_perf_event_open, 5)
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index d05f8fe..17b1ef3 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -349,8 +349,8 @@ static void iss_net_timer(unsigned long priv)
+ {
+ 	struct iss_net_private *lp = (struct iss_net_private *)priv;
+ 
+-	spin_lock(&lp->lock);
+ 	iss_net_poll();
++	spin_lock(&lp->lock);
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 	spin_unlock(&lp->lock);
+ }
+@@ -361,7 +361,7 @@ static int iss_net_open(struct net_device *dev)
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	int err;
+ 
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	err = lp->tp.open(lp);
+ 	if (err < 0)
+@@ -376,9 +376,11 @@ static int iss_net_open(struct net_device *dev)
+ 	while ((err = iss_net_rx(dev)) > 0)
+ 		;
+ 
+-	spin_lock(&opened_lock);
++	spin_unlock_bh(&lp->lock);
++	spin_lock_bh(&opened_lock);
+ 	list_add(&lp->opened_list, &opened);
+-	spin_unlock(&opened_lock);
++	spin_unlock_bh(&opened_lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	init_timer(&lp->timer);
+ 	lp->timer_val = ISS_NET_TIMER_VALUE;
+@@ -387,7 +389,7 @@ static int iss_net_open(struct net_device *dev)
+ 	mod_timer(&lp->timer, jiffies + lp->timer_val);
+ 
+ out:
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return err;
+ }
+ 
+@@ -395,7 +397,7 @@ static int iss_net_close(struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+ 	netif_stop_queue(dev);
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 
+ 	spin_lock(&opened_lock);
+ 	list_del(&opened);
+@@ -405,18 +407,17 @@ static int iss_net_close(struct net_device *dev)
+ 
+ 	lp->tp.close(lp);
+ 
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return 0;
+ }
+ 
+ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct iss_net_private *lp = netdev_priv(dev);
+-	unsigned long flags;
+ 	int len;
+ 
+ 	netif_stop_queue(dev);
+-	spin_lock_irqsave(&lp->lock, flags);
++	spin_lock_bh(&lp->lock);
+ 
+ 	len = lp->tp.write(lp, &skb);
+ 
+@@ -438,7 +439,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		pr_err("%s: %s failed(%d)\n", dev->name, __func__, len);
+ 	}
+ 
+-	spin_unlock_irqrestore(&lp->lock, flags);
++	spin_unlock_bh(&lp->lock);
+ 
+ 	dev_kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+@@ -466,9 +467,9 @@ static int iss_net_set_mac(struct net_device *dev, void *addr)
+ 
+ 	if (!is_valid_ether_addr(hwaddr->sa_data))
+ 		return -EADDRNOTAVAIL;
+-	spin_lock(&lp->lock);
++	spin_lock_bh(&lp->lock);
+ 	memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
+-	spin_unlock(&lp->lock);
++	spin_unlock_bh(&lp->lock);
+ 	return 0;
+ }
+ 
+@@ -520,11 +521,11 @@ static int iss_net_configure(int index, char *init)
+ 	*lp = (struct iss_net_private) {
+ 		.device_list		= LIST_HEAD_INIT(lp->device_list),
+ 		.opened_list		= LIST_HEAD_INIT(lp->opened_list),
+-		.lock			= __SPIN_LOCK_UNLOCKED(lp.lock),
+ 		.dev			= dev,
+ 		.index			= index,
+-		};
++	};
+ 
++	spin_lock_init(&lp->lock);
+ 	/*
+ 	 * If this name ends up conflicting with an existing registered
+ 	 * netdevice, that is OK, register_netdev{,ice}() will notice this
+diff --git a/arch/xtensa/platforms/xtfpga/Makefile b/arch/xtensa/platforms/xtfpga/Makefile
+index b9ae206..7839d38 100644
+--- a/arch/xtensa/platforms/xtfpga/Makefile
++++ b/arch/xtensa/platforms/xtfpga/Makefile
+@@ -6,4 +6,5 @@
+ #
+ # Note 2! The CFLAGS definitions are in the main makefile...
+ 
+-obj-y			= setup.o lcd.o
++obj-y			+= setup.o
++obj-$(CONFIG_XTFPGA_LCD) += lcd.o
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+index 6edd20b..4e0af26 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/hardware.h
+@@ -40,9 +40,6 @@
+ 
+ /* UART */
+ #define DUART16552_PADDR	(XCHAL_KIO_PADDR + 0x0D050020)
+-/* LCD instruction and data addresses. */
+-#define LCD_INSTR_ADDR		((char *)IOADDR(0x0D040000))
+-#define LCD_DATA_ADDR		((char *)IOADDR(0x0D040004))
+ 
+ /* Misc. */
+ #define XTFPGA_FPGAREGS_VADDR	IOADDR(0x0D020000)
+diff --git a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+index 0e43564..4c8541e 100644
+--- a/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
++++ b/arch/xtensa/platforms/xtfpga/include/platform/lcd.h
+@@ -11,10 +11,25 @@
+ #ifndef __XTENSA_XTAVNET_LCD_H
+ #define __XTENSA_XTAVNET_LCD_H
+ 
++#ifdef CONFIG_XTFPGA_LCD
+ /* Display string STR at position POS on the LCD. */
+ void lcd_disp_at_pos(char *str, unsigned char pos);
+ 
+ /* Shift the contents of the LCD display left or right. */
+ void lcd_shiftleft(void);
+ void lcd_shiftright(void);
++#else
++static inline void lcd_disp_at_pos(char *str, unsigned char pos)
++{
++}
++
++static inline void lcd_shiftleft(void)
++{
++}
++
++static inline void lcd_shiftright(void)
++{
++}
++#endif
++
+ #endif
+diff --git a/arch/xtensa/platforms/xtfpga/lcd.c b/arch/xtensa/platforms/xtfpga/lcd.c
+index 2872301..4dc0c1b 100644
+--- a/arch/xtensa/platforms/xtfpga/lcd.c
++++ b/arch/xtensa/platforms/xtfpga/lcd.c
+@@ -1,50 +1,63 @@
+ /*
+- * Driver for the LCD display on the Tensilica LX60 Board.
++ * Driver for the LCD display on the Tensilica XTFPGA board family.
++ * http://www.mytechcorp.com/cfdata/productFile/File1/MOC-16216B-B-A0A04.pdf
+  *
+  * This file is subject to the terms and conditions of the GNU General Public
+  * License.  See the file "COPYING" in the main directory of this archive
+  * for more details.
+  *
+  * Copyright (C) 2001, 2006 Tensilica Inc.
++ * Copyright (C) 2015 Cadence Design Systems Inc.
+  */
+ 
+-/*
+- *
+- * FIXME: this code is from the examples from the LX60 user guide.
+- *
+- * The lcd_pause function does busy waiting, which is probably not
+- * great. Maybe the code could be changed to use kernel timers, or
+- * change the hardware to not need to wait.
+- */
+-
++#include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ 
+ #include <platform/hardware.h>
+ #include <platform/lcd.h>
+-#include <linux/delay.h>
+ 
+-#define LCD_PAUSE_ITERATIONS	4000
++/* LCD instruction and data addresses. */
++#define LCD_INSTR_ADDR		((char *)IOADDR(CONFIG_XTFPGA_LCD_BASE_ADDR))
++#define LCD_DATA_ADDR		(LCD_INSTR_ADDR + 4)
++
+ #define LCD_CLEAR		0x1
+ #define LCD_DISPLAY_ON		0xc
+ 
+ /* 8bit and 2 lines display */
+ #define LCD_DISPLAY_MODE8BIT	0x38
++#define LCD_DISPLAY_MODE4BIT	0x28
+ #define LCD_DISPLAY_POS		0x80
+ #define LCD_SHIFT_LEFT		0x18
+ #define LCD_SHIFT_RIGHT		0x1c
+ 
++static void lcd_put_byte(u8 *addr, u8 data)
++{
++#ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*addr) = data;
++#else
++	ACCESS_ONCE(*addr) = data & 0xf0;
++	ACCESS_ONCE(*addr) = (data << 4) & 0xf0;
++#endif
++}
++
+ static int __init lcd_init(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	mdelay(5);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
+ 	udelay(200);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_MODE8BIT;
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT;
++	udelay(50);
++#ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS
++	ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE4BIT;
++	udelay(50);
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_ON;
++#endif
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_ON);
+ 	udelay(50);
+-	*LCD_INSTR_ADDR = LCD_CLEAR;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_CLEAR);
+ 	mdelay(10);
+ 	lcd_disp_at_pos("XTENSA LINUX", 0);
+ 	return 0;
+@@ -52,10 +65,10 @@ static int __init lcd_init(void)
+ 
+ void lcd_disp_at_pos(char *str, unsigned char pos)
+ {
+-	*LCD_INSTR_ADDR = LCD_DISPLAY_POS | pos;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_POS | pos);
+ 	udelay(100);
+ 	while (*str != 0) {
+-		*LCD_DATA_ADDR = *str;
++		lcd_put_byte(LCD_DATA_ADDR, *str);
+ 		udelay(200);
+ 		str++;
+ 	}
+@@ -63,13 +76,13 @@ void lcd_disp_at_pos(char *str, unsigned char pos)
+ 
+ void lcd_shiftleft(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_LEFT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_LEFT);
+ 	udelay(50);
+ }
+ 
+ void lcd_shiftright(void)
+ {
+-	*LCD_INSTR_ADDR = LCD_SHIFT_RIGHT;
++	lcd_put_byte(LCD_INSTR_ADDR, LCD_SHIFT_RIGHT);
+ 	udelay(50);
+ }
+ 
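
On boards wired with the 4-bit bus, lcd_put_byte() above sends each byte as two writes, high nibble first, each left-aligned on the four data lines. A host-side model with the MMIO register replaced by a plain volatile byte:

#include <stdint.h>
#include <stdio.h>

static volatile uint8_t fake_lcd_port;   /* stands in for the LCD register */

static void lcd_put_byte_4bit(volatile uint8_t *addr, uint8_t data)
{
	*addr = data & 0xf0;                 /* high nibble on D4..D7 */
	printf("bus write: 0x%02x\n", *addr);
	*addr = (uint8_t)(data << 4) & 0xf0; /* then the low nibble */
	printf("bus write: 0x%02x\n", *addr);
}

int main(void)
{
	lcd_put_byte_4bit(&fake_lcd_port, 0x38);  /* LCD_DISPLAY_MODE8BIT */
	return 0;
}

Fed 0x38, this prints 0x30 then 0x80 -- the two halves the controller reassembles.
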
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index 5ed064e..ccf7932 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -92,6 +92,7 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
+ 		ACPI_SET_BIT(gpe_register_info->enable_for_run,
+ 			     (u8)register_bit);
+ 	}
++	gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
+ 
+ 	return_ACPI_STATUS(AE_OK);
+ }
+@@ -123,7 +124,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
+ 
+ 	/* Enable the requested GPE */
+ 
+-	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE_SAVE);
++	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
+ }
+ 
+@@ -202,7 +203,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
+ 		if (ACPI_SUCCESS(status)) {
+ 			status =
+ 			    acpi_hw_low_set_gpe(gpe_event_info,
+-						ACPI_GPE_DISABLE_SAVE);
++						ACPI_GPE_DISABLE);
+ 		}
+ 
+ 		if (ACPI_FAILURE(status)) {
+diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
+index 84bc550..af6514e 100644
+--- a/drivers/acpi/acpica/hwgpe.c
++++ b/drivers/acpi/acpica/hwgpe.c
+@@ -89,6 +89,8 @@ u32 acpi_hw_get_gpe_register_bit(struct acpi_gpe_event_info *gpe_event_info)
+  * RETURN:	Status
+  *
+  * DESCRIPTION: Enable or disable a single GPE in the parent enable register.
++ *              The enable_mask field of the involved GPE register must be
++ *              updated by the caller if necessary.
+  *
+  ******************************************************************************/
+ 
+@@ -119,7 +121,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Set or clear just the bit that corresponds to this GPE */
+ 
+ 	register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
+-	switch (action & ~ACPI_GPE_SAVE_MASK) {
++	switch (action) {
+ 	case ACPI_GPE_CONDITIONAL_ENABLE:
+ 
+ 		/* Only enable if the corresponding enable_mask bit is set */
+@@ -149,9 +151,6 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
+ 	/* Write the updated enable mask */
+ 
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
+-		gpe_register_info->enable_mask = (u8)enable_mask;
+-	}
+ 	return (status);
+ }
+ 
+@@ -286,10 +285,8 @@ acpi_hw_gpe_enable_write(u8 enable_mask,
+ {
+ 	acpi_status status;
+ 
++	gpe_register_info->enable_mask = enable_mask;
+ 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
+-	if (ACPI_SUCCESS(status)) {
+-		gpe_register_info->enable_mask = enable_mask;
+-	}
+ 	return (status);
+ }
+ 
+diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
+index 9bad45e..7fbc2b9 100644
+--- a/drivers/acpi/acpica/tbinstal.c
++++ b/drivers/acpi/acpica/tbinstal.c
+@@ -346,7 +346,6 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 				 */
+ 				acpi_tb_uninstall_table(&new_table_desc);
+ 				*table_index = i;
+-				(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
+ 				return_ACPI_STATUS(AE_OK);
+ 			}
+ 		}
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index bbca783..349f4fd 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -298,7 +298,11 @@ bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent)
+ 	struct acpi_device_physical_node *pn;
+ 	bool offline = true;
+ 
+-	mutex_lock(&adev->physical_node_lock);
++	/*
++	 * acpi_container_offline() calls this for all of the container's
++	 * children under the container's physical_node_lock lock.
++	 */
++	mutex_lock_nested(&adev->physical_node_lock, SINGLE_DEPTH_NESTING);
+ 
+ 	list_for_each_entry(pn, &adev->physical_node_list, node)
+ 		if (device_supports_offline(pn->dev) && !pn->dev->offline) {
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 876bae5..79bc203 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -515,11 +515,11 @@ int bus_add_device(struct device *dev)
+ 			goto out_put;
+ 		error = device_add_groups(dev, bus->dev_groups);
+ 		if (error)
+-			goto out_groups;
++			goto out_id;
+ 		error = sysfs_create_link(&bus->p->devices_kset->kobj,
+ 						&dev->kobj, dev_name(dev));
+ 		if (error)
+-			goto out_id;
++			goto out_groups;
+ 		error = sysfs_create_link(&dev->kobj,
+ 				&dev->bus->p->subsys.kobj, "subsystem");
+ 		if (error)
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 6e64563..9c2ba1c 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -62,15 +62,21 @@ static int cache_setup_of_node(unsigned int cpu)
+ 		return -ENOENT;
+ 	}
+ 
+-	while (np && index < cache_leaves(cpu)) {
++	while (index < cache_leaves(cpu)) {
+ 		this_leaf = this_cpu_ci->info_list + index;
+ 		if (this_leaf->level != 1)
+ 			np = of_find_next_cache_node(np);
+ 		else
+ 			np = of_node_get(np);/* cpu node itself */
++		if (!np)
++			break;
+ 		this_leaf->of_node = np;
+ 		index++;
+ 	}
++
++	if (index != cache_leaves(cpu)) /* not all OF nodes populated */
++		return -ENOENT;
++
+ 	return 0;
+ }
+ 
+@@ -189,8 +195,11 @@ static int detect_cache_attributes(unsigned int cpu)
+ 	 * will be set up here only if they are not populated already
+ 	 */
+ 	ret = cache_shared_cpu_map_setup(cpu);
+-	if (ret)
++	if (ret) {
++		pr_warn("Unable to detect cache hierarchy from DT for CPU %d\n",
++			cpu);
+ 		goto free_ci;
++	}
+ 	return 0;
+ 
+ free_ci:
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index 9421fed..e68ab79 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -101,6 +101,15 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
+ 	}
+ 
+ 	r = platform_get_resource(dev, IORESOURCE_IRQ, num);
++	/*
++	 * The resources may pass trigger flags to the irqs that need
++	 * to be set up. It so happens that the trigger flags for
++	 * IORESOURCE_BITS correspond 1-to-1 to the IRQF_TRIGGER*
++	 * settings.
++	 */
++	if (r && r->flags & IORESOURCE_BITS)
++		irqd_set_trigger_type(irq_get_irq_data(r->start),
++				      r->flags & IORESOURCE_BITS);
+ 
+ 	return r ? r->start : -ENXIO;
+ #endif
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index de4c849..288547a 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -65,6 +65,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	/* Atheros AR3011 with sflash firmware*/
+ 	{ USB_DEVICE(0x0489, 0xE027) },
+ 	{ USB_DEVICE(0x0489, 0xE03D) },
++	{ USB_DEVICE(0x04F2, 0xAFF1) },
+ 	{ USB_DEVICE(0x0930, 0x0215) },
+ 	{ USB_DEVICE(0x0CF3, 0x3002) },
+ 	{ USB_DEVICE(0x0CF3, 0xE019) },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 8bfc4c2..2c527da 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -159,6 +159,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Atheros 3011 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE },
++	{ USB_DEVICE(0x04f2, 0xaff1), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
+ 	{ USB_DEVICE(0x0cf3, 0xe019), .driver_info = BTUSB_IGNORE },
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index e096e9c..283f00a 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -170,6 +170,41 @@ static void tpm_dev_del_device(struct tpm_chip *chip)
+ 	device_unregister(&chip->dev);
+ }
+ 
++static int tpm1_chip_register(struct tpm_chip *chip)
++{
++	int rc;
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return 0;
++
++	rc = tpm_sysfs_add_device(chip);
++	if (rc)
++		return rc;
++
++	rc = tpm_add_ppi(chip);
++	if (rc) {
++		tpm_sysfs_del_device(chip);
++		return rc;
++	}
++
++	chip->bios_dir = tpm_bios_log_setup(chip->devname);
++
++	return 0;
++}
++
++static void tpm1_chip_unregister(struct tpm_chip *chip)
++{
++	if (chip->flags & TPM_CHIP_FLAG_TPM2)
++		return;
++
++	if (chip->bios_dir)
++		tpm_bios_log_teardown(chip->bios_dir);
++
++	tpm_remove_ppi(chip);
++
++	tpm_sysfs_del_device(chip);
++}
++
+ /*
+  * tpm_chip_register() - create a character device for the TPM chip
+  * @chip: TPM chip to use.
+@@ -185,22 +220,13 @@ int tpm_chip_register(struct tpm_chip *chip)
+ {
+ 	int rc;
+ 
+-	/* Populate sysfs for TPM1 devices. */
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		rc = tpm_sysfs_add_device(chip);
+-		if (rc)
+-			goto del_misc;
+-
+-		rc = tpm_add_ppi(chip);
+-		if (rc)
+-			goto del_sysfs;
+-
+-		chip->bios_dir = tpm_bios_log_setup(chip->devname);
+-	}
++	rc = tpm1_chip_register(chip);
++	if (rc)
++		return rc;
+ 
+ 	rc = tpm_dev_add_device(chip);
+ 	if (rc)
+-		return rc;
++		goto out_err;
+ 
+ 	/* Make the chip available. */
+ 	spin_lock(&driver_lock);
+@@ -210,10 +236,8 @@ int tpm_chip_register(struct tpm_chip *chip)
+ 	chip->flags |= TPM_CHIP_FLAG_REGISTERED;
+ 
+ 	return 0;
+-del_sysfs:
+-	tpm_sysfs_del_device(chip);
+-del_misc:
+-	tpm_dev_del_device(chip);
++out_err:
++	tpm1_chip_unregister(chip);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_register);
+@@ -238,13 +262,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 	spin_unlock(&driver_lock);
+ 	synchronize_rcu();
+ 
+-	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+-		if (chip->bios_dir)
+-			tpm_bios_log_teardown(chip->bios_dir);
+-		tpm_remove_ppi(chip);
+-		tpm_sysfs_del_device(chip);
+-	}
+-
++	tpm1_chip_unregister(chip);
+ 	tpm_dev_del_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
+index a23ac0c..0b7c3e8 100644
+--- a/drivers/clk/at91/clk-usb.c
++++ b/drivers/clk/at91/clk-usb.c
+@@ -56,22 +56,55 @@ static unsigned long at91sam9x5_clk_usb_recalc_rate(struct clk_hw *hw,
+ 	return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1));
+ }
+ 
+-static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate,
+-					  unsigned long *parent_rate)
++static long at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
++					      unsigned long rate,
++					      unsigned long min_rate,
++					      unsigned long max_rate,
++					      unsigned long *best_parent_rate,
++					      struct clk_hw **best_parent_hw)
+ {
+-	unsigned long div;
++	struct clk *parent = NULL;
++	long best_rate = -EINVAL;
++	unsigned long tmp_rate;
++	int best_diff = -1;
++	int tmp_diff;
++	int i;
+ 
+-	if (!rate)
+-		return -EINVAL;
++	for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
++		int div;
+ 
+-	if (rate >= *parent_rate)
+-		return *parent_rate;
++		parent = clk_get_parent_by_index(hw->clk, i);
++		if (!parent)
++			continue;
++
++		for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
++			unsigned long tmp_parent_rate;
++
++			tmp_parent_rate = rate * div;
++			tmp_parent_rate = __clk_round_rate(parent,
++							   tmp_parent_rate);
++			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
++			if (tmp_rate < rate)
++				tmp_diff = rate - tmp_rate;
++			else
++				tmp_diff = tmp_rate - rate;
++
++			if (best_diff < 0 || best_diff > tmp_diff) {
++				best_rate = tmp_rate;
++				best_diff = tmp_diff;
++				*best_parent_rate = tmp_parent_rate;
++				*best_parent_hw = __clk_get_hw(parent);
++			}
++
++			if (!best_diff || tmp_rate < rate)
++				break;
++		}
+ 
+-	div = DIV_ROUND_CLOSEST(*parent_rate, rate);
+-	if (div > SAM9X5_USB_MAX_DIV + 1)
+-		div = SAM9X5_USB_MAX_DIV + 1;
++		if (!best_diff)
++			break;
++	}
+ 
+-	return DIV_ROUND_CLOSEST(*parent_rate, div);
++	return best_rate;
+ }
+ 
+ static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index)
+@@ -121,7 +154,7 @@ static int at91sam9x5_clk_usb_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ static const struct clk_ops at91sam9x5_usb_ops = {
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.get_parent = at91sam9x5_clk_usb_get_parent,
+ 	.set_parent = at91sam9x5_clk_usb_set_parent,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
+@@ -159,7 +192,7 @@ static const struct clk_ops at91sam9n12_usb_ops = {
+ 	.disable = at91sam9n12_clk_usb_disable,
+ 	.is_enabled = at91sam9n12_clk_usb_is_enabled,
+ 	.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
+-	.round_rate = at91sam9x5_clk_usb_round_rate,
++	.determine_rate = at91sam9x5_clk_usb_determine_rate,
+ 	.set_rate = at91sam9x5_clk_usb_set_rate,
+ };
+ 
+@@ -179,7 +212,8 @@ at91sam9x5_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9x5_usb_ops;
+ 	init.parent_names = parent_names;
+ 	init.num_parents = num_parents;
+-	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE |
++		     CLK_SET_RATE_PARENT;
+ 
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
+@@ -207,7 +241,7 @@ at91sam9n12_clk_register_usb(struct at91_pmc *pmc, const char *name,
+ 	init.ops = &at91sam9n12_usb_ops;
+ 	init.parent_names = &parent_name;
+ 	init.num_parents = 1;
+-	init.flags = CLK_SET_RATE_GATE;
++	init.flags = CLK_SET_RATE_GATE | CLK_SET_RATE_PARENT;
+ 
+ 	usb->hw.init = &init;
+ 	usb->pmc = pmc;
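
at91sam9x5_clk_usb_determine_rate() above walks every parent and every divider looking for the closest achievable rate and stops early on an exact hit. A standalone model of the same search; the parent rates and divider limit are made up for the sketch:

#include <stdio.h>

#define MAX_DIV 15   /* plays the role of SAM9X5_USB_MAX_DIV */

int main(void)
{
	const unsigned long parents[] = { 480000000UL, 96000000UL };
	const unsigned long target = 48000000UL;
	unsigned long best_rate = 0, best_parent = 0;
	unsigned long best_diff = (unsigned long)-1;
	int best_div = 0;

	for (size_t i = 0; i < sizeof(parents) / sizeof(parents[0]); i++) {
		for (int div = 1; div <= MAX_DIV + 1; div++) {
			unsigned long rate = (parents[i] + div / 2) / div;  /* round-closest */
			unsigned long diff = rate > target ? rate - target : target - rate;

			if (diff < best_diff) {
				best_diff = diff;
				best_rate = rate;
				best_parent = parents[i];
				best_div = div;
			}
			if (best_diff == 0 || rate < target)
				break;   /* exact hit, or rates only get lower */
		}
		if (best_diff == 0)
			break;
	}
	printf("best: %lu Hz (parent %lu / %d)\n", best_rate, best_parent, best_div);
	return 0;
}
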
+diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
+index 0039bd7..466f30c 100644
+--- a/drivers/clk/qcom/clk-rcg.c
++++ b/drivers/clk/qcom/clk-rcg.c
+@@ -495,6 +495,57 @@ static int clk_rcg_bypass_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	return __clk_rcg_set_rate(rcg, rcg->freq_tbl);
+ }
+ 
++/*
++ * This type of clock has a glitch-free mux that switches between the output of
++ * the M/N counter and an always-on clock source (XO). When clk_set_rate() is
++ * called we need to make sure that we don't switch to the M/N counter if it
++ * isn't clocking because the mux will get stuck and the clock will stop
++ * outputting a clock. This can happen if the framework isn't aware that this
++ * clock is on and so clk_set_rate() doesn't turn on the new parent. To fix
++ * this we switch the mux in the enable/disable ops and reprogram the M/N
++ * counter in the set_rate op. We also make sure to switch away from the M/N
++ * counter in set_rate if software thinks the clock is off.
++ */
++static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,
++				unsigned long parent_rate)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	const struct freq_tbl *f;
++	int ret;
++	u32 gfm = BIT(10);
++
++	f = qcom_find_freq(rcg->freq_tbl, rate);
++	if (!f)
++		return -EINVAL;
++
++	/* Switch to XO to avoid glitches */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++	ret = __clk_rcg_set_rate(rcg, f);
++	/* Switch back to M/N if it's clocking */
++	if (__clk_is_enabled(hw->clk))
++		regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++
++	return ret;
++}
++
++static int clk_rcg_lcc_enable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use M/N */
++	return regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
++}
++
++static void clk_rcg_lcc_disable(struct clk_hw *hw)
++{
++	struct clk_rcg *rcg = to_clk_rcg(hw);
++	u32 gfm = BIT(10);
++
++	/* Use XO */
++	regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
++}
++
+ static int __clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate)
+ {
+ 	struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
+@@ -543,6 +594,17 @@ const struct clk_ops clk_rcg_bypass_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);
+ 
++const struct clk_ops clk_rcg_lcc_ops = {
++	.enable = clk_rcg_lcc_enable,
++	.disable = clk_rcg_lcc_disable,
++	.get_parent = clk_rcg_get_parent,
++	.set_parent = clk_rcg_set_parent,
++	.recalc_rate = clk_rcg_recalc_rate,
++	.determine_rate = clk_rcg_determine_rate,
++	.set_rate = clk_rcg_lcc_set_rate,
++};
++EXPORT_SYMBOL_GPL(clk_rcg_lcc_ops);
++
+ const struct clk_ops clk_dyn_rcg_ops = {
+ 	.enable = clk_enable_regmap,
+ 	.is_enabled = clk_is_enabled_regmap,
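
The new clk_rcg_lcc ops keep the glitch-free mux parked on the always-on XO source while the M/N counter is reprogrammed, and only switch back when the clock is actually enabled, per the comment above. A tiny model of that sequencing, with register I/O reduced to a plain word (the GFM bit value comes from the patch):

#include <stdbool.h>
#include <stdio.h>

#define GFM (1u << 10)   /* glitch-free mux select bit */

static unsigned int ns_reg;
static bool clk_enabled;

static void select_mn(bool mn)
{
	ns_reg = mn ? (ns_reg | GFM) : (ns_reg & ~GFM);
}

static void lcc_set_rate(void)
{
	select_mn(false);        /* glitch-free hop to XO first */
	/* ... reprogram the M/N counter here ... */
	if (clk_enabled)
		select_mn(true); /* back to M/N only if it is clocking */
}

int main(void)
{
	clk_enabled = true;
	select_mn(true);
	lcc_set_rate();
	printf("mux selects %s\n", (ns_reg & GFM) ? "M/N" : "XO");
	return 0;
}
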
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index 687e41f..d09d06b 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -96,6 +96,7 @@ struct clk_rcg {
+ 
+ extern const struct clk_ops clk_rcg_ops;
+ extern const struct clk_ops clk_rcg_bypass_ops;
++extern const struct clk_ops clk_rcg_lcc_ops;
+ 
+ #define to_clk_rcg(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg, clkr)
+ 
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index 742acfa..381f274 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -243,7 +243,7 @@ static int clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ 	mask |= CFG_SRC_SEL_MASK | CFG_MODE_MASK;
+ 	cfg = f->pre_div << CFG_SRC_DIV_SHIFT;
+ 	cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT;
+-	if (rcg->mnd_width && f->n)
++	if (rcg->mnd_width && f->n && (f->m != f->n))
+ 		cfg |= CFG_MODE_DUAL_EDGE;
+ 	ret = regmap_update_bits(rcg->clkr.regmap,
+ 			rcg->cmd_rcgr + CFG_REG, mask, cfg);
+diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
+index cbdc31d..a015bb0 100644
+--- a/drivers/clk/qcom/gcc-ipq806x.c
++++ b/drivers/clk/qcom/gcc-ipq806x.c
+@@ -525,8 +525,8 @@ static struct freq_tbl clk_tbl_gsbi_qup[] = {
+ 	{ 10800000, P_PXO,  1, 2,  5 },
+ 	{ 15060000, P_PLL8, 1, 2, 51 },
+ 	{ 24000000, P_PLL8, 4, 1,  4 },
++	{ 25000000, P_PXO,  1, 0,  0 },
+ 	{ 25600000, P_PLL8, 1, 1, 15 },
+-	{ 27000000, P_PXO,  1, 0,  0 },
+ 	{ 48000000, P_PLL8, 4, 1,  2 },
+ 	{ 51200000, P_PLL8, 1, 2, 15 },
+ 	{ }
+diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
+index c9ff27b..a6d3a67 100644
+--- a/drivers/clk/qcom/lcc-ipq806x.c
++++ b/drivers/clk/qcom/lcc-ipq806x.c
+@@ -294,14 +294,14 @@ static struct clk_regmap_mux pcm_clk = {
+ };
+ 
+ static struct freq_tbl clk_tbl_aif_osr[] = {
+-	{  22050, P_PLL4, 1, 147, 20480 },
+-	{  32000, P_PLL4, 1,   1,    96 },
+-	{  44100, P_PLL4, 1, 147, 10240 },
+-	{  48000, P_PLL4, 1,   1,    64 },
+-	{  88200, P_PLL4, 1, 147,  5120 },
+-	{  96000, P_PLL4, 1,   1,    32 },
+-	{ 176400, P_PLL4, 1, 147,  2560 },
+-	{ 192000, P_PLL4, 1,   1,    16 },
++	{  2822400, P_PLL4, 1, 147, 20480 },
++	{  4096000, P_PLL4, 1,   1,    96 },
++	{  5644800, P_PLL4, 1, 147, 10240 },
++	{  6144000, P_PLL4, 1,   1,    64 },
++	{ 11289600, P_PLL4, 1, 147,  5120 },
++	{ 12288000, P_PLL4, 1,   1,    32 },
++	{ 22579200, P_PLL4, 1, 147,  2560 },
++	{ 24576000, P_PLL4, 1,   1,    16 },
+ 	{ },
+ };
+ 
+@@ -360,7 +360,7 @@ static struct clk_branch spdif_clk = {
+ };
+ 
+ static struct freq_tbl clk_tbl_ahbix[] = {
+-	{ 131072, P_PLL4, 1, 1, 3 },
++	{ 131072000, P_PLL4, 1, 1, 3 },
+ 	{ },
+ };
+ 
+@@ -386,13 +386,12 @@ static struct clk_rcg ahbix_clk = {
+ 	.freq_tbl = clk_tbl_ahbix,
+ 	.clkr = {
+ 		.enable_reg = 0x38,
+-		.enable_mask = BIT(10), /* toggle the gfmux to select mn/pxo */
++		.enable_mask = BIT(11),
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "ahbix",
+ 			.parent_names = lcc_pxo_pll4,
+ 			.num_parents = 2,
+-			.ops = &clk_rcg_ops,
+-			.flags = CLK_SET_RATE_GATE,
++			.ops = &clk_rcg_lcc_ops,
+ 		},
+ 	},
+ };
+diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
+index 51462e8..714d6ba 100644
+--- a/drivers/clk/samsung/clk-exynos4.c
++++ b/drivers/clk/samsung/clk-exynos4.c
+@@ -1354,7 +1354,7 @@ static struct samsung_pll_clock exynos4x12_plls[nr_plls] __initdata = {
+ 			VPLL_LOCK, VPLL_CON0, NULL),
+ };
+ 
+-static void __init exynos4_core_down_clock(enum exynos4_soc soc)
++static void __init exynos4x12_core_down_clock(void)
+ {
+ 	unsigned int tmp;
+ 
+@@ -1373,11 +1373,9 @@ static void __init exynos4_core_down_clock(enum exynos4_soc soc)
+ 	__raw_writel(tmp, reg_base + PWR_CTRL1);
+ 
+ 	/*
+-	 * Disable the clock up feature on Exynos4x12, in case it was
+-	 * enabled by bootloader.
++	 * Disable the clock up feature in case it was enabled by bootloader.
+ 	 */
+-	if (exynos4_soc == EXYNOS4X12)
+-		__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
++	__raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
+ }
+ 
+ /* register exynos4 clocks */
+@@ -1474,7 +1472,8 @@ static void __init exynos4_clk_init(struct device_node *np,
+ 	samsung_clk_register_alias(ctx, exynos4_aliases,
+ 			ARRAY_SIZE(exynos4_aliases));
+ 
+-	exynos4_core_down_clock(soc);
++	if (soc == EXYNOS4X12)
++		exynos4x12_core_down_clock();
+ 	exynos4_clk_sleep_init();
+ 
+ 	samsung_clk_of_add_provider(np, ctx);
+diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
+index 9a893f2..23ce0af 100644
+--- a/drivers/clk/tegra/clk-tegra124.c
++++ b/drivers/clk/tegra/clk-tegra124.c
+@@ -1110,16 +1110,18 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base,
+ 					1, 2);
+ 	clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
+ 
+-	clk = clk_register_gate(NULL, "plld_dsi", "plld_out0", 0,
++	clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
+ 				clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
+-	clks[TEGRA124_CLK_PLLD_DSI] = clk;
++	clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
+ 
+-	clk = tegra_clk_register_periph_gate("dsia", "plld_dsi", 0, clk_base,
+-					     0, 48, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsia", "pll_d_dsi_out", 0,
++					     clk_base, 0, 48,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIA] = clk;
+ 
+-	clk = tegra_clk_register_periph_gate("dsib", "plld_dsi", 0, clk_base,
+-					     0, 82, periph_clk_enb_refcnt);
++	clk = tegra_clk_register_periph_gate("dsib", "pll_d_dsi_out", 0,
++					     clk_base, 0, 82,
++					     periph_clk_enb_refcnt);
+ 	clks[TEGRA124_CLK_DSIB] = clk;
+ 
+ 	/* emc mux */
+diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
+index 9ddb754..7a1df61 100644
+--- a/drivers/clk/tegra/clk.c
++++ b/drivers/clk/tegra/clk.c
+@@ -272,7 +272,7 @@ void __init tegra_add_of_provider(struct device_node *np)
+ 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+ 
+ 	rst_ctlr.of_node = np;
+-	rst_ctlr.nr_resets = clk_num * 32;
++	rst_ctlr.nr_resets = periph_banks * 32;
+ 	reset_controller_register(&rst_ctlr);
+ }
+ 
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 42f95a4..9a28b7e 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -554,15 +554,23 @@ static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
+ 	return err;
+ }
+ 
+-static int omap_aes_check_aligned(struct scatterlist *sg)
++static int omap_aes_check_aligned(struct scatterlist *sg, int total)
+ {
++	int len = 0;
++
+ 	while (sg) {
+ 		if (!IS_ALIGNED(sg->offset, 4))
+ 			return -1;
+ 		if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
+ 			return -1;
++
++		len += sg->length;
+ 		sg = sg_next(sg);
+ 	}
++
++	if (len != total)
++		return -1;
++
+ 	return 0;
+ }
+ 
+@@ -633,8 +641,8 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
+ 	dd->in_sg = req->src;
+ 	dd->out_sg = req->dst;
+ 
+-	if (omap_aes_check_aligned(dd->in_sg) ||
+-	    omap_aes_check_aligned(dd->out_sg)) {
++	if (omap_aes_check_aligned(dd->in_sg, dd->total) ||
++	    omap_aes_check_aligned(dd->out_sg, dd->total)) {
+ 		if (omap_aes_copy_sgs(dd))
+ 			pr_err("Failed to copy SGs for unaligned cases\n");
+ 		dd->sgs_copied = 1;
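As a rough illustration of the extra check this hunk adds, the standalone sketch below walks a toy segment list and rejects it unless every segment is DMA-aligned and the chain covers exactly the expected byte count. The structures are invented for the demo, not the kernel scatterlist API.

#include <stdio.h>
#include <stddef.h>

#define AES_BLOCK_SIZE 16
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

struct seg { size_t offset, length; };

/* Returns 0 only if every segment is usable for DMA *and* the chain
 * covers exactly 'total' bytes, mirroring the check the patch adds. */
static int check_aligned(const struct seg *sg, int n, size_t total)
{
	size_t len = 0;

	for (int i = 0; i < n; i++) {
		if (!IS_ALIGNED(sg[i].offset, 4))
			return -1;
		if (!IS_ALIGNED(sg[i].length, AES_BLOCK_SIZE))
			return -1;
		len += sg[i].length;
	}
	return len == total ? 0 : -1;
}

int main(void)
{
	struct seg ok[] = { { 0, 32 }, { 4, 16 } };
	struct seg short_chain[] = { { 0, 32 } };

	printf("%d\n", check_aligned(ok, 2, 48));          /* 0: fine */
	printf("%d\n", check_aligned(short_chain, 1, 48)); /* -1: bytes missing */
	return 0;
}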
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index d0bc123..1a54205 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -320,11 +320,13 @@ static void mvebu_gpio_edge_irq_mask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -332,11 +334,13 @@ static void mvebu_gpio_edge_irq_unmask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -344,11 +348,13 @@ static void mvebu_gpio_level_irq_mask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache &= ~mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv &= ~mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
+@@ -356,11 +362,13 @@ static void mvebu_gpio_level_irq_unmask(struct irq_data *d)
+ {
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct mvebu_gpio_chip *mvchip = gc->private;
++	struct irq_chip_type *ct = irq_data_get_chip_type(d);
++
+ 	u32 mask = 1 << (d->irq - gc->irq_base);
+ 
+ 	irq_gc_lock(gc);
+-	gc->mask_cache |= mask;
+-	writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
++	ct->mask_cache_priv |= mask;
++	writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
+ 	irq_gc_unlock(gc);
+ }
+ 
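To illustrate why this patch moves the cache from the shared irq_chip_generic to the per-type irq_chip_type, here is a minimal userspace sketch with invented names (not the genirq API): edge and level interrupts live in different mask registers, so a single shared cache lets an update meant for one register leak stale bits into the other.

#include <stdio.h>

static unsigned edge_reg, level_reg;

struct irq_type { unsigned mask_cache; unsigned *reg; };

static void unmask(struct irq_type *t, unsigned bit)
{
	t->mask_cache |= bit;        /* per-type cache tracks its own register */
	*t->reg = t->mask_cache;
}

int main(void)
{
	struct irq_type edge = { 0, &edge_reg }, level = { 0, &level_reg };

	unmask(&edge, 1u << 3);
	unmask(&level, 1u << 5);
	/* With one cache shared by both types, the second write would also
	 * have carried bit 3 into level_reg. */
	printf("edge=0x%x level=0x%x\n", edge_reg, level_reg);
	return 0;
}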
+diff --git a/drivers/gpu/drm/exynos/exynos_dp_core.c b/drivers/gpu/drm/exynos/exynos_dp_core.c
+index bf17a60..1dbfba5 100644
+--- a/drivers/gpu/drm/exynos/exynos_dp_core.c
++++ b/drivers/gpu/drm/exynos/exynos_dp_core.c
+@@ -32,10 +32,16 @@
+ #include <drm/bridge/ptn3460.h>
+ 
+ #include "exynos_dp_core.h"
++#include "exynos_drm_fimd.h"
+ 
+ #define ctx_from_connector(c)	container_of(c, struct exynos_dp_device, \
+ 					connector)
+ 
++static inline struct exynos_drm_crtc *dp_to_crtc(struct exynos_dp_device *dp)
++{
++	return to_exynos_crtc(dp->encoder->crtc);
++}
++
+ static inline struct exynos_dp_device *
+ display_to_dp(struct exynos_drm_display *d)
+ {
+@@ -1070,6 +1076,8 @@ static void exynos_dp_poweron(struct exynos_dp_device *dp)
+ 		}
+ 	}
+ 
++	fimd_dp_clock_enable(dp_to_crtc(dp), true);
++
+ 	clk_prepare_enable(dp->clock);
+ 	exynos_dp_phy_init(dp);
+ 	exynos_dp_init_dp(dp);
+@@ -1094,6 +1102,8 @@ static void exynos_dp_poweroff(struct exynos_dp_device *dp)
+ 	exynos_dp_phy_exit(dp);
+ 	clk_disable_unprepare(dp->clock);
+ 
++	fimd_dp_clock_enable(dp_to_crtc(dp), false);
++
+ 	if (dp->panel) {
+ 		if (drm_panel_unprepare(dp->panel))
+ 			DRM_ERROR("failed to turnoff the panel\n");
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index 33a10ce..5d58f6c 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -32,6 +32,7 @@
+ #include "exynos_drm_fbdev.h"
+ #include "exynos_drm_crtc.h"
+ #include "exynos_drm_iommu.h"
++#include "exynos_drm_fimd.h"
+ 
+ /*
+  * FIMD stands for Fully Interactive Mobile Display and
+@@ -1233,6 +1234,24 @@ static int fimd_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
++{
++	struct fimd_context *ctx = crtc->ctx;
++	u32 val;
++
++	/*
++	 * Only Exynos 5250, 5260, 5410 and 542x require the DP/MIE clock
++	 * to be enabled. On these SoCs the bootloader may enable it, but
++	 * any power domain off/on will reset it to the disabled state.
++	 */
++	if (ctx->driver_data != &exynos5_fimd_driver_data)
++		return;
++
++	val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++	writel(val, ctx->regs + DP_MIE_CLKCON);
++}
++EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
++
+ struct platform_driver fimd_driver = {
+ 	.probe		= fimd_probe,
+ 	.remove		= fimd_remove,
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.h b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+new file mode 100644
+index 0000000..b4fcaa5
+--- /dev/null
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
+@@ -0,0 +1,15 @@
++/*
++ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
++ *
++ * This program is free software; you can redistribute  it and/or modify it
++ * under  the terms of  the GNU General  Public License as published by the
++ * Free Software Foundation;  either version 2 of the  License, or (at your
++ * option) any later version.
++ */
++
++#ifndef _EXYNOS_DRM_FIMD_H_
++#define _EXYNOS_DRM_FIMD_H_
++
++extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
++
++#endif /* _EXYNOS_DRM_FIMD_H_ */
+diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
+index fa140e0..60ab1f7 100644
+--- a/drivers/gpu/drm/i2c/adv7511.c
++++ b/drivers/gpu/drm/i2c/adv7511.c
+@@ -33,6 +33,7 @@ struct adv7511 {
+ 
+ 	unsigned int current_edid_segment;
+ 	uint8_t edid_buf[256];
++	bool edid_read;
+ 
+ 	wait_queue_head_t wq;
+ 	struct drm_encoder *encoder;
+@@ -379,69 +380,71 @@ static bool adv7511_hpd(struct adv7511 *adv7511)
+ 	return false;
+ }
+ 
+-static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+-{
+-	struct adv7511 *adv7511 = devid;
+-
+-	if (adv7511_hpd(adv7511))
+-		drm_helper_hpd_irq_event(adv7511->encoder->dev);
+-
+-	wake_up_all(&adv7511->wq);
+-
+-	return IRQ_HANDLED;
+-}
+-
+-static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
+-						 unsigned int irq)
++static int adv7511_irq_process(struct adv7511 *adv7511)
+ {
+ 	unsigned int irq0, irq1;
+-	unsigned int pending;
+ 	int ret;
+ 
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
+ 	ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
+ 	if (ret < 0)
+-		return 0;
++		return ret;
++
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
++	regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
++
++	if (irq0 & ADV7511_INT0_HDP)
++		drm_helper_hpd_irq_event(adv7511->encoder->dev);
++
++	if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
++		adv7511->edid_read = true;
++
++		if (adv7511->i2c_main->irq)
++			wake_up_all(&adv7511->wq);
++	}
++
++	return 0;
++}
+ 
+-	pending = (irq1 << 8) | irq0;
++static irqreturn_t adv7511_irq_handler(int irq, void *devid)
++{
++	struct adv7511 *adv7511 = devid;
++	int ret;
+ 
+-	return pending & irq;
++	ret = adv7511_irq_process(adv7511);
++	return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
+ }
+ 
+-static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
+-				      int timeout)
++/* -----------------------------------------------------------------------------
++ * EDID retrieval
++ */
++
++static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
+ {
+-	unsigned int pending;
+ 	int ret;
+ 
+ 	if (adv7511->i2c_main->irq) {
+ 		ret = wait_event_interruptible_timeout(adv7511->wq,
+-				adv7511_is_interrupt_pending(adv7511, irq),
+-				msecs_to_jiffies(timeout));
+-		if (ret <= 0)
+-			return 0;
+-		pending = adv7511_is_interrupt_pending(adv7511, irq);
++				adv7511->edid_read, msecs_to_jiffies(timeout));
+ 	} else {
+-		if (timeout < 25)
+-			timeout = 25;
+-		do {
+-			pending = adv7511_is_interrupt_pending(adv7511, irq);
+-			if (pending)
++		for (; timeout > 0; timeout -= 25) {
++			ret = adv7511_irq_process(adv7511);
++			if (ret < 0)
+ 				break;
++
++			if (adv7511->edid_read)
++				break;
++
+ 			msleep(25);
+-			timeout -= 25;
+-		} while (timeout >= 25);
++		}
+ 	}
+ 
+-	return pending;
++	return adv7511->edid_read ? 0 : -EIO;
+ }
+ 
+-/* -----------------------------------------------------------------------------
+- * EDID retrieval
+- */
+-
+ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 				  size_t len)
+ {
+@@ -463,19 +466,14 @@ static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
+ 			return ret;
+ 
+ 		if (status != 2) {
++			adv7511->edid_read = false;
+ 			regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
+ 				     block);
+-			ret = adv7511_wait_for_interrupt(adv7511,
+-					ADV7511_INT0_EDID_READY |
+-					ADV7511_INT1_DDC_ERROR, 200);
+-
+-			if (!(ret & ADV7511_INT0_EDID_READY))
+-				return -EIO;
++			ret = adv7511_wait_for_edid(adv7511, 200);
++			if (ret < 0)
++				return ret;
+ 		}
+ 
+-		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
+-
+ 		/* Break this apart, hopefully more I2C controllers will
+ 		 * support 64 byte transfers than 256 byte transfers
+ 		 */
+@@ -528,7 +526,9 @@ static int adv7511_get_modes(struct drm_encoder *encoder,
+ 	/* Reading the EDID only works if the device is powered */
+ 	if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		adv7511->current_edid_segment = -1;
+@@ -563,7 +563,9 @@ static void adv7511_encoder_dpms(struct drm_encoder *encoder, int mode)
+ 		adv7511->current_edid_segment = -1;
+ 
+ 		regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+-			     ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
++			     ADV7511_INT0_EDID_READY);
++		regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
++			     ADV7511_INT1_DDC_ERROR);
+ 		regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ 				   ADV7511_POWER_POWER_DOWN, 0);
+ 		/*
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 5c66b56..ec4d932 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1042,7 +1042,7 @@ static void vlv_save_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		s->lra_limits[i] = I915_READ(GEN7_LRA_LIMITS_BASE + i * 4);
+ 
+ 	s->media_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
+-	s->gfx_max_req_count	= I915_READ(GEN7_MEDIA_MAX_REQ_COUNT);
++	s->gfx_max_req_count	= I915_READ(GEN7_GFX_MAX_REQ_COUNT);
+ 
+ 	s->render_hwsp		= I915_READ(RENDER_HWS_PGA_GEN7);
+ 	s->ecochk		= I915_READ(GAM_ECOCHK);
+@@ -1124,7 +1124,7 @@ static void vlv_restore_gunit_s0ix_state(struct drm_i915_private *dev_priv)
+ 		I915_WRITE(GEN7_LRA_LIMITS_BASE + i * 4, s->lra_limits[i]);
+ 
+ 	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->media_max_req_count);
+-	I915_WRITE(GEN7_MEDIA_MAX_REQ_COUNT, s->gfx_max_req_count);
++	I915_WRITE(GEN7_GFX_MAX_REQ_COUNT, s->gfx_max_req_count);
+ 
+ 	I915_WRITE(RENDER_HWS_PGA_GEN7,	s->render_hwsp);
+ 	I915_WRITE(GAM_ECOCHK,		s->ecochk);
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index ede5bbb..07320cb 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3718,14 +3718,12 @@ static int i8xx_irq_postinstall(struct drm_device *dev)
+ 		~(I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+ 	I915_WRITE16(IMR, dev_priv->irq_mask);
+ 
+ 	I915_WRITE16(IER,
+ 		     I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		     I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		     I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		     I915_USER_INTERRUPT);
+ 	POSTING_READ16(IER);
+ 
+@@ -3887,14 +3885,12 @@ static int i915_irq_postinstall(struct drm_device *dev)
+ 		  I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+ 		  I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
+-		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
+-		  I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
++		  I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
+ 
+ 	enable_mask =
+ 		I915_ASLE_INTERRUPT |
+ 		I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
+ 		I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
+-		I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
+ 		I915_USER_INTERRUPT;
+ 
+ 	if (I915_HAS_HOTPLUG(dev)) {
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 33b3d0a2..f536ff2 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -1740,6 +1740,7 @@ enum punit_power_well {
+ #define   GMBUS_CYCLE_INDEX	(2<<25)
+ #define   GMBUS_CYCLE_STOP	(4<<25)
+ #define   GMBUS_BYTE_COUNT_SHIFT 16
++#define   GMBUS_BYTE_COUNT_MAX   256U
+ #define   GMBUS_SLAVE_INDEX_SHIFT 8
+ #define   GMBUS_SLAVE_ADDR_SHIFT 1
+ #define   GMBUS_SLAVE_READ	(1<<0)
+diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
+index b31088a..56e437e 100644
+--- a/drivers/gpu/drm/i915/intel_i2c.c
++++ b/drivers/gpu/drm/i915/intel_i2c.c
+@@ -270,18 +270,17 @@ gmbus_wait_idle(struct drm_i915_private *dev_priv)
+ }
+ 
+ static int
+-gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
+-		u32 gmbus1_index)
++gmbus_xfer_read_chunk(struct drm_i915_private *dev_priv,
++		      unsigned short addr, u8 *buf, unsigned int len,
++		      u32 gmbus1_index)
+ {
+ 	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+-	u8 *buf = msg->buf;
+ 
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   gmbus1_index |
+ 		   GMBUS_CYCLE_WAIT |
+ 		   (len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_READ | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -303,11 +302,35 @@ gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
+ }
+ 
+ static int
+-gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
++gmbus_xfer_read(struct drm_i915_private *dev_priv, struct i2c_msg *msg,
++		u32 gmbus1_index)
+ {
+-	int reg_offset = dev_priv->gpio_mmio_base;
+-	u16 len = msg->len;
+ 	u8 *buf = msg->buf;
++	unsigned int rx_size = msg->len;
++	unsigned int len;
++	int ret;
++
++	do {
++		len = min(rx_size, GMBUS_BYTE_COUNT_MAX);
++
++		ret = gmbus_xfer_read_chunk(dev_priv, msg->addr,
++					    buf, len, gmbus1_index);
++		if (ret)
++			return ret;
++
++		rx_size -= len;
++		buf += len;
++	} while (rx_size != 0);
++
++	return 0;
++}
++
++static int
++gmbus_xfer_write_chunk(struct drm_i915_private *dev_priv,
++		       unsigned short addr, u8 *buf, unsigned int len)
++{
++	int reg_offset = dev_priv->gpio_mmio_base;
++	unsigned int chunk_size = len;
+ 	u32 val, loop;
+ 
+ 	val = loop = 0;
+@@ -319,8 +342,8 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 	I915_WRITE(GMBUS3 + reg_offset, val);
+ 	I915_WRITE(GMBUS1 + reg_offset,
+ 		   GMBUS_CYCLE_WAIT |
+-		   (msg->len << GMBUS_BYTE_COUNT_SHIFT) |
+-		   (msg->addr << GMBUS_SLAVE_ADDR_SHIFT) |
++		   (chunk_size << GMBUS_BYTE_COUNT_SHIFT) |
++		   (addr << GMBUS_SLAVE_ADDR_SHIFT) |
+ 		   GMBUS_SLAVE_WRITE | GMBUS_SW_RDY);
+ 	while (len) {
+ 		int ret;
+@@ -337,6 +360,29 @@ gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
+ 		if (ret)
+ 			return ret;
+ 	}
++
++	return 0;
++}
++
++static int
++gmbus_xfer_write(struct drm_i915_private *dev_priv, struct i2c_msg *msg)
++{
++	u8 *buf = msg->buf;
++	unsigned int tx_size = msg->len;
++	unsigned int len;
++	int ret;
++
++	do {
++		len = min(tx_size, GMBUS_BYTE_COUNT_MAX);
++
++		ret = gmbus_xfer_write_chunk(dev_priv, msg->addr, buf, len);
++		if (ret)
++			return ret;
++
++		buf += len;
++		tx_size -= len;
++	} while (tx_size != 0);
++
+ 	return 0;
+ }
+ 
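The chunking loop added above is easy to see in isolation; this standalone sketch, with the hardware access replaced by a stub, splits an arbitrary transfer into GMBUS_BYTE_COUNT_MAX-sized pieces the same way.

#include <stdio.h>

#define GMBUS_BYTE_COUNT_MAX 256u
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Stand-in for one hardware cycle; the real code programs GMBUS1 here. */
static int xfer_chunk(unsigned len)
{
	printf("chunk of %u bytes\n", len);
	return 0;
}

static int xfer(unsigned total)
{
	unsigned len;

	do {
		len = MIN(total, GMBUS_BYTE_COUNT_MAX);
		int ret = xfer_chunk(len);
		if (ret)
			return ret;
		total -= len;
	} while (total != 0);
	return 0;
}

int main(void)
{
	return xfer(600);   /* 256 + 256 + 88, instead of a truncated count */
}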
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 86807ee..9bd5611 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -330,8 +330,10 @@ atombios_set_crtc_dtd_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+@@ -374,8 +376,10 @@ static void atombios_crtc_set_timing(struct drm_crtc *crtc,
+ 		misc |= ATOM_COMPOSITESYNC;
+ 	if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ 		misc |= ATOM_INTERLACE;
+-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		misc |= ATOM_DOUBLE_CLOCK_MODE;
++	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
++		misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
+ 
+ 	args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
+ 	args.ucCRTC = radeon_crtc->crtc_id;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9c47867..7fe5590 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -459,6 +459,10 @@
+ #define USB_DEVICE_ID_UGCI_FLYING	0x0020
+ #define USB_DEVICE_ID_UGCI_FIGHTING	0x0030
+ 
++#define USB_VENDOR_ID_HP		0x03f0
++#define USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE	0x0a4a
++#define USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE		0x134a
++
+ #define USB_VENDOR_ID_HUION		0x256c
+ #define USB_DEVICE_ID_HUION_TABLET	0x006e
+ 
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index a821277..4e3ae9f 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -78,6 +78,8 @@ static const struct hid_blacklist {
+ 	{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+ 	{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
++	{ USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL },
+ 	{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ 	{ USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index 2978f5e..00bc30e 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -135,7 +135,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ 			   GFP_KERNEL);
+ 	if (!open_info) {
+ 		err = -ENOMEM;
+-		goto error0;
++		goto error_gpadl;
+ 	}
+ 
+ 	init_completion(&open_info->waitevent);
+@@ -151,7 +151,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ 
+ 	if (userdatalen > MAX_USER_DEFINED_BYTES) {
+ 		err = -EINVAL;
+-		goto error0;
++		goto error_gpadl;
+ 	}
+ 
+ 	if (userdatalen)
+@@ -195,6 +195,9 @@ error1:
+ 	list_del(&open_info->msglistentry);
+ 	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+ 
++error_gpadl:
++	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
++
+ error0:
+ 	free_pages((unsigned long)out,
+ 		get_order(send_ringbuffer_size + recv_ringbuffer_size));
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 5f96b1b..019d542 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -833,7 +833,7 @@ static int rk3x_i2c_xfer(struct i2c_adapter *adap,
+ 	clk_disable(i2c->clk);
+ 	spin_unlock_irqrestore(&i2c->lock, flags);
+ 
+-	return ret;
++	return ret < 0 ? ret : num;
+ }
+ 
+ static u32 rk3x_i2c_func(struct i2c_adapter *adap)
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index edf274c..8143162 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -596,6 +596,7 @@ int i2c_generic_scl_recovery(struct i2c_adapter *adap)
+ 	adap->bus_recovery_info->set_scl(adap, 1);
+ 	return i2c_generic_recovery(adap);
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
+ 
+ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ {
+@@ -610,6 +611,7 @@ int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
+ 
+ int i2c_recover_bus(struct i2c_adapter *adap)
+ {
+@@ -619,6 +621,7 @@ int i2c_recover_bus(struct i2c_adapter *adap)
+ 	dev_dbg(&adap->dev, "Trying i2c bus recovery\n");
+ 	return adap->bus_recovery_info->recover_bus(adap);
+ }
++EXPORT_SYMBOL_GPL(i2c_recover_bus);
+ 
+ static int i2c_device_probe(struct device *dev)
+ {
+@@ -1410,6 +1413,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ 
+ 	dev_dbg(&adap->dev, "adapter [%s] registered\n", adap->name);
+ 
++	pm_runtime_no_callbacks(&adap->dev);
++
+ #ifdef CONFIG_I2C_COMPAT
+ 	res = class_compat_create_link(i2c_adapter_compat_class, &adap->dev,
+ 				       adap->dev.parent);
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 593f7ca..06cc1ff 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -32,8 +32,9 @@ struct i2c_mux_priv {
+ 	struct i2c_algorithm algo;
+ 
+ 	struct i2c_adapter *parent;
+-	void *mux_priv;	/* the mux chip/device */
+-	u32  chan_id;	/* the channel id */
++	struct device *mux_dev;
++	void *mux_priv;
++	u32 chan_id;
+ 
+ 	int (*select)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+ 	int (*deselect)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
+@@ -119,6 +120,7 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
+ 
+ 	/* Set up private adapter data */
+ 	priv->parent = parent;
++	priv->mux_dev = mux_dev;
+ 	priv->mux_priv = mux_priv;
+ 	priv->chan_id = chan_id;
+ 	priv->select = select;
+@@ -203,7 +205,7 @@ void i2c_del_mux_adapter(struct i2c_adapter *adap)
+ 	char symlink_name[20];
+ 
+ 	snprintf(symlink_name, sizeof(symlink_name), "channel-%u", priv->chan_id);
+-	sysfs_remove_link(&adap->dev.parent->kobj, symlink_name);
++	sysfs_remove_link(&priv->mux_dev->kobj, symlink_name);
+ 
+ 	sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
+ 	i2c_del_adapter(adap);
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index b0e5852..44d1d79 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -218,18 +218,10 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+-		.name = "C1E-BYT",
+-		.desc = "MWAIT 0x01",
+-		.flags = MWAIT2flg(0x01),
+-		.exit_latency = 15,
+-		.target_residency = 30,
+-		.enter = &intel_idle,
+-		.enter_freeze = intel_idle_freeze, },
+-	{
+ 		.name = "C6N-BYT",
+ 		.desc = "MWAIT 0x58",
+ 		.flags = MWAIT2flg(0x58) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 40,
++		.exit_latency = 300,
+ 		.target_residency = 275,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -237,7 +229,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.name = "C6S-BYT",
+ 		.desc = "MWAIT 0x52",
+ 		.flags = MWAIT2flg(0x52) | CPUIDLE_FLAG_TLB_FLUSHED,
+-		.exit_latency = 140,
++		.exit_latency = 500,
+ 		.target_residency = 560,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+@@ -246,7 +238,7 @@ static struct cpuidle_state byt_cstates[] = {
+ 		.desc = "MWAIT 0x60",
+ 		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+ 		.exit_latency = 1200,
+-		.target_residency = 1500,
++		.target_residency = 4000,
+ 		.enter = &intel_idle,
+ 		.enter_freeze = intel_idle_freeze, },
+ 	{
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 8c014b5..38acb3c 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -99,12 +99,15 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
+ 	if (dmasync)
+ 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
+ 
++	if (!size)
++		return ERR_PTR(-EINVAL);
++
+ 	/*
+ 	 * If the combination of the addr and size requested for this memory
+ 	 * region causes an integer overflow, return error.
+ 	 */
+-	if ((PAGE_ALIGN(addr + size) <= size) ||
+-	    (PAGE_ALIGN(addr + size) <= addr))
++	if (((addr + size) < addr) ||
++	    PAGE_ALIGN(addr + size) < (addr + size))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (!can_do_mlock())
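A standalone sketch of the corrected overflow test, using plain 64-bit arithmetic rather than the kernel's types: reject an empty region, a wrapping addr + size, and an end address whose page round-up wraps.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE     4096ull
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

static int range_ok(uint64_t addr, uint64_t size)
{
	if (!size)
		return 0;                         /* empty region */
	if (addr + size < addr)
		return 0;                         /* plain wraparound */
	if (PAGE_ALIGN(addr + size) < addr + size)
		return 0;                         /* round-up wraps */
	return 1;
}

int main(void)
{
	printf("%d\n", range_ok(0x1000, 0x2000));        /* 1 */
	printf("%d\n", range_ok(UINT64_MAX - 100, 200)); /* 0: wraps */
	printf("%d\n", range_ok(UINT64_MAX - 100, 0));   /* 0: empty */
	return 0;
}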
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index ed2bd67..fbde33a 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -2605,8 +2605,7 @@ static int build_lso_seg(struct mlx4_wqe_lso_seg *wqe, struct ib_send_wr *wr,
+ 
+ 	memcpy(wqe->header, wr->wr.ud.header, wr->wr.ud.hlen);
+ 
+-	*lso_hdr_sz  = cpu_to_be32((wr->wr.ud.mss - wr->wr.ud.hlen) << 16 |
+-				   wr->wr.ud.hlen);
++	*lso_hdr_sz  = cpu_to_be32(wr->wr.ud.mss << 16 | wr->wr.ud.hlen);
+ 	*lso_seg_len = halign;
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index 20e859a..76eb57b 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -409,8 +409,8 @@ int iser_send_command(struct iscsi_conn *conn,
+ 	if (scsi_prot_sg_count(sc)) {
+ 		prot_buf->buf  = scsi_prot_sglist(sc);
+ 		prot_buf->size = scsi_prot_sg_count(sc);
+-		prot_buf->data_len = data_buf->data_len >>
+-				     ilog2(sc->device->sector_size) * 8;
++		prot_buf->data_len = (data_buf->data_len >>
++				     ilog2(sc->device->sector_size)) * 8;
+ 	}
+ 
+ 	if (hdr->flags & ISCSI_FLAG_CMD_READ) {
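The iser change is a pure operator-precedence fix: '*' binds tighter than '>>', so the unparenthesized expression shifted by ilog2(sector_size) * 8 bits. The toy program below shows the difference; it uses a deliberately small sector size so the buggy shift stays well-defined (with real 512-byte sectors the shift count would be 72, which is undefined behaviour even on a 64-bit type).

#include <stdio.h>
#include <stdint.h>

static unsigned ilog2u(uint64_t v)
{
	unsigned r = 0;
	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	uint64_t data_len = 1 << 20;   /* toy transfer size */
	unsigned sector_size = 16;     /* toy sector size, see note above */

	uint64_t buggy = data_len >> ilog2u(sector_size) * 8;   /* >> (4 * 8) */
	uint64_t fixed = (data_len >> ilog2u(sector_size)) * 8; /* (>> 4) * 8 */

	printf("buggy=%llu fixed=%llu\n",
	       (unsigned long long)buggy, (unsigned long long)fixed);
	return 0;
}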
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 075b19c..147029a 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -222,7 +222,7 @@ fail:
+ static void
+ isert_free_rx_descriptors(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
++	struct ib_device *ib_dev = isert_conn->conn_device->ib_device;
+ 	struct iser_rx_desc *rx_desc;
+ 	int i;
+ 
+@@ -719,8 +719,8 @@ out:
+ static void
+ isert_connect_release(struct isert_conn *isert_conn)
+ {
+-	struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
+ 	struct isert_device *device = isert_conn->conn_device;
++	struct ib_device *ib_dev = device->ib_device;
+ 
+ 	isert_dbg("conn %p\n", isert_conn);
+ 
+@@ -728,7 +728,8 @@ isert_connect_release(struct isert_conn *isert_conn)
+ 		isert_conn_free_fastreg_pool(isert_conn);
+ 
+ 	isert_free_rx_descriptors(isert_conn);
+-	rdma_destroy_id(isert_conn->conn_cm_id);
++	if (isert_conn->conn_cm_id)
++		rdma_destroy_id(isert_conn->conn_cm_id);
+ 
+ 	if (isert_conn->conn_qp) {
+ 		struct isert_comp *comp = isert_conn->conn_qp->recv_cq->cq_context;
+@@ -878,12 +879,15 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ 	return 0;
+ }
+ 
+-static void
++static int
+ isert_connect_error(struct rdma_cm_id *cma_id)
+ {
+ 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
+ 
++	isert_conn->conn_cm_id = NULL;
+ 	isert_put_conn(isert_conn);
++
++	return -1;
+ }
+ 
+ static int
+@@ -912,7 +916,7 @@ isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	case RDMA_CM_EVENT_REJECTED:       /* FALLTHRU */
+ 	case RDMA_CM_EVENT_UNREACHABLE:    /* FALLTHRU */
+ 	case RDMA_CM_EVENT_CONNECT_ERROR:
+-		isert_connect_error(cma_id);
++		ret = isert_connect_error(cma_id);
+ 		break;
+ 	default:
+ 		isert_err("Unhandled RDMA CMA event: %d\n", event->event);
+@@ -1861,11 +1865,13 @@ isert_completion_rdma_read(struct iser_tx_desc *tx_desc,
+ 	cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
+ 	spin_unlock_bh(&cmd->istate_lock);
+ 
+-	if (ret)
++	if (ret) {
++		target_put_sess_cmd(se_cmd->se_sess, se_cmd);
+ 		transport_send_check_condition_and_sense(se_cmd,
+ 							 se_cmd->pi_err, 0);
+-	else
++	} else {
+ 		target_execute_cmd(se_cmd);
++	}
+ }
+ 
+ static void
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index 27bcdbc..ea6cb64 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1159,13 +1159,14 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
+ 					bool report_buttons)
+ {
+ 	struct alps_data *priv = psmouse->private;
+-	struct input_dev *dev;
++	struct input_dev *dev, *dev2 = NULL;
+ 
+ 	/* Figure out which device to use to report the bare packet */
+ 	if (priv->proto_version == ALPS_PROTO_V2 &&
+ 	    (priv->flags & ALPS_DUALPOINT)) {
+ 		/* On V2 devices the DualPoint Stick reports bare packets */
+ 		dev = priv->dev2;
++		dev2 = psmouse->dev;
+ 	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
+ 		/* Register dev3 mouse if we received PS/2 packet first time */
+ 		if (!IS_ERR(priv->dev3))
+@@ -1177,7 +1178,7 @@ static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
+ 	}
+ 
+ 	if (report_buttons)
+-		alps_report_buttons(dev, NULL,
++		alps_report_buttons(dev, dev2,
+ 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
+ 
+ 	input_report_rel(dev, REL_X,
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 6e22682..991dc6b 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -893,6 +893,21 @@ static psmouse_ret_t elantech_process_byte(struct psmouse *psmouse)
+ }
+ 
+ /*
++ * This writes the reg_07 value again to the hardware at the end of every
++ * set_rate call because the register loses its value. reg_07 allows setting
++ * absolute mode on v4 hardware.
++ */
++static void elantech_set_rate_restore_reg_07(struct psmouse *psmouse,
++		unsigned int rate)
++{
++	struct elantech_data *etd = psmouse->private;
++
++	etd->original_set_rate(psmouse, rate);
++	if (elantech_write_reg(psmouse, 0x07, etd->reg_07))
++		psmouse_err(psmouse, "restoring reg_07 failed\n");
++}
++
++/*
+  * Put the touchpad into absolute mode
+  */
+ static int elantech_set_absolute_mode(struct psmouse *psmouse)
+@@ -1094,6 +1109,8 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse,
+  * Asus K53SV              0x450f01        78, 15, 0c      2 hw buttons
+  * Asus G46VW              0x460f02        00, 18, 0c      2 hw buttons
+  * Asus G750JX             0x360f00        00, 16, 0c      2 hw buttons
++ * Asus TP500LN            0x381f17        10, 14, 0e      clickpad
++ * Asus X750JN             0x381f17        10, 14, 0e      clickpad
+  * Asus UX31               0x361f00        20, 15, 0e      clickpad
+  * Asus UX32VD             0x361f02        00, 15, 0e      clickpad
+  * Avatar AVIU-145A2       0x361f00        ?               clickpad
+@@ -1635,6 +1652,11 @@ int elantech_init(struct psmouse *psmouse)
+ 		goto init_fail;
+ 	}
+ 
++	if (etd->fw_version == 0x381f17) {
++		etd->original_set_rate = psmouse->set_rate;
++		psmouse->set_rate = elantech_set_rate_restore_reg_07;
++	}
++
+ 	if (elantech_set_input_params(psmouse)) {
+ 		psmouse_err(psmouse, "failed to query touchpad range.\n");
+ 		goto init_fail;
+diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h
+index 6f3afec..f965d15 100644
+--- a/drivers/input/mouse/elantech.h
++++ b/drivers/input/mouse/elantech.h
+@@ -142,6 +142,7 @@ struct elantech_data {
+ 	struct finger_pos mt[ETP_MAX_FINGERS];
+ 	unsigned char parity[256];
+ 	int (*send_cmd)(struct psmouse *psmouse, unsigned char c, unsigned char *param);
++	void (*original_set_rate)(struct psmouse *psmouse, unsigned int rate);
+ };
+ 
+ #ifdef CONFIG_MOUSE_PS2_ELANTECH
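The elantech fix uses a save-and-chain callback pattern: stash the original set_rate, install a wrapper that calls it and then re-applies the register the hardware forgot. A minimal sketch with invented names:

#include <stdio.h>

struct dev {
	void (*set_rate)(struct dev *d, unsigned rate);
	void (*original_set_rate)(struct dev *d, unsigned rate);
	unsigned reg_07;
};

static void generic_set_rate(struct dev *d, unsigned rate)
{
	printf("rate=%u (reg_07 clobbered)\n", rate);
	d->reg_07 = 0;                 /* hardware forgets the setting */
}

static void set_rate_restore_reg_07(struct dev *d, unsigned rate)
{
	d->original_set_rate(d, rate); /* chain to the saved callback */
	d->reg_07 = 1;                 /* re-apply absolute mode */
}

int main(void)
{
	struct dev d = { .set_rate = generic_set_rate, .reg_07 = 1 };

	/* quirky firmware detected: interpose the wrapper */
	d.original_set_rate = d.set_rate;
	d.set_rate = set_rate_restore_reg_07;

	d.set_rate(&d, 100);
	printf("reg_07=%u\n", d.reg_07);   /* still 1 */
	return 0;
}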
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 713a962..41473929 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -925,11 +925,10 @@ static int crypt_convert(struct crypt_config *cc,
+ 
+ 		switch (r) {
+ 		/* async */
++		case -EINPROGRESS:
+ 		case -EBUSY:
+ 			wait_for_completion(&ctx->restart);
+ 			reinit_completion(&ctx->restart);
+-			/* fall through*/
+-		case -EINPROGRESS:
+ 			ctx->req = NULL;
+ 			ctx->cc_sector++;
+ 			continue;
+@@ -1346,10 +1345,8 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+ 	struct crypt_config *cc = io->cc;
+ 
+-	if (error == -EINPROGRESS) {
+-		complete(&ctx->restart);
++	if (error == -EINPROGRESS)
+ 		return;
+-	}
+ 
+ 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
+ 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
+@@ -1360,12 +1357,15 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
+ 
+ 	if (!atomic_dec_and_test(&ctx->cc_pending))
+-		return;
++		goto done;
+ 
+ 	if (bio_data_dir(io->base_bio) == READ)
+ 		kcryptd_crypt_read_done(io);
+ 	else
+ 		kcryptd_crypt_write_io_submit(io, 1);
++done:
++	if (!completion_done(&ctx->restart))
++		complete(&ctx->restart);
+ }
+ 
+ static void kcryptd_crypt(struct work_struct *work)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 717daad..e617878 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -249,6 +249,7 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	const int rw = bio_data_dir(bio);
+ 	struct mddev *mddev = q->queuedata;
+ 	unsigned int sectors;
++	int cpu;
+ 
+ 	if (mddev == NULL || mddev->pers == NULL
+ 	    || !mddev->ready) {
+@@ -284,7 +285,10 @@ static void md_make_request(struct request_queue *q, struct bio *bio)
+ 	sectors = bio_sectors(bio);
+ 	mddev->pers->make_request(mddev, bio);
+ 
+-	generic_start_io_acct(rw, sectors, &mddev->gendisk->part0);
++	cpu = part_stat_lock();
++	part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
++	part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
++	part_stat_unlock();
+ 
+ 	if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
+ 		wake_up(&mddev->sb_wait);
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 3ed9f42..3b5d7f7 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -313,7 +313,7 @@ static struct strip_zone *find_zone(struct r0conf *conf,
+ 
+ /*
+  * remaps the bio to the target device. we separate two flows.
+- * power 2 flow and a general flow for the sake of perfromance
++ * power 2 flow and a general flow for the sake of performance
+ */
+ static struct md_rdev *map_sector(struct mddev *mddev, struct strip_zone *zone,
+ 				sector_t sector, sector_t *sector_offset)
+@@ -524,6 +524,7 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			split = bio;
+ 		}
+ 
++		sector = bio->bi_iter.bi_sector;
+ 		zone = find_zone(mddev->private, &sector);
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+ 		split->bi_bdev = tmp_dev->bdev;
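The raid0 fix re-derives the lookup sector from the current split's start on every loop iteration instead of reusing a variable the helpers have already modified. A toy version of the same pattern, with a fake zone table standing in for find_zone():

#include <stdio.h>

/* Toy zone table: each zone is 100 sectors wide. */
static int find_zone(unsigned long *sector)
{
	int zone = *sector / 100;

	*sector %= 100;              /* like the kernel helper, mutates its input */
	return zone;
}

int main(void)
{
	unsigned long start = 250, left = 120;

	while (left) {
		unsigned long sector = start;        /* re-derive per split */
		int zone = find_zone(&sector);
		unsigned long chunk = 100 - sector;  /* up to the zone edge */

		if (chunk > left)
			chunk = left;
		printf("zone %d, offset %lu, %lu sectors\n", zone, sector, chunk);
		start += chunk;
		left -= chunk;
	}
	return 0;
}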
+diff --git a/drivers/media/rc/img-ir/img-ir-core.c b/drivers/media/rc/img-ir/img-ir-core.c
+index 77c78de..7020659 100644
+--- a/drivers/media/rc/img-ir/img-ir-core.c
++++ b/drivers/media/rc/img-ir/img-ir-core.c
+@@ -146,7 +146,7 @@ static int img_ir_remove(struct platform_device *pdev)
+ {
+ 	struct img_ir_priv *priv = platform_get_drvdata(pdev);
+ 
+-	free_irq(priv->irq, img_ir_isr);
++	free_irq(priv->irq, priv);
+ 	img_ir_remove_hw(priv);
+ 	img_ir_remove_raw(priv);
+ 
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 65a326c..749ad56 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -240,6 +240,11 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+ 	if (mutex_lock_interruptible(&dev->v4l_lock))
+ 		return -ERESTARTSYS;
+ 
++	/*
++	 * Once URBs are cancelled, the URB complete handler
++	 * won't be running. This is required to safely release the
++	 * current buffer (dev->isoc_ctl.buf).
++	 */
+ 	stk1160_cancel_isoc(dev);
+ 
+ 	/*
+@@ -620,8 +625,16 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ 		stk1160_info("buffer [%p/%d] aborted\n",
+ 				buf, buf->vb.v4l2_buf.index);
+ 	}
+-	/* It's important to clear current buffer */
+-	dev->isoc_ctl.buf = NULL;
++
++	/* It's important to release the current buffer */
++	if (dev->isoc_ctl.buf) {
++		buf = dev->isoc_ctl.buf;
++		dev->isoc_ctl.buf = NULL;
++
++		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
++		stk1160_info("buffer [%p/%d] aborted\n",
++				buf, buf->vb.v4l2_buf.index);
++	}
+ 	spin_unlock_irqrestore(&dev->buf_lock, flags);
+ }
+ 
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index fc145d2..922a750 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -758,7 +758,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 
+ 		if (error || (card->current_mrq.tpc == MSPRO_CMD_STOP)) {
+ 			if (msb->data_dir == READ) {
+-				for (cnt = 0; cnt < msb->current_seg; cnt++)
++				for (cnt = 0; cnt < msb->current_seg; cnt++) {
+ 					t_len += msb->req_sg[cnt].length
+ 						 / msb->page_size;
+ 
+@@ -766,6 +766,7 @@ static int mspro_block_complete_req(struct memstick_dev *card, int error)
+ 						t_len += msb->current_page - 1;
+ 
+ 					t_len *= msb->page_size;
++				}
+ 			}
+ 		} else
+ 			t_len = blk_rq_bytes(msb->block_req);
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 2a87f69..1aed3b7 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -128,7 +128,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 	int platform_id;
+ 	int r;
+ 
+-	if (id < 0)
++	if (id == PLATFORM_DEVID_AUTO)
+ 		platform_id = id;
+ 	else
+ 		platform_id = id + cell->id;
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index e8a4218..459ed1b 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -930,7 +930,9 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
+ 		return PTR_ERR(host->clk_sample);
+ 	}
+ 
+-	host->reset = devm_reset_control_get(&pdev->dev, "ahb");
++	host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
++	if (PTR_ERR(host->reset) == -EPROBE_DEFER)
++		return PTR_ERR(host->reset);
+ 
+ 	ret = clk_prepare_enable(host->clk_ahb);
+ 	if (ret) {
+diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
+index a31c357..dba7e1c 100644
+--- a/drivers/mmc/host/tmio_mmc_pio.c
++++ b/drivers/mmc/host/tmio_mmc_pio.c
+@@ -1073,8 +1073,6 @@ EXPORT_SYMBOL(tmio_mmc_host_alloc);
+ void tmio_mmc_host_free(struct tmio_mmc_host *host)
+ {
+ 	mmc_free_host(host->mmc);
+-
+-	host->mmc = NULL;
+ }
+ EXPORT_SYMBOL(tmio_mmc_host_free);
+ 
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index 9d2e16f..b5e1548 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -410,7 +410,7 @@ int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb,
+ 		second_is_newer = !second_is_newer;
+ 	} else {
+ 		dbg_bld("PEB %d CRC is OK", pnum);
+-		bitflips = !!err;
++		bitflips |= !!err;
+ 	}
+ 	mutex_unlock(&ubi->buf_mutex);
+ 
+diff --git a/drivers/mtd/ubi/cdev.c b/drivers/mtd/ubi/cdev.c
+index d647e50..d16fccf 100644
+--- a/drivers/mtd/ubi/cdev.c
++++ b/drivers/mtd/ubi/cdev.c
+@@ -455,7 +455,7 @@ static long vol_cdev_ioctl(struct file *file, unsigned int cmd,
+ 		/* Validate the request */
+ 		err = -EINVAL;
+ 		if (req.lnum < 0 || req.lnum >= vol->reserved_pebs ||
+-		    req.bytes < 0 || req.lnum >= vol->usable_leb_size)
++		    req.bytes < 0 || req.bytes > vol->usable_leb_size)
+ 			break;
+ 
+ 		err = get_exclusive(desc);
+diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
+index 16e34b3..8c9a710 100644
+--- a/drivers/mtd/ubi/eba.c
++++ b/drivers/mtd/ubi/eba.c
+@@ -1419,7 +1419,8 @@ int ubi_eba_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 				 * during re-size.
+ 				 */
+ 				ubi_move_aeb_to_list(av, aeb, &ai->erase);
+-			vol->eba_tbl[aeb->lnum] = aeb->pnum;
++			else
++				vol->eba_tbl[aeb->lnum] = aeb->pnum;
+ 		}
+ 	}
+ 
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 8f7bde6..0bd92d8 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1002,7 +1002,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ 				int shutdown)
+ {
+ 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
+-	int vol_id = -1, uninitialized_var(lnum);
++	int vol_id = -1, lnum = -1;
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ 	int anchor = wrk->anchor;
+ #endif
+diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
+index 81d4153..77bf133 100644
+--- a/drivers/net/ethernet/cadence/macb.c
++++ b/drivers/net/ethernet/cadence/macb.c
+@@ -2165,7 +2165,7 @@ static void macb_configure_caps(struct macb *bp)
+ 		}
+ 	}
+ 
+-	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) == 0x2)
++	if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) >= 0x2)
+ 		bp->caps |= MACB_CAPS_MACB_IS_GEM;
+ 
+ 	if (macb_is_gem(bp)) {
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 7f997d3..a71c446 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -144,6 +144,11 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
+ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
+ 				     struct e1000_rx_ring *rx_ring,
+ 				     int *work_done, int work_to_do);
++static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
++					 struct e1000_rx_ring *rx_ring,
++					 int cleaned_count)
++{
++}
+ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
+ 				   struct e1000_rx_ring *rx_ring,
+ 				   int cleaned_count);
+@@ -3552,8 +3557,11 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ 		msleep(1);
+ 	/* e1000_down has a dependency on max_frame_size */
+ 	hw->max_frame_size = max_frame;
+-	if (netif_running(netdev))
++	if (netif_running(netdev)) {
++		/* prevent buffers from being reallocated */
++		adapter->alloc_rx_buf = e1000_alloc_dummy_rx_buffers;
+ 		e1000_down(adapter);
++	}
+ 
+ 	/* NOTE: netdev_alloc_skb reserves 16 bytes, and typically NET_IP_ALIGN
+ 	 * means we reserve 2 more, this pushes us to allocate from the next
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index af829c5..7ace07d 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1508,7 +1508,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 		np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ 		if (!np) {
+ 			dev_err(&pdev->dev, "missing phy-handle\n");
+-			return -EINVAL;
++			err = -EINVAL;
++			goto err_netdev;
+ 		}
+ 		of_property_read_u32(np, "reg", &pep->phy_addr);
+ 		pep->phy_intf = of_get_phy_mode(pdev->dev.of_node);
+@@ -1526,7 +1527,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 	pep->smi_bus = mdiobus_alloc();
+ 	if (pep->smi_bus == NULL) {
+ 		err = -ENOMEM;
+-		goto err_base;
++		goto err_netdev;
+ 	}
+ 	pep->smi_bus->priv = pep;
+ 	pep->smi_bus->name = "pxa168_eth smi";
+@@ -1551,13 +1552,10 @@ err_mdiobus:
+ 	mdiobus_unregister(pep->smi_bus);
+ err_free_mdio:
+ 	mdiobus_free(pep->smi_bus);
+-err_base:
+-	iounmap(pep->base);
+ err_netdev:
+ 	free_netdev(dev);
+ err_clk:
+-	clk_disable(clk);
+-	clk_put(clk);
++	clk_disable_unprepare(clk);
+ 	return err;
+ }
+ 
+@@ -1574,13 +1572,9 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	if (pep->phy)
+ 		phy_disconnect(pep->phy);
+ 	if (pep->clk) {
+-		clk_disable(pep->clk);
+-		clk_put(pep->clk);
+-		pep->clk = NULL;
++		clk_disable_unprepare(pep->clk);
+ 	}
+ 
+-	iounmap(pep->base);
+-	pep->base = NULL;
+ 	mdiobus_unregister(pep->smi_bus);
+ 	mdiobus_free(pep->smi_bus);
+ 	unregister_netdev(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index a7b58ba..3dccf01 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -981,20 +981,21 @@ static int mlx4_en_check_rxfh_func(struct net_device *dev, u8 hfunc)
+ 	struct mlx4_en_priv *priv = netdev_priv(dev);
+ 
+ 	/* check if requested function is supported by the device */
+-	if ((hfunc == ETH_RSS_HASH_TOP &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) ||
+-	    (hfunc == ETH_RSS_HASH_XOR &&
+-	     !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)))
+-		return -EINVAL;
++	if (hfunc == ETH_RSS_HASH_TOP) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP))
++			return -EINVAL;
++		if (!(dev->features & NETIF_F_RXHASH))
++			en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
++		return 0;
++	} else if (hfunc == ETH_RSS_HASH_XOR) {
++		if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))
++			return -EINVAL;
++		if (dev->features & NETIF_F_RXHASH)
++			en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
++		return 0;
++	}
+ 
+-	priv->rss_hash_fn = hfunc;
+-	if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n");
+-	if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH))
+-		en_warn(priv,
+-			"Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n");
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key,
+@@ -1068,6 +1069,8 @@ static int mlx4_en_set_rxfh(struct net_device *dev, const u32 *ring_index,
+ 		priv->prof->rss_rings = rss_rings;
+ 	if (key)
+ 		memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE);
++	if (hfunc != ETH_RSS_HASH_NO_CHANGE)
++		priv->rss_hash_fn = hfunc;
+ 
+ 	if (port_up) {
+ 		err = mlx4_en_start_port(dev);
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index af034db..9d15566 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1716,6 +1716,7 @@ ppp_receive_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)
+ {
+ 	/* note: a 0-length skb is used as an error indication */
+ 	if (skb->len > 0) {
++		skb_checksum_complete_unset(skb);
+ #ifdef CONFIG_PPP_MULTILINK
+ 		/* XXX do channel-level decompression here */
+ 		if (PPP_PROTO(skb) == PPP_MP)
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+index 90a714c..23806c2 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+@@ -321,6 +321,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
+ 	{RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
+ 	{RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
++	{RTL_USB_DEVICE(0x0b05, 0x17ba, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
+ 	{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+@@ -377,6 +378,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
+ 	{RTL_USB_DEVICE(0x2001, 0x3309, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
+ 	{RTL_USB_DEVICE(0x2001, 0x330a, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
++	{RTL_USB_DEVICE(0x2001, 0x330d, rtl92cu_hal_cfg)}, /*D-Link DWA-131 */
+ 	{RTL_USB_DEVICE(0x2019, 0xab2b, rtl92cu_hal_cfg)}, /*Planex -Abocom*/
+ 	{RTL_USB_DEVICE(0x20f4, 0x624d, rtl92cu_hal_cfg)}, /*TRENDNet*/
+ 	{RTL_USB_DEVICE(0x2357, 0x0100, rtl92cu_hal_cfg)}, /*TP-Link WN8200ND*/
+diff --git a/drivers/net/wireless/ti/wl18xx/debugfs.c b/drivers/net/wireless/ti/wl18xx/debugfs.c
+index c93fae9..5fbd223 100644
+--- a/drivers/net/wireless/ti/wl18xx/debugfs.c
++++ b/drivers/net/wireless/ti/wl18xx/debugfs.c
+@@ -139,7 +139,7 @@ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, protection_filter, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, accum_arp_pend_requests, "%u");
+ WL18XX_DEBUGFS_FWSTATS_FILE(rx_filter, max_arp_queue_dep, "%u");
+ 
+-WL18XX_DEBUGFS_FWSTATS_FILE(rx_rate, rx_frames_per_rates, "%u");
++WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(rx_rate, rx_frames_per_rates, 50);
+ 
+ WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY(aggr_size, tx_agg_vs_rate,
+ 				  AGGR_STATS_TX_AGG*AGGR_STATS_TX_RATE);
+diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
+index 0f2cfb0..bf14676 100644
+--- a/drivers/net/wireless/ti/wlcore/debugfs.h
++++ b/drivers/net/wireless/ti/wlcore/debugfs.h
+@@ -26,8 +26,8 @@
+ 
+ #include "wlcore.h"
+ 
+-int wl1271_format_buffer(char __user *userbuf, size_t count,
+-			 loff_t *ppos, char *fmt, ...);
++__printf(4, 5) int wl1271_format_buffer(char __user *userbuf, size_t count,
++					loff_t *ppos, char *fmt, ...);
+ 
+ int wl1271_debugfs_init(struct wl1271 *wl);
+ void wl1271_debugfs_exit(struct wl1271 *wl);
+diff --git a/drivers/nfc/st21nfcb/i2c.c b/drivers/nfc/st21nfcb/i2c.c
+index eb88693..7b53a5c 100644
+--- a/drivers/nfc/st21nfcb/i2c.c
++++ b/drivers/nfc/st21nfcb/i2c.c
+@@ -109,7 +109,7 @@ static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb)
+ 		return phy->ndlc->hard_fault;
+ 
+ 	r = i2c_master_send(client, skb->data, skb->len);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_send(client, skb->data, skb->len);
+ 	}
+@@ -148,7 +148,7 @@ static int st21nfcb_nci_i2c_read(struct st21nfcb_i2c_phy *phy,
+ 	struct i2c_client *client = phy->i2c_dev;
+ 
+ 	r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+-	if (r == -EREMOTEIO) {  /* Retry, chip was in standby */
++	if (r < 0) {  /* Retry, chip was in standby */
+ 		usleep_range(1000, 4000);
+ 		r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE);
+ 	}
+diff --git a/drivers/platform/x86/compal-laptop.c b/drivers/platform/x86/compal-laptop.c
+index 15c0fab..bceb30b 100644
+--- a/drivers/platform/x86/compal-laptop.c
++++ b/drivers/platform/x86/compal-laptop.c
+@@ -1026,9 +1026,9 @@ static int compal_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
+-						      "compal", data,
+-						      compal_hwmon_groups);
++	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
++							   "compal", data,
++							   compal_hwmon_groups);
+ 	if (IS_ERR(hwmon_dev)) {
+ 		err = PTR_ERR(hwmon_dev);
+ 		goto remove;
+@@ -1036,7 +1036,9 @@ static int compal_probe(struct platform_device *pdev)
+ 
+ 	/* Power supply */
+ 	initialize_power_supply_data(data);
+-	power_supply_register(&compal_device->dev, &data->psy);
++	err = power_supply_register(&compal_device->dev, &data->psy);
++	if (err < 0)
++		goto remove;
+ 
+ 	platform_set_drvdata(pdev, data);
+ 
+diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c
+index 9d69460..96b15e0 100644
+--- a/drivers/power/ipaq_micro_battery.c
++++ b/drivers/power/ipaq_micro_battery.c
+@@ -226,6 +226,7 @@ static struct power_supply micro_ac_power = {
+ static int micro_batt_probe(struct platform_device *pdev)
+ {
+ 	struct micro_battery *mb;
++	int ret;
+ 
+ 	mb = devm_kzalloc(&pdev->dev, sizeof(*mb), GFP_KERNEL);
+ 	if (!mb)
+@@ -233,14 +234,30 @@ static int micro_batt_probe(struct platform_device *pdev)
+ 
+ 	mb->micro = dev_get_drvdata(pdev->dev.parent);
+ 	mb->wq = create_singlethread_workqueue("ipaq-battery-wq");
++	if (!mb->wq)
++		return -ENOMEM;
++
+ 	INIT_DELAYED_WORK(&mb->update, micro_battery_work);
+ 	platform_set_drvdata(pdev, mb);
+ 	queue_delayed_work(mb->wq, &mb->update, 1);
+-	power_supply_register(&pdev->dev, &micro_batt_power);
+-	power_supply_register(&pdev->dev, &micro_ac_power);
++
++	ret = power_supply_register(&pdev->dev, &micro_batt_power);
++	if (ret < 0)
++		goto batt_err;
++
++	ret = power_supply_register(&pdev->dev, &micro_ac_power);
++	if (ret < 0)
++		goto ac_err;
+ 
+ 	dev_info(&pdev->dev, "iPAQ micro battery driver\n");
+ 	return 0;
++
++ac_err:
++	power_supply_unregister(&micro_ac_power);
++batt_err:
++	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
++	return ret;
+ }
+ 
+ static int micro_batt_remove(struct platform_device *pdev)
+@@ -251,6 +268,7 @@ static int micro_batt_remove(struct platform_device *pdev)
+ 	power_supply_unregister(&micro_ac_power);
+ 	power_supply_unregister(&micro_batt_power);
+ 	cancel_delayed_work_sync(&mb->update);
++	destroy_workqueue(mb->wq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/power/lp8788-charger.c b/drivers/power/lp8788-charger.c
+index 21fc233..176dab2 100644
+--- a/drivers/power/lp8788-charger.c
++++ b/drivers/power/lp8788-charger.c
+@@ -417,8 +417,10 @@ static int lp8788_psy_register(struct platform_device *pdev,
+ 	pchg->battery.num_properties = ARRAY_SIZE(lp8788_battery_prop);
+ 	pchg->battery.get_property = lp8788_battery_get_property;
+ 
+-	if (power_supply_register(&pdev->dev, &pchg->battery))
++	if (power_supply_register(&pdev->dev, &pchg->battery)) {
++		power_supply_unregister(&pchg->charger);
+ 		return -EPERM;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/power/twl4030_madc_battery.c b/drivers/power/twl4030_madc_battery.c
+index 7ef445a..cf90760 100644
+--- a/drivers/power/twl4030_madc_battery.c
++++ b/drivers/power/twl4030_madc_battery.c
+@@ -192,6 +192,7 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
+ {
+ 	struct twl4030_madc_battery *twl4030_madc_bat;
+ 	struct twl4030_madc_bat_platform_data *pdata = pdev->dev.platform_data;
++	int ret = 0;
+ 
+ 	twl4030_madc_bat = kzalloc(sizeof(*twl4030_madc_bat), GFP_KERNEL);
+ 	if (!twl4030_madc_bat)
+@@ -216,9 +217,11 @@ static int twl4030_madc_battery_probe(struct platform_device *pdev)
+ 
+ 	twl4030_madc_bat->pdata = pdata;
+ 	platform_set_drvdata(pdev, twl4030_madc_bat);
+-	power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	ret = power_supply_register(&pdev->dev, &twl4030_madc_bat->psy);
++	if (ret < 0)
++		kfree(twl4030_madc_bat);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int twl4030_madc_battery_remove(struct platform_device *pdev)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 675b5e7..5a0800d 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -1584,11 +1584,11 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
+ 			fp_possible = io_info.fpOkForIo;
+ 	}
+ 
+-	/* Use smp_processor_id() for now until cmd->request->cpu is CPU
++	/* Use raw_smp_processor_id() for now until cmd->request->cpu is CPU
+ 	   id by default, not CPU group id, otherwise all MSI-X queues won't
+ 	   be utilized */
+ 	cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
+-		smp_processor_id() % instance->msix_vectors : 0;
++		raw_smp_processor_id() % instance->msix_vectors : 0;
+ 
+ 	if (fp_possible) {
+ 		megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
+@@ -1693,7 +1693,10 @@ megasas_build_dcdb_fusion(struct megasas_instance *instance,
+ 			<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
+ 		cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
+ 		cmd->request_desc->SCSIIO.MSIxIndex =
+-			instance->msix_vectors ? smp_processor_id() % instance->msix_vectors : 0;
++			instance->msix_vectors ?
++				raw_smp_processor_id() %
++					instance->msix_vectors :
++				0;
+ 		os_timeout_value = scmd->request->timeout / HZ;
+ 
+ 		if (instance->secure_jbod_support &&
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 2d5ab6d..454536c 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -441,14 +441,11 @@ static u32 mvs_get_ncq_tag(struct sas_task *task, u32 *tag)
+ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 			     struct mvs_task_exec_info *tei)
+ {
+-	struct sas_ha_struct *sha = mvi->sas;
+ 	struct sas_task *task = tei->task;
+ 	struct domain_device *dev = task->dev;
+ 	struct mvs_device *mvi_dev = dev->lldd_dev;
+ 	struct mvs_cmd_hdr *hdr = tei->hdr;
+ 	struct asd_sas_port *sas_port = dev->port;
+-	struct sas_phy *sphy = dev->phy;
+-	struct asd_sas_phy *sas_phy = sha->sas_phy[sphy->number];
+ 	struct mvs_slot_info *slot;
+ 	void *buf_prd;
+ 	u32 tag = tei->tag, hdr_tag;
+@@ -468,7 +465,7 @@ static int mvs_task_prep_ata(struct mvs_info *mvi,
+ 	slot->tx = mvi->tx_prod;
+ 	del_q = TXQ_MODE_I | tag |
+ 		(TXQ_CMD_STP << TXQ_CMD_SHIFT) |
+-		(MVS_PHY_ID << TXQ_PHY_SHIFT) |
++		((sas_port->phy_mask & TXQ_PHY_MASK) << TXQ_PHY_SHIFT) |
+ 		(mvi_dev->taskfileset << TXQ_SRS_SHIFT);
+ 	mvi->tx[mvi->tx_prod] = cpu_to_le32(del_q);
+ 
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 6b78476..3290a3e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3100,6 +3100,7 @@ static void scsi_disk_release(struct device *dev)
+ 	ida_remove(&sd_index_ida, sdkp->index);
+ 	spin_unlock(&sd_index_lock);
+ 
++	blk_integrity_unregister(disk);
+ 	disk->private_data = NULL;
+ 	put_disk(disk);
+ 	put_device(&sdkp->device->sdev_gendev);
+diff --git a/drivers/scsi/sd_dif.c b/drivers/scsi/sd_dif.c
+index 14c7d42..5c06d29 100644
+--- a/drivers/scsi/sd_dif.c
++++ b/drivers/scsi/sd_dif.c
+@@ -77,7 +77,7 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
+ 
+ 		disk->integrity->flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ 
+-		if (!sdkp)
++		if (!sdkp->ATO)
+ 			return;
+ 
+ 		if (type == SD_DIF_TYPE3_PROTECTION)
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index efc6e44..bf8c5c1 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -746,21 +746,22 @@ static unsigned int copy_to_bounce_buffer(struct scatterlist *orig_sgl,
+ 			if (bounce_sgl[j].length == PAGE_SIZE) {
+ 				/* full..move to next entry */
+ 				sg_kunmap_atomic(bounce_addr);
++				bounce_addr = 0;
+ 				j++;
++			}
+ 
+-				/* if we need to use another bounce buffer */
+-				if (srclen || i != orig_sgl_count - 1)
+-					bounce_addr = sg_kmap_atomic(bounce_sgl,j);
++			/* if we need to use another bounce buffer */
++			if (srclen && bounce_addr == 0)
++				bounce_addr = sg_kmap_atomic(bounce_sgl, j);
+ 
+-			} else if (srclen == 0 && i == orig_sgl_count - 1) {
+-				/* unmap the last bounce that is < PAGE_SIZE */
+-				sg_kunmap_atomic(bounce_addr);
+-			}
+ 		}
+ 
+ 		sg_kunmap_atomic(src_addr - orig_sgl[i].offset);
+ 	}
+ 
++	if (bounce_addr)
++		sg_kunmap_atomic(bounce_addr);
++
+ 	local_irq_restore(flags);
+ 
+ 	return total_copied;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6fea4af..aea3a67 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -370,8 +370,6 @@ static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx,
+ 	if (spi_imx->dma_is_inited) {
+ 		dma = readl(spi_imx->base + MX51_ECSPI_DMA);
+ 
+-		spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+-		spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		spi_imx->rxt_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 		rx_wml_cfg = spi_imx->rx_wml << MX51_ECSPI_DMA_RX_WML_OFFSET;
+ 		tx_wml_cfg = spi_imx->tx_wml << MX51_ECSPI_DMA_TX_WML_OFFSET;
+@@ -868,6 +866,8 @@ static int spi_imx_sdma_init(struct device *dev, struct spi_imx_data *spi_imx,
+ 	master->max_dma_len = MAX_SDMA_BD_BYTES;
+ 	spi_imx->bitbang.master->flags = SPI_MASTER_MUST_RX |
+ 					 SPI_MASTER_MUST_TX;
++	spi_imx->tx_wml = spi_imx_get_fifosize(spi_imx) / 2;
++	spi_imx->rx_wml = spi_imx_get_fifosize(spi_imx) / 2;
+ 	spi_imx->dma_is_inited = 1;
+ 
+ 	return 0;
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 4eb7a98..7bf5186 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -245,7 +245,10 @@ static int spidev_message(struct spidev_data *spidev,
+ 		k_tmp->len = u_tmp->len;
+ 
+ 		total += k_tmp->len;
+-		if (total > bufsiz) {
++		/* Check total length of transfers.  Also check each
++		 * transfer length to avoid arithmetic overflow.
++		 */
++		if (total > bufsiz || k_tmp->len > bufsiz) {
+ 			status = -EMSGSIZE;
+ 			goto done;
+ 		}
+diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
+index 7bdb62b..f83e00c 100644
+--- a/drivers/staging/android/sync.c
++++ b/drivers/staging/android/sync.c
+@@ -114,7 +114,7 @@ void sync_timeline_signal(struct sync_timeline *obj)
+ 	list_for_each_entry_safe(pt, next, &obj->active_list_head,
+ 				 active_list) {
+ 		if (fence_is_signaled_locked(&pt->base))
+-			list_del(&pt->active_list);
++			list_del_init(&pt->active_list);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+diff --git a/drivers/staging/panel/panel.c b/drivers/staging/panel/panel.c
+index 6ed35b6..04fc217 100644
+--- a/drivers/staging/panel/panel.c
++++ b/drivers/staging/panel/panel.c
+@@ -335,11 +335,11 @@ static unsigned char lcd_bits[LCD_PORTS][LCD_BITS][BIT_STATES];
+  * LCD types
+  */
+ #define LCD_TYPE_NONE		0
+-#define LCD_TYPE_OLD		1
+-#define LCD_TYPE_KS0074		2
+-#define LCD_TYPE_HANTRONIX	3
+-#define LCD_TYPE_NEXCOM		4
+-#define LCD_TYPE_CUSTOM		5
++#define LCD_TYPE_CUSTOM		1
++#define LCD_TYPE_OLD		2
++#define LCD_TYPE_KS0074		3
++#define LCD_TYPE_HANTRONIX	4
++#define LCD_TYPE_NEXCOM		5
+ 
+ /*
+  * keypad types
+@@ -502,7 +502,7 @@ MODULE_PARM_DESC(keypad_type,
+ static int lcd_type = NOT_SET;
+ module_param(lcd_type, int, 0000);
+ MODULE_PARM_DESC(lcd_type,
+-		 "LCD type: 0=none, 1=old //, 2=serial ks0074, 3=hantronix //, 4=nexcom //, 5=compiled-in");
++		 "LCD type: 0=none, 1=compiled-in, 2=old, 3=serial ks0074, 4=hantronix, 5=nexcom");
+ 
+ static int lcd_height = NOT_SET;
+ module_param(lcd_height, int, 0000);
+diff --git a/drivers/staging/vt6655/rxtx.c b/drivers/staging/vt6655/rxtx.c
+index 07ce3fd..fdf5c56 100644
+--- a/drivers/staging/vt6655/rxtx.c
++++ b/drivers/staging/vt6655/rxtx.c
+@@ -1308,10 +1308,18 @@ int vnt_generate_fifo_header(struct vnt_private *priv, u32 dma_idx,
+ 			    priv->hw->conf.chandef.chan->hw_value);
+ 	}
+ 
+-	if (current_rate > RATE_11M)
+-		pkt_type = (u8)priv->byPacketType;
+-	else
++	if (current_rate > RATE_11M) {
++		if (info->band == IEEE80211_BAND_5GHZ) {
++			pkt_type = PK_TYPE_11A;
++		} else {
++			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
++				pkt_type = PK_TYPE_11GB;
++			else
++				pkt_type = PK_TYPE_11GA;
++		}
++	} else {
+ 		pkt_type = PK_TYPE_11B;
++	}
+ 
+ 	/*Set fifo controls */
+ 	if (pkt_type == PK_TYPE_11A)
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 77d6425..5e35612 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -537,7 +537,7 @@ static struct iscsit_transport iscsi_target_transport = {
+ 
+ static int __init iscsi_target_init_module(void)
+ {
+-	int ret = 0;
++	int ret = 0, size;
+ 
+ 	pr_debug("iSCSI-Target "ISCSIT_VERSION"\n");
+ 
+@@ -546,6 +546,7 @@ static int __init iscsi_target_init_module(void)
+ 		pr_err("Unable to allocate memory for iscsit_global\n");
+ 		return -1;
+ 	}
++	spin_lock_init(&iscsit_global->ts_bitmap_lock);
+ 	mutex_init(&auth_id_lock);
+ 	spin_lock_init(&sess_idr_lock);
+ 	idr_init(&tiqn_idr);
+@@ -555,15 +556,11 @@ static int __init iscsi_target_init_module(void)
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	ret = iscsi_thread_set_init();
+-	if (ret < 0)
++	size = BITS_TO_LONGS(ISCSIT_BITMAP_BITS) * sizeof(long);
++	iscsit_global->ts_bitmap = vzalloc(size);
++	if (!iscsit_global->ts_bitmap) {
++		pr_err("Unable to allocate iscsit_global->ts_bitmap\n");
+ 		goto configfs_out;
+-
+-	if (iscsi_allocate_thread_sets(TARGET_THREAD_SET_COUNT) !=
+-			TARGET_THREAD_SET_COUNT) {
+-		pr_err("iscsi_allocate_thread_sets() returned"
+-			" unexpected value!\n");
+-		goto ts_out1;
+ 	}
+ 
+ 	lio_qr_cache = kmem_cache_create("lio_qr_cache",
+@@ -572,7 +569,7 @@ static int __init iscsi_target_init_module(void)
+ 	if (!lio_qr_cache) {
+ 		pr_err("nable to kmem_cache_create() for"
+ 				" lio_qr_cache\n");
+-		goto ts_out2;
++		goto bitmap_out;
+ 	}
+ 
+ 	lio_dr_cache = kmem_cache_create("lio_dr_cache",
+@@ -617,10 +614,8 @@ dr_out:
+ 	kmem_cache_destroy(lio_dr_cache);
+ qr_out:
+ 	kmem_cache_destroy(lio_qr_cache);
+-ts_out2:
+-	iscsi_deallocate_thread_sets();
+-ts_out1:
+-	iscsi_thread_set_free();
++bitmap_out:
++	vfree(iscsit_global->ts_bitmap);
+ configfs_out:
+ 	iscsi_target_deregister_configfs();
+ out:
+@@ -630,8 +625,6 @@ out:
+ 
+ static void __exit iscsi_target_cleanup_module(void)
+ {
+-	iscsi_deallocate_thread_sets();
+-	iscsi_thread_set_free();
+ 	iscsit_release_discovery_tpg();
+ 	iscsit_unregister_transport(&iscsi_target_transport);
+ 	kmem_cache_destroy(lio_qr_cache);
+@@ -641,6 +634,7 @@ static void __exit iscsi_target_cleanup_module(void)
+ 
+ 	iscsi_target_deregister_configfs();
+ 
++	vfree(iscsit_global->ts_bitmap);
+ 	kfree(iscsit_global);
+ }
+ 
+@@ -3715,17 +3709,16 @@ static int iscsit_send_reject(
+ 
+ void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
+ {
+-	struct iscsi_thread_set *ts = conn->thread_set;
+ 	int ord, cpu;
+ 	/*
+-	 * thread_id is assigned from iscsit_global->ts_bitmap from
+-	 * within iscsi_thread_set.c:iscsi_allocate_thread_sets()
++	 * bitmap_id is assigned from iscsit_global->ts_bitmap from
++	 * within iscsit_start_kthreads()
+ 	 *
+-	 * Here we use thread_id to determine which CPU that this
+-	 * iSCSI connection's iscsi_thread_set will be scheduled to
++	 * Here we use bitmap_id to determine which CPU that this
++	 * iSCSI connection's RX/TX threads will be scheduled to
+ 	 * execute upon.
+ 	 */
+-	ord = ts->thread_id % cpumask_weight(cpu_online_mask);
++	ord = conn->bitmap_id % cpumask_weight(cpu_online_mask);
+ 	for_each_online_cpu(cpu) {
+ 		if (ord-- == 0) {
+ 			cpumask_set_cpu(cpu, conn->conn_cpumask);
+@@ -3914,7 +3907,7 @@ check_rsp_state:
+ 	switch (state) {
+ 	case ISTATE_SEND_LOGOUTRSP:
+ 		if (!iscsit_logout_post_handler(cmd, conn))
+-			goto restart;
++			return -ECONNRESET;
+ 		/* fall through */
+ 	case ISTATE_SEND_STATUS:
+ 	case ISTATE_SEND_ASYNCMSG:
+@@ -3942,8 +3935,6 @@ check_rsp_state:
+ 
+ err:
+ 	return -1;
+-restart:
+-	return -EAGAIN;
+ }
+ 
+ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+@@ -3970,21 +3961,13 @@ static int iscsit_handle_response_queue(struct iscsi_conn *conn)
+ int iscsi_target_tx_thread(void *arg)
+ {
+ 	int ret = 0;
+-	struct iscsi_conn *conn;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+ 	 * connection recovery / failure event can be triggered externally.
+ 	 */
+ 	allow_signal(SIGINT);
+ 
+-restart:
+-	conn = iscsi_tx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+-	ret = 0;
+-
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3993,11 +3976,9 @@ restart:
+ 		iscsit_thread_check_cpumask(conn, current, 1);
+ 
+ 		wait_event_interruptible(conn->queues_wq,
+-					 !iscsit_conn_all_queues_empty(conn) ||
+-					 ts->status == ISCSI_THREAD_SET_RESET);
++					 !iscsit_conn_all_queues_empty(conn));
+ 
+-		if ((ts->status == ISCSI_THREAD_SET_RESET) ||
+-		     signal_pending(current))
++		if (signal_pending(current))
+ 			goto transport_err;
+ 
+ get_immediate:
+@@ -4008,15 +3989,14 @@ get_immediate:
+ 		ret = iscsit_handle_response_queue(conn);
+ 		if (ret == 1)
+ 			goto get_immediate;
+-		else if (ret == -EAGAIN)
+-			goto restart;
++		else if (ret == -ECONNRESET)
++			goto out;
+ 		else if (ret < 0)
+ 			goto transport_err;
+ 	}
+ 
+ transport_err:
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+ out:
+ 	return 0;
+ }
+@@ -4111,8 +4091,7 @@ int iscsi_target_rx_thread(void *arg)
+ 	int ret;
+ 	u8 buffer[ISCSI_HDR_LEN], opcode;
+ 	u32 checksum = 0, digest = 0;
+-	struct iscsi_conn *conn = NULL;
+-	struct iscsi_thread_set *ts = arg;
++	struct iscsi_conn *conn = arg;
+ 	struct kvec iov;
+ 	/*
+ 	 * Allow ourselves to be interrupted by SIGINT so that a
+@@ -4120,11 +4099,6 @@ int iscsi_target_rx_thread(void *arg)
+ 	 */
+ 	allow_signal(SIGINT);
+ 
+-restart:
+-	conn = iscsi_rx_thread_pre_handler(ts);
+-	if (!conn)
+-		goto out;
+-
+ 	if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
+ 		struct completion comp;
+ 		int rc;
+@@ -4134,7 +4108,7 @@ restart:
+ 		if (rc < 0)
+ 			goto transport_err;
+ 
+-		goto out;
++		goto transport_err;
+ 	}
+ 
+ 	while (!kthread_should_stop()) {
+@@ -4210,8 +4184,6 @@ transport_err:
+ 	if (!signal_pending(current))
+ 		atomic_set(&conn->transport_failed, 1);
+ 	iscsit_take_action_for_connection_exit(conn);
+-	goto restart;
+-out:
+ 	return 0;
+ }
+ 
+@@ -4273,7 +4245,24 @@ int iscsit_close_connection(
+ 	if (conn->conn_transport->transport_type == ISCSI_TCP)
+ 		complete(&conn->conn_logout_comp);
+ 
+-	iscsi_release_thread_set(conn);
++	if (!strcmp(current->comm, ISCSI_RX_THREAD_NAME)) {
++		if (conn->tx_thread &&
++		    cmpxchg(&conn->tx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->tx_thread, 1);
++			kthread_stop(conn->tx_thread);
++		}
++	} else if (!strcmp(current->comm, ISCSI_TX_THREAD_NAME)) {
++		if (conn->rx_thread &&
++		    cmpxchg(&conn->rx_thread_active, true, false)) {
++			send_sig(SIGINT, conn->rx_thread, 1);
++			kthread_stop(conn->rx_thread);
++		}
++	}
++
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
+ 
+ 	iscsit_stop_timers_for_cmds(conn);
+ 	iscsit_stop_nopin_response_timer(conn);
+@@ -4551,15 +4540,13 @@ static void iscsit_logout_post_handler_closesession(
+ 	struct iscsi_conn *conn)
+ {
+ 	struct iscsi_session *sess = conn->sess;
+-
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+ 
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
+ 
+ 	iscsit_dec_conn_usage_count(conn);
+-	iscsit_stop_session(sess, 1, 1);
++	iscsit_stop_session(sess, sleep, sleep);
+ 	iscsit_dec_session_usage_count(sess);
+ 	target_put_session(sess->se_sess);
+ }
+@@ -4567,13 +4554,12 @@ static void iscsit_logout_post_handler_closesession(
+ static void iscsit_logout_post_handler_samecid(
+ 	struct iscsi_conn *conn)
+ {
+-	iscsi_set_thread_clear(conn, ISCSI_CLEAR_TX_THREAD);
+-	iscsi_set_thread_set_signal(conn, ISCSI_SIGNAL_TX_THREAD);
++	int sleep = cmpxchg(&conn->tx_thread_active, true, false);
+ 
+ 	atomic_set(&conn->conn_logout_remove, 0);
+ 	complete(&conn->conn_logout_comp);
+ 
+-	iscsit_cause_connection_reinstatement(conn, 1);
++	iscsit_cause_connection_reinstatement(conn, sleep);
+ 	iscsit_dec_conn_usage_count(conn);
+ }
+ 
+diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
+index bdd8731..e008ed2 100644
+--- a/drivers/target/iscsi/iscsi_target_erl0.c
++++ b/drivers/target/iscsi/iscsi_target_erl0.c
+@@ -860,7 +860,10 @@ void iscsit_connection_reinstatement_rcfr(struct iscsi_conn *conn)
+ 	}
+ 	spin_unlock_bh(&conn->state_lock);
+ 
+-	iscsi_thread_set_force_reinstatement(conn);
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
+ 
+ sleep:
+ 	wait_for_completion(&conn->conn_wait_rcfr_comp);
+@@ -885,10 +888,10 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
+ 		return;
+ 	}
+ 
+-	if (iscsi_thread_set_force_reinstatement(conn) < 0) {
+-		spin_unlock_bh(&conn->state_lock);
+-		return;
+-	}
++	if (conn->tx_thread && conn->tx_thread_active)
++		send_sig(SIGINT, conn->tx_thread, 1);
++	if (conn->rx_thread && conn->rx_thread_active)
++		send_sig(SIGINT, conn->rx_thread, 1);
+ 
+ 	atomic_set(&conn->connection_reinstatement, 1);
+ 	if (!sleep) {
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 153fb66..345f073 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -699,6 +699,51 @@ static void iscsi_post_login_start_timers(struct iscsi_conn *conn)
+ 		iscsit_start_nopin_timer(conn);
+ }
+ 
++int iscsit_start_kthreads(struct iscsi_conn *conn)
++{
++	int ret = 0;
++
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	conn->bitmap_id = bitmap_find_free_region(iscsit_global->ts_bitmap,
++					ISCSIT_BITMAP_BITS, get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
++
++	if (conn->bitmap_id < 0) {
++		pr_err("bitmap_find_free_region() failed for"
++		       " iscsit_start_kthreads()\n");
++		return -ENOMEM;
++	}
++
++	conn->tx_thread = kthread_run(iscsi_target_tx_thread, conn,
++				      "%s", ISCSI_TX_THREAD_NAME);
++	if (IS_ERR(conn->tx_thread)) {
++		pr_err("Unable to start iscsi_target_tx_thread\n");
++		ret = PTR_ERR(conn->tx_thread);
++		goto out_bitmap;
++	}
++	conn->tx_thread_active = true;
++
++	conn->rx_thread = kthread_run(iscsi_target_rx_thread, conn,
++				      "%s", ISCSI_RX_THREAD_NAME);
++	if (IS_ERR(conn->rx_thread)) {
++		pr_err("Unable to start iscsi_target_rx_thread\n");
++		ret = PTR_ERR(conn->rx_thread);
++		goto out_tx;
++	}
++	conn->rx_thread_active = true;
++
++	return 0;
++out_tx:
++	kthread_stop(conn->tx_thread);
++	conn->tx_thread_active = false;
++out_bitmap:
++	spin_lock(&iscsit_global->ts_bitmap_lock);
++	bitmap_release_region(iscsit_global->ts_bitmap, conn->bitmap_id,
++			      get_order(1));
++	spin_unlock(&iscsit_global->ts_bitmap_lock);
++	return ret;
++}
++
+ int iscsi_post_login_handler(
+ 	struct iscsi_np *np,
+ 	struct iscsi_conn *conn,
+@@ -709,7 +754,7 @@ int iscsi_post_login_handler(
+ 	struct se_session *se_sess = sess->se_sess;
+ 	struct iscsi_portal_group *tpg = sess->tpg;
+ 	struct se_portal_group *se_tpg = &tpg->tpg_se_tpg;
+-	struct iscsi_thread_set *ts;
++	int rc;
+ 
+ 	iscsit_inc_conn_usage_count(conn);
+ 
+@@ -724,7 +769,6 @@ int iscsi_post_login_handler(
+ 	/*
+ 	 * SCSI Initiator -> SCSI Target Port Mapping
+ 	 */
+-	ts = iscsi_get_thread_set();
+ 	if (!zero_tsih) {
+ 		iscsi_set_session_parameters(sess->sess_ops,
+ 				conn->param_list, 0);
+@@ -751,9 +795,11 @@ int iscsi_post_login_handler(
+ 			sess->sess_ops->InitiatorName);
+ 		spin_unlock_bh(&sess->conn_lock);
+ 
+-		iscsi_post_login_start_timers(conn);
++		rc = iscsit_start_kthreads(conn);
++		if (rc)
++			return rc;
+ 
+-		iscsi_activate_thread_set(conn, ts);
++		iscsi_post_login_start_timers(conn);
+ 		/*
+ 		 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 		 * are scheduled on the same CPU.
+@@ -810,8 +856,11 @@ int iscsi_post_login_handler(
+ 		" iSCSI Target Portal Group: %hu\n", tpg->nsessions, tpg->tpgt);
+ 	spin_unlock_bh(&se_tpg->session_lock);
+ 
++	rc = iscsit_start_kthreads(conn);
++	if (rc)
++		return rc;
++
+ 	iscsi_post_login_start_timers(conn);
+-	iscsi_activate_thread_set(conn, ts);
+ 	/*
+ 	 * Determine CPU mask to ensure connection's RX and TX kthreads
+ 	 * are scheduled on the same CPU.
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 44620fb..cbb0cc2 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -264,40 +264,32 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 	struct se_device *se_dev = cmd->se_dev;
+ 	struct fd_dev *dev = FD_DEV(se_dev);
+ 	struct file *prot_fd = dev->fd_prot_file;
+-	struct scatterlist *sg;
+ 	loff_t pos = (cmd->t_task_lba * se_dev->prot_length);
+ 	unsigned char *buf;
+-	u32 prot_size, len, size;
+-	int rc, ret = 1, i;
++	u32 prot_size;
++	int rc, ret = 1;
+ 
+ 	prot_size = (cmd->data_length / se_dev->dev_attrib.block_size) *
+ 		     se_dev->prot_length;
+ 
+ 	if (!is_write) {
+-		fd_prot->prot_buf = vzalloc(prot_size);
++		fd_prot->prot_buf = kzalloc(prot_size, GFP_KERNEL);
+ 		if (!fd_prot->prot_buf) {
+ 			pr_err("Unable to allocate fd_prot->prot_buf\n");
+ 			return -ENOMEM;
+ 		}
+ 		buf = fd_prot->prot_buf;
+ 
+-		fd_prot->prot_sg_nents = cmd->t_prot_nents;
+-		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist) *
+-					   fd_prot->prot_sg_nents, GFP_KERNEL);
++		fd_prot->prot_sg_nents = 1;
++		fd_prot->prot_sg = kzalloc(sizeof(struct scatterlist),
++					   GFP_KERNEL);
+ 		if (!fd_prot->prot_sg) {
+ 			pr_err("Unable to allocate fd_prot->prot_sg\n");
+-			vfree(fd_prot->prot_buf);
++			kfree(fd_prot->prot_buf);
+ 			return -ENOMEM;
+ 		}
+-		size = prot_size;
+-
+-		for_each_sg(fd_prot->prot_sg, sg, fd_prot->prot_sg_nents, i) {
+-
+-			len = min_t(u32, PAGE_SIZE, size);
+-			sg_set_buf(sg, buf, len);
+-			size -= len;
+-			buf += len;
+-		}
++		sg_init_table(fd_prot->prot_sg, fd_prot->prot_sg_nents);
++		sg_set_buf(fd_prot->prot_sg, buf, prot_size);
+ 	}
+ 
+ 	if (is_write) {
+@@ -318,7 +310,7 @@ static int fd_do_prot_rw(struct se_cmd *cmd, struct fd_prot *fd_prot,
+ 
+ 	if (is_write || ret < 0) {
+ 		kfree(fd_prot->prot_sg);
+-		vfree(fd_prot->prot_buf);
++		kfree(fd_prot->prot_buf);
+ 	}
+ 
+ 	return ret;
+@@ -549,6 +541,56 @@ fd_execute_write_same(struct se_cmd *cmd)
+ 	return 0;
+ }
+ 
++static int
++fd_do_prot_fill(struct se_device *se_dev, sector_t lba, sector_t nolb,
++		void *buf, size_t bufsize)
++{
++	struct fd_dev *fd_dev = FD_DEV(se_dev);
++	struct file *prot_fd = fd_dev->fd_prot_file;
++	sector_t prot_length, prot;
++	loff_t pos = lba * se_dev->prot_length;
++
++	if (!prot_fd) {
++		pr_err("Unable to locate fd_dev->fd_prot_file\n");
++		return -ENODEV;
++	}
++
++	prot_length = nolb * se_dev->prot_length;
++
++	for (prot = 0; prot < prot_length;) {
++		sector_t len = min_t(sector_t, bufsize, prot_length - prot);
++		ssize_t ret = kernel_write(prot_fd, buf, len, pos + prot);
++
++		if (ret != len) {
++			pr_err("vfs_write to prot file failed: %zd\n", ret);
++			return ret < 0 ? ret : -ENODEV;
++		}
++		prot += ret;
++	}
++
++	return 0;
++}
++
++static int
++fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
++{
++	void *buf;
++	int rc;
++
++	buf = (void *)__get_free_page(GFP_KERNEL);
++	if (!buf) {
++		pr_err("Unable to allocate FILEIO prot buf\n");
++		return -ENOMEM;
++	}
++	memset(buf, 0xff, PAGE_SIZE);
++
++	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
++
++	free_page((unsigned long)buf);
++
++	return rc;
++}
++
+ static sense_reason_t
+ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ {
+@@ -556,6 +598,12 @@ fd_do_unmap(struct se_cmd *cmd, void *priv, sector_t lba, sector_t nolb)
+ 	struct inode *inode = file->f_mapping->host;
+ 	int ret;
+ 
++	if (cmd->se_dev->dev_attrib.pi_prot_type) {
++		ret = fd_do_prot_unmap(cmd, lba, nolb);
++		if (ret)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	}
++
+ 	if (S_ISBLK(inode->i_mode)) {
+ 		/* The backend is block device, use discard */
+ 		struct block_device *bdev = inode->i_bdev;
+@@ -658,11 +706,11 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						 0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 			kfree(fd_prot.prot_sg);
+-			vfree(fd_prot.prot_buf);
++			kfree(fd_prot.prot_buf);
+ 		}
+ 	} else {
+ 		memset(&fd_prot, 0, sizeof(struct fd_prot));
+@@ -678,7 +726,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 						  0, fd_prot.prot_sg, 0);
+ 			if (rc) {
+ 				kfree(fd_prot.prot_sg);
+-				vfree(fd_prot.prot_buf);
++				kfree(fd_prot.prot_buf);
+ 				return rc;
+ 			}
+ 		}
+@@ -714,7 +762,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 
+ 	if (ret < 0) {
+ 		kfree(fd_prot.prot_sg);
+-		vfree(fd_prot.prot_buf);
++		kfree(fd_prot.prot_buf);
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
+ 
+@@ -878,48 +926,28 @@ static int fd_init_prot(struct se_device *dev)
+ 
+ static int fd_format_prot(struct se_device *dev)
+ {
+-	struct fd_dev *fd_dev = FD_DEV(dev);
+-	struct file *prot_fd = fd_dev->fd_prot_file;
+-	sector_t prot_length, prot;
+ 	unsigned char *buf;
+-	loff_t pos = 0;
+ 	int unit_size = FDBD_FORMAT_UNIT_SIZE * dev->dev_attrib.block_size;
+-	int rc, ret = 0, size, len;
++	int ret;
+ 
+ 	if (!dev->dev_attrib.pi_prot_type) {
+ 		pr_err("Unable to format_prot while pi_prot_type == 0\n");
+ 		return -ENODEV;
+ 	}
+-	if (!prot_fd) {
+-		pr_err("Unable to locate fd_dev->fd_prot_file\n");
+-		return -ENODEV;
+-	}
+ 
+ 	buf = vzalloc(unit_size);
+ 	if (!buf) {
+ 		pr_err("Unable to allocate FILEIO prot buf\n");
+ 		return -ENOMEM;
+ 	}
+-	prot_length = (dev->transport->get_blocks(dev) + 1) * dev->prot_length;
+-	size = prot_length;
+ 
+ 	pr_debug("Using FILEIO prot_length: %llu\n",
+-		 (unsigned long long)prot_length);
++		 (unsigned long long)(dev->transport->get_blocks(dev) + 1) *
++					dev->prot_length);
+ 
+ 	memset(buf, 0xff, unit_size);
+-	for (prot = 0; prot < prot_length; prot += unit_size) {
+-		len = min(unit_size, size);
+-		rc = kernel_write(prot_fd, buf, len, pos);
+-		if (rc != len) {
+-			pr_err("vfs_write to prot file failed: %d\n", rc);
+-			ret = -ENODEV;
+-			goto out;
+-		}
+-		pos += len;
+-		size -= len;
+-	}
+-
+-out:
++	ret = fd_do_prot_fill(dev, 0, dev->transport->get_blocks(dev) + 1,
++			      buf, unit_size);
+ 	vfree(buf);
+ 	return ret;
+ }
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index 3e72974..755bd9b3 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -312,7 +312,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	return 0;
+ }
+ 
+-static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd)
++static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success)
+ {
+ 	unsigned char *buf, *addr;
+ 	struct scatterlist *sg;
+@@ -376,7 +376,7 @@ sbc_execute_rw(struct se_cmd *cmd)
+ 			       cmd->data_direction);
+ }
+ 
+-static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 
+@@ -399,7 +399,7 @@ static sense_reason_t compare_and_write_post(struct se_cmd *cmd)
+ 	return TCM_NO_SENSE;
+ }
+ 
+-static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
++static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 	struct scatterlist *write_sg = NULL, *sg;
+@@ -414,11 +414,16 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd)
+ 
+ 	/*
+ 	 * Handle early failure in transport_generic_request_failure(),
+-	 * which will not have taken ->caw_mutex yet..
++	 * which will not have taken ->caw_sem yet..
+ 	 */
+-	if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
++	if (!success && (!cmd->t_data_sg || !cmd->t_bidi_data_sg))
+ 		return TCM_NO_SENSE;
+ 	/*
++	 * Handle special case for zero-length COMPARE_AND_WRITE
++	 */
++	if (!cmd->data_length)
++		goto out;
++	/*
+ 	 * Immediately exit + release dev->caw_sem if command has already
+ 	 * been failed with a non-zero SCSI status.
+ 	 */
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ac3cbab..f786de0 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -1615,11 +1615,11 @@ void transport_generic_request_failure(struct se_cmd *cmd,
+ 	transport_complete_task_attr(cmd);
+ 	/*
+ 	 * Handle special case for COMPARE_AND_WRITE failure, where the
+-	 * callback is expected to drop the per device ->caw_mutex.
++	 * callback is expected to drop the per device ->caw_sem.
+ 	 */
+ 	if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
+ 	     cmd->transport_complete_callback)
+-		cmd->transport_complete_callback(cmd);
++		cmd->transport_complete_callback(cmd, false);
+ 
+ 	switch (sense_reason) {
+ 	case TCM_NON_EXISTENT_LUN:
+@@ -1975,8 +1975,12 @@ static void target_complete_ok_work(struct work_struct *work)
+ 	if (cmd->transport_complete_callback) {
+ 		sense_reason_t rc;
+ 
+-		rc = cmd->transport_complete_callback(cmd);
++		rc = cmd->transport_complete_callback(cmd, true);
+ 		if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) {
++			if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++			    !cmd->data_length)
++				goto queue_rsp;
++
+ 			return;
+ 		} else if (rc) {
+ 			ret = transport_send_check_condition_and_sense(cmd,
+@@ -1990,6 +1994,7 @@ static void target_complete_ok_work(struct work_struct *work)
+ 		}
+ 	}
+ 
++queue_rsp:
+ 	switch (cmd->data_direction) {
+ 	case DMA_FROM_DEVICE:
+ 		spin_lock(&cmd->se_lun->lun_sep_lock);
+@@ -2094,6 +2099,16 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
+ static inline void transport_free_pages(struct se_cmd *cmd)
+ {
+ 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
++		/*
++		 * Release special case READ buffer payload required for
++		 * SG_TO_MEM_NOALLOC to function with COMPARE_AND_WRITE
++		 */
++		if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) {
++			transport_free_sgl(cmd->t_bidi_data_sg,
++					   cmd->t_bidi_data_nents);
++			cmd->t_bidi_data_sg = NULL;
++			cmd->t_bidi_data_nents = 0;
++		}
+ 		transport_reset_sgl_orig(cmd);
+ 		return;
+ 	}
+@@ -2246,6 +2261,7 @@ sense_reason_t
+ transport_generic_new_cmd(struct se_cmd *cmd)
+ {
+ 	int ret = 0;
++	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+ 
+ 	/*
+ 	 * Determine is the TCM fabric module has already allocated physical
+@@ -2254,7 +2270,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 	 */
+ 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) &&
+ 	    cmd->data_length) {
+-		bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
+ 
+ 		if ((cmd->se_cmd_flags & SCF_BIDI) ||
+ 		    (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)) {
+@@ -2285,6 +2300,20 @@ transport_generic_new_cmd(struct se_cmd *cmd)
+ 				       cmd->data_length, zero_flag);
+ 		if (ret < 0)
+ 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
++	} else if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
++		    cmd->data_length) {
++		/*
++		 * Special case for COMPARE_AND_WRITE with fabrics
++		 * using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
++		 */
++		u32 caw_length = cmd->t_task_nolb *
++				 cmd->se_dev->dev_attrib.block_size;
++
++		ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
++				       &cmd->t_bidi_data_nents,
++				       caw_length, zero_flag);
++		if (ret < 0)
++			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 	}
+ 	/*
+ 	 * If this command is not a write we can execute it right here,
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index deae122..d465ace 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -3444,7 +3444,8 @@ void serial8250_suspend_port(int line)
+ 	    port->type != PORT_8250) {
+ 		unsigned char canary = 0xa5;
+ 		serial_out(up, UART_SCR, canary);
+-		up->canary = canary;
++		if (serial_in(up, UART_SCR) == canary)
++			up->canary = canary;
+ 	}
+ 
+ 	uart_suspend_port(&serial8250_reg, port);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 6ae5b85..7a80250 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -629,6 +629,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "80860F0A", 0 },
+ 	{ "8086228A", 0 },
+ 	{ "APMC0D08", 0},
++	{ "AMD0020", 0 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 0eb29b1..2306191 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -818,7 +818,7 @@ static irqreturn_t imx_int(int irq, void *dev_id)
+ 	if (sts2 & USR2_ORE) {
+ 		dev_err(sport->port.dev, "Rx FIFO overrun\n");
+ 		sport->port.icount.overrun++;
+-		writel(sts2 | USR2_ORE, sport->port.membase + USR2);
++		writel(USR2_ORE, sport->port.membase + USR2);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -1181,10 +1181,12 @@ static int imx_startup(struct uart_port *port)
+ 		imx_uart_dma_init(sport);
+ 
+ 	spin_lock_irqsave(&sport->port.lock, flags);
++
+ 	/*
+ 	 * Finally, clear and enable interrupts
+ 	 */
+ 	writel(USR1_RTSD, sport->port.membase + USR1);
++	writel(USR2_ORE, sport->port.membase + USR2);
+ 
+ 	if (sport->dma_is_inited && !sport->dma_is_enabled)
+ 		imx_enable_dma(sport);
+@@ -1199,10 +1201,6 @@ static int imx_startup(struct uart_port *port)
+ 
+ 	writel(temp, sport->port.membase + UCR1);
+ 
+-	/* Clear any pending ORE flag before enabling interrupt */
+-	temp = readl(sport->port.membase + USR2);
+-	writel(temp | USR2_ORE, sport->port.membase + USR2);
+-
+ 	temp = readl(sport->port.membase + UCR4);
+ 	temp |= UCR4_OREN;
+ 	writel(temp, sport->port.membase + UCR4);
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a051a7a..a81f9dd 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -245,7 +245,7 @@ static void wdm_int_callback(struct urb *urb)
+ 	case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
+ 		dev_dbg(&desc->intf->dev,
+ 			"NOTIFY_RESPONSE_AVAILABLE received: index %d len %d",
+-			dr->wIndex, dr->wLength);
++			le16_to_cpu(dr->wIndex), le16_to_cpu(dr->wLength));
+ 		break;
+ 
+ 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
+@@ -262,7 +262,9 @@ static void wdm_int_callback(struct urb *urb)
+ 		clear_bit(WDM_POLL_RUNNING, &desc->flags);
+ 		dev_err(&desc->intf->dev,
+ 			"unknown notification %d received: index %d len %d\n",
+-			dr->bNotificationType, dr->wIndex, dr->wLength);
++			dr->bNotificationType,
++			le16_to_cpu(dr->wIndex),
++			le16_to_cpu(dr->wLength));
+ 		goto exit;
+ 	}
+ 
+@@ -408,7 +410,7 @@ static ssize_t wdm_write
+ 			     USB_RECIP_INTERFACE);
+ 	req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ 	req->wValue = 0;
+-	req->wIndex = desc->inum;
++	req->wIndex = desc->inum; /* already converted */
+ 	req->wLength = cpu_to_le16(count);
+ 	set_bit(WDM_IN_USE, &desc->flags);
+ 	desc->outbuf = buf;
+@@ -422,7 +424,7 @@ static ssize_t wdm_write
+ 		rv = usb_translate_errors(rv);
+ 	} else {
+ 		dev_dbg(&desc->intf->dev, "Tx URB has been submitted index=%d",
+-			req->wIndex);
++			le16_to_cpu(req->wIndex));
+ 	}
+ out:
+ 	usb_autopm_put_interface(desc->intf);
+@@ -820,7 +822,7 @@ static int wdm_create(struct usb_interface *intf, struct usb_endpoint_descriptor
+ 	desc->irq->bRequestType = (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ 	desc->irq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
+ 	desc->irq->wValue = 0;
+-	desc->irq->wIndex = desc->inum;
++	desc->irq->wIndex = desc->inum; /* already converted */
+ 	desc->irq->wLength = cpu_to_le16(desc->wMaxCommand);
+ 
+ 	usb_fill_control_urb(
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index d7c3d5a..3b71516 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3406,10 +3406,10 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 	if (status) {
+ 		dev_dbg(&port_dev->dev, "can't resume, status %d\n", status);
+ 	} else {
+-		/* drive resume for at least 20 msec */
++		/* drive resume for USB_RESUME_TIMEOUT msec */
+ 		dev_dbg(&udev->dev, "usb %sresume\n",
+ 				(PMSG_IS_AUTO(msg) ? "auto-" : ""));
+-		msleep(25);
++		msleep(USB_RESUME_TIMEOUT);
+ 
+ 		/* Virtual root hubs can trigger on GET_PORT_STATUS to
+ 		 * stop resume signaling.  Then finish the resume
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index c78c874..758b7e0 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -1521,7 +1521,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ 			dev_dbg(hsotg->dev,
+ 				"ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
+ 			writel(0, hsotg->regs + PCGCTL);
+-			usleep_range(20000, 40000);
++			msleep(USB_RESUME_TIMEOUT);
+ 
+ 			hprt0 = dwc2_read_hprt0(hsotg);
+ 			hprt0 |= HPRT0_RES;
+diff --git a/drivers/usb/gadget/legacy/printer.c b/drivers/usb/gadget/legacy/printer.c
+index 9054598..6385c19 100644
+--- a/drivers/usb/gadget/legacy/printer.c
++++ b/drivers/usb/gadget/legacy/printer.c
+@@ -1031,6 +1031,15 @@ unknown:
+ 		break;
+ 	}
+ 	/* host either stalls (value < 0) or reports success */
++	if (value >= 0) {
++		req->length = value;
++		req->zero = value < wLength;
++		value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
++		if (value < 0) {
++			ERROR(dev, "%s:%d Error!\n", __func__, __LINE__);
++			req->status = 0;
++		}
++	}
+ 	return value;
+ }
+ 
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 85e56d1..f4d88df 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -792,12 +792,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 					ehci->reset_done[i] == 0))
+ 				continue;
+ 
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
+-			 * stop that signaling.  Use 5 ms extra for safety,
+-			 * like usb_port_resume() does.
++			/* start USB_RESUME_TIMEOUT msec resume signaling from
++			 * this port, and make hub_wq collect
++			 * PORT_STAT_C_SUSPEND to stop that signaling.
+ 			 */
+-			ehci->reset_done[i] = jiffies + msecs_to_jiffies(25);
++			ehci->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(i, &ehci->resuming_ports);
+ 			ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
+ 			usb_hcd_start_port_resume(&hcd->self, i);
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index 87cf86f..7354d01 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -471,10 +471,13 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ 		ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
+ 	}
+ 
+-	/* msleep for 20ms only if code is trying to resume port */
++	/*
++	 * msleep for USB_RESUME_TIMEOUT ms only if code is trying to resume
++	 * port
++	 */
+ 	if (resume_needed) {
+ 		spin_unlock_irq(&ehci->lock);
+-		msleep(20);
++		msleep(USB_RESUME_TIMEOUT);
+ 		spin_lock_irq(&ehci->lock);
+ 		if (ehci->shutdown)
+ 			goto shutdown;
+@@ -942,7 +945,7 @@ int ehci_hub_control(
+ 			temp &= ~PORT_WAKE_BITS;
+ 			ehci_writel(ehci, temp | PORT_RESUME, status_reg);
+ 			ehci->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(wIndex, &ehci->resuming_ports);
+ 			usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			break;
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 475b21f..7a6681f 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -1595,7 +1595,7 @@ static int fotg210_hub_control(
+ 			/* resume signaling for 20 msec */
+ 			fotg210_writel(fotg210, temp | PORT_RESUME, status_reg);
+ 			fotg210->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fotg210->port_c_suspend);
+diff --git a/drivers/usb/host/fusbh200-hcd.c b/drivers/usb/host/fusbh200-hcd.c
+index a83eefe..ba77e2e 100644
+--- a/drivers/usb/host/fusbh200-hcd.c
++++ b/drivers/usb/host/fusbh200-hcd.c
+@@ -1550,10 +1550,9 @@ static int fusbh200_hub_control (
+ 			if ((temp & PORT_PE) == 0)
+ 				goto error;
+ 
+-			/* resume signaling for 20 msec */
+ 			fusbh200_writel(fusbh200, temp | PORT_RESUME, status_reg);
+ 			fusbh200->reset_done[wIndex] = jiffies
+-					+ msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+ 			clear_bit(wIndex, &fusbh200->port_c_suspend);
+diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
+index 113d0cc..9ef5644 100644
+--- a/drivers/usb/host/isp116x-hcd.c
++++ b/drivers/usb/host/isp116x-hcd.c
+@@ -1490,7 +1490,7 @@ static int isp116x_bus_resume(struct usb_hcd *hcd)
+ 	spin_unlock_irq(&isp116x->lock);
+ 
+ 	hcd->state = HC_STATE_RESUMING;
+-	msleep(20);
++	msleep(USB_RESUME_TIMEOUT);
+ 
+ 	/* Go operational */
+ 	spin_lock_irq(&isp116x->lock);
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index ef7efb2..28a2866 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -2500,11 +2500,12 @@ static irqreturn_t oxu210_hcd_irq(struct usb_hcd *hcd)
+ 					|| oxu->reset_done[i] != 0)
+ 				continue;
+ 
+-			/* start 20 msec resume signaling from this port,
+-			 * and make hub_wq collect PORT_STAT_C_SUSPEND to
++			/* start USB_RESUME_TIMEOUT resume signaling from this
++			 * port, and make hub_wq collect PORT_STAT_C_SUSPEND to
+ 			 * stop that signaling.
+ 			 */
+-			oxu->reset_done[i] = jiffies + msecs_to_jiffies(20);
++			oxu->reset_done[i] = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			oxu_dbg(oxu, "port %d remote wakeup\n", i + 1);
+ 			mod_timer(&hcd->rh_timer, oxu->reset_done[i]);
+ 		}
+diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c
+index bdc82fe..54a4170 100644
+--- a/drivers/usb/host/r8a66597-hcd.c
++++ b/drivers/usb/host/r8a66597-hcd.c
+@@ -2301,7 +2301,7 @@ static int r8a66597_bus_resume(struct usb_hcd *hcd)
+ 		rh->port &= ~USB_PORT_STAT_SUSPEND;
+ 		rh->port |= USB_PORT_STAT_C_SUSPEND << 16;
+ 		r8a66597_mdfy(r8a66597, RESUME, RESUME | UACT, dvstctr_reg);
+-		msleep(50);
++		msleep(USB_RESUME_TIMEOUT);
+ 		r8a66597_mdfy(r8a66597, UACT, RESUME | UACT, dvstctr_reg);
+ 	}
+ 
+diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
+index 4f4ba1e..9118cd8 100644
+--- a/drivers/usb/host/sl811-hcd.c
++++ b/drivers/usb/host/sl811-hcd.c
+@@ -1259,7 +1259,7 @@ sl811h_hub_control(
+ 			sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1);
+ 
+ 			mod_timer(&sl811->timer, jiffies
+-					+ msecs_to_jiffies(20));
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			port_power(sl811, 0);
+diff --git a/drivers/usb/host/uhci-hub.c b/drivers/usb/host/uhci-hub.c
+index 19ba5ea..7b3d1af 100644
+--- a/drivers/usb/host/uhci-hub.c
++++ b/drivers/usb/host/uhci-hub.c
+@@ -166,7 +166,7 @@ static void uhci_check_ports(struct uhci_hcd *uhci)
+ 				/* Port received a wakeup request */
+ 				set_bit(port, &uhci->resuming_ports);
+ 				uhci->ports_timeout = jiffies +
+-						msecs_to_jiffies(25);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				usb_hcd_start_port_resume(
+ 						&uhci_to_hcd(uhci)->self, port);
+ 
+@@ -338,7 +338,8 @@ static int uhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			uhci_finish_suspend(uhci, port, port_addr);
+ 
+ 			/* USB v2.0 7.1.7.5 */
+-			uhci->ports_timeout = jiffies + msecs_to_jiffies(50);
++			uhci->ports_timeout = jiffies +
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			/* UHCI has no power switching */
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 73485fa..eeedde8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1574,7 +1574,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		} else {
+ 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+ 			bus_state->resume_done[faked_port_index] = jiffies +
+-				msecs_to_jiffies(20);
++				msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			set_bit(faked_port_index, &bus_state->resuming_ports);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[faked_port_index]);
+diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
+index 3cb98b1..7911b6b 100644
+--- a/drivers/usb/isp1760/isp1760-hcd.c
++++ b/drivers/usb/isp1760/isp1760-hcd.c
+@@ -1869,7 +1869,7 @@ static int isp1760_hub_control(struct usb_hcd *hcd, u16 typeReq,
+ 				reg_write32(hcd->regs, HC_PORTSC1,
+ 							temp | PORT_RESUME);
+ 				priv->reset_done = jiffies +
+-					msecs_to_jiffies(20);
++					msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 			}
+ 			break;
+ 		case USB_PORT_FEAT_C_SUSPEND:
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 067920f..ec0ee3b 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -99,6 +99,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
+ #include <linux/dma-mapping.h>
++#include <linux/usb.h>
+ 
+ #include "musb_core.h"
+ 
+@@ -562,7 +563,7 @@ static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,
+ 						(USB_PORT_STAT_C_SUSPEND << 16)
+ 						| MUSB_PORT_STAT_RESUME;
+ 				musb->rh_timer = jiffies
+-						 + msecs_to_jiffies(20);
++					+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+ 				musb->need_finish_resume = 1;
+ 
+ 				musb->xceiv->otg->state = OTG_STATE_A_HOST;
+@@ -1597,16 +1598,30 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 		is_host_active(musb) ? "host" : "peripheral",
+ 		musb->int_usb, musb->int_tx, musb->int_rx);
+ 
+-	/* the core can interrupt us for multiple reasons; docs have
+-	 * a generic interrupt flowchart to follow
++	/**
++	 * According to Mentor Graphics' documentation, flowchart on page 98,
++	 * IRQ should be handled as follows:
++	 *
++	 * . Resume IRQ
++	 * . Session Request IRQ
++	 * . VBUS Error IRQ
++	 * . Suspend IRQ
++	 * . Connect IRQ
++	 * . Disconnect IRQ
++	 * . Reset/Babble IRQ
++	 * . SOF IRQ (we're not using this one)
++	 * . Endpoint 0 IRQ
++	 * . TX Endpoints
++	 * . RX Endpoints
++	 *
++	 * We will be following that flowchart in order to avoid any problems
++	 * that might arise with internal Finite State Machine.
+ 	 */
++
+ 	if (musb->int_usb)
+ 		retval |= musb_stage0_irq(musb, musb->int_usb,
+ 				devctl);
+ 
+-	/* "stage 1" is handling endpoint irqs */
+-
+-	/* handle endpoint 0 first */
+ 	if (musb->int_tx & 1) {
+ 		if (is_host_active(musb))
+ 			retval |= musb_h_ep0_irq(musb);
+@@ -1614,37 +1629,31 @@ irqreturn_t musb_interrupt(struct musb *musb)
+ 			retval |= musb_g_ep0_irq(musb);
+ 	}
+ 
+-	/* RX on endpoints 1-15 */
+-	reg = musb->int_rx >> 1;
++	reg = musb->int_tx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval = ep->rx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_rx(musb, ep_num);
++				musb_host_tx(musb, ep_num);
+ 			else
+-				musb_g_rx(musb, ep_num);
++				musb_g_tx(musb, ep_num);
+ 		}
+-
+ 		reg >>= 1;
+ 		ep_num++;
+ 	}
+ 
+-	/* TX on endpoints 1-15 */
+-	reg = musb->int_tx >> 1;
++	reg = musb->int_rx >> 1;
+ 	ep_num = 1;
+ 	while (reg) {
+ 		if (reg & 1) {
+-			/* musb_ep_select(musb->mregs, ep_num); */
+-			/* REVISIT just retval |= ep->tx_irq(...) */
+ 			retval = IRQ_HANDLED;
+ 			if (is_host_active(musb))
+-				musb_host_tx(musb, ep_num);
++				musb_host_rx(musb, ep_num);
+ 			else
+-				musb_g_tx(musb, ep_num);
++				musb_g_rx(musb, ep_num);
+ 		}
++
+ 		reg >>= 1;
+ 		ep_num++;
+ 	}
+@@ -2463,7 +2472,7 @@ static int musb_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ 
+ 	/*
+@@ -2506,7 +2515,7 @@ static int musb_runtime_resume(struct device *dev)
+ 	if (musb->need_finish_resume) {
+ 		musb->need_finish_resume = 0;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				msecs_to_jiffies(20));
++				msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index 294e159..5428ed1 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -136,7 +136,7 @@ void musb_port_suspend(struct musb *musb, bool do_suspend)
+ 		/* later, GetPortStatus will stop RESUME signaling */
+ 		musb->port1_status |= MUSB_PORT_STAT_RESUME;
+ 		schedule_delayed_work(&musb->finish_resume_work,
+-				      msecs_to_jiffies(20));
++				      msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ 	}
+ }
+ 
+diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c
+index 2f9735b..d1cd6b5 100644
+--- a/drivers/usb/phy/phy.c
++++ b/drivers/usb/phy/phy.c
+@@ -81,7 +81,9 @@ static void devm_usb_phy_release(struct device *dev, void *res)
+ 
+ static int devm_usb_phy_match(struct device *dev, void *res, void *match_data)
+ {
+-	return res == match_data;
++	struct usb_phy **phy = res;
++
++	return *phy == match_data;
+ }
+ 
+ /**
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 995986b..d925f55 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -862,6 +862,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 	    i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
+ 		int elf_prot = 0, elf_flags;
+ 		unsigned long k, vaddr;
++		unsigned long total_size = 0;
+ 
+ 		if (elf_ppnt->p_type != PT_LOAD)
+ 			continue;
+@@ -924,10 +925,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ #else
+ 			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
+ #endif
++			total_size = total_mapping_size(elf_phdata,
++							loc->elf_ex.e_phnum);
++			if (!total_size) {
++				error = -EINVAL;
++				goto out_free_dentry;
++			}
+ 		}
+ 
+ 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
+-				elf_prot, elf_flags, 0);
++				elf_prot, elf_flags, total_size);
+ 		if (BAD_ADDR(error)) {
+ 			retval = IS_ERR((void *)error) ?
+ 				PTR_ERR((void*)error) : -EINVAL;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b353ad..0a795c9 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6956,12 +6956,11 @@ static int __btrfs_free_reserved_extent(struct btrfs_root *root,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (btrfs_test_opt(root, DISCARD))
+-		ret = btrfs_discard_extent(root, start, len, NULL);
+-
+ 	if (pin)
+ 		pin_down_extent(root, cache, start, len, 1);
+ 	else {
++		if (btrfs_test_opt(root, DISCARD))
++			ret = btrfs_discard_extent(root, start, len, NULL);
+ 		btrfs_add_free_space(cache, start, len);
+ 		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
+ 	}
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 74609b9..f23d4be 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2897,6 +2897,9 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 len,
+ 	if (src == dst)
+ 		return -EINVAL;
+ 
++	if (len == 0)
++		return 0;
++
+ 	btrfs_double_lock(src, loff, dst, dst_loff, len);
+ 
+ 	ret = extent_same_check_offsets(src, loff, len);
+@@ -3626,6 +3629,11 @@ static noinline long btrfs_ioctl_clone(struct file *file, unsigned long srcfd,
+ 	if (off + len == src->i_size)
+ 		len = ALIGN(src->i_size, bs) - off;
+ 
++	if (len == 0) {
++		ret = 0;
++		goto out_unlock;
++	}
++
+ 	/* verify the end result is block aligned */
+ 	if (!IS_ALIGNED(off, bs) || !IS_ALIGNED(off + len, bs) ||
+ 	    !IS_ALIGNED(destoff, bs))
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 883b936..45ea704 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -364,22 +364,42 @@ const struct xattr_handler *btrfs_xattr_handlers[] = {
+ /*
+  * Check if the attribute is in a supported namespace.
+  *
+- * This applied after the check for the synthetic attributes in the system
++ * This is applied after the check for the synthetic attributes in the system
+  * namespace.
+  */
+-static bool btrfs_is_valid_xattr(const char *name)
++static int btrfs_is_valid_xattr(const char *name)
+ {
+-	return !strncmp(name, XATTR_SECURITY_PREFIX,
+-			XATTR_SECURITY_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) ||
+-	       !strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) ||
+-		!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN);
++	int len = strlen(name);
++	int prefixlen = 0;
++
++	if (!strncmp(name, XATTR_SECURITY_PREFIX,
++			XATTR_SECURITY_PREFIX_LEN))
++		prefixlen = XATTR_SECURITY_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
++		prefixlen = XATTR_SYSTEM_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
++		prefixlen = XATTR_TRUSTED_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
++		prefixlen = XATTR_USER_PREFIX_LEN;
++	else if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
++		prefixlen = XATTR_BTRFS_PREFIX_LEN;
++	else
++		return -EOPNOTSUPP;
++
++	/*
++	 * The name cannot consist of just a prefix
++	 */
++	if (len <= prefixlen)
++		return -EINVAL;
++
++	return 0;
+ }
+ 
+ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 		       void *buffer, size_t size)
+ {
++	int ret;
++
+ 	/*
+ 	 * If this is a request for a synthetic attribute in the system.*
+ 	 * namespace use the generic infrastructure to resolve a handler
+@@ -388,8 +408,9 @@ ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_getxattr(dentry, name, buffer, size);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 	return __btrfs_getxattr(dentry->d_inode, name, buffer, size);
+ }
+ 
+@@ -397,6 +418,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 		   size_t size, int flags)
+ {
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
++	int ret;
+ 
+ 	/*
+ 	 * The permission on security.* and system.* is not checked
+@@ -413,8 +435,9 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_setxattr(dentry, name, value, size, flags);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
+@@ -430,6 +453,7 @@ int btrfs_setxattr(struct dentry *dentry, const char *name, const void *value,
+ int btrfs_removexattr(struct dentry *dentry, const char *name)
+ {
+ 	struct btrfs_root *root = BTRFS_I(dentry->d_inode)->root;
++	int ret;
+ 
+ 	/*
+ 	 * The permission on security.* and system.* is not checked
+@@ -446,8 +470,9 @@ int btrfs_removexattr(struct dentry *dentry, const char *name)
+ 	if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ 		return generic_removexattr(dentry, name);
+ 
+-	if (!btrfs_is_valid_xattr(name))
+-		return -EOPNOTSUPP;
++	ret = btrfs_is_valid_xattr(name);
++	if (ret)
++		return ret;
+ 
+ 	if (!strncmp(name, XATTR_BTRFS_PREFIX, XATTR_BTRFS_PREFIX_LEN))
+ 		return btrfs_set_prop(dentry->d_inode, name,
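
The new btrfs_is_valid_xattr() boils down to two rules: the name must carry a known prefix, and at least one character must follow it. A standalone sketch of the same check (userspace, hypothetical helper):

	#include <errno.h>
	#include <string.h>

	static int xattr_name_ok(const char *name)
	{
		static const char * const prefixes[] = {
			"security.", "system.", "trusted.", "user.", "btrfs.",
		};
		size_t i;

		for (i = 0; i < sizeof(prefixes) / sizeof(prefixes[0]); i++) {
			size_t plen = strlen(prefixes[i]);

			if (strncmp(name, prefixes[i], plen))
				continue;
			/* reject a bare prefix with nothing after it */
			return strlen(name) > plen ? 0 : -EINVAL;
		}
		return -EOPNOTSUPP;	/* unknown namespace */
	}
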
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 28fe71a..aae7011 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1865,7 +1865,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			  struct inode *inode)
+ {
+ 	struct inode *dir = dentry->d_parent->d_inode;
+-	struct buffer_head *bh;
++	struct buffer_head *bh = NULL;
+ 	struct ext4_dir_entry_2 *de;
+ 	struct ext4_dir_entry_tail *t;
+ 	struct super_block *sb;
+@@ -1889,14 +1889,14 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return retval;
+ 		if (retval == 1) {
+ 			retval = 0;
+-			return retval;
++			goto out;
+ 		}
+ 	}
+ 
+ 	if (is_dx(dir)) {
+ 		retval = ext4_dx_add_entry(handle, dentry, inode);
+ 		if (!retval || (retval != ERR_BAD_DX_DIR))
+-			return retval;
++			goto out;
+ 		ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ 		dx_fallback++;
+ 		ext4_mark_inode_dirty(handle, dir);
+@@ -1908,14 +1908,15 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 			return PTR_ERR(bh);
+ 
+ 		retval = add_dirent_to_buf(handle, dentry, inode, NULL, bh);
+-		if (retval != -ENOSPC) {
+-			brelse(bh);
+-			return retval;
+-		}
++		if (retval != -ENOSPC)
++			goto out;
+ 
+ 		if (blocks == 1 && !dx_fallback &&
+-		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX))
+-			return make_indexed_dir(handle, dentry, inode, bh);
++		    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX)) {
++			retval = make_indexed_dir(handle, dentry, inode, bh);
++			bh = NULL; /* make_indexed_dir releases bh */
++			goto out;
++		}
+ 		brelse(bh);
+ 	}
+ 	bh = ext4_append(handle, dir, &block);
+@@ -1931,6 +1932,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry,
+ 	}
+ 
+ 	retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
++out:
+ 	brelse(bh);
+ 	if (retval == 0)
+ 		ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
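
The ext4_add_entry() rewrite converges every exit on a single out: label so the buffer head is released exactly once; the one subtlety is NULLing bh before the goto when a callee consumes it (make_indexed_dir() releases bh itself), which is safe because brelse(NULL) is a no-op. The general shape of the idiom, with hypothetical names:

	static int do_op(void)
	{
		struct buf *b;
		int ret;

		b = acquire_buf();
		if (!b)
			return -ENOMEM;

		ret = step_one(b);
		if (ret)
			goto out;

		if (must_hand_off()) {
			ret = consume_buf(b);	/* consume_buf() releases b */
			b = NULL;		/* so the out: path must not */
			goto out;
		}

		ret = step_two(b);
	out:
		release_buf(b);			/* release_buf(NULL) is a no-op */
		return ret;
	}
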
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 665ef5a..a563ddb 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -31,7 +31,7 @@
+ static struct hlist_head	nlm_files[FILE_NRHASH];
+ static DEFINE_MUTEX(nlm_file_mutex);
+ 
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ {
+ 	u32 *fhp = (u32*)f->data;
+diff --git a/fs/namei.c b/fs/namei.c
+index c83145a..caa38a2 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1591,7 +1591,8 @@ static inline int walk_component(struct nameidata *nd, struct path *path,
+ 
+ 	if (should_follow_link(path->dentry, follow)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				err = -ECHILD;
+ 				goto out_err;
+ 			}
+@@ -3047,7 +3048,8 @@ finish_lookup:
+ 
+ 	if (should_follow_link(path->dentry, !symlink_ok)) {
+ 		if (nd->flags & LOOKUP_RCU) {
+-			if (unlikely(unlazy_walk(nd, path->dentry))) {
++			if (unlikely(nd->path.mnt != path->mnt ||
++				     unlazy_walk(nd, path->dentry))) {
+ 				error = -ECHILD;
+ 				goto out;
+ 			}
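
Both namei.c hunks add the same guard: while still in RCU-walk, following a link after crossing into a different mount cannot be legitimized cheaply, so the walk bails out with -ECHILD and restarts in ref-walk. Condensed (a sketch, not the literal control flow):

	if (nd->flags & LOOKUP_RCU) {
		if (nd->path.mnt != path->mnt ||	/* crossed a mount */
		    unlazy_walk(nd, path->dentry))	/* or failed to legitimize */
			return -ECHILD;
	}
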
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 82ef140..4622ee3 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -632,14 +632,17 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
+  */
+ struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
+ {
+-	struct mount *p, *res;
+-	res = p = __lookup_mnt(mnt, dentry);
++	struct mount *p, *res = NULL;
++	p = __lookup_mnt(mnt, dentry);
+ 	if (!p)
+ 		goto out;
++	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++		res = p;
+ 	hlist_for_each_entry_continue(p, mnt_hash) {
+ 		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+ 			break;
+-		res = p;
++		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
++			res = p;
+ 	}
+ out:
+ 	return res;
+@@ -795,10 +798,8 @@ static void __touch_mnt_namespace(struct mnt_namespace *ns)
+ /*
+  * vfsmount lock must be held for write
+  */
+-static void detach_mnt(struct mount *mnt, struct path *old_path)
++static void unhash_mnt(struct mount *mnt)
+ {
+-	old_path->dentry = mnt->mnt_mountpoint;
+-	old_path->mnt = &mnt->mnt_parent->mnt;
+ 	mnt->mnt_parent = mnt;
+ 	mnt->mnt_mountpoint = mnt->mnt.mnt_root;
+ 	list_del_init(&mnt->mnt_child);
+@@ -811,6 +812,26 @@ static void detach_mnt(struct mount *mnt, struct path *old_path)
+ /*
+  * vfsmount lock must be held for write
+  */
++static void detach_mnt(struct mount *mnt, struct path *old_path)
++{
++	old_path->dentry = mnt->mnt_mountpoint;
++	old_path->mnt = &mnt->mnt_parent->mnt;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
++static void umount_mnt(struct mount *mnt)
++{
++	/* old mountpoint will be dropped when we can do that */
++	mnt->mnt_ex_mountpoint = mnt->mnt_mountpoint;
++	unhash_mnt(mnt);
++}
++
++/*
++ * vfsmount lock must be held for write
++ */
+ void mnt_set_mountpoint(struct mount *mnt,
+ 			struct mountpoint *mp,
+ 			struct mount *child_mnt)
+@@ -1078,6 +1099,13 @@ static void mntput_no_expire(struct mount *mnt)
+ 	rcu_read_unlock();
+ 
+ 	list_del(&mnt->mnt_instance);
++
++	if (unlikely(!list_empty(&mnt->mnt_mounts))) {
++		struct mount *p, *tmp;
++		list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++			umount_mnt(p);
++		}
++	}
+ 	unlock_mount_hash();
+ 
+ 	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
+@@ -1319,49 +1347,63 @@ static inline void namespace_lock(void)
+ 	down_write(&namespace_sem);
+ }
+ 
++enum umount_tree_flags {
++	UMOUNT_SYNC = 1,
++	UMOUNT_PROPAGATE = 2,
++	UMOUNT_CONNECTED = 4,
++};
+ /*
+  * mount_lock must be held
+  * namespace_sem must be held for write
+- * how = 0 => just this tree, don't propagate
+- * how = 1 => propagate; we know that nobody else has reference to any victims
+- * how = 2 => lazy umount
+  */
+-void umount_tree(struct mount *mnt, int how)
++static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ {
+-	HLIST_HEAD(tmp_list);
++	LIST_HEAD(tmp_list);
+ 	struct mount *p;
+ 
++	if (how & UMOUNT_PROPAGATE)
++		propagate_mount_unlock(mnt);
++
++	/* Gather the mounts to umount */
+ 	for (p = mnt; p; p = next_mnt(p, mnt)) {
+-		hlist_del_init_rcu(&p->mnt_hash);
+-		hlist_add_head(&p->mnt_hash, &tmp_list);
++		p->mnt.mnt_flags |= MNT_UMOUNT;
++		list_move(&p->mnt_list, &tmp_list);
+ 	}
+ 
+-	hlist_for_each_entry(p, &tmp_list, mnt_hash)
++	/* Hide the mounts from mnt_mounts */
++	list_for_each_entry(p, &tmp_list, mnt_list) {
+ 		list_del_init(&p->mnt_child);
++	}
+ 
+-	if (how)
++	/* Add propagated mounts to the tmp_list */
++	if (how & UMOUNT_PROPAGATE)
+ 		propagate_umount(&tmp_list);
+ 
+-	while (!hlist_empty(&tmp_list)) {
+-		p = hlist_entry(tmp_list.first, struct mount, mnt_hash);
+-		hlist_del_init_rcu(&p->mnt_hash);
++	while (!list_empty(&tmp_list)) {
++		bool disconnect;
++		p = list_first_entry(&tmp_list, struct mount, mnt_list);
+ 		list_del_init(&p->mnt_expire);
+ 		list_del_init(&p->mnt_list);
+ 		__touch_mnt_namespace(p->mnt_ns);
+ 		p->mnt_ns = NULL;
+-		if (how < 2)
++		if (how & UMOUNT_SYNC)
+ 			p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
+ 
+-		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
++		disconnect = !(((how & UMOUNT_CONNECTED) &&
++				mnt_has_parent(p) &&
++				(p->mnt_parent->mnt.mnt_flags & MNT_UMOUNT)) ||
++			       IS_MNT_LOCKED_AND_LAZY(p));
++
++		pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
++				 disconnect ? &unmounted : NULL);
+ 		if (mnt_has_parent(p)) {
+-			hlist_del_init(&p->mnt_mp_list);
+-			put_mountpoint(p->mnt_mp);
+ 			mnt_add_count(p->mnt_parent, -1);
+-			/* old mountpoint will be dropped when we can do that */
+-			p->mnt_ex_mountpoint = p->mnt_mountpoint;
+-			p->mnt_mountpoint = p->mnt.mnt_root;
+-			p->mnt_parent = p;
+-			p->mnt_mp = NULL;
++			if (!disconnect) {
++				/* Don't forget about p */
++				list_add_tail(&p->mnt_child, &p->mnt_parent->mnt_mounts);
++			} else {
++				umount_mnt(p);
++			}
+ 		}
+ 		change_mnt_propagation(p, MS_PRIVATE);
+ 	}
+@@ -1447,14 +1489,14 @@ static int do_umount(struct mount *mnt, int flags)
+ 
+ 	if (flags & MNT_DETACH) {
+ 		if (!list_empty(&mnt->mnt_list))
+-			umount_tree(mnt, 2);
++			umount_tree(mnt, UMOUNT_PROPAGATE);
+ 		retval = 0;
+ 	} else {
+ 		shrink_submounts(mnt);
+ 		retval = -EBUSY;
+ 		if (!propagate_mount_busy(mnt, 2)) {
+ 			if (!list_empty(&mnt->mnt_list))
+-				umount_tree(mnt, 1);
++				umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 			retval = 0;
+ 		}
+ 	}
+@@ -1480,13 +1522,20 @@ void __detach_mounts(struct dentry *dentry)
+ 
+ 	namespace_lock();
+ 	mp = lookup_mountpoint(dentry);
+-	if (!mp)
++	if (IS_ERR_OR_NULL(mp))
+ 		goto out_unlock;
+ 
+ 	lock_mount_hash();
+ 	while (!hlist_empty(&mp->m_list)) {
+ 		mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
+-		umount_tree(mnt, 2);
++		if (mnt->mnt.mnt_flags & MNT_UMOUNT) {
++			struct mount *p, *tmp;
++			list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts,  mnt_child) {
++				hlist_add_head(&p->mnt_umount.s_list, &unmounted);
++				umount_mnt(p);
++			}
++		}
++		else umount_tree(mnt, UMOUNT_CONNECTED);
+ 	}
+ 	unlock_mount_hash();
+ 	put_mountpoint(mp);
+@@ -1648,7 +1697,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ out:
+ 	if (res) {
+ 		lock_mount_hash();
+-		umount_tree(res, 0);
++		umount_tree(res, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ 	return q;
+@@ -1672,7 +1721,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ {
+ 	namespace_lock();
+ 	lock_mount_hash();
+-	umount_tree(real_mount(mnt), 0);
++	umount_tree(real_mount(mnt), UMOUNT_SYNC);
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+ }
+@@ -1855,7 +1904,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+  out_cleanup_ids:
+ 	while (!hlist_empty(&tree_list)) {
+ 		child = hlist_entry(tree_list.first, struct mount, mnt_hash);
+-		umount_tree(child, 0);
++		umount_tree(child, UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	cleanup_group_ids(source_mnt, NULL);
+@@ -2035,7 +2084,7 @@ static int do_loopback(struct path *path, const char *old_name,
+ 	err = graft_tree(mnt, parent, mp);
+ 	if (err) {
+ 		lock_mount_hash();
+-		umount_tree(mnt, 0);
++		umount_tree(mnt, UMOUNT_SYNC);
+ 		unlock_mount_hash();
+ 	}
+ out2:
+@@ -2406,7 +2455,7 @@ void mark_mounts_for_expiry(struct list_head *mounts)
+ 	while (!list_empty(&graveyard)) {
+ 		mnt = list_first_entry(&graveyard, struct mount, mnt_expire);
+ 		touch_mnt_namespace(mnt->mnt_ns);
+-		umount_tree(mnt, 1);
++		umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 	}
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+@@ -2477,7 +2526,7 @@ static void shrink_submounts(struct mount *mnt)
+ 			m = list_first_entry(&graveyard, struct mount,
+ 						mnt_expire);
+ 			touch_mnt_namespace(m->mnt_ns);
+-			umount_tree(m, 1);
++			umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ 		}
+ 	}
+ }
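
With the umount_tree() rework, the old 0/1/2 "how" argument becomes a set of or-able flags, so every call site above now states its intent. Before/after, condensed:

	/* before: magic numbers */
	umount_tree(mnt, 2);	/* lazy detach */
	umount_tree(mnt, 1);	/* propagate; nobody else holds refs */

	/* after: self-describing flags */
	umount_tree(mnt, UMOUNT_PROPAGATE);
	umount_tree(mnt, UMOUNT_PROPAGATE | UMOUNT_SYNC);
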
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index 351be920..8d129bb 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -128,7 +128,7 @@ nfs41_callback_svc(void *vrqstp)
+ 		if (try_to_freeze())
+ 			continue;
+ 
+-		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_UNINTERRUPTIBLE);
++		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
+ 		spin_lock_bh(&serv->sv_cb_lock);
+ 		if (!list_empty(&serv->sv_cb_list)) {
+ 			req = list_first_entry(&serv->sv_cb_list,
+@@ -142,10 +142,10 @@ nfs41_callback_svc(void *vrqstp)
+ 				error);
+ 		} else {
+ 			spin_unlock_bh(&serv->sv_cb_lock);
+-			/* schedule_timeout to game the hung task watchdog */
+-			schedule_timeout(60 * HZ);
++			schedule();
+ 			finish_wait(&serv->sv_cb_waitq, &wq);
+ 		}
++		flush_signals(current);
+ 	}
+ 	return 0;
+ }
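
Switching the wait to TASK_INTERRUPTIBLE means the hung-task detector, which only watches uninterruptible sleepers, no longer needs the old 60-second schedule_timeout() workaround; flush_signals() then discards anything delivered while asleep. The resulting kthread wait idiom, reduced to a skeleton (wait queue and predicate hypothetical):

	DEFINE_WAIT(wq);

	for (;;) {
		if (kthread_should_stop())
			break;
		prepare_to_wait(&waitq, &wq, TASK_INTERRUPTIBLE);
		if (!work_available())
			schedule();	/* interruptible, no timeout needed */
		finish_wait(&waitq, &wq);
		flush_signals(current);	/* the thread never handles signals */
	}
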
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index e907c8c..ab21ef1 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -129,22 +129,25 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
+ 	int i;
+ 	ssize_t count;
+ 
+-	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
+-
+-	count = dreq->mirrors[hdr->pgio_mirror_idx].count;
+-	if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
+-		count = hdr->io_start + hdr->good_bytes - dreq->io_start;
+-		dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+-	}
+-
+-	/* update the dreq->count by finding the minimum agreed count from all
+-	 * mirrors */
+-	count = dreq->mirrors[0].count;
++	if (dreq->mirror_count == 1) {
++		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
++		dreq->count += hdr->good_bytes;
++	} else {
++		/* mirrored writes */
++		count = dreq->mirrors[hdr->pgio_mirror_idx].count;
++		if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
++			count = hdr->io_start + hdr->good_bytes - dreq->io_start;
++			dreq->mirrors[hdr->pgio_mirror_idx].count = count;
++		}
++		/* update the dreq->count by finding the minimum agreed count from all
++		 * mirrors */
++		count = dreq->mirrors[0].count;
+ 
+-	for (i = 1; i < dreq->mirror_count; i++)
+-		count = min(count, dreq->mirrors[i].count);
++		for (i = 1; i < dreq->mirror_count; i++)
++			count = min(count, dreq->mirrors[i].count);
+ 
+-	dreq->count = count;
++		dreq->count = count;
++	}
+ }
+ 
+ /*
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 5c399ec..d494ea2 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -7365,6 +7365,11 @@ nfs4_stat_to_errno(int stat)
+ 	.p_name   = #proc,					\
+ }
+ 
++#define STUB(proc)		\
++[NFSPROC4_CLNT_##proc] = {	\
++	.p_name = #proc,	\
++}
++
+ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(READ,		enc_read,		dec_read),
+ 	PROC(WRITE,		enc_write,		dec_write),
+@@ -7417,6 +7422,7 @@ struct rpc_procinfo	nfs4_procedures[] = {
+ 	PROC(SECINFO_NO_NAME,	enc_secinfo_no_name,	dec_secinfo_no_name),
+ 	PROC(TEST_STATEID,	enc_test_stateid,	dec_test_stateid),
+ 	PROC(FREE_STATEID,	enc_free_stateid,	dec_free_stateid),
++	STUB(GETDEVICELIST),
+ 	PROC(BIND_CONN_TO_SESSION,
+ 			enc_bind_conn_to_session, dec_bind_conn_to_session),
+ 	PROC(DESTROY_CLIENTID,	enc_destroy_clientid,	dec_destroy_clientid),
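
STUB() works because nfs4_procedures[] is built with C99 designated array initializers: a stub claims its slot and gives it a name, but leaves the encode/decode hooks zeroed. A minimal standalone illustration (hypothetical ops table):

	struct op { const char *p_name; int (*run)(void); };

	static int do_read(void);	/* hypothetical handlers */
	static int do_write(void);

	enum { OP_READ, OP_WRITE, OP_GETDEVICELIST, OP_MAX };

	#define OP(proc, fn)	[OP_##proc] = { .p_name = #proc, .run = fn }
	#define STUB(proc)	[OP_##proc] = { .p_name = #proc }

	static struct op ops[OP_MAX] = {
		OP(READ, do_read),
		OP(WRITE, do_write),
		STUB(GETDEVICELIST),	/* named slot, no handler */
	};
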
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 568ecf0..848d8b1 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -284,7 +284,7 @@ int nfs_readpage(struct file *file, struct page *page)
+ 	dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
+ 		page, PAGE_CACHE_SIZE, page_file_index(page));
+ 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
+-	nfs_inc_stats(inode, NFSIOS_READPAGES);
++	nfs_add_stats(inode, NFSIOS_READPAGES, 1);
+ 
+ 	/*
+ 	 * Try to flush any pending writes to the file..
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 849ed78..41b3f1096 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -580,7 +580,7 @@ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc, st
+ 	int ret;
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
+-	nfs_inc_stats(inode, NFSIOS_WRITEPAGES);
++	nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1);
+ 
+ 	nfs_pageio_cond_complete(pgio, page_file_index(page));
+ 	ret = nfs_page_async_flush(pgio, page, wbc->sync_mode == WB_SYNC_NONE);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 92b9d97..5416968 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1030,6 +1030,8 @@ nfsd4_fallocate(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_fallocate: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
+ 
+ 	status = nfsd4_vfs_fallocate(rqstp, &cstate->current_fh, file,
+ 				     fallocate->falloc_offset,
+@@ -1069,6 +1071,8 @@ nfsd4_seek(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_seek: couldn't process stateid!\n");
+ 		return status;
+ 	}
++	if (!file)
++		return nfserr_bad_stateid;
+ 
+ 	switch (seek->seek_whence) {
+ 	case NFS4_CONTENT_DATA:
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 8ba1d88..ee1cccd 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1139,7 +1139,7 @@ hash_sessionid(struct nfs4_sessionid *sessionid)
+ 	return sid->sequence % SESSION_HASH_SIZE;
+ }
+ 
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ static inline void
+ dump_sessionid(const char *fn, struct nfs4_sessionid *sessionid)
+ {
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 5fb7e78..5b33ce1 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3422,6 +3422,7 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	unsigned long maxcount;
+ 	struct xdr_stream *xdr = &resp->xdr;
+ 	struct file *file = read->rd_filp;
++	struct svc_fh *fhp = read->rd_fhp;
+ 	int starting_len = xdr->buf->len;
+ 	struct raparms *ra;
+ 	__be32 *p;
+@@ -3445,12 +3446,15 @@ nfsd4_encode_read(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 	maxcount = min_t(unsigned long, maxcount, (xdr->buf->buflen - xdr->buf->len));
+ 	maxcount = min_t(unsigned long, maxcount, read->rd_length);
+ 
+-	if (!read->rd_filp) {
++	if (read->rd_filp)
++		err = nfsd_permission(resp->rqstp, fhp->fh_export,
++				fhp->fh_dentry,
++				NFSD_MAY_READ|NFSD_MAY_OWNER_OVERRIDE);
++	else
+ 		err = nfsd_get_tmp_read_open(resp->rqstp, read->rd_fhp,
+ 						&file, &ra);
+-		if (err)
+-			goto err_truncate;
+-	}
++	if (err)
++		goto err_truncate;
+ 
+ 	if (file->f_op->splice_read && test_bit(RQ_SPLICE_OK, &resp->rqstp->rq_flags))
+ 		err = nfsd4_encode_splice_read(resp, read, file, maxcount);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index aa47d75..9690cb4 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1250,15 +1250,15 @@ static int __init init_nfsd(void)
+ 	int retval;
+ 	printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");
+ 
+-	retval = register_cld_notifier();
+-	if (retval)
+-		return retval;
+ 	retval = register_pernet_subsys(&nfsd_net_ops);
+ 	if (retval < 0)
+-		goto out_unregister_notifier;
+-	retval = nfsd4_init_slabs();
++		return retval;
++	retval = register_cld_notifier();
+ 	if (retval)
+ 		goto out_unregister_pernet;
++	retval = nfsd4_init_slabs();
++	if (retval)
++		goto out_unregister_notifier;
+ 	retval = nfsd4_init_pnfs();
+ 	if (retval)
+ 		goto out_free_slabs;
+@@ -1290,10 +1290,10 @@ out_exit_pnfs:
+ 	nfsd4_exit_pnfs();
+ out_free_slabs:
+ 	nfsd4_free_slabs();
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ out_unregister_notifier:
+ 	unregister_cld_notifier();
++out_unregister_pernet:
++	unregister_pernet_subsys(&nfsd_net_ops);
+ 	return retval;
+ }
+ 
+@@ -1308,8 +1308,8 @@ static void __exit exit_nfsd(void)
+ 	nfsd4_exit_pnfs();
+ 	nfsd_fault_inject_cleanup();
+ 	unregister_filesystem(&nfsd_fs_type);
+-	unregister_pernet_subsys(&nfsd_net_ops);
+ 	unregister_cld_notifier();
++	unregister_pernet_subsys(&nfsd_net_ops);
+ }
+ 
+ MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
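
Both init_nfsd() hunks restore the rule that error unwinding, and exit_nfsd() with it, must tear down in exactly the reverse order of setup; the pernet subsystem is now registered first and unregistered last. The skeleton of the idiom (hypothetical subsystems):

	static int __init init_mod(void)
	{
		int ret;

		ret = register_a();
		if (ret)
			return ret;
		ret = register_b();
		if (ret)
			goto out_a;
		ret = register_c();
		if (ret)
			goto out_b;
		return 0;
	out_b:
		unregister_b();
	out_a:
		unregister_a();
		return ret;
	}

	static void __exit exit_mod(void)
	{
		unregister_c();		/* reverse order of init_mod() */
		unregister_b();
		unregister_a();
	}
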
+diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
+index 565c4da..cf98052 100644
+--- a/fs/nfsd/nfsd.h
++++ b/fs/nfsd/nfsd.h
+@@ -24,7 +24,7 @@
+ #include "export.h"
+ 
+ #undef ifdebug
+-#ifdef NFSD_DEBUG
++#ifdef CONFIG_SUNRPC_DEBUG
+ # define ifdebug(flag)		if (nfsd_debug & NFSDDBG_##flag)
+ #else
+ # define ifdebug(flag)		if (0)
+diff --git a/fs/open.c b/fs/open.c
+index 33f9cbf..44a3be1 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -570,6 +570,7 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	uid = make_kuid(current_user_ns(), user);
+ 	gid = make_kgid(current_user_ns(), group);
+ 
++retry_deleg:
+ 	newattrs.ia_valid =  ATTR_CTIME;
+ 	if (user != (uid_t) -1) {
+ 		if (!uid_valid(uid))
+@@ -586,7 +587,6 @@ static int chown_common(struct path *path, uid_t user, gid_t group)
+ 	if (!S_ISDIR(inode->i_mode))
+ 		newattrs.ia_valid |=
+ 			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
+-retry_deleg:
+ 	mutex_lock(&inode->i_mutex);
+ 	error = security_path_chown(path, uid, gid);
+ 	if (!error)
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 260ac8f..6367e1e 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -362,6 +362,46 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
+ }
+ 
+ /*
++ * Clear MNT_LOCKED when it can be shown to be safe.
++ *
++ * mount_lock lock must be held for write
++ */
++void propagate_mount_unlock(struct mount *mnt)
++{
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m, *child;
++
++	BUG_ON(parent == mnt);
++
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
++		if (child)
++			child->mnt.mnt_flags &= ~MNT_LOCKED;
++	}
++}
++
++/*
++ * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
++ */
++static void mark_umount_candidates(struct mount *mnt)
++{
++	struct mount *parent = mnt->mnt_parent;
++	struct mount *m;
++
++	BUG_ON(parent == mnt);
++
++	for (m = propagation_next(parent, parent); m;
++			m = propagation_next(m, parent)) {
++		struct mount *child = __lookup_mnt_last(&m->mnt,
++						mnt->mnt_mountpoint);
++		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
++			SET_MNT_MARK(child);
++		}
++	}
++}
++
++/*
+  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
+  * parent propagates to.
+  */
+@@ -378,13 +418,16 @@ static void __propagate_umount(struct mount *mnt)
+ 		struct mount *child = __lookup_mnt_last(&m->mnt,
+ 						mnt->mnt_mountpoint);
+ 		/*
+-		 * umount the child only if the child has no
+-		 * other children
++		 * umount the child only if the child has no children
++		 * and the child is marked safe to unmount.
+ 		 */
+-		if (child && list_empty(&child->mnt_mounts)) {
++		if (!child || !IS_MNT_MARKED(child))
++			continue;
++		CLEAR_MNT_MARK(child);
++		if (list_empty(&child->mnt_mounts)) {
+ 			list_del_init(&child->mnt_child);
+-			hlist_del_init_rcu(&child->mnt_hash);
+-			hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
++			child->mnt.mnt_flags |= MNT_UMOUNT;
++			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ 		}
+ 	}
+ }
+@@ -396,11 +439,14 @@ static void __propagate_umount(struct mount *mnt)
+  *
+  * vfsmount lock must be held for write
+  */
+-int propagate_umount(struct hlist_head *list)
++int propagate_umount(struct list_head *list)
+ {
+ 	struct mount *mnt;
+ 
+-	hlist_for_each_entry(mnt, list, mnt_hash)
++	list_for_each_entry_reverse(mnt, list, mnt_list)
++		mark_umount_candidates(mnt);
++
++	list_for_each_entry(mnt, list, mnt_list)
+ 		__propagate_umount(mnt);
+ 	return 0;
+ }
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 4a24635..7114ce6 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -19,6 +19,9 @@
+ #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
+ #define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
+ #define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
++#define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
++#define IS_MNT_LOCKED_AND_LAZY(m) \
++	(((m)->mnt.mnt_flags & (MNT_LOCKED|MNT_SYNC_UMOUNT)) == MNT_LOCKED)
+ 
+ #define CL_EXPIRE    		0x01
+ #define CL_SLAVE     		0x02
+@@ -40,14 +43,14 @@ static inline void set_mnt_shared(struct mount *mnt)
+ void change_mnt_propagation(struct mount *, int);
+ int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
+ 		struct hlist_head *);
+-int propagate_umount(struct hlist_head *);
++int propagate_umount(struct list_head *);
+ int propagate_mount_busy(struct mount *, int);
++void propagate_mount_unlock(struct mount *);
+ void mnt_release_group_id(struct mount *);
+ int get_dominating_id(struct mount *mnt, const struct path *root);
+ unsigned int mnt_get_count(struct mount *mnt);
+ void mnt_set_mountpoint(struct mount *, struct mountpoint *,
+ 			struct mount *);
+-void umount_tree(struct mount *, int);
+ struct mount *copy_tree(struct mount *, struct dentry *, int);
+ bool is_path_reachable(struct mount *, struct dentry *,
+ 			 const struct path *root);
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index b034f10..0d58525 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -199,9 +199,29 @@ typedef int s32;
+ typedef s32 acpi_native_int;
+ 
+ typedef u32 acpi_size;
++
++#ifdef ACPI_32BIT_PHYSICAL_ADDRESS
++
++/*
++ * OSPMs can define this to shrink the size of the structures for 32-bit
++ * non-PAE environments. The ASL compiler may always define this to
++ * generate 32-bit OSPM-compliant tables.
++ */
+ typedef u32 acpi_io_address;
+ typedef u32 acpi_physical_address;
+ 
++#else				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
++/*
++ * It is reported that, after some calculations, the physical addresses can
++ * wrap past the 32-bit boundary on a 32-bit PAE environment.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=87971
++ */
++typedef u64 acpi_io_address;
++typedef u64 acpi_physical_address;
++
++#endif				/* ACPI_32BIT_PHYSICAL_ADDRESS */
++
+ #define ACPI_MAX_PTR                    ACPI_UINT32_MAX
+ #define ACPI_SIZE_MAX                   ACPI_UINT32_MAX
+ 
+@@ -736,10 +756,6 @@ typedef u32 acpi_event_status;
+ #define ACPI_GPE_ENABLE                 0
+ #define ACPI_GPE_DISABLE                1
+ #define ACPI_GPE_CONDITIONAL_ENABLE     2
+-#define ACPI_GPE_SAVE_MASK              4
+-
+-#define ACPI_GPE_ENABLE_SAVE            (ACPI_GPE_ENABLE | ACPI_GPE_SAVE_MASK)
+-#define ACPI_GPE_DISABLE_SAVE           (ACPI_GPE_DISABLE | ACPI_GPE_SAVE_MASK)
+ 
+ /*
+  * GPE info flags - Per GPE
+diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h
+index ad74dc5..ecdf940 100644
+--- a/include/acpi/platform/acenv.h
++++ b/include/acpi/platform/acenv.h
+@@ -76,6 +76,7 @@
+ #define ACPI_LARGE_NAMESPACE_NODE
+ #define ACPI_DATA_TABLE_DISASSEMBLY
+ #define ACPI_SINGLE_THREADED
++#define ACPI_32BIT_PHYSICAL_ADDRESS
+ #endif
+ 
+ /* acpi_exec configuration. Multithreaded with full AML debugger */
+diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h
+index ae2eb17..a215609 100644
+--- a/include/dt-bindings/clock/tegra124-car-common.h
++++ b/include/dt-bindings/clock/tegra124-car-common.h
+@@ -297,7 +297,7 @@
+ #define TEGRA124_CLK_PLL_C4 270
+ #define TEGRA124_CLK_PLL_DP 271
+ #define TEGRA124_CLK_PLL_E_MUX 272
+-#define TEGRA124_CLK_PLLD_DSI 273
++#define TEGRA124_CLK_PLL_D_DSI_OUT 273
+ /* 274 */
+ /* 275 */
+ /* 276 */
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bbfceb7..33b52fb 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -48,7 +48,7 @@ struct bpf_map *bpf_map_get(struct fd f);
+ 
+ /* function argument constraints */
+ enum bpf_arg_type {
+-	ARG_ANYTHING = 0,	/* any argument is ok */
++	ARG_DONTCARE = 0,	/* unused argument in helper function */
+ 
+ 	/* the following constraints used to prototype
+ 	 * bpf_map_lookup/update/delete_elem() functions
+@@ -62,6 +62,8 @@ enum bpf_arg_type {
+ 	 */
+ 	ARG_PTR_TO_STACK,	/* any pointer to eBPF program stack */
+ 	ARG_CONST_STACK_SIZE,	/* number of bytes accessed from stack */
++
++	ARG_ANYTHING,		/* any (initialized) argument is ok */
+ };
+ 
+ /* type of values returned from helper functions */
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c2c561d..564beee 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -61,6 +61,7 @@ struct mnt_namespace;
+ #define MNT_DOOMED		0x1000000
+ #define MNT_SYNC_UMOUNT		0x2000000
+ #define MNT_MARKED		0x4000000
++#define MNT_UMOUNT		0x8000000
+ 
+ struct vfsmount {
+ 	struct dentry *mnt_root;	/* root of the mounted tree */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a419b65..51348f7 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -176,6 +176,14 @@ extern void get_iowait_load(unsigned long *nr_waiters, unsigned long *load);
+ extern void calc_global_load(unsigned long ticks);
+ extern void update_cpu_load_nohz(void);
+ 
++/* Notifier for when a task gets migrated to a new CPU */
++struct task_migration_notifier {
++	struct task_struct *task;
++	int from_cpu;
++	int to_cpu;
++};
++extern void register_task_migration_notifier(struct notifier_block *n);
++
+ extern unsigned long get_parent_ip(unsigned long addr);
+ 
+ extern void dump_cpu_task(int cpu);
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index f54d665..bdccc4b 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -769,6 +769,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+ 
+ struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags,
+ 			    int node);
++struct sk_buff *__build_skb(void *data, unsigned int frag_size);
+ struct sk_buff *build_skb(void *data, unsigned int frag_size);
+ static inline struct sk_buff *alloc_skb(unsigned int size,
+ 					gfp_t priority)
+@@ -3013,6 +3014,18 @@ static inline bool __skb_checksum_validate_needed(struct sk_buff *skb,
+  */
+ #define CHECKSUM_BREAK 76
+ 
++/* Unset checksum-complete
++ *
++ * Unsetting checksum-complete can be done when a packet is being
++ * modified (uncompressed, for instance) and the checksum-complete
++ * value is invalidated.
++ */
++static inline void skb_checksum_complete_unset(struct sk_buff *skb)
++{
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		skb->ip_summed = CHECKSUM_NONE;
++}
++
+ /* Validate (init) checksum based on checksum complete.
+  *
+  * Return values:
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 7ee1b5c..447fe29 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -205,6 +205,32 @@ void usb_put_intf(struct usb_interface *intf);
+ #define USB_MAXINTERFACES	32
+ #define USB_MAXIADS		(USB_MAXINTERFACES/2)
+ 
++/*
++ * USB Resume Timer: Every Host controller driver should drive the resume
++ * signalling on the bus for the amount of time defined by this macro.
++ *
++ * That way we will have a 'stable' behavior among all HCDs supported by Linux.
++ *
++ * Note that the USB Specification states we should drive resume for *at least*
++ * 20 ms, but it doesn't give an upper bound. This creates two possible
++ * situations which we want to avoid:
++ *
++ * (a) sometimes an msleep(20) might expire slightly before 20 ms, which causes
++ * us to fail USB Electrical Tests, thus failing Certification
++ *
++ * (b) Some (many) devices actually need more than 20 ms of resume signalling,
++ * and while we can argue that's against the USB Specification, we don't have
++ * control over which devices a certification laboratory will be using for
++ * certification. If CertLab uses a device which was tested against Windows and
++ * that happens to have relaxed resume signalling rules, we might fall into
++ * situations where we fail interoperability and electrical tests.
++ *
++ * In order to avoid both conditions, we're using a 40 ms resume timeout, which
++ * should cope with both LPJ calibration errors and devices not following every
++ * detail of the USB Specification.
++ */
++#define USB_RESUME_TIMEOUT	40 /* ms */
++
+ /**
+  * struct usb_interface_cache - long-term representation of a device interface
+  * @num_altsetting: number of altsettings defined.
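
Host controller drivers converted by this series then replace their hard-coded 20 ms with the shared macro, as in the musb hunks at the top of this patch:

	/* before: per-driver guess at the resume window */
	schedule_delayed_work(&musb->finish_resume_work,
			      msecs_to_jiffies(20));

	/* after: common value with margin for timer error and picky devices */
	schedule_delayed_work(&musb->finish_resume_work,
			      msecs_to_jiffies(USB_RESUME_TIMEOUT));
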
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index d3583d3..dd0f3ab 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -602,6 +602,11 @@ struct iscsi_conn {
+ 	struct iscsi_session	*sess;
+ 	/* Pointer to thread_set in use for this conn's threads */
+ 	struct iscsi_thread_set	*thread_set;
++	int			bitmap_id;
++	int			rx_thread_active;
++	struct task_struct	*rx_thread;
++	int			tx_thread_active;
++	struct task_struct	*tx_thread;
+ 	/* list_head for session connection list */
+ 	struct list_head	conn_list;
+ } ____cacheline_aligned;
+@@ -871,10 +876,12 @@ struct iscsit_global {
+ 	/* Unique identifier used for the authentication daemon */
+ 	u32			auth_id;
+ 	u32			inactive_ts;
++#define ISCSIT_BITMAP_BITS	262144
+ 	/* Thread Set bitmap count */
+ 	int			ts_bitmap_count;
+ 	/* Thread Set bitmap pointer */
+ 	unsigned long		*ts_bitmap;
++	spinlock_t		ts_bitmap_lock;
+ 	/* Used for iSCSI discovery session authentication */
+ 	struct iscsi_node_acl	discovery_acl;
+ 	struct iscsi_portal_group	*discovery_tpg;
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 672150b..985ca4c 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -524,7 +524,7 @@ struct se_cmd {
+ 	sense_reason_t		(*execute_cmd)(struct se_cmd *);
+ 	sense_reason_t		(*execute_rw)(struct se_cmd *, struct scatterlist *,
+ 					      u32, enum dma_data_direction);
+-	sense_reason_t (*transport_complete_callback)(struct se_cmd *);
++	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool);
+ 
+ 	unsigned char		*t_task_cdb;
+ 	unsigned char		__t_task_cdb[TCM_MAX_COMMAND_SIZE];
+diff --git a/include/uapi/linux/nfsd/debug.h b/include/uapi/linux/nfsd/debug.h
+index 0bf130a..28ec6c9 100644
+--- a/include/uapi/linux/nfsd/debug.h
++++ b/include/uapi/linux/nfsd/debug.h
+@@ -12,14 +12,6 @@
+ #include <linux/sunrpc/debug.h>
+ 
+ /*
+- * Enable debugging for nfsd.
+- * Requires RPC_DEBUG.
+- */
+-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+-# define NFSD_DEBUG		1
+-#endif
+-
+-/*
+  * knfsd debug flags
+  */
+ #define NFSDDBG_SOCK		0x0001
+diff --git a/include/video/samsung_fimd.h b/include/video/samsung_fimd.h
+index a20e4a3..847a0a2 100644
+--- a/include/video/samsung_fimd.h
++++ b/include/video/samsung_fimd.h
+@@ -436,6 +436,12 @@
+ #define BLENDCON_NEW_8BIT_ALPHA_VALUE		(1 << 0)
+ #define BLENDCON_NEW_4BIT_ALPHA_VALUE		(0 << 0)
+ 
++/* Display port clock control */
++#define DP_MIE_CLKCON				0x27c
++#define DP_MIE_CLK_DISABLE			0x0
++#define DP_MIE_CLK_DP_ENABLE			0x2
++#define DP_MIE_CLK_MIE_ENABLE			0x3
++
+ /* Notes on per-window bpp settings
+  *
+  * Value	Win0	 Win1	  Win2	   Win3	    Win 4
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 36508e6..5d8ea3d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -755,7 +755,7 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 	enum bpf_reg_type expected_type;
+ 	int err = 0;
+ 
+-	if (arg_type == ARG_ANYTHING)
++	if (arg_type == ARG_DONTCARE)
+ 		return 0;
+ 
+ 	if (reg->type == NOT_INIT) {
+@@ -763,6 +763,9 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
+ 		return -EACCES;
+ 	}
+ 
++	if (arg_type == ARG_ANYTHING)
++		return 0;
++
+ 	if (arg_type == ARG_PTR_TO_STACK || arg_type == ARG_PTR_TO_MAP_KEY ||
+ 	    arg_type == ARG_PTR_TO_MAP_VALUE) {
+ 		expected_type = PTR_TO_STACK;
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 227fec3..9a34bd8 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -697,6 +697,8 @@ static int ptrace_peek_siginfo(struct task_struct *child,
+ static int ptrace_resume(struct task_struct *child, long request,
+ 			 unsigned long data)
+ {
++	bool need_siglock;
++
+ 	if (!valid_signal(data))
+ 		return -EIO;
+ 
+@@ -724,8 +726,26 @@ static int ptrace_resume(struct task_struct *child, long request,
+ 		user_disable_single_step(child);
+ 	}
+ 
++	/*
++	 * Change ->exit_code and ->state under siglock to avoid the race
++	 * with wait_task_stopped() in between; a non-zero ->exit_code will
++	 * wrongly look like another report from tracee.
++	 *
++	 * Note that we need siglock even if ->exit_code == data and/or this
++	 * wrongly look like another report from the tracee.
++	 * wait_task_stopped() after resume.
++	 *
++	 * If data == 0 we do not care if wait_task_stopped() reports the old
++	 * status and clears the code too; this can't race with the tracee, it
++	 * takes siglock after resume.
++	 */
++	need_siglock = data && !thread_group_empty(current);
++	if (need_siglock)
++		spin_lock_irq(&child->sighand->siglock);
+ 	child->exit_code = data;
+ 	wake_up_state(child, __TASK_TRACED);
++	if (need_siglock)
++		spin_unlock_irq(&child->sighand->siglock);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 62671f5..3d5f6f6 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -996,6 +996,13 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
+ 		rq_clock_skip_update(rq, true);
+ }
+ 
++static ATOMIC_NOTIFIER_HEAD(task_migration_notifier);
++
++void register_task_migration_notifier(struct notifier_block *n)
++{
++	atomic_notifier_chain_register(&task_migration_notifier, n);
++}
++
+ #ifdef CONFIG_SMP
+ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ {
+@@ -1026,10 +1033,18 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
+ 	trace_sched_migrate_task(p, new_cpu);
+ 
+ 	if (task_cpu(p) != new_cpu) {
++		struct task_migration_notifier tmn;
++
+ 		if (p->sched_class->migrate_task_rq)
+ 			p->sched_class->migrate_task_rq(p, new_cpu);
+ 		p->se.nr_migrations++;
+ 		perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0);
++
++		tmn.task = p;
++		tmn.from_cpu = task_cpu(p);
++		tmn.to_cpu = new_cpu;
++
++		atomic_notifier_call_chain(&task_migration_notifier, 0, &tmn);
+ 	}
+ 
+ 	__set_task_cpu(p, new_cpu);
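
A consumer of the new hook registers an atomic notifier and reads the fields set_task_cpu() fills in. Hypothetical example (a sketch only, not part of the patch):

	#include <linux/notifier.h>
	#include <linux/sched.h>

	static int log_migration(struct notifier_block *nb,
				 unsigned long action, void *data)
	{
		struct task_migration_notifier *tmn = data;

		pr_debug("pid %d migrating: cpu %d -> %d\n",
			 tmn->task->pid, tmn->from_cpu, tmn->to_cpu);
		return NOTIFY_OK;
	}

	static struct notifier_block mig_nb = {
		.notifier_call = log_migration,
	};

	/* during init: */
	register_task_migration_notifier(&mig_nb);
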
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 3fa8fa6..f670cbb 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -514,7 +514,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
+-	rq = task_rq_lock(current, &flags);
++	rq = task_rq_lock(p, &flags);
+ 
+ 	/*
+ 	 * We need to take care of several possible races here:
+@@ -569,7 +569,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
+ 		push_dl_task(rq);
+ #endif
+ unlock:
+-	task_rq_unlock(rq, current, &flags);
++	task_rq_unlock(rq, p, &flags);
+ 
+ 	return HRTIMER_NORESTART;
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5040d44..922048a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2679,7 +2679,7 @@ static DEFINE_PER_CPU(unsigned int, current_context);
+ 
+ static __always_inline int trace_recursive_lock(void)
+ {
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 	int bit;
+ 
+ 	if (in_interrupt()) {
+@@ -2696,18 +2696,17 @@ static __always_inline int trace_recursive_lock(void)
+ 		return 1;
+ 
+ 	val |= (1 << bit);
+-	this_cpu_write(current_context, val);
++	__this_cpu_write(current_context, val);
+ 
+ 	return 0;
+ }
+ 
+ static __always_inline void trace_recursive_unlock(void)
+ {
+-	unsigned int val = this_cpu_read(current_context);
++	unsigned int val = __this_cpu_read(current_context);
+ 
+-	val--;
+-	val &= this_cpu_read(current_context);
+-	this_cpu_write(current_context, val);
++	val &= val & (val - 1);
++	__this_cpu_write(current_context, val);
+ }
+ 
+ #else
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index db54dda..a9c10a3 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -565,6 +565,7 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
+ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ {
+ 	char *event = NULL, *sub = NULL, *match;
++	int ret;
+ 
+ 	/*
+ 	 * The buf format can be <subsystem>:<event-name>
+@@ -590,7 +591,13 @@ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
+ 			event = NULL;
+ 	}
+ 
+-	return __ftrace_set_clr_event(tr, match, sub, event, set);
++	ret = __ftrace_set_clr_event(tr, match, sub, event, set);
++
++	/* Put back the colon to allow this to be called again */
++	if (buf)
++		*(buf - 1) = ':';
++
++	return ret;
+ }
+ 
+ /**
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 2d25ad1..b6fce36 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1309,15 +1309,19 @@ void graph_trace_open(struct trace_iterator *iter)
+ {
+ 	/* pid and depth on the last trace processed */
+ 	struct fgraph_data *data;
++	gfp_t gfpflags;
+ 	int cpu;
+ 
+ 	iter->private = NULL;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	/* We can be called in atomic context via ftrace_dump() */
++	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
++
++	data = kzalloc(sizeof(*data), gfpflags);
+ 	if (!data)
+ 		goto out_err;
+ 
+-	data->cpu_data = alloc_percpu(struct fgraph_cpu_data);
++	data->cpu_data = alloc_percpu_gfp(struct fgraph_cpu_data, gfpflags);
+ 	if (!data->cpu_data)
+ 		goto out_err_free;
+ 
+diff --git a/lib/string.c b/lib/string.c
+index ce81aae..a579201 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -607,7 +607,7 @@ EXPORT_SYMBOL(memset);
+ void memzero_explicit(void *s, size_t count)
+ {
+ 	memset(s, 0, count);
+-	OPTIMIZER_HIDE_VAR(s);
++	barrier();
+ }
+ EXPORT_SYMBOL(memzero_explicit);
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6817b03..956d4db 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2316,8 +2316,14 @@ static struct page
+ 		       struct vm_area_struct *vma, unsigned long address,
+ 		       int node)
+ {
++	gfp_t flags;
++
+ 	VM_BUG_ON_PAGE(*hpage, *hpage);
+ 
++	/* Only allocate from the target node */
++	flags = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
++	        __GFP_THISNODE;
++
+ 	/*
+ 	 * Before allocating the hugepage, release the mmap_sem read lock.
+ 	 * The allocation can take potentially a long time if it involves
+@@ -2326,8 +2332,7 @@ static struct page
+ 	 */
+ 	up_read(&mm->mmap_sem);
+ 
+-	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
+-		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
++	*hpage = alloc_pages_exact_node(node, flags, HPAGE_PMD_ORDER);
+ 	if (unlikely(!*hpage)) {
+ 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+ 		*hpage = ERR_PTR(-ENOMEM);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c41b2a0..caad3c5 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3735,8 +3735,7 @@ retry:
+ 	if (!pmd_huge(*pmd))
+ 		goto out;
+ 	if (pmd_present(*pmd)) {
+-		page = pte_page(*(pte_t *)pmd) +
+-			((address & ~PMD_MASK) >> PAGE_SHIFT);
++		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+ 		if (flags & FOLL_GET)
+ 			get_page(page);
+ 	} else {
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 4721046..de5dc5e 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1985,7 +1985,8 @@ retry_cpuset:
+ 		nmask = policy_nodemask(gfp, pol);
+ 		if (!nmask || node_isset(node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = alloc_pages_exact_node(node, gfp, order);
++			page = alloc_pages_exact_node(node,
++						gfp | __GFP_THISNODE, order);
+ 			goto out;
+ 		}
+ 	}
+diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
+index 0ee453f..f371cbf 100644
+--- a/net/bridge/br_netfilter.c
++++ b/net/bridge/br_netfilter.c
+@@ -651,6 +651,13 @@ static int br_nf_forward_finish(struct sk_buff *skb)
+ 	struct net_device *in;
+ 
+ 	if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) {
++		int frag_max_size;
++
++		if (skb->protocol == htons(ETH_P_IP)) {
++			frag_max_size = IPCB(skb)->frag_max_size;
++			BR_INPUT_SKB_CB(skb)->frag_max_size = frag_max_size;
++		}
++
+ 		in = nf_bridge->physindev;
+ 		if (nf_bridge->mask & BRNF_PKT_TYPE) {
+ 			skb->pkt_type = PACKET_OTHERHOST;
+@@ -710,8 +717,14 @@ static unsigned int br_nf_forward_ip(const struct nf_hook_ops *ops,
+ 		nf_bridge->mask |= BRNF_PKT_TYPE;
+ 	}
+ 
+-	if (pf == NFPROTO_IPV4 && br_parse_ip_options(skb))
+-		return NF_DROP;
++	if (pf == NFPROTO_IPV4) {
++		int frag_max = BR_INPUT_SKB_CB(skb)->frag_max_size;
++
++		if (br_parse_ip_options(skb))
++			return NF_DROP;
++
++		IPCB(skb)->frag_max_size = frag_max;
++	}
+ 
+ 	/* The physdev module checks on this */
+ 	nf_bridge->mask |= BRNF_BRIDGED;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 45109b7..22a53ac 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3041,7 +3041,7 @@ static struct rps_dev_flow *
+ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 	    struct rps_dev_flow *rflow, u16 next_cpu)
+ {
+-	if (next_cpu != RPS_NO_CPU) {
++	if (next_cpu < nr_cpu_ids) {
+ #ifdef CONFIG_RFS_ACCEL
+ 		struct netdev_rx_queue *rxqueue;
+ 		struct rps_dev_flow_table *flow_table;
+@@ -3146,7 +3146,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 * If the desired CPU (where last recvmsg was done) is
+ 		 * different from current CPU (one in the rx-queue flow
+ 		 * table entry), switch if one of the following holds:
+-		 *   - Current CPU is unset (equal to RPS_NO_CPU).
++		 *   - Current CPU is unset (>= nr_cpu_ids).
+ 		 *   - Current CPU is offline.
+ 		 *   - The current CPU's queue tail has advanced beyond the
+ 		 *     last packet that was enqueued using this table entry.
+@@ -3154,14 +3154,14 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ 		 *     have been dequeued, thus preserving in order delivery.
+ 		 */
+ 		if (unlikely(tcpu != next_cpu) &&
+-		    (tcpu == RPS_NO_CPU || !cpu_online(tcpu) ||
++		    (tcpu >= nr_cpu_ids || !cpu_online(tcpu) ||
+ 		     ((int)(per_cpu(softnet_data, tcpu).input_queue_head -
+ 		      rflow->last_qtail)) >= 0)) {
+ 			tcpu = next_cpu;
+ 			rflow = set_rps_cpu(dev, skb, rflow, next_cpu);
+ 		}
+ 
+-		if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) {
++		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
+ 			*rflowp = rflow;
+ 			cpu = tcpu;
+ 			goto done;
+@@ -3202,14 +3202,14 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
+ 	struct rps_dev_flow_table *flow_table;
+ 	struct rps_dev_flow *rflow;
+ 	bool expire = true;
+-	int cpu;
++	unsigned int cpu;
+ 
+ 	rcu_read_lock();
+ 	flow_table = rcu_dereference(rxqueue->rps_flow_table);
+ 	if (flow_table && flow_id <= flow_table->mask) {
+ 		rflow = &flow_table->flows[flow_id];
+ 		cpu = ACCESS_ONCE(rflow->cpu);
+-		if (rflow->filter == filter_id && cpu != RPS_NO_CPU &&
++		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
+ 		    ((int)(per_cpu(softnet_data, cpu).input_queue_head -
+ 			   rflow->last_qtail) <
+ 		     (int)(10 * flow_table->mask)))
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 98d45fe..e9f9a15 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -280,13 +280,14 @@ nodata:
+ EXPORT_SYMBOL(__alloc_skb);
+ 
+ /**
+- * build_skb - build a network buffer
++ * __build_skb - build a network buffer
+  * @data: data buffer provided by caller
+- * @frag_size: size of fragment, or 0 if head was kmalloced
++ * @frag_size: size of data, or 0 if head was kmalloced
+  *
+  * Allocate a new &sk_buff. Caller provides space holding head and
+  * skb_shared_info. @data must have been allocated by kmalloc() only if
+- * @frag_size is 0, otherwise data should come from the page allocator.
++ * @frag_size is 0, otherwise data should come from the page allocator
++ * or vmalloc().
+  * The return is the new skb buffer.
+  * On a failure the return is %NULL, and @data is not freed.
+  * Notes :
+@@ -297,7 +298,7 @@ EXPORT_SYMBOL(__alloc_skb);
+  *  before giving packet to stack.
+  *  RX rings only contains data buffers, not full skbs.
+  */
+-struct sk_buff *build_skb(void *data, unsigned int frag_size)
++struct sk_buff *__build_skb(void *data, unsigned int frag_size)
+ {
+ 	struct skb_shared_info *shinfo;
+ 	struct sk_buff *skb;
+@@ -311,7 +312,6 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	memset(skb, 0, offsetof(struct sk_buff, tail));
+ 	skb->truesize = SKB_TRUESIZE(size);
+-	skb->head_frag = frag_size != 0;
+ 	atomic_set(&skb->users, 1);
+ 	skb->head = data;
+ 	skb->data = data;
+@@ -328,6 +328,23 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
+ 
+ 	return skb;
+ }
++
++/* build_skb() is a wrapper over __build_skb() that specifically
++ * takes care of skb->head and skb->pfmemalloc
++ * This means that if @frag_size is not zero, then @data must be backed
++ * by a page fragment, not kmalloc() or vmalloc()
++ */
++struct sk_buff *build_skb(void *data, unsigned int frag_size)
++{
++	struct sk_buff *skb = __build_skb(data, frag_size);
++
++	if (skb && frag_size) {
++		skb->head_frag = 1;
++		if (virt_to_head_page(data)->pfmemalloc)
++			skb->pfmemalloc = 1;
++	}
++	return skb;
++}
+ EXPORT_SYMBOL(build_skb);
+ 
+ struct netdev_alloc_cache {
+@@ -348,7 +365,8 @@ static struct page *__page_frag_refill(struct netdev_alloc_cache *nc,
+ 	gfp_t gfp = gfp_mask;
+ 
+ 	if (order) {
+-		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
++		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
++			    __GFP_NOMEMALLOC;
+ 		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
+ 		nc->frag.size = PAGE_SIZE << (page ? order : 0);
+ 	}
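
The split leaves two entry points: build_skb() keeps the old contract (data backed by a page fragment, so head_frag and pfmemalloc can be derived from it), while __build_skb() assumes nothing about the backing and suits kmalloc()/vmalloc() memory; the af_netlink hunk later in this patch is the first such caller. Usage, condensed:

	/* page-fragment-backed data: unchanged */
	skb = build_skb(frag_data, frag_size);

	/* vmalloc()-backed data (netlink large skbs): no head_frag, no
	 * pfmemalloc derivation; the caller owns the buffer on failure */
	skb = __build_skb(vdata, size);
	if (!skb)
		vfree(vdata);
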
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index d9bc28a..53bd53f 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -82,6 +82,9 @@ int ip_forward(struct sk_buff *skb)
+ 	if (skb->pkt_type != PACKET_HOST)
+ 		goto drop;
+ 
++	if (unlikely(skb->sk))
++		goto drop;
++
+ 	if (skb_warn_if_lro(skb))
+ 		goto drop;
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index d520492..9d48dc4 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2751,39 +2751,65 @@ begin_fwd:
+ 	}
+ }
+ 
+-/* Send a fin.  The caller locks the socket for us.  This cannot be
+- * allowed to fail queueing a FIN frame under any circumstances.
++/* We allow to exceed memory limits for FIN packets to expedite
++ * connection tear down and (memory) recovery.
++ * Otherwise tcp_send_fin() could be tempted to either delay FIN
++ * or even be forced to close flow without any FIN.
++ */
++static void sk_forced_wmem_schedule(struct sock *sk, int size)
++{
++	int amt, status;
++
++	if (size <= sk->sk_forward_alloc)
++		return;
++	amt = sk_mem_pages(size);
++	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
++	sk_memory_allocated_add(sk, amt, &status);
++}
++
++/* Send a FIN. The caller locks the socket for us.
++ * We should try to send a FIN packet really hard, but eventually give up.
+  */
+ void tcp_send_fin(struct sock *sk)
+ {
++	struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+-	struct sk_buff *skb = tcp_write_queue_tail(sk);
+-	int mss_now;
+ 
+-	/* Optimization, tack on the FIN if we have a queue of
+-	 * unsent frames.  But be careful about outgoing SACKS
+-	 * and IP options.
++	/* Optimization, tack on the FIN if we have one skb in the write queue and
++	 * this skb was not yet sent, or we are under memory pressure.
++	 * Note: in the latter case, FIN packet will be sent after a timeout,
++	 * as TCP stack thinks it has already been transmitted.
+ 	 */
+-	mss_now = tcp_current_mss(sk);
+-
+-	if (tcp_send_head(sk) != NULL) {
+-		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN;
+-		TCP_SKB_CB(skb)->end_seq++;
++	if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) {
++coalesce:
++		TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN;
++		TCP_SKB_CB(tskb)->end_seq++;
+ 		tp->write_seq++;
++		if (!tcp_send_head(sk)) {
++			/* This means tskb was already sent.
++			 * Pretend we included the FIN on previous transmit.
++			 * We need to set tp->snd_nxt to the value it would have
++			 * if FIN had been sent. This is because retransmit path
++			 * does not change tp->snd_nxt.
++			 */
++			tp->snd_nxt++;
++			return;
++		}
+ 	} else {
+-		/* Socket is locked, keep trying until memory is available. */
+-		for (;;) {
+-			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
+-			if (skb)
+-				break;
+-			yield();
++		skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation);
++		if (unlikely(!skb)) {
++			if (tskb)
++				goto coalesce;
++			return;
+ 		}
++		skb_reserve(skb, MAX_TCP_HEADER);
++		sk_forced_wmem_schedule(sk, skb->truesize);
+ 		/* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */
+ 		tcp_init_nondata_skb(skb, tp->write_seq,
+ 				     TCPHDR_ACK | TCPHDR_FIN);
+ 		tcp_queue_skb(sk, skb);
+ 	}
+-	__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF);
++	__tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF);
+ }
+ 
+ /* We get here when a process closes a file descriptor (either due to
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 142f66a..0ca013d 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2260,7 +2260,7 @@ static void ieee80211_mgd_probe_ap_send(struct ieee80211_sub_if_data *sdata)
+ 		else
+ 			ssid_len = ssid[1];
+ 
+-		ieee80211_send_probe_req(sdata, sdata->vif.addr, NULL,
++		ieee80211_send_probe_req(sdata, sdata->vif.addr, dst,
+ 					 ssid + 2, ssid_len, NULL,
+ 					 0, (u32) -1, true, 0,
+ 					 ifmgd->associated->channel, false);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 05919bf..d1d7a81 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1616,13 +1616,11 @@ static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
+ 	if (data == NULL)
+ 		return NULL;
+ 
+-	skb = build_skb(data, size);
++	skb = __build_skb(data, size);
+ 	if (skb == NULL)
+ 		vfree(data);
+-	else {
+-		skb->head_frag = 0;
++	else
+ 		skb->destructor = netlink_skb_destructor;
+-	}
+ 
+ 	return skb;
+ }
+diff --git a/sound/pci/emu10k1/emuproc.c b/sound/pci/emu10k1/emuproc.c
+index 2ca9f2e..53745f4 100644
+--- a/sound/pci/emu10k1/emuproc.c
++++ b/sound/pci/emu10k1/emuproc.c
+@@ -241,31 +241,22 @@ static void snd_emu10k1_proc_spdif_read(struct snd_info_entry *entry,
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+ 	u32 value2;
+-	unsigned long flags;
+ 	u32 rate;
+ 
+ 	if (emu->card_capabilities->emu_model) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x38, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x1) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x2a, &value);
+ 			snd_emu1010_fpga_read(emu, 0x2b, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "ADAT Locked : %u\n", rate);
+ 		} else {
+ 			snd_iprintf(buffer, "ADAT Unlocked\n");
+ 		}
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, 0x20, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		if ((value & 0x4) == 0) {
+-			spin_lock_irqsave(&emu->emu_lock, flags);
+ 			snd_emu1010_fpga_read(emu, 0x28, &value);
+ 			snd_emu1010_fpga_read(emu, 0x29, &value2);
+-			spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 			rate = 0x1770000 / (((value << 5) | value2)+1);	
+ 			snd_iprintf(buffer, "SPDIF Locked : %d\n", rate);
+ 		} else {
+@@ -410,14 +401,11 @@ static void snd_emu_proc_emu1010_reg_read(struct snd_info_entry *entry,
+ {
+ 	struct snd_emu10k1 *emu = entry->private_data;
+ 	u32 value;
+-	unsigned long flags;
+ 	int i;
+ 	snd_iprintf(buffer, "EMU1010 Registers:\n\n");
+ 
+ 	for(i = 0; i < 0x40; i+=1) {
+-		spin_lock_irqsave(&emu->emu_lock, flags);
+ 		snd_emu1010_fpga_read(emu, i, &value);
+-		spin_unlock_irqrestore(&emu->emu_lock, flags);
+ 		snd_iprintf(buffer, "%02X: %08X, %02X\n", i, value, (value >> 8) & 0x7f);
+ 	}
+ }
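The emuproc.c hunk drops the spin_lock_irqsave() pairs around every snd_emu1010_fpga_read() call, evidently because that helper acquires emu->emu_lock internally; taking a non-recursive spinlock twice on the same CPU deadlocks. A minimal sketch of the corrected calling convention, with a pthread mutex standing in for the spinlock (names illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;

/* The helper serializes itself, like snd_emu1010_fpga_read(). */
static int read_reg(int reg)
{
	pthread_mutex_lock(&reg_lock);
	int val = reg * 2;		/* stand-in for the hardware access */
	pthread_mutex_unlock(&reg_lock);
	return val;
}

int main(void)
{
	/* Correct: call without holding reg_lock. Wrapping this call in
	 * pthread_mutex_lock(&reg_lock) would deadlock, which is the
	 * pattern the patch removes. */
	printf("reg 0x38 = %d\n", read_reg(0x38));
	return 0;
}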
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f9d12c0..2fd490b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5047,12 +5047,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+@@ -5142,6 +5144,16 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{0x1b, 0x411111f0}, \
+ 	{0x1e, 0x411111f0}
+ 
++#define ALC256_STANDARD_PINS \
++	{0x12, 0x90a60140}, \
++	{0x14, 0x90170110}, \
++	{0x19, 0x411111f0}, \
++	{0x1a, 0x411111f0}, \
++	{0x1b, 0x411111f0}, \
++	{0x1d, 0x40700001}, \
++	{0x1e, 0x411111f0}, \
++	{0x21, 0x02211020}
++
+ #define ALC282_STANDARD_PINS \
+ 	{0x14, 0x90170110}, \
+ 	{0x18, 0x411111f0}, \
+@@ -5235,15 +5247,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1d, 0x40700001},
+ 		{0x21, 0x02211050}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+-		{0x12, 0x90a60140},
+-		{0x13, 0x40000000},
+-		{0x14, 0x90170110},
+-		{0x19, 0x411111f0},
+-		{0x1a, 0x411111f0},
+-		{0x1b, 0x411111f0},
+-		{0x1d, 0x40700001},
+-		{0x1e, 0x411111f0},
+-		{0x21, 0x02211020}),
++		ALC256_STANDARD_PINS,
++		{0x13, 0x40000000}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC256_STANDARD_PINS,
++		{0x13, 0x411111f0}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4,
+ 		{0x12, 0x90a60130},
+ 		{0x13, 0x40000000},
+@@ -5563,6 +5571,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 		break;
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
++		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
++		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
+ 		break;
+ 	}
+ 
+@@ -5576,8 +5586,8 @@ static int patch_alc269(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto error;
+ 
+-	if (!spec->gen.no_analog && spec->gen.beep_nid)
+-		set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT);
++	if (!spec->gen.no_analog && spec->gen.beep_nid && spec->gen.mixer_nid)
++		set_beep_amp(spec, spec->gen.mixer_nid, 0x04, HDA_INPUT);
+ 
+ 	codec->patch_ops = alc_patch_ops;
+ #ifdef CONFIG_PM
+diff --git a/sound/soc/codecs/cs4271.c b/sound/soc/codecs/cs4271.c
+index 7d3a6ac..e770ee6 100644
+--- a/sound/soc/codecs/cs4271.c
++++ b/sound/soc/codecs/cs4271.c
+@@ -561,10 +561,10 @@ static int cs4271_codec_probe(struct snd_soc_codec *codec)
+ 	if (gpio_is_valid(cs4271->gpio_nreset)) {
+ 		/* Reset codec */
+ 		gpio_direction_output(cs4271->gpio_nreset, 0);
+-		udelay(1);
++		mdelay(1);
+ 		gpio_set_value(cs4271->gpio_nreset, 1);
+ 		/* Give the codec time to wake up */
+-		udelay(1);
++		mdelay(1);
+ 	}
+ 
+ 	ret = regmap_update_bits(cs4271->regmap, CS4271_MODE2,
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 474cae8..8c09e3f 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -304,9 +304,9 @@ static const struct soc_enum pcm512x_veds =
+ static const struct snd_kcontrol_new pcm512x_controls[] = {
+ SOC_DOUBLE_R_TLV("Digital Playback Volume", PCM512x_DIGITAL_VOLUME_2,
+ 		 PCM512x_DIGITAL_VOLUME_3, 0, 255, 1, digital_tlv),
+-SOC_DOUBLE_TLV("Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
++SOC_DOUBLE_TLV("Analogue Playback Volume", PCM512x_ANALOG_GAIN_CTRL,
+ 	       PCM512x_LAGN_SHIFT, PCM512x_RAGN_SHIFT, 1, 1, analog_tlv),
+-SOC_DOUBLE_TLV("Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
++SOC_DOUBLE_TLV("Analogue Playback Boost Volume", PCM512x_ANALOG_GAIN_BOOST,
+ 	       PCM512x_AGBL_SHIFT, PCM512x_AGBR_SHIFT, 1, 0, boost_tlv),
+ SOC_DOUBLE("Digital Playback Switch", PCM512x_MUTE, PCM512x_RQML_SHIFT,
+ 	   PCM512x_RQMR_SHIFT, 1, 1),
+@@ -576,8 +576,8 @@ static int pcm512x_find_pll_coeff(struct snd_soc_dai *dai,
+ 
+ 	/* pllin_rate / P (or here, den) cannot be greater than 20 MHz */
+ 	if (pllin_rate / den > 20000000 && num < 8) {
+-		num *= 20000000 / (pllin_rate / den);
+-		den *= 20000000 / (pllin_rate / den);
++		num *= DIV_ROUND_UP(pllin_rate / den, 20000000);
++		den *= DIV_ROUND_UP(pllin_rate / den, 20000000);
+ 	}
+ 	dev_dbg(dev, "num / den = %lu / %lu\n", num, den);
+ 
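The pcm512x change above fixes an integer-math bug worth spelling out: when pllin_rate/den exceeds 20 MHz, the old scale factor 20000000 / (pllin_rate / den) truncates to 0, zeroing both num and den, whereas DIV_ROUND_UP yields the smallest multiplier that brings the ratio back under the limit. A worked example, assuming pllin_rate/den = 25 MHz:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long num = 4, den = 1, pllin_rate = 25000000;

	/* old code: 20000000 / 25000000 == 0 in integer math,
	 * so num and den were both multiplied by zero */
	unsigned long bad_scale = 20000000 / (pllin_rate / den);

	unsigned long scale = DIV_ROUND_UP(pllin_rate / den, 20000000);
	num *= scale;
	den *= scale;
	printf("bad_scale=%lu scale=%lu num=%lu den=%lu rate/den=%lu\n",
	       bad_scale, scale, num, den, pllin_rate / den);
	return 0;
}

With scale = 2, den doubles and pllin_rate/den drops to 12.5 MHz, satisfying the constraint while the num/den ratio is preserved.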
+diff --git a/sound/soc/codecs/wm8741.c b/sound/soc/codecs/wm8741.c
+index 31bb480..9e71c76 100644
+--- a/sound/soc/codecs/wm8741.c
++++ b/sound/soc/codecs/wm8741.c
+@@ -123,7 +123,7 @@ static struct {
+ };
+ 
+ static const unsigned int rates_11289[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_11289 = {
+@@ -150,7 +150,7 @@ static const struct snd_pcm_hw_constraint_list constraints_16384 = {
+ };
+ 
+ static const unsigned int rates_16934[] = {
+-	44100, 88235,
++	44100, 88200,
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_16934 = {
+@@ -168,7 +168,7 @@ static const struct snd_pcm_hw_constraint_list constraints_18432 = {
+ };
+ 
+ static const unsigned int rates_22579[] = {
+-	44100, 88235, 1764000
++	44100, 88200, 176400
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_22579 = {
+@@ -186,7 +186,7 @@ static const struct snd_pcm_hw_constraint_list constraints_24576 = {
+ };
+ 
+ static const unsigned int rates_36864[] = {
+-	48000, 96000, 19200
++	48000, 96000, 192000
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_36864 = {
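The wm8741 tables above are plain typo fixes, but the corrected values matter: hw-constraint rate lists must divide the corresponding master clock evenly, which 88235, 1764000 and 19200 do not (88200 = 2 x 44100, 176400 = 4 x 44100, 192000 = 4 x 48000 do). A quick self-check, assuming the conventional 22.5792 MHz and 36.864 MHz master clocks the array names suggest:

#include <stdio.h>

int main(void)
{
	unsigned rates_441[] = { 44100, 88200, 176400 };
	unsigned rates_48[]  = { 48000, 96000, 192000 };

	for (int i = 0; i < 3; i++)	/* all remainders print as 0 */
		printf("22579200 %% %u = %u\n", rates_441[i],
		       22579200 % rates_441[i]);
	for (int i = 0; i < 3; i++)
		printf("36864000 %% %u = %u\n", rates_48[i],
		       36864000 % rates_48[i]);
	return 0;
}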
+diff --git a/sound/soc/davinci/davinci-evm.c b/sound/soc/davinci/davinci-evm.c
+index b6bb594..8c2b9be 100644
+--- a/sound/soc/davinci/davinci-evm.c
++++ b/sound/soc/davinci/davinci-evm.c
+@@ -425,18 +425,8 @@ static int davinci_evm_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int davinci_evm_remove(struct platform_device *pdev)
+-{
+-	struct snd_soc_card *card = platform_get_drvdata(pdev);
+-
+-	snd_soc_unregister_card(card);
+-
+-	return 0;
+-}
+-
+ static struct platform_driver davinci_evm_driver = {
+ 	.probe		= davinci_evm_probe,
+-	.remove		= davinci_evm_remove,
+ 	.driver		= {
+ 		.name	= "davinci_evm",
+ 		.pm	= &snd_soc_pm_ops,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 9a28365..32631a8 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1115,6 +1115,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ {
+ 	/* devices which do not support reading the sample rate. */
+ 	switch (chip->usb_id) {
++	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ 		return true;
+diff --git a/tools/lib/traceevent/kbuffer-parse.c b/tools/lib/traceevent/kbuffer-parse.c
+index dcc6652..deb3569 100644
+--- a/tools/lib/traceevent/kbuffer-parse.c
++++ b/tools/lib/traceevent/kbuffer-parse.c
+@@ -372,7 +372,6 @@ translate_data(struct kbuffer *kbuf, void *data, void **rptr,
+ 	switch (type_len) {
+ 	case KBUFFER_TYPE_PADDING:
+ 		*length = read_4(kbuf, data);
+-		data += *length;
+ 		break;
+ 
+ 	case KBUFFER_TYPE_TIME_EXTEND:
+diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
+index cc22408..0884d31 100644
+--- a/tools/perf/config/Makefile
++++ b/tools/perf/config/Makefile
+@@ -651,7 +651,7 @@ ifeq (${IS_64_BIT}, 1)
+       NO_PERF_READ_VDSO32 := 1
+     endif
+   endif
+-  ifneq (${IS_X86_64}, 1)
++  ifneq ($(ARCH), x86)
+     NO_PERF_READ_VDSOX32 := 1
+   endif
+   ifndef NO_PERF_READ_VDSOX32
+@@ -699,7 +699,7 @@ sysconfdir = $(prefix)/etc
+ ETC_PERFCONFIG = etc/perfconfig
+ endif
+ ifndef lib
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+diff --git a/tools/perf/tests/make b/tools/perf/tests/make
+index 75709d2..bff8532 100644
+--- a/tools/perf/tests/make
++++ b/tools/perf/tests/make
+@@ -5,7 +5,7 @@ include config/Makefile.arch
+ 
+ # FIXME looks like x86 is the only arch running tests ;-)
+ # we need some IS_(32/64) flag to make this generic
+-ifeq ($(IS_X86_64),1)
++ifeq ($(ARCH)$(IS_64_BIT), x861)
+ lib = lib64
+ else
+ lib = lib
+diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
+index 6da965b..85b5238 100644
+--- a/tools/perf/util/cloexec.c
++++ b/tools/perf/util/cloexec.c
+@@ -7,6 +7,12 @@
+ 
+ static unsigned long flag = PERF_FLAG_FD_CLOEXEC;
+ 
++int __weak sched_getcpu(void)
++{
++	errno = ENOSYS;
++	return -1;
++}
++
+ static int perf_flag_probe(void)
+ {
+ 	/* use 'safest' configuration as used in perf_evsel__fallback() */
+diff --git a/tools/perf/util/cloexec.h b/tools/perf/util/cloexec.h
+index 94a5a7d..68888c2 100644
+--- a/tools/perf/util/cloexec.h
++++ b/tools/perf/util/cloexec.h
+@@ -3,4 +3,10 @@
+ 
+ unsigned long perf_event_open_cloexec_flag(void);
+ 
++#ifdef __GLIBC_PREREQ
++#if !__GLIBC_PREREQ(2, 6)
++extern int sched_getcpu(void) __THROW;
++#endif
++#endif
++
+ #endif /* __PERF_CLOEXEC_H */
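The cloexec change is a portability shim: glibc only gained sched_getcpu() in 2.6, so perf supplies a weak default that fails with ENOSYS and declares the prototype only for older glibc; on modern systems the strong libc definition wins at link time. A self-contained sketch of the weak-symbol pattern (my_getcpu is a made-up name; GCC/clang on ELF targets):

#include <errno.h>
#include <stdio.h>

/* Weak default: silently replaced if any other object in the link
 * provides a strong definition of my_getcpu(). */
__attribute__((weak)) int my_getcpu(void)
{
	errno = ENOSYS;
	return -1;
}

int main(void)
{
	printf("my_getcpu() = %d\n", my_getcpu());
	return 0;
}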
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 33b7a2a..9bdf007 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -74,6 +74,10 @@ static inline uint8_t elf_sym__type(const GElf_Sym *sym)
+ 	return GELF_ST_TYPE(sym->st_info);
+ }
+ 
++#ifndef STT_GNU_IFUNC
++#define STT_GNU_IFUNC 10
++#endif
++
+ static inline int elf_sym__is_function(const GElf_Sym *sym)
+ {
+ 	return (elf_sym__type(sym) == STT_FUNC ||
+diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
+index d1b3a36..4039854 100644
+--- a/tools/power/x86/turbostat/Makefile
++++ b/tools/power/x86/turbostat/Makefile
+@@ -1,8 +1,12 @@
+ CC		= $(CROSS_COMPILE)gcc
+-BUILD_OUTPUT	:= $(PWD)
++BUILD_OUTPUT	:= $(CURDIR)
+ PREFIX		:= /usr
+ DESTDIR		:=
+ 
++ifeq ("$(origin O)", "command line")
++	BUILD_OUTPUT := $(O)
++endif
++
+ turbostat : turbostat.c
+ CFLAGS +=	-Wall
+ CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/uapi/asm/msr-index.h"'
+diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
+index c9f60f5..e5abe7c 100644
+--- a/virt/kvm/arm/vgic.c
++++ b/virt/kvm/arm/vgic.c
+@@ -1371,6 +1371,9 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ 			goto out;
+ 	}
+ 
++	if (irq_num >= kvm->arch.vgic.nr_irqs)
++		return -EINVAL;
++
+ 	vcpu_id = vgic_update_irq_pending(kvm, cpuid, irq_num, level);
+ 	if (vcpu_id >= 0) {
+ 		/* kick the specified vcpu */
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index cc6a25d..f8f3f5f 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1653,8 +1653,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	ghc->generation = slots->generation;
+ 	ghc->len = len;
+ 	ghc->memslot = gfn_to_memslot(kvm, start_gfn);
+-	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
+-	if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed) {
++	ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, NULL);
++	if (!kvm_is_error_hva(ghc->hva) && nr_pages_needed <= 1) {
+ 		ghc->hva += offset;
+ 	} else {
+ 		/*

diff --git a/1002_linux-4.0.3.patch b/1002_linux-4.0.3.patch
new file mode 100644
index 0000000..d137bf2
--- /dev/null
+++ b/1002_linux-4.0.3.patch
@@ -0,0 +1,2827 @@
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index bfcb1a62a7b4..4d68ec841304 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -3746,6 +3746,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 					READ_CAPACITY_16 command);
+ 				f = NO_REPORT_OPCODES (don't use report opcodes
+ 					command, uas only);
++				g = MAX_SECTORS_240 (don't transfer more than
++					240 sectors at a time, uas only);
+ 				h = CAPACITY_HEURISTICS (decrease the
+ 					reported device capacity by one
+ 					sector if the number is odd);
+diff --git a/Makefile b/Makefile
+index 0649a6011a76..dc9f43a019d6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index ef7d112f5ce0..b0bd4e5fd5cf 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -67,8 +67,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
+ 
+ 		*ret_page = phys_to_page(phys);
+ 		ptr = (void *)val;
+-		if (flags & __GFP_ZERO)
+-			memset(ptr, 0, size);
++		memset(ptr, 0, size);
+ 	}
+ 
+ 	return ptr;
+@@ -105,7 +104,6 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
+ 		struct page *page;
+ 		void *addr;
+ 
+-		size = PAGE_ALIGN(size);
+ 		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+ 							get_order(size));
+ 		if (!page)
+@@ -113,8 +111,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
+ 
+ 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
+ 		addr = page_address(page);
+-		if (flags & __GFP_ZERO)
+-			memset(addr, 0, size);
++		memset(addr, 0, size);
+ 		return addr;
+ 	} else {
+ 		return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
+@@ -195,6 +192,8 @@ static void __dma_free(struct device *dev, size_t size,
+ {
+ 	void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle));
+ 
++	size = PAGE_ALIGN(size);
++
+ 	if (!is_device_dma_coherent(dev)) {
+ 		if (__free_from_pool(vaddr, size))
+ 			return;
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index c7a16904cd03..1a313c468d65 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -2072,7 +2072,7 @@ config MIPSR2_TO_R6_EMULATOR
+ 	help
+ 	  Choose this option if you want to run non-R6 MIPS userland code.
+ 	  Even if you say 'Y' here, the emulator will still be disabled by
+-	  default. You can enable it using the 'mipsr2emul' kernel option.
++	  default. You can enable it using the 'mipsr2emu' kernel option.
+ 	  The only reason this is a build-time option is to save ~14K from the
+ 	  final kernel image.
+ comment "MIPS R2-to-R6 emulator is only available for UP kernels"
+@@ -2142,7 +2142,7 @@ config MIPS_CMP
+ 
+ config MIPS_CPS
+ 	bool "MIPS Coherent Processing System support"
+-	depends on SYS_SUPPORTS_MIPS_CPS
++	depends on SYS_SUPPORTS_MIPS_CPS && !64BIT
+ 	select MIPS_CM
+ 	select MIPS_CPC
+ 	select MIPS_CPS_PM if HOTPLUG_CPU
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 8f57fc72d62c..1b4dab1e6ab8 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -197,11 +197,17 @@ endif
+ # Warning: the 64-bit MIPS architecture does not support the `smartmips' extension
+ # Pass -Wa,--no-warn to disable all assembler warnings until the kernel code has
+ # been fixed properly.
+-mips-cflags				:= "$(cflags-y)"
+-cflags-$(CONFIG_CPU_HAS_SMARTMIPS)	+= $(call cc-option,$(mips-cflags),-msmartmips) -Wa,--no-warn
+-cflags-$(CONFIG_CPU_MICROMIPS)		+= $(call cc-option,$(mips-cflags),-mmicromips)
++mips-cflags				:= $(cflags-y)
++ifeq ($(CONFIG_CPU_HAS_SMARTMIPS),y)
++smartmips-ase				:= $(call cc-option-yn,$(mips-cflags) -msmartmips)
++cflags-$(smartmips-ase)			+= -msmartmips -Wa,--no-warn
++endif
++ifeq ($(CONFIG_CPU_MICROMIPS),y)
++micromips-ase				:= $(call cc-option-yn,$(mips-cflags) -mmicromips)
++cflags-$(micromips-ase)			+= -mmicromips
++endif
+ ifeq ($(CONFIG_CPU_HAS_MSA),y)
+-toolchain-msa				:= $(call cc-option-yn,-$(mips-cflags),mhard-float -mfp64 -Wa$(comma)-mmsa)
++toolchain-msa				:= $(call cc-option-yn,$(mips-cflags) -mhard-float -mfp64 -Wa$(comma)-mmsa)
+ cflags-$(toolchain-msa)			+= -DTOOLCHAIN_SUPPORTS_MSA
+ endif
+ 
+diff --git a/arch/mips/bcm47xx/board.c b/arch/mips/bcm47xx/board.c
+index b3ae068ca4fa..3fd369d74444 100644
+--- a/arch/mips/bcm47xx/board.c
++++ b/arch/mips/bcm47xx/board.c
+@@ -247,8 +247,8 @@ static __init const struct bcm47xx_board_type *bcm47xx_board_get_nvram(void)
+ 	}
+ 
+ 	if (bcm47xx_nvram_getenv("hardware_version", buf1, sizeof(buf1)) >= 0 &&
+-	    bcm47xx_nvram_getenv("boardtype", buf2, sizeof(buf2)) >= 0) {
+-		for (e2 = bcm47xx_board_list_boot_hw; e2->value1; e2++) {
++	    bcm47xx_nvram_getenv("boardnum", buf2, sizeof(buf2)) >= 0) {
++		for (e2 = bcm47xx_board_list_hw_version_num; e2->value1; e2++) {
+ 			if (!strstarts(buf1, e2->value1) &&
+ 			    !strcmp(buf2, e2->value2))
+ 				return &e2->board;
+diff --git a/arch/mips/bcm63xx/prom.c b/arch/mips/bcm63xx/prom.c
+index e1f27d653f60..7019e2967009 100644
+--- a/arch/mips/bcm63xx/prom.c
++++ b/arch/mips/bcm63xx/prom.c
+@@ -17,7 +17,6 @@
+ #include <bcm63xx_cpu.h>
+ #include <bcm63xx_io.h>
+ #include <bcm63xx_regs.h>
+-#include <bcm63xx_gpio.h>
+ 
+ void __init prom_init(void)
+ {
+@@ -53,9 +52,6 @@ void __init prom_init(void)
+ 	reg &= ~mask;
+ 	bcm_perf_writel(reg, PERF_CKCTL_REG);
+ 
+-	/* register gpiochip */
+-	bcm63xx_gpio_init();
+-
+ 	/* do low level board init */
+ 	board_prom_init();
+ 
+diff --git a/arch/mips/bcm63xx/setup.c b/arch/mips/bcm63xx/setup.c
+index 6660c7ddf87b..240fb4ffa55c 100644
+--- a/arch/mips/bcm63xx/setup.c
++++ b/arch/mips/bcm63xx/setup.c
+@@ -20,6 +20,7 @@
+ #include <bcm63xx_cpu.h>
+ #include <bcm63xx_regs.h>
+ #include <bcm63xx_io.h>
++#include <bcm63xx_gpio.h>
+ 
+ void bcm63xx_machine_halt(void)
+ {
+@@ -160,6 +161,9 @@ void __init plat_mem_setup(void)
+ 
+ int __init bcm63xx_register_devices(void)
+ {
++	/* register gpiochip */
++	bcm63xx_gpio_init();
++
+ 	return board_register_devices();
+ }
+ 
+diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
+index 7d8987818ccf..d8960d46417b 100644
+--- a/arch/mips/cavium-octeon/dma-octeon.c
++++ b/arch/mips/cavium-octeon/dma-octeon.c
+@@ -306,7 +306,7 @@ void __init plat_swiotlb_setup(void)
+ 		swiotlbsize = 64 * (1<<20);
+ 	}
+ #endif
+-#ifdef CONFIG_USB_OCTEON_OHCI
++#ifdef CONFIG_USB_OHCI_HCD_PLATFORM
+ 	/* OCTEON II ohci is only 32-bit. */
+ 	if (OCTEON_IS_OCTEON2() && max_addr >= 0x100000000ul)
+ 		swiotlbsize = 64 * (1<<20);
+diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
+index a42110e7edbc..a7f40820e567 100644
+--- a/arch/mips/cavium-octeon/setup.c
++++ b/arch/mips/cavium-octeon/setup.c
+@@ -413,7 +413,10 @@ static void octeon_restart(char *command)
+ 
+ 	mb();
+ 	while (1)
+-		cvmx_write_csr(CVMX_CIU_SOFT_RST, 1);
++		if (OCTEON_IS_OCTEON3())
++			cvmx_write_csr(CVMX_RST_SOFT_RST, 1);
++		else
++			cvmx_write_csr(CVMX_CIU_SOFT_RST, 1);
+ }
+ 
+ 
+diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
+index e08381a37f8b..723229f4cf27 100644
+--- a/arch/mips/include/asm/cacheflush.h
++++ b/arch/mips/include/asm/cacheflush.h
+@@ -29,6 +29,20 @@
+  *  - flush_icache_all() flush the entire instruction cache
+  *  - flush_data_cache_page() flushes a page from the data cache
+  */
++
++ /*
++ * This flag is used to indicate that the page pointed to by a pte
++ * is dirty and requires cleaning before returning it to the user.
++ */
++#define PG_dcache_dirty			PG_arch_1
++
++#define Page_dcache_dirty(page)		\
++	test_bit(PG_dcache_dirty, &(page)->flags)
++#define SetPageDcacheDirty(page)	\
++	set_bit(PG_dcache_dirty, &(page)->flags)
++#define ClearPageDcacheDirty(page)	\
++	clear_bit(PG_dcache_dirty, &(page)->flags)
++
+ extern void (*flush_cache_all)(void);
+ extern void (*__flush_cache_all)(void);
+ extern void (*flush_cache_mm)(struct mm_struct *mm);
+@@ -37,13 +51,15 @@ extern void (*flush_cache_range)(struct vm_area_struct *vma,
+ 	unsigned long start, unsigned long end);
+ extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
+ extern void __flush_dcache_page(struct page *page);
++extern void __flush_icache_page(struct vm_area_struct *vma, struct page *page);
+ 
+ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+ static inline void flush_dcache_page(struct page *page)
+ {
+-	if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc)
++	if (cpu_has_dc_aliases)
+ 		__flush_dcache_page(page);
+-
++	else if (!cpu_has_ic_fills_f_dc)
++		SetPageDcacheDirty(page);
+ }
+ 
+ #define flush_dcache_mmap_lock(mapping)		do { } while (0)
+@@ -61,6 +77,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
+ static inline void flush_icache_page(struct vm_area_struct *vma,
+ 	struct page *page)
+ {
++	if (!cpu_has_ic_fills_f_dc && (vma->vm_flags & VM_EXEC) &&
++	    Page_dcache_dirty(page)) {
++		__flush_icache_page(vma, page);
++		ClearPageDcacheDirty(page);
++	}
+ }
+ 
+ extern void (*flush_icache_range)(unsigned long start, unsigned long end);
+@@ -95,19 +116,6 @@ extern void (*flush_icache_all)(void);
+ extern void (*local_flush_data_cache_page)(void * addr);
+ extern void (*flush_data_cache_page)(unsigned long addr);
+ 
+-/*
+- * This flag is used to indicate that the page pointed to by a pte
+- * is dirty and requires cleaning before returning it to the user.
+- */
+-#define PG_dcache_dirty			PG_arch_1
+-
+-#define Page_dcache_dirty(page)		\
+-	test_bit(PG_dcache_dirty, &(page)->flags)
+-#define SetPageDcacheDirty(page)	\
+-	set_bit(PG_dcache_dirty, &(page)->flags)
+-#define ClearPageDcacheDirty(page)	\
+-	clear_bit(PG_dcache_dirty, &(page)->flags)
+-
+ /* Run kernel code uncached, useful for cache probing functions. */
+ unsigned long run_uncached(void *func);
+ 
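The cacheflush.h rework above converts eager flushing into lazy dirty-tracking: on CPUs whose I-cache does not snoop the D-cache, flush_dcache_page() now merely sets PG_dcache_dirty, and the expensive writeback happens in flush_icache_page() only when a dirty page is actually mapped executable. A userspace analogue of the bookkeeping (names illustrative):

#include <stdbool.h>
#include <stdio.h>

struct page { unsigned long flags; };
#define PG_DCACHE_DIRTY (1UL << 0)

static void flush_dcache_page(struct page *pg)
{
	pg->flags |= PG_DCACHE_DIRTY;	/* defer the real flush */
}

static void flush_icache_page(struct page *pg, bool vm_exec)
{
	if (vm_exec && (pg->flags & PG_DCACHE_DIRTY)) {
		puts("writeback D-cache, invalidate I-cache");
		pg->flags &= ~PG_DCACHE_DIRTY;
	}
}

int main(void)
{
	struct page pg = { 0 };
	flush_dcache_page(&pg);		/* e.g. kernel wrote the page */
	flush_icache_page(&pg, true);	/* first exec mapping pays */
	flush_icache_page(&pg, true);	/* already clean: no-op */
	return 0;
}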
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 0d8208de9a3f..345fd7f80730 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -235,8 +235,39 @@
+ /* MIPSR2 and MIPSR6 have a lot of similarities */
+ #define cpu_has_mips_r2_r6	(cpu_has_mips_r2 | cpu_has_mips_r6)
+ 
++/*
++ * cpu_has_mips_r2_exec_hazard - return if IHB is required on current processor
++ *
++ * Returns non-zero value if the current processor implementation requires
++ * an IHB instruction to deal with an instruction hazard as per MIPS R2
++ * architecture specification, zero otherwise.
++ */
+ #ifndef cpu_has_mips_r2_exec_hazard
+-#define cpu_has_mips_r2_exec_hazard (cpu_has_mips_r2 | cpu_has_mips_r6)
++#define cpu_has_mips_r2_exec_hazard					\
++({									\
++	int __res;							\
++									\
++	switch (current_cpu_type()) {					\
++	case CPU_M14KC:							\
++	case CPU_74K:							\
++	case CPU_1074K:							\
++	case CPU_PROAPTIV:						\
++	case CPU_P5600:							\
++	case CPU_M5150:							\
++	case CPU_QEMU_GENERIC:						\
++	case CPU_CAVIUM_OCTEON:						\
++	case CPU_CAVIUM_OCTEON_PLUS:					\
++	case CPU_CAVIUM_OCTEON2:					\
++	case CPU_CAVIUM_OCTEON3:					\
++		__res = 0;						\
++		break;							\
++									\
++	default:							\
++		__res = 1;						\
++	}								\
++									\
++	__res;								\
++})
+ #endif
+ 
+ /*
+diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h
+index 535f196ffe02..694925a26924 100644
+--- a/arch/mips/include/asm/elf.h
++++ b/arch/mips/include/asm/elf.h
+@@ -294,6 +294,9 @@ do {									\
+ 	if (personality(current->personality) != PER_LINUX)		\
+ 		set_personality(PER_LINUX);				\
+ 									\
++	clear_thread_flag(TIF_HYBRID_FPREGS);				\
++	set_thread_flag(TIF_32BIT_FPREGS);				\
++									\
+ 	mips_set_personality_fp(state);					\
+ 									\
+ 	current->thread.abi = &mips_abi;				\
+@@ -319,6 +322,8 @@ do {									\
+ 	do {								\
+ 		set_thread_flag(TIF_32BIT_REGS);			\
+ 		set_thread_flag(TIF_32BIT_ADDR);			\
++		clear_thread_flag(TIF_HYBRID_FPREGS);			\
++		set_thread_flag(TIF_32BIT_FPREGS);			\
+ 									\
+ 		mips_set_personality_fp(state);				\
+ 									\
+diff --git a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
+index fa1f3cfbae8d..d68e685cde60 100644
+--- a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
+@@ -50,7 +50,6 @@
+ #define cpu_has_mips32r2	0
+ #define cpu_has_mips64r1	0
+ #define cpu_has_mips64r2	1
+-#define cpu_has_mips_r2_exec_hazard 0
+ #define cpu_has_dsp		0
+ #define cpu_has_dsp2		0
+ #define cpu_has_mipsmt		0
+diff --git a/arch/mips/include/asm/octeon/cvmx.h b/arch/mips/include/asm/octeon/cvmx.h
+index 33db1c806b01..774bb45834cb 100644
+--- a/arch/mips/include/asm/octeon/cvmx.h
++++ b/arch/mips/include/asm/octeon/cvmx.h
+@@ -436,14 +436,6 @@ static inline uint64_t cvmx_get_cycle_global(void)
+ 
+ /***************************************************************************/
+ 
+-static inline void cvmx_reset_octeon(void)
+-{
+-	union cvmx_ciu_soft_rst ciu_soft_rst;
+-	ciu_soft_rst.u64 = 0;
+-	ciu_soft_rst.s.soft_rst = 1;
+-	cvmx_write_csr(CVMX_CIU_SOFT_RST, ciu_soft_rst.u64);
+-}
+-
+ /* Return the number of cores available in the chip */
+ static inline uint32_t cvmx_octeon_num_cores(void)
+ {
+diff --git a/arch/mips/include/asm/octeon/pci-octeon.h b/arch/mips/include/asm/octeon/pci-octeon.h
+index 64ba56a02843..1884609741a8 100644
+--- a/arch/mips/include/asm/octeon/pci-octeon.h
++++ b/arch/mips/include/asm/octeon/pci-octeon.h
+@@ -11,9 +11,6 @@
+ 
+ #include <linux/pci.h>
+ 
+-/* Some PCI cards require delays when accessing config space. */
+-#define PCI_CONFIG_SPACE_DELAY 10000
+-
+ /*
+  * The physical memory base mapped by BAR1.  256MB at the end of the
+  * first 4GB.
+diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
+index bef782c4a44b..f8f809fd6c6d 100644
+--- a/arch/mips/include/asm/pgtable.h
++++ b/arch/mips/include/asm/pgtable.h
+@@ -127,10 +127,6 @@ do {									\
+ 	}								\
+ } while(0)
+ 
+-
+-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+-	pte_t pteval);
+-
+ #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+ 
+ #define pte_none(pte)		(!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
+@@ -154,6 +150,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
+ 		}
+ 	}
+ }
++#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+ 
+ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+@@ -192,6 +189,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
+ 	}
+ #endif
+ }
++#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+ 
+ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+@@ -407,12 +405,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 
+ extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
+ 	pte_t pte);
++extern void __update_cache(struct vm_area_struct *vma, unsigned long address,
++	pte_t pte);
+ 
+ static inline void update_mmu_cache(struct vm_area_struct *vma,
+ 	unsigned long address, pte_t *ptep)
+ {
+ 	pte_t pte = *ptep;
+ 	__update_tlb(vma, address, pte);
++	__update_cache(vma, address, pte);
+ }
+ 
+ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+diff --git a/arch/mips/include/asm/r4kcache.h b/arch/mips/include/asm/r4kcache.h
+index 1b22d2da88a1..38902bf97adc 100644
+--- a/arch/mips/include/asm/r4kcache.h
++++ b/arch/mips/include/asm/r4kcache.h
+@@ -12,6 +12,8 @@
+ #ifndef _ASM_R4KCACHE_H
+ #define _ASM_R4KCACHE_H
+ 
++#include <linux/stringify.h>
++
+ #include <asm/asm.h>
+ #include <asm/cacheops.h>
+ #include <asm/compiler.h>
+@@ -344,7 +346,7 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	cache %1, 0x0a0(%0); cache %1, 0x0b0(%0)\n"	\
+ 	"	cache %1, 0x0c0(%0); cache %1, 0x0d0(%0)\n"	\
+ 	"	cache %1, 0x0e0(%0); cache %1, 0x0f0(%0)\n"	\
+-	"	addiu $1, $0, 0x100			\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100	\n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x010($1)\n"	\
+ 	"	cache %1, 0x020($1); cache %1, 0x030($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x050($1)\n"	\
+@@ -368,17 +370,17 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	cache %1, 0x040(%0); cache %1, 0x060(%0)\n"	\
+ 	"	cache %1, 0x080(%0); cache %1, 0x0a0(%0)\n"	\
+ 	"	cache %1, 0x0c0(%0); cache %1, 0x0e0(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+ 	"	cache %1, 0x0c0($1); cache %1, 0x0e0($1)\n"	\
+-	"	addiu $1, $1, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+ 	"	cache %1, 0x0c0($1); cache %1, 0x0e0($1)\n"	\
+-	"	addiu $1, $1, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100\n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x020($1)\n"	\
+ 	"	cache %1, 0x040($1); cache %1, 0x060($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0a0($1)\n"	\
+@@ -396,25 +398,25 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	.set noat\n"					\
+ 	"	cache %1, 0x000(%0); cache %1, 0x040(%0)\n"	\
+ 	"	cache %1, 0x080(%0); cache %1, 0x0c0(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
+ 	"	cache %1, 0x000($1); cache %1, 0x040($1)\n"	\
+ 	"	cache %1, 0x080($1); cache %1, 0x0c0($1)\n"	\
+ 	"	.set pop\n"					\
+@@ -429,39 +431,38 @@ static inline void invalidate_tcache_page(unsigned long addr)
+ 	"	.set mips64r6\n"				\
+ 	"	.set noat\n"					\
+ 	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
+-	"	cache %1, 0x000(%0); cache %1, 0x080(%0)\n"	\
+-	"	addiu $1, %0, 0x100\n"				\
++	"	"__stringify(LONG_ADDIU)" $1, %0, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
++	"	"__stringify(LONG_ADDIU)" $1, $1, 0x100 \n"	\
++	"	cache %1, 0x000($1); cache %1, 0x080($1)\n"	\
+ 	"	.set pop\n"					\
+ 		:						\
+ 		: "r" (base),					\
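The r4kcache.h hunks fix two things at once: addiu produces a sign-extended 32-bit result, corrupting 64-bit addresses, so LONG_ADDIU (daddiu on 64-bit kernels) is substituted; and several unrolled iterations kept issuing cache ops on the original base (%0) or recomputing $1 from it rather than advancing, so only the first block of the range was actually touched. The intended effect, sketched as a C loop (cache_line_op() is a hypothetical stand-in for the MIPS "cache" instruction; a 128-byte line and 32 operations per call are assumed from the unroll32 naming):

/* Equivalent of one cache128_unroll32 invocation: touch 32
 * consecutive 128-byte lines, advancing the address every step. */
static inline void cache_line_op(unsigned long addr)
{
	(void)addr;	/* real code issues a cache instruction here */
}

static void cache128_unroll32_c(unsigned long base)
{
	for (int i = 0; i < 32; i++)
		cache_line_op(base + 128ul * i);
}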
+diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
+index b4548690ade9..1fca2e0793dc 100644
+--- a/arch/mips/include/asm/spinlock.h
++++ b/arch/mips/include/asm/spinlock.h
+@@ -263,7 +263,7 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
+ 	if (R10000_LLSC_WAR) {
+ 		__asm__ __volatile__(
+ 		"1:	ll	%1, %2		# arch_read_unlock	\n"
+-		"	addiu	%1, 1					\n"
++		"	addiu	%1, -1					\n"
+ 		"	sc	%1, %0					\n"
+ 		"	beqzl	%1, 1b					\n"
+ 		: "=" GCC_OFF_SMALL_ASM() (rw->lock), "=&r" (tmp)
+diff --git a/arch/mips/kernel/entry.S b/arch/mips/kernel/entry.S
+index af41ba6db960..7791840cf22c 100644
+--- a/arch/mips/kernel/entry.S
++++ b/arch/mips/kernel/entry.S
+@@ -10,6 +10,7 @@
+ 
+ #include <asm/asm.h>
+ #include <asm/asmmacro.h>
++#include <asm/compiler.h>
+ #include <asm/regdef.h>
+ #include <asm/mipsregs.h>
+ #include <asm/stackframe.h>
+@@ -185,7 +186,7 @@ syscall_exit_work:
+  * For C code use the inline version named instruction_hazard().
+  */
+ LEAF(mips_ihb)
+-	.set	mips32r2
++	.set	MIPS_ISA_LEVEL_RAW
+ 	jr.hb	ra
+ 	nop
+ 	END(mips_ihb)
+diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
+index bed7590e475f..d5589bedd0a4 100644
+--- a/arch/mips/kernel/smp-cps.c
++++ b/arch/mips/kernel/smp-cps.c
+@@ -88,6 +88,12 @@ static void __init cps_smp_setup(void)
+ 
+ 	/* Make core 0 coherent with everything */
+ 	write_gcr_cl_coherence(0xff);
++
++#ifdef CONFIG_MIPS_MT_FPAFF
++	/* If we have an FPU, enroll ourselves in the FPU-full mask */
++	if (cpu_has_fpu)
++		cpu_set(0, mt_fpu_cpumask);
++#endif /* CONFIG_MIPS_MT_FPAFF */
+ }
+ 
+ static void __init cps_prepare_cpus(unsigned int max_cpus)
+diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
+index 7e3ea7766822..77d96db8253c 100644
+--- a/arch/mips/mm/cache.c
++++ b/arch/mips/mm/cache.c
+@@ -119,36 +119,37 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
+ 
+ EXPORT_SYMBOL(__flush_anon_page);
+ 
+-static void mips_flush_dcache_from_pte(pte_t pteval, unsigned long address)
++void __flush_icache_page(struct vm_area_struct *vma, struct page *page)
++{
++	unsigned long addr;
++
++	if (PageHighMem(page))
++		return;
++
++	addr = (unsigned long) page_address(page);
++	flush_data_cache_page(addr);
++}
++EXPORT_SYMBOL_GPL(__flush_icache_page);
++
++void __update_cache(struct vm_area_struct *vma, unsigned long address,
++	pte_t pte)
+ {
+ 	struct page *page;
+-	unsigned long pfn = pte_pfn(pteval);
++	unsigned long pfn, addr;
++	int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+ 
++	pfn = pte_pfn(pte);
+ 	if (unlikely(!pfn_valid(pfn)))
+ 		return;
+-
+ 	page = pfn_to_page(pfn);
+ 	if (page_mapping(page) && Page_dcache_dirty(page)) {
+-		unsigned long page_addr = (unsigned long) page_address(page);
+-
+-		if (!cpu_has_ic_fills_f_dc ||
+-		    pages_do_alias(page_addr, address & PAGE_MASK))
+-			flush_data_cache_page(page_addr);
++		addr = (unsigned long) page_address(page);
++		if (exec || pages_do_alias(addr, address & PAGE_MASK))
++			flush_data_cache_page(addr);
+ 		ClearPageDcacheDirty(page);
+ 	}
+ }
+ 
+-void set_pte_at(struct mm_struct *mm, unsigned long addr,
+-        pte_t *ptep, pte_t pteval)
+-{
+-        if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc) {
+-                if (pte_present(pteval))
+-                        mips_flush_dcache_from_pte(pteval, addr);
+-        }
+-
+-        set_pte(ptep, pteval);
+-}
+-
+ unsigned long _page_cachable_default;
+ EXPORT_SYMBOL(_page_cachable_default);
+ 
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index d75ff73a2012..a79fd0af0224 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -501,26 +501,9 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
+ 	case tlb_indexed: tlbw = uasm_i_tlbwi; break;
+ 	}
+ 
+-	if (cpu_has_mips_r2_exec_hazard) {
+-		/*
+-		 * The architecture spec says an ehb is required here,
+-		 * but a number of cores do not have the hazard and
+-		 * using an ehb causes an expensive pipeline stall.
+-		 */
+-		switch (current_cpu_type()) {
+-		case CPU_M14KC:
+-		case CPU_74K:
+-		case CPU_1074K:
+-		case CPU_PROAPTIV:
+-		case CPU_P5600:
+-		case CPU_M5150:
+-		case CPU_QEMU_GENERIC:
+-			break;
+-
+-		default:
++	if (cpu_has_mips_r2_r6) {
++		if (cpu_has_mips_r2_exec_hazard)
+ 			uasm_i_ehb(p);
+-			break;
+-		}
+ 		tlbw(p);
+ 		return;
+ 	}
+diff --git a/arch/mips/netlogic/xlp/ahci-init-xlp2.c b/arch/mips/netlogic/xlp/ahci-init-xlp2.c
+index c83dbf3689e2..7b066a44e679 100644
+--- a/arch/mips/netlogic/xlp/ahci-init-xlp2.c
++++ b/arch/mips/netlogic/xlp/ahci-init-xlp2.c
+@@ -203,6 +203,7 @@ static u8 read_phy_reg(u64 regbase, u32 addr, u32 physel)
+ static void config_sata_phy(u64 regbase)
+ {
+ 	u32 port, i, reg;
++	u8 val;
+ 
+ 	for (port = 0; port < 2; port++) {
+ 		for (i = 0, reg = RXCDRCALFOSC0; reg <= CALDUTY; reg++, i++)
+@@ -210,6 +211,18 @@ static void config_sata_phy(u64 regbase)
+ 
+ 		for (i = 0, reg = RXDPIF; reg <= PPMDRIFTMAX_HI; reg++, i++)
+ 			write_phy_reg(regbase, reg, port, sata_phy_config2[i]);
++
++		/* Fix for PHY link up failures at lower temperatures */
++		write_phy_reg(regbase, 0x800F, port, 0x1f);
++
++		val = read_phy_reg(regbase, 0x0029, port);
++		write_phy_reg(regbase, 0x0029, port, val | (0x7 << 1));
++
++		val = read_phy_reg(regbase, 0x0056, port);
++		write_phy_reg(regbase, 0x0056, port, val & ~(1 << 3));
++
++		val = read_phy_reg(regbase, 0x0018, port);
++		write_phy_reg(regbase, 0x0018, port, val & ~(0x7 << 0));
+ 	}
+ }
+ 
+diff --git a/arch/mips/pci/Makefile b/arch/mips/pci/Makefile
+index 300591c6278d..2eda01e6e08f 100644
+--- a/arch/mips/pci/Makefile
++++ b/arch/mips/pci/Makefile
+@@ -43,7 +43,7 @@ obj-$(CONFIG_SIBYTE_BCM1x80)	+= pci-bcm1480.o pci-bcm1480ht.o
+ obj-$(CONFIG_SNI_RM)		+= fixup-sni.o ops-sni.o
+ obj-$(CONFIG_LANTIQ)		+= fixup-lantiq.o
+ obj-$(CONFIG_PCI_LANTIQ)	+= pci-lantiq.o ops-lantiq.o
+-obj-$(CONFIG_SOC_RT2880)	+= pci-rt2880.o
++obj-$(CONFIG_SOC_RT288X)	+= pci-rt2880.o
+ obj-$(CONFIG_SOC_RT3883)	+= pci-rt3883.o
+ obj-$(CONFIG_TANBAC_TB0219)	+= fixup-tb0219.o
+ obj-$(CONFIG_TANBAC_TB0226)	+= fixup-tb0226.o
+diff --git a/arch/mips/pci/pci-octeon.c b/arch/mips/pci/pci-octeon.c
+index a04af55d89f1..c258cd406fbb 100644
+--- a/arch/mips/pci/pci-octeon.c
++++ b/arch/mips/pci/pci-octeon.c
+@@ -214,6 +214,8 @@ const char *octeon_get_pci_interrupts(void)
+ 		return "AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
+ 	case CVMX_BOARD_TYPE_BBGW_REF:
+ 		return "AABCD";
++	case CVMX_BOARD_TYPE_CUST_DSR1000N:
++		return "CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC";
+ 	case CVMX_BOARD_TYPE_THUNDER:
+ 	case CVMX_BOARD_TYPE_EBH3000:
+ 	default:
+@@ -271,9 +273,6 @@ static int octeon_read_config(struct pci_bus *bus, unsigned int devfn,
+ 	pci_addr.s.func = devfn & 0x7;
+ 	pci_addr.s.reg = reg;
+ 
+-#if PCI_CONFIG_SPACE_DELAY
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	switch (size) {
+ 	case 4:
+ 		*val = le32_to_cpu(cvmx_read64_uint32(pci_addr.u64));
+@@ -308,9 +307,6 @@ static int octeon_write_config(struct pci_bus *bus, unsigned int devfn,
+ 	pci_addr.s.func = devfn & 0x7;
+ 	pci_addr.s.reg = reg;
+ 
+-#if PCI_CONFIG_SPACE_DELAY
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	switch (size) {
+ 	case 4:
+ 		cvmx_write64_uint32(pci_addr.u64, cpu_to_le32(val));
+diff --git a/arch/mips/pci/pcie-octeon.c b/arch/mips/pci/pcie-octeon.c
+index 1bb0b2bf8d6e..99f3db4f0a9b 100644
+--- a/arch/mips/pci/pcie-octeon.c
++++ b/arch/mips/pci/pcie-octeon.c
+@@ -1762,14 +1762,6 @@ static int octeon_pcie_write_config(unsigned int pcie_port, struct pci_bus *bus,
+ 	default:
+ 		return PCIBIOS_FUNC_NOT_SUPPORTED;
+ 	}
+-#if PCI_CONFIG_SPACE_DELAY
+-	/*
+-	 * Delay on writes so that devices have time to come up. Some
+-	 * bridges need this to allow time for the secondary busses to
+-	 * work
+-	 */
+-	udelay(PCI_CONFIG_SPACE_DELAY);
+-#endif
+ 	return PCIBIOS_SUCCESSFUL;
+ }
+ 
+diff --git a/arch/mips/ralink/Kconfig b/arch/mips/ralink/Kconfig
+index b1c52ca580f9..e9bc8c96174e 100644
+--- a/arch/mips/ralink/Kconfig
++++ b/arch/mips/ralink/Kconfig
+@@ -7,6 +7,11 @@ config CLKEVT_RT3352
+ 	select CLKSRC_OF
+ 	select CLKSRC_MMIO
+ 
++config RALINK_ILL_ACC
++	bool
++	depends on SOC_RT305X
++	default y
++
+ choice
+ 	prompt "Ralink SoC selection"
+ 	default SOC_RT305X
+diff --git a/drivers/acpi/sbs.c b/drivers/acpi/sbs.c
+index a7a3edd28beb..f23179e84128 100644
+--- a/drivers/acpi/sbs.c
++++ b/drivers/acpi/sbs.c
+@@ -670,7 +670,7 @@ static int acpi_sbs_add(struct acpi_device *device)
+ 	if (!sbs_manager_broken) {
+ 		result = acpi_manager_get_info(sbs);
+ 		if (!result) {
+-			sbs->manager_present = 0;
++			sbs->manager_present = 1;
+ 			for (id = 0; id < MAX_SBS_BAT; ++id)
+ 				if ((sbs->batteries_supported & (1 << id)))
+ 					acpi_battery_add(sbs, id);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index b40af3203089..b67066d0d9a6 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -2264,6 +2264,11 @@ static bool rbd_img_obj_end_request(struct rbd_obj_request *obj_request)
+ 			result, xferred);
+ 		if (!img_request->result)
+ 			img_request->result = result;
++		/*
++		 * Need to end I/O on the entire obj_request worth of
++		 * bytes in case of error.
++		 */
++		xferred = obj_request->length;
+ 	}
+ 
+ 	/* Image object requests don't own their page array */
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 9bd56116fd5a..1afc0b419da2 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -580,6 +580,9 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
+ 		else
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
+ 
++		/* if there is no audio, set MINM_OVER_MAXP  */
++		if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
++			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		if (rdev->family < CHIP_RV770)
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		/* use frac fb div on APUs */
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index c39c1d0d9d4e..f20eb32406d1 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -1729,17 +1729,15 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
+ 	struct drm_device *dev = encoder->dev;
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+-	struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 	int encoder_mode = atombios_get_encoder_mode(encoder);
+ 
+ 	DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n",
+ 		  radeon_encoder->encoder_id, mode, radeon_encoder->devices,
+ 		  radeon_encoder->active_device);
+ 
+-	if (connector && (radeon_audio != 0) &&
++	if ((radeon_audio != 0) &&
+ 	    ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
+-	     (ENCODER_MODE_IS_DP(encoder_mode) &&
+-	      drm_detect_monitor_audio(radeon_connector_edid(connector)))))
++	     ENCODER_MODE_IS_DP(encoder_mode)))
+ 		radeon_audio_dpms(encoder, mode);
+ 
+ 	switch (radeon_encoder->encoder_id) {
+diff --git a/drivers/gpu/drm/radeon/dce6_afmt.c b/drivers/gpu/drm/radeon/dce6_afmt.c
+index 3adc2afe32aa..68fd9fc677e3 100644
+--- a/drivers/gpu/drm/radeon/dce6_afmt.c
++++ b/drivers/gpu/drm/radeon/dce6_afmt.c
+@@ -295,28 +295,3 @@ void dce6_dp_audio_set_dto(struct radeon_device *rdev,
+ 		WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
+ 	}
+ }
+-
+-void dce6_dp_enable(struct drm_encoder *encoder, bool enable)
+-{
+-	struct drm_device *dev = encoder->dev;
+-	struct radeon_device *rdev = dev->dev_private;
+-	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+-	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+-
+-	if (!dig || !dig->afmt)
+-		return;
+-
+-	if (enable) {
+-		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
+-		       EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
+-		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
+-		       EVERGREEN_DP_SEC_ASP_ENABLE |		/* Audio packet transmission */
+-		       EVERGREEN_DP_SEC_ATP_ENABLE |		/* Audio timestamp packet transmission */
+-		       EVERGREEN_DP_SEC_AIP_ENABLE |		/* Audio infoframe packet transmission */
+-		       EVERGREEN_DP_SEC_STREAM_ENABLE);	/* Master enable for secondary stream engine */
+-	} else {
+-		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
+-	}
+-
+-	dig->afmt->enabled = enable;
+-}
+diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+index c18d4ecbd95d..0926739c9fa7 100644
+--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+@@ -219,13 +219,9 @@ void evergreen_set_avi_packet(struct radeon_device *rdev, u32 offset,
+ 	WREG32(AFMT_AVI_INFO3 + offset,
+ 		frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
+ 
+-	WREG32_OR(HDMI_INFOFRAME_CONTROL0 + offset,
+-		HDMI_AVI_INFO_SEND |	/* enable AVI info frames */
+-		HDMI_AVI_INFO_CONT);	/* required for audio info values to be updated */
+-
+ 	WREG32_P(HDMI_INFOFRAME_CONTROL1 + offset,
+-		HDMI_AVI_INFO_LINE(2),	/* anything other than 0 */
+-		~HDMI_AVI_INFO_LINE_MASK);
++		 HDMI_AVI_INFO_LINE(2),	/* anything other than 0 */
++		 ~HDMI_AVI_INFO_LINE_MASK);
+ }
+ 
+ void dce4_hdmi_audio_set_dto(struct radeon_device *rdev,
+@@ -370,9 +366,13 @@ void dce4_set_audio_packet(struct drm_encoder *encoder, u32 offset)
+ 	WREG32(AFMT_AUDIO_PACKET_CONTROL2 + offset,
+ 		AFMT_AUDIO_CHANNEL_ENABLE(0xff));
+ 
++	WREG32(HDMI_AUDIO_PACKET_CONTROL + offset,
++	       HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
++	       HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
++
+ 	/* allow 60958 channel status and send audio packets fields to be updated */
+-	WREG32(AFMT_AUDIO_PACKET_CONTROL + offset,
+-		AFMT_AUDIO_SAMPLE_SEND | AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
++	WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + offset,
++		  AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE);
+ }
+ 
+ 
+@@ -398,17 +398,26 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
+ 		return;
+ 
+ 	if (enable) {
+-		WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset,
+-		       HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
+-
+-		WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset,
+-		       HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
+-		       HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
++		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+-		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
+-		       HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
+-		       HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
++		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
++			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
++			       HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
++			       HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
++			       HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
++			WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++				  AFMT_AUDIO_SAMPLE_SEND);
++		} else {
++			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
++			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
++			       HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */
++			WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++				   ~AFMT_AUDIO_SAMPLE_SEND);
++		}
+ 	} else {
++		WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			   ~AFMT_AUDIO_SAMPLE_SEND);
+ 		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0);
+ 	}
+ 
+@@ -424,20 +433,24 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ 	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
++	struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+ 	if (!dig || !dig->afmt)
+ 		return;
+ 
+-	if (enable) {
++	if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 		struct radeon_connector_atom_dig *dig_connector;
+ 		uint32_t val;
+ 
++		WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			  AFMT_AUDIO_SAMPLE_SEND);
++
+ 		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
+ 		       EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
+ 
+-		if (radeon_connector->con_priv) {
++		if (!ASIC_IS_DCE6(rdev) && radeon_connector->con_priv) {
+ 			dig_connector = radeon_connector->con_priv;
+ 			val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset);
+ 			val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
+@@ -457,6 +470,8 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 			EVERGREEN_DP_SEC_STREAM_ENABLE);	/* Master enable for secondary stream engine */
+ 	} else {
+ 		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
++		WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset,
++			   ~AFMT_AUDIO_SAMPLE_SEND);
+ 	}
+ 
+ 	dig->afmt->enabled = enable;
+diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
+index dd6606b8e23c..e85894ade95c 100644
+--- a/drivers/gpu/drm/radeon/r600_hdmi.c
++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
+@@ -228,12 +228,13 @@ void r600_set_avi_packet(struct radeon_device *rdev, u32 offset,
+ 	WREG32(HDMI0_AVI_INFO3 + offset,
+ 		frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24));
+ 
++	WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset,
++		  HDMI0_AVI_INFO_LINE(2));	/* anything other than 0 */
++
+ 	WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset,
+-		HDMI0_AVI_INFO_SEND |	/* enable AVI info frames */
+-		HDMI0_AVI_INFO_CONT);	/* send AVI info frames every frame/field */
++		  HDMI0_AVI_INFO_SEND |	/* enable AVI info frames */
++		  HDMI0_AVI_INFO_CONT);	/* send AVI info frames every frame/field */
+ 
+-	WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset,
+-		HDMI0_AVI_INFO_LINE(2));	/* anything other than 0 */
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b21ef69a34ac..b7d33a13db9f 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -102,7 +102,6 @@ static void radeon_audio_dp_mode_set(struct drm_encoder *encoder,
+ void r600_hdmi_enable(struct drm_encoder *encoder, bool enable);
+ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable);
+ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable);
+-void dce6_dp_enable(struct drm_encoder *encoder, bool enable);
+ 
+ static const u32 pin_offsets[7] =
+ {
+@@ -240,7 +239,7 @@ static struct radeon_audio_funcs dce6_dp_funcs = {
+ 	.set_avi_packet = evergreen_set_avi_packet,
+ 	.set_audio_packet = dce4_set_audio_packet,
+ 	.mode_set = radeon_audio_dp_mode_set,
+-	.dpms = dce6_dp_enable,
++	.dpms = evergreen_dp_enable,
+ };
+ 
+ static void radeon_audio_interface_init(struct radeon_device *rdev)
+@@ -461,30 +460,33 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	if (!connector || !connector->encoder)
+ 		return;
+ 
++	if (!radeon_encoder_is_digital(connector->encoder))
++		return;
++
+ 	rdev = connector->encoder->dev->dev_private;
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+-	if (status == connector_status_connected) {
+-		struct radeon_connector *radeon_connector;
+-		int sink_type;
+-
+-		if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+-			radeon_encoder->audio = NULL;
+-			return;
+-		}
++	if (!dig->afmt)
++		return;
+ 
+-		radeon_connector = to_radeon_connector(connector);
+-		sink_type = radeon_dp_getsinktype(radeon_connector);
++	if (status == connector_status_connected) {
++		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 
+ 		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
+-			sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
++		    radeon_dp_getsinktype(radeon_connector) ==
++		    CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ 			radeon_encoder->audio = rdev->audio.dp_funcs;
+ 		else
+ 			radeon_encoder->audio = rdev->audio.hdmi_funcs;
+ 
+ 		dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
+-		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
++		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
++		} else {
++			radeon_audio_enable(rdev, dig->afmt->pin, 0);
++			dig->afmt->pin = NULL;
++		}
+ 	} else {
+ 		radeon_audio_enable(rdev, dig->afmt->pin, 0);
+ 		dig->afmt->pin = NULL;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 27def67cb6be..27973e3faf0e 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1333,8 +1333,10 @@ out:
+ 	/* updated in get modes as well since we need to know if it's analog or digital */
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0)
++	if (radeon_audio != 0) {
++		radeon_connector_get_edid(connector);
+ 		radeon_audio_detect(connector, ret);
++	}
+ 
+ exit:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+@@ -1659,8 +1661,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ 
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0)
++	if (radeon_audio != 0) {
++		radeon_connector_get_edid(connector);
+ 		radeon_audio_detect(connector, ret);
++	}
+ 
+ out:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
+index 4d0f96cc3da4..ab39b85e0f76 100644
+--- a/drivers/gpu/drm/radeon/radeon_cs.c
++++ b/drivers/gpu/drm/radeon/radeon_cs.c
+@@ -88,7 +88,7 @@ static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
+ 	p->dma_reloc_idx = 0;
+ 	/* FIXME: we assume that each relocs use 4 dwords */
+ 	p->nrelocs = chunk->length_dw / 4;
+-	p->relocs = kcalloc(p->nrelocs, sizeof(struct radeon_bo_list), GFP_KERNEL);
++	p->relocs = drm_calloc_large(p->nrelocs, sizeof(struct radeon_bo_list));
+ 	if (p->relocs == NULL) {
+ 		return -ENOMEM;
+ 	}
+@@ -428,7 +428,7 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
+ 		}
+ 	}
+ 	kfree(parser->track);
+-	kfree(parser->relocs);
++	drm_free_large(parser->relocs);
+ 	drm_free_large(parser->vm_bos);
+ 	for (i = 0; i < parser->nchunks; i++)
+ 		drm_free_large(parser->chunks[i].kdata);
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index 2a5a4a9e772d..de42fc4a22b8 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -473,6 +473,23 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 	}
+ 
+ 	mutex_lock(&vm->mutex);
++	soffset /= RADEON_GPU_PAGE_SIZE;
++	eoffset /= RADEON_GPU_PAGE_SIZE;
++	if (soffset || eoffset) {
++		struct interval_tree_node *it;
++		it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
++		if (it && it != &bo_va->it) {
++			struct radeon_bo_va *tmp;
++			tmp = container_of(it, struct radeon_bo_va, it);
++			/* bo and tmp overlap, invalid offset */
++			dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
++				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
++				soffset, tmp->bo, tmp->it.start, tmp->it.last);
++			mutex_unlock(&vm->mutex);
++			return -EINVAL;
++		}
++	}
++
+ 	if (bo_va->it.start || bo_va->it.last) {
+ 		if (bo_va->addr) {
+ 			/* add a clone of the bo_va to clear the old address */
+@@ -490,6 +507,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 			spin_lock(&vm->status_lock);
+ 			list_add(&tmp->vm_status, &vm->freed);
+ 			spin_unlock(&vm->status_lock);
++
++			bo_va->addr = 0;
+ 		}
+ 
+ 		interval_tree_remove(&bo_va->it, &vm->va);
+@@ -497,21 +516,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		bo_va->it.last = 0;
+ 	}
+ 
+-	soffset /= RADEON_GPU_PAGE_SIZE;
+-	eoffset /= RADEON_GPU_PAGE_SIZE;
+ 	if (soffset || eoffset) {
+-		struct interval_tree_node *it;
+-		it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1);
+-		if (it) {
+-			struct radeon_bo_va *tmp;
+-			tmp = container_of(it, struct radeon_bo_va, it);
+-			/* bo and tmp overlap, invalid offset */
+-			dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with "
+-				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
+-				soffset, tmp->bo, tmp->it.start, tmp->it.last);
+-			mutex_unlock(&vm->mutex);
+-			return -EINVAL;
+-		}
+ 		bo_va->it.start = soffset;
+ 		bo_va->it.last = eoffset - 1;
+ 		interval_tree_insert(&bo_va->it, &vm->va);
+@@ -1107,7 +1112,8 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
+ 	list_del(&bo_va->bo_list);
+ 
+ 	mutex_lock(&vm->mutex);
+-	interval_tree_remove(&bo_va->it, &vm->va);
++	if (bo_va->it.start || bo_va->it.last)
++		interval_tree_remove(&bo_va->it, &vm->va);
+ 	spin_lock(&vm->status_lock);
+ 	list_del(&bo_va->vm_status);
+ 
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 7be11651b7e6..9dbb3154d559 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2924,6 +2924,7 @@ struct si_dpm_quirk {
+ static struct si_dpm_quirk si_dpm_quirk_list[] = {
+ 	/* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */
+ 	{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
++	{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 },
+ 	{ 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 3736f71bdec5..18def3022f6e 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -787,7 +787,7 @@ int vmbus_request_offers(void)
+ {
+ 	struct vmbus_channel_message_header *msg;
+ 	struct vmbus_channel_msginfo *msginfo;
+-	int ret, t;
++	int ret;
+ 
+ 	msginfo = kmalloc(sizeof(*msginfo) +
+ 			  sizeof(struct vmbus_channel_message_header),
+@@ -795,8 +795,6 @@ int vmbus_request_offers(void)
+ 	if (!msginfo)
+ 		return -ENOMEM;
+ 
+-	init_completion(&msginfo->waitevent);
+-
+ 	msg = (struct vmbus_channel_message_header *)msginfo->msg;
+ 
+ 	msg->msgtype = CHANNELMSG_REQUESTOFFERS;
+@@ -810,14 +808,6 @@ int vmbus_request_offers(void)
+ 		goto cleanup;
+ 	}
+ 
+-	t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
+-	if (t == 0) {
+-		ret = -ETIMEDOUT;
+-		goto cleanup;
+-	}
+-
+-
+-
+ cleanup:
+ 	kfree(msginfo);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index ee394dc68303..ec1ea8ba7aac 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -492,7 +492,7 @@ int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr,
+ 		memoffset = (mtype * (edc_size * 1024 * 1024));
+ 	else {
+ 		mc_size = EXT_MEM0_SIZE_G(t4_read_reg(adap,
+-						      MA_EXT_MEMORY1_BAR_A));
++						      MA_EXT_MEMORY0_BAR_A));
+ 		memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 3485acf03014..2f1324bed7b3 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -1467,6 +1467,7 @@ static void mlx4_en_service_task(struct work_struct *work)
+ 		if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS)
+ 			mlx4_en_ptp_overflow_check(mdev);
+ 
++		mlx4_en_recover_from_oom(priv);
+ 		queue_delayed_work(mdev->workqueue, &priv->service_task,
+ 				   SERVICE_TASK_DELAY);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index 698d60de1255..05ec5e151ded 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -244,6 +244,12 @@ static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv,
+ 	return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
+ }
+ 
++static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
++{
++	BUG_ON((u32)(ring->prod - ring->cons) > ring->actual_size);
++	return ring->prod == ring->cons;
++}
++
+ static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
+ {
+ 	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
+@@ -315,8 +321,7 @@ static void mlx4_en_free_rx_buf(struct mlx4_en_priv *priv,
+ 	       ring->cons, ring->prod);
+ 
+ 	/* Unmap and free Rx buffers */
+-	BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size);
+-	while (ring->cons != ring->prod) {
++	while (!mlx4_en_is_ring_empty(ring)) {
+ 		index = ring->cons & ring->size_mask;
+ 		en_dbg(DRV, priv, "Processing descriptor:%d\n", index);
+ 		mlx4_en_free_rx_desc(priv, ring, index);
+@@ -491,6 +496,23 @@ err_allocator:
+ 	return err;
+ }
+ 
++/* We recover from out of memory by scheduling our napi poll
++ * function (mlx4_en_process_cq), which tries to allocate
++ * all missing RX buffers (call to mlx4_en_refill_rx_buffers).
++ */
++void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
++{
++	int ring;
++
++	if (!priv->port_up)
++		return;
++
++	for (ring = 0; ring < priv->rx_ring_num; ring++) {
++		if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
++			napi_reschedule(&priv->rx_cq[ring]->napi);
++	}
++}
++
+ void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
+ 			     struct mlx4_en_rx_ring **pring,
+ 			     u32 size, u16 stride)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 55f9f5c5344e..8c234ec1d8aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -143,8 +143,10 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
+ 	ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type;
+ 	ring->queue_index = queue_index;
+ 
+-	if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
+-		cpumask_set_cpu(queue_index, &ring->affinity_mask);
++	if (queue_index < priv->num_tx_rings_p_up)
++		cpumask_set_cpu_local_first(queue_index,
++					    priv->mdev->dev->numa_node,
++					    &ring->affinity_mask);
+ 
+ 	*pring = ring;
+ 	return 0;
+@@ -213,7 +215,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
+ 
+ 	err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
+ 			       &ring->qp, &ring->qp_state);
+-	if (!user_prio && cpu_online(ring->queue_index))
++	if (!cpumask_empty(&ring->affinity_mask))
+ 		netif_set_xps_queue(priv->dev, &ring->affinity_mask,
+ 				    ring->queue_index);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index ebbe244e80dd..8687c8d54227 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -790,6 +790,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
+ void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
+ 				struct mlx4_en_tx_ring *ring);
+ void mlx4_en_set_num_rx_rings(struct mlx4_en_dev *mdev);
++void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv);
+ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
+ 			   struct mlx4_en_rx_ring **pring,
+ 			   u32 size, u16 stride, int node);
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 7600639db4c4..add419d6ff34 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -149,7 +149,6 @@ static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset);
+ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry *sglistarg);
+ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id);
+ static char *twa_string_lookup(twa_message_type *table, unsigned int aen_code);
+-static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id);
+ 
+ /* Functions */
+ 
+@@ -1340,11 +1339,11 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
+ 				}
+ 
+ 				/* Now complete the io */
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 				tw_dev->state[request_id] = TW_S_COMPLETED;
+ 				twa_free_request_id(tw_dev, request_id);
+ 				tw_dev->posted_request_count--;
+-				tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-				twa_unmap_scsi_data(tw_dev, request_id);
+ 			}
+ 
+ 			/* Check for valid status after each drain */
+@@ -1402,26 +1401,6 @@ static void twa_load_sgl(TW_Device_Extension *tw_dev, TW_Command_Full *full_comm
+ 	}
+ } /* End twa_load_sgl() */
+ 
+-/* This function will perform a pci-dma mapping for a scatter gather list */
+-static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	int use_sg;
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (!use_sg)
+-		return 0;
+-	else if (use_sg < 0) {
+-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End twa_map_scsi_sg_data() */
+-
+ /* This function will poll for a response interrupt of a request */
+ static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds)
+ {
+@@ -1600,9 +1579,11 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
+ 		    (tw_dev->state[i] != TW_S_INITIAL) &&
+ 		    (tw_dev->state[i] != TW_S_COMPLETED)) {
+ 			if (tw_dev->srb[i]) {
+-				tw_dev->srb[i]->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				twa_unmap_scsi_data(tw_dev, i);
++				struct scsi_cmnd *cmd = tw_dev->srb[i];
++
++				cmd->result = (DID_RESET << 16);
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 			}
+ 		}
+ 	}
+@@ -1781,21 +1762,18 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ 	switch (retval) {
+ 	case SCSI_MLQUEUE_HOST_BUSY:
++		scsi_dma_unmap(SCpnt);
+ 		twa_free_request_id(tw_dev, request_id);
+-		twa_unmap_scsi_data(tw_dev, request_id);
+ 		break;
+ 	case 1:
+-		tw_dev->state[request_id] = TW_S_COMPLETED;
+-		twa_free_request_id(tw_dev, request_id);
+-		twa_unmap_scsi_data(tw_dev, request_id);
+ 		SCpnt->result = (DID_ERROR << 16);
++		scsi_dma_unmap(SCpnt);
+ 		done(SCpnt);
++		tw_dev->state[request_id] = TW_S_COMPLETED;
++		twa_free_request_id(tw_dev, request_id);
+ 		retval = 0;
+ 	}
+ out:
+@@ -1863,8 +1841,8 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
+ 				command_packet->sg_list[0].address = TW_CPU_TO_SGL(tw_dev->generic_buffer_phys[request_id]);
+ 				command_packet->sg_list[0].length = cpu_to_le32(TW_MIN_SGL_LENGTH);
+ 			} else {
+-				sg_count = twa_map_scsi_sg_data(tw_dev, request_id);
+-				if (sg_count == 0)
++				sg_count = scsi_dma_map(srb);
++				if (sg_count < 0)
+ 					goto out;
+ 
+ 				scsi_for_each_sg(srb, sg, sg_count, i) {
+@@ -1979,15 +1957,6 @@ static char *twa_string_lookup(twa_message_type *table, unsigned int code)
+ 	return(table[index].text);
+ } /* End twa_string_lookup() */
+ 
+-/* This function will perform a pci-dma unmap */
+-static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End twa_unmap_scsi_data() */
+-
+ /* This function gets called when a disk is coming on-line */
+ static int twa_slave_configure(struct scsi_device *sdev)
+ {
+diff --git a/drivers/scsi/3w-9xxx.h b/drivers/scsi/3w-9xxx.h
+index 040f7214e5b7..0fdc83cfa0e1 100644
+--- a/drivers/scsi/3w-9xxx.h
++++ b/drivers/scsi/3w-9xxx.h
+@@ -324,11 +324,6 @@ static twa_message_type twa_error_table[] = {
+ #define TW_CURRENT_DRIVER_BUILD 0
+ #define TW_CURRENT_DRIVER_BRANCH 0
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SINGLE  1
+-#define TW_PHASE_SGLIST  2
+-
+ /* Misc defines */
+ #define TW_9550SX_DRAIN_COMPLETED	      0xFFFF
+ #define TW_SECTOR_SIZE                        512
+diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
+index 2361772d5909..f8374850f714 100644
+--- a/drivers/scsi/3w-sas.c
++++ b/drivers/scsi/3w-sas.c
+@@ -290,26 +290,6 @@ static int twl_post_command_packet(TW_Device_Extension *tw_dev, int request_id)
+ 	return 0;
+ } /* End twl_post_command_packet() */
+ 
+-/* This function will perform a pci-dma mapping for a scatter gather list */
+-static int twl_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	int use_sg;
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (!use_sg)
+-		return 0;
+-	else if (use_sg < 0) {
+-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1, "Failed to map scatter gather list");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End twl_map_scsi_sg_data() */
+-
+ /* This function hands scsi cdb's to the firmware */
+ static int twl_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry_ISO *sglistarg)
+ {
+@@ -357,8 +337,8 @@ static int twl_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
+ 	if (!sglistarg) {
+ 		/* Map sglist from scsi layer to cmd packet */
+ 		if (scsi_sg_count(srb)) {
+-			sg_count = twl_map_scsi_sg_data(tw_dev, request_id);
+-			if (sg_count == 0)
++			sg_count = scsi_dma_map(srb);
++			if (sg_count <= 0)
+ 				goto out;
+ 
+ 			scsi_for_each_sg(srb, sg, sg_count, i) {
+@@ -1102,15 +1082,6 @@ out:
+ 	return retval;
+ } /* End twl_initialize_device_extension() */
+ 
+-/* This function will perform a pci-dma unmap */
+-static void twl_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+-{
+-	struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End twl_unmap_scsi_data() */
+-
+ /* This function will handle attention interrupts */
+ static int twl_handle_attention_interrupt(TW_Device_Extension *tw_dev)
+ {
+@@ -1251,11 +1222,11 @@ static irqreturn_t twl_interrupt(int irq, void *dev_instance)
+ 			}
+ 
+ 			/* Now complete the io */
++			scsi_dma_unmap(cmd);
++			cmd->scsi_done(cmd);
+ 			tw_dev->state[request_id] = TW_S_COMPLETED;
+ 			twl_free_request_id(tw_dev, request_id);
+ 			tw_dev->posted_request_count--;
+-			tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-			twl_unmap_scsi_data(tw_dev, request_id);
+ 		}
+ 
+ 		/* Check for another response interrupt */
+@@ -1400,10 +1371,12 @@ static int twl_reset_device_extension(TW_Device_Extension *tw_dev, int ioctl_res
+ 		if ((tw_dev->state[i] != TW_S_FINISHED) &&
+ 		    (tw_dev->state[i] != TW_S_INITIAL) &&
+ 		    (tw_dev->state[i] != TW_S_COMPLETED)) {
+-			if (tw_dev->srb[i]) {
+-				tw_dev->srb[i]->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				twl_unmap_scsi_data(tw_dev, i);
++			struct scsi_cmnd *cmd = tw_dev->srb[i];
++
++			if (cmd) {
++				cmd->result = (DID_RESET << 16);
++				scsi_dma_unmap(cmd);
++				cmd->scsi_done(cmd);
+ 			}
+ 		}
+ 	}
+@@ -1507,9 +1480,6 @@ static int twl_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	retval = twl_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ 	if (retval) {
+ 		tw_dev->state[request_id] = TW_S_COMPLETED;
+diff --git a/drivers/scsi/3w-sas.h b/drivers/scsi/3w-sas.h
+index d474892701d4..fec6449c7595 100644
+--- a/drivers/scsi/3w-sas.h
++++ b/drivers/scsi/3w-sas.h
+@@ -103,10 +103,6 @@ static char *twl_aen_severity_table[] =
+ #define TW_CURRENT_DRIVER_BUILD 0
+ #define TW_CURRENT_DRIVER_BRANCH 0
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SGLIST  2
+-
+ /* Misc defines */
+ #define TW_SECTOR_SIZE                        512
+ #define TW_MAX_UNITS			      32
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index c75f2048319f..2940bd769936 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -1271,32 +1271,6 @@ static int tw_initialize_device_extension(TW_Device_Extension *tw_dev)
+ 	return 0;
+ } /* End tw_initialize_device_extension() */
+ 
+-static int tw_map_scsi_sg_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
+-{
+-	int use_sg;
+-
+-	dprintk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data()\n");
+-
+-	use_sg = scsi_dma_map(cmd);
+-	if (use_sg < 0) {
+-		printk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data(): pci_map_sg() failed.\n");
+-		return 0;
+-	}
+-
+-	cmd->SCp.phase = TW_PHASE_SGLIST;
+-	cmd->SCp.have_data_in = use_sg;
+-
+-	return use_sg;
+-} /* End tw_map_scsi_sg_data() */
+-
+-static void tw_unmap_scsi_data(struct pci_dev *pdev, struct scsi_cmnd *cmd)
+-{
+-	dprintk(KERN_WARNING "3w-xxxx: tw_unmap_scsi_data()\n");
+-
+-	if (cmd->SCp.phase == TW_PHASE_SGLIST)
+-		scsi_dma_unmap(cmd);
+-} /* End tw_unmap_scsi_data() */
+-
+ /* This function will reset a device extension */
+ static int tw_reset_device_extension(TW_Device_Extension *tw_dev)
+ {
+@@ -1319,8 +1293,8 @@ static int tw_reset_device_extension(TW_Device_Extension *tw_dev)
+ 			srb = tw_dev->srb[i];
+ 			if (srb != NULL) {
+ 				srb->result = (DID_RESET << 16);
+-				tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+-				tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[i]);
++				scsi_dma_unmap(srb);
++				srb->scsi_done(srb);
+ 			}
+ 		}
+ 	}
+@@ -1767,8 +1741,8 @@ static int tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id)
+ 	command_packet->byte8.io.lba = lba;
+ 	command_packet->byte6.block_count = num_sectors;
+ 
+-	use_sg = tw_map_scsi_sg_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
+-	if (!use_sg)
++	use_sg = scsi_dma_map(srb);
++	if (use_sg <= 0)
+ 		return 1;
+ 
+ 	scsi_for_each_sg(tw_dev->srb[request_id], sg, use_sg, i) {
+@@ -1955,9 +1929,6 @@ static int tw_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_c
+ 	/* Save the scsi command for use by the ISR */
+ 	tw_dev->srb[request_id] = SCpnt;
+ 
+-	/* Initialize phase to zero */
+-	SCpnt->SCp.phase = TW_PHASE_INITIAL;
+-
+ 	switch (*command) {
+ 		case READ_10:
+ 		case READ_6:
+@@ -2185,12 +2156,11 @@ static irqreturn_t tw_interrupt(int irq, void *dev_instance)
+ 
+ 				/* Now complete the io */
+ 				if ((error != TW_ISR_DONT_COMPLETE)) {
++					scsi_dma_unmap(tw_dev->srb[request_id]);
++					tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+ 					tw_dev->state[request_id] = TW_S_COMPLETED;
+ 					tw_state_request_finish(tw_dev, request_id);
+ 					tw_dev->posted_request_count--;
+-					tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+-					
+-					tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]);
+ 				}
+ 			}
+ 				
+diff --git a/drivers/scsi/3w-xxxx.h b/drivers/scsi/3w-xxxx.h
+index 29b0b84ed69e..6f65e663d393 100644
+--- a/drivers/scsi/3w-xxxx.h
++++ b/drivers/scsi/3w-xxxx.h
+@@ -195,11 +195,6 @@ static unsigned char tw_sense_table[][4] =
+ #define TW_AEN_SMART_FAIL        0x000F
+ #define TW_AEN_SBUF_FAIL         0x0024
+ 
+-/* Phase defines */
+-#define TW_PHASE_INITIAL 0
+-#define TW_PHASE_SINGLE 1
+-#define TW_PHASE_SGLIST 2
+-
+ /* Misc defines */
+ #define TW_ALIGNMENT_6000		      64 /* 64 bytes */
+ #define TW_ALIGNMENT_7000                     4  /* 4 bytes */
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index 262ab837a704..9f77d23239a2 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -226,6 +226,7 @@ static struct {
+ 	{"PIONEER", "CD-ROM DRM-624X", NULL, BLIST_FORCELUN | BLIST_SINGLELUN},
+ 	{"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC},
+ 	{"Promise", "", NULL, BLIST_SPARSELUN},
++	{"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024},
+ 	{"QUANTUM", "XP34301", "1071", BLIST_NOTQ},
+ 	{"REGAL", "CDC-4X", NULL, BLIST_MAX5LUN | BLIST_SINGLELUN},
+ 	{"SanDisk", "ImageMate CF-SD1", NULL, BLIST_FORCELUN},
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 9c0a520d933c..3e6142f61499 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -897,6 +897,12 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 	 */
+ 	if (*bflags & BLIST_MAX_512)
+ 		blk_queue_max_hw_sectors(sdev->request_queue, 512);
++	/*
++	 * Max 1024 sector transfer length for targets that report incorrect
++	 * max/optimal lengths and relied on the old block layer safe default
++	 */
++	else if (*bflags & BLIST_MAX_1024)
++		blk_queue_max_hw_sectors(sdev->request_queue, 1024);
+ 
+ 	/*
+ 	 * Some devices may not want to have a start command automatically
+diff --git a/drivers/ssb/Kconfig b/drivers/ssb/Kconfig
+index 75b3603906c1..f0d22cdb51cd 100644
+--- a/drivers/ssb/Kconfig
++++ b/drivers/ssb/Kconfig
+@@ -130,6 +130,7 @@ config SSB_DRIVER_MIPS
+ 	bool "SSB Broadcom MIPS core driver"
+ 	depends on SSB && MIPS
+ 	select SSB_SERIAL
++	select SSB_SFLASH
+ 	help
+ 	  Driver for the Sonics Silicon Backplane attached
+ 	  Broadcom MIPS core.
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 4e959c43f680..6afce7eb3d74 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -880,6 +880,7 @@ static int atmel_prepare_tx_dma(struct uart_port *port)
+ 	config.direction = DMA_MEM_TO_DEV;
+ 	config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	config.dst_addr = port->mapbase + ATMEL_US_THR;
++	config.dst_maxburst = 1;
+ 
+ 	ret = dmaengine_slave_config(atmel_port->chan_tx,
+ 				     &config);
+@@ -1059,6 +1060,7 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 	config.direction = DMA_DEV_TO_MEM;
+ 	config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	config.src_addr = port->mapbase + ATMEL_US_RHR;
++	config.src_maxburst = 1;
+ 
+ 	ret = dmaengine_slave_config(atmel_port->chan_rx,
+ 				     &config);
+diff --git a/drivers/tty/serial/of_serial.c b/drivers/tty/serial/of_serial.c
+index 33fb94f78967..0a52c8b55a5f 100644
+--- a/drivers/tty/serial/of_serial.c
++++ b/drivers/tty/serial/of_serial.c
+@@ -344,7 +344,6 @@ static struct of_device_id of_platform_serial_table[] = {
+ 	{ .compatible = "ibm,qpace-nwp-serial",
+ 		.data = (void *)PORT_NWPSERIAL, },
+ #endif
+-	{ .type = "serial",         .data = (void *)PORT_UNKNOWN, },
+ 	{ /* end of list */ },
+ };
+ 
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index 189f52e3111f..a0099a7f60d4 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -632,7 +632,8 @@ MODULE_DEVICE_TABLE(of, ulite_of_match);
+ 
+ static int ulite_probe(struct platform_device *pdev)
+ {
+-	struct resource *res, *res2;
++	struct resource *res;
++	int irq;
+ 	int id = pdev->id;
+ #ifdef CONFIG_OF
+ 	const __be32 *prop;
+@@ -646,11 +647,11 @@ static int ulite_probe(struct platform_device *pdev)
+ 	if (!res)
+ 		return -ENODEV;
+ 
+-	res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!res2)
+-		return -ENODEV;
++	irq = platform_get_irq(pdev, 0);
++	if (irq <= 0)
++		return -ENXIO;
+ 
+-	return ulite_assign(&pdev->dev, id, res->start, res2->start);
++	return ulite_assign(&pdev->dev, id, res->start, irq);
+ }
+ 
+ static int ulite_remove(struct platform_device *pdev)
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index cff531a51a78..54853a02ce9e 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1325,9 +1325,9 @@ static SIMPLE_DEV_PM_OPS(cdns_uart_dev_pm_ops, cdns_uart_suspend,
+  */
+ static int cdns_uart_probe(struct platform_device *pdev)
+ {
+-	int rc, id;
++	int rc, id, irq;
+ 	struct uart_port *port;
+-	struct resource *res, *res2;
++	struct resource *res;
+ 	struct cdns_uart *cdns_uart_data;
+ 
+ 	cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data),
+@@ -1374,9 +1374,9 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		goto err_out_clk_disable;
+ 	}
+ 
+-	res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+-	if (!res2) {
+-		rc = -ENODEV;
++	irq = platform_get_irq(pdev, 0);
++	if (irq <= 0) {
++		rc = -ENXIO;
+ 		goto err_out_clk_disable;
+ 	}
+ 
+@@ -1405,7 +1405,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		 * and triggers invocation of the config_port() entry point.
+ 		 */
+ 		port->mapbase = res->start;
+-		port->irq = res2->start;
++		port->irq = irq;
+ 		port->dev = &pdev->dev;
+ 		port->uartclk = clk_get_rate(cdns_uart_data->uartclk);
+ 		port->private_data = cdns_uart_data;
+diff --git a/drivers/usb/chipidea/otg_fsm.c b/drivers/usb/chipidea/otg_fsm.c
+index 562e581f6765..3770330a2201 100644
+--- a/drivers/usb/chipidea/otg_fsm.c
++++ b/drivers/usb/chipidea/otg_fsm.c
+@@ -537,7 +537,6 @@ static int ci_otg_start_host(struct otg_fsm *fsm, int on)
+ {
+ 	struct ci_hdrc	*ci = container_of(fsm, struct ci_hdrc, fsm);
+ 
+-	mutex_unlock(&fsm->lock);
+ 	if (on) {
+ 		ci_role_stop(ci);
+ 		ci_role_start(ci, CI_ROLE_HOST);
+@@ -546,7 +545,6 @@ static int ci_otg_start_host(struct otg_fsm *fsm, int on)
+ 		hw_device_reset(ci);
+ 		ci_role_start(ci, CI_ROLE_GADGET);
+ 	}
+-	mutex_lock(&fsm->lock);
+ 	return 0;
+ }
+ 
+@@ -554,12 +552,10 @@ static int ci_otg_start_gadget(struct otg_fsm *fsm, int on)
+ {
+ 	struct ci_hdrc	*ci = container_of(fsm, struct ci_hdrc, fsm);
+ 
+-	mutex_unlock(&fsm->lock);
+ 	if (on)
+ 		usb_gadget_vbus_connect(&ci->gadget);
+ 	else
+ 		usb_gadget_vbus_disconnect(&ci->gadget);
+-	mutex_lock(&fsm->lock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 683617714e7c..220c0fd059bb 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1133,11 +1133,16 @@ static int acm_probe(struct usb_interface *intf,
+ 	}
+ 
+ 	while (buflen > 0) {
++		elength = buffer[0];
++		if (!elength) {
++			dev_err(&intf->dev, "skipping garbage byte\n");
++			elength = 1;
++			goto next_desc;
++		}
+ 		if (buffer[1] != USB_DT_CS_INTERFACE) {
+ 			dev_err(&intf->dev, "skipping garbage\n");
+ 			goto next_desc;
+ 		}
+-		elength = buffer[0];
+ 
+ 		switch (buffer[2]) {
+ 		case USB_CDC_UNION_TYPE: /* we've found it */
+diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
+index 9893d696fc97..f58caa9e6a27 100644
+--- a/drivers/usb/storage/uas-detect.h
++++ b/drivers/usb/storage/uas-detect.h
+@@ -51,7 +51,8 @@ static int uas_find_endpoints(struct usb_host_interface *alt,
+ }
+ 
+ static int uas_use_uas_driver(struct usb_interface *intf,
+-			      const struct usb_device_id *id)
++			      const struct usb_device_id *id,
++			      unsigned long *flags_ret)
+ {
+ 	struct usb_host_endpoint *eps[4] = { };
+ 	struct usb_device *udev = interface_to_usbdev(intf);
+@@ -73,7 +74,7 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 	 * this writing the following versions exist:
+ 	 * ASM1051 - no uas support version
+ 	 * ASM1051 - with broken (*) uas support
+-	 * ASM1053 - with working uas support
++	 * ASM1053 - with working uas support, but problems with large xfers
+ 	 * ASM1153 - with working uas support
+ 	 *
+ 	 * Devices with these chips re-use a number of device-ids over the
+@@ -103,6 +104,9 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 		} else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) {
+ 			/* Possibly an ASM1051, disable uas */
+ 			flags |= US_FL_IGNORE_UAS;
++		} else {
++			/* ASM1053, these have issues with large transfers */
++			flags |= US_FL_MAX_SECTORS_240;
+ 		}
+ 	}
+ 
+@@ -132,5 +136,8 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 		return 0;
+ 	}
+ 
++	if (flags_ret)
++		*flags_ret = flags;
++
+ 	return 1;
+ }
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 6cdabdc119a7..6d3122afeed3 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -759,7 +759,10 @@ static int uas_eh_bus_reset_handler(struct scsi_cmnd *cmnd)
+ 
+ static int uas_slave_alloc(struct scsi_device *sdev)
+ {
+-	sdev->hostdata = (void *)sdev->host->hostdata;
++	struct uas_dev_info *devinfo =
++		(struct uas_dev_info *)sdev->host->hostdata;
++
++	sdev->hostdata = devinfo;
+ 
+ 	/* USB has unusual DMA-alignment requirements: Although the
+ 	 * starting address of each scatter-gather element doesn't matter,
+@@ -778,6 +781,11 @@ static int uas_slave_alloc(struct scsi_device *sdev)
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
++	if (devinfo->flags & US_FL_MAX_SECTORS_64)
++		blk_queue_max_hw_sectors(sdev->request_queue, 64);
++	else if (devinfo->flags & US_FL_MAX_SECTORS_240)
++		blk_queue_max_hw_sectors(sdev->request_queue, 240);
++
+ 	return 0;
+ }
+ 
+@@ -887,8 +895,9 @@ static int uas_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	struct Scsi_Host *shost = NULL;
+ 	struct uas_dev_info *devinfo;
+ 	struct usb_device *udev = interface_to_usbdev(intf);
++	unsigned long dev_flags;
+ 
+-	if (!uas_use_uas_driver(intf, id))
++	if (!uas_use_uas_driver(intf, id, &dev_flags))
+ 		return -ENODEV;
+ 
+ 	if (uas_switch_interface(udev, intf))
+@@ -910,8 +919,7 @@ static int uas_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	devinfo->udev = udev;
+ 	devinfo->resetting = 0;
+ 	devinfo->shutdown = 0;
+-	devinfo->flags = id->driver_info;
+-	usb_stor_adjust_quirks(udev, &devinfo->flags);
++	devinfo->flags = dev_flags;
+ 	init_usb_anchor(&devinfo->cmd_urbs);
+ 	init_usb_anchor(&devinfo->sense_urbs);
+ 	init_usb_anchor(&devinfo->data_urbs);
+diff --git a/drivers/usb/storage/usb.c b/drivers/usb/storage/usb.c
+index 5600c33fcadb..6c10c888f35f 100644
+--- a/drivers/usb/storage/usb.c
++++ b/drivers/usb/storage/usb.c
+@@ -479,7 +479,8 @@ void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags)
+ 			US_FL_SINGLE_LUN | US_FL_NO_WP_DETECT |
+ 			US_FL_NO_READ_DISC_INFO | US_FL_NO_READ_CAPACITY_16 |
+ 			US_FL_INITIAL_READ10 | US_FL_WRITE_CACHE |
+-			US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES);
++			US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES |
++			US_FL_MAX_SECTORS_240);
+ 
+ 	p = quirks;
+ 	while (*p) {
+@@ -520,6 +521,9 @@ void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags)
+ 		case 'f':
+ 			f |= US_FL_NO_REPORT_OPCODES;
+ 			break;
++		case 'g':
++			f |= US_FL_MAX_SECTORS_240;
++			break;
+ 		case 'h':
+ 			f |= US_FL_CAPACITY_HEURISTICS;
+ 			break;
+@@ -1080,7 +1084,7 @@ static int storage_probe(struct usb_interface *intf,
+ 
+ 	/* If uas is enabled and this device can do uas then ignore it. */
+ #if IS_ENABLED(CONFIG_USB_UAS)
+-	if (uas_use_uas_driver(intf, id))
++	if (uas_use_uas_driver(intf, id, NULL))
+ 		return -ENXIO;
+ #endif
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f23d4be3280e..2b4c5423672d 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2403,7 +2403,7 @@ static noinline int btrfs_ioctl_snap_destroy(struct file *file,
+ 			"Attempt to delete subvolume %llu during send",
+ 			dest->root_key.objectid);
+ 		err = -EPERM;
+-		goto out_dput;
++		goto out_unlock_inode;
+ 	}
+ 
+ 	d_invalidate(dentry);
+@@ -2498,6 +2498,7 @@ out_up_write:
+ 				root_flags & ~BTRFS_ROOT_SUBVOL_DEAD);
+ 		spin_unlock(&dest->root_item_lock);
+ 	}
++out_unlock_inode:
+ 	mutex_unlock(&inode->i_mutex);
+ 	if (!err) {
+ 		shrink_dcache_sb(root->fs_info->sb);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index bed43081720f..16f6365f65e7 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4934,13 +4934,6 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 	if (ret)
+ 		return ret;
+ 
+-	/*
+-	 * currently supporting (pre)allocate mode for extent-based
+-	 * files _only_
+-	 */
+-	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-		return -EOPNOTSUPP;
+-
+ 	if (mode & FALLOC_FL_COLLAPSE_RANGE)
+ 		return ext4_collapse_range(inode, offset, len);
+ 
+@@ -4962,6 +4955,14 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 
+ 	mutex_lock(&inode->i_mutex);
+ 
++	/*
++	 * We only support preallocation for extent-based files only
++	 */
++	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
++
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
+ 	     offset + len > i_size_read(inode)) {
+ 		new_size = offset + len;
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index e04d45733976..9a0121376358 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -705,6 +705,14 @@ int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
+ 
+ 	BUG_ON(end < lblk);
+ 
++	if ((status & EXTENT_STATUS_DELAYED) &&
++	    (status & EXTENT_STATUS_WRITTEN)) {
++		ext4_warning(inode->i_sb, "Inserting extent [%u/%u] as "
++				" delayed and written which can potentially "
++				" cause data loss.\n", lblk, len);
++		WARN_ON(1);
++	}
++
+ 	newes.es_lblk = lblk;
+ 	newes.es_len = len;
+ 	ext4_es_store_pblock_status(&newes, pblk, status);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 5cb9a212b86f..852cc521f327 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -534,6 +534,7 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
+ 		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
++		    !(status & EXTENT_STATUS_WRITTEN) &&
+ 		    ext4_find_delalloc_range(inode, map->m_lblk,
+ 					     map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+@@ -638,6 +639,7 @@ found:
+ 		status = map->m_flags & EXT4_MAP_UNWRITTEN ?
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
++		    !(status & EXTENT_STATUS_WRITTEN) &&
+ 		    ext4_find_delalloc_range(inode, map->m_lblk,
+ 					     map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index d98094a9f476..ff10f3decbc9 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -806,9 +806,6 @@ end_removexattr:
+ static int hfsplus_osx_getxattr(struct dentry *dentry, const char *name,
+ 					void *buffer, size_t size, int type)
+ {
+-	char *xattr_name;
+-	int res;
+-
+ 	if (!strcmp(name, ""))
+ 		return -EINVAL;
+ 
+@@ -818,24 +815,19 @@ static int hfsplus_osx_getxattr(struct dentry *dentry, const char *name,
+ 	 */
+ 	if (is_known_namespace(name))
+ 		return -EOPNOTSUPP;
+-	xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN
+-		+ XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL);
+-	if (!xattr_name)
+-		return -ENOMEM;
+-	strcpy(xattr_name, XATTR_MAC_OSX_PREFIX);
+-	strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name);
+ 
+-	res = hfsplus_getxattr(dentry, xattr_name, buffer, size);
+-	kfree(xattr_name);
+-	return res;
++	/*
++	 * osx is the namespace we use to indicate an unprefixed
++	 * attribute on the filesystem (like the ones that OS X
++	 * creates), so we pass the name through unmodified (after
++	 * ensuring it doesn't conflict with another namespace).
++	 */
++	return hfsplus_getxattr(dentry, name, buffer, size);
+ }
+ 
+ static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name,
+ 		const void *buffer, size_t size, int flags, int type)
+ {
+-	char *xattr_name;
+-	int res;
+-
+ 	if (!strcmp(name, ""))
+ 		return -EINVAL;
+ 
+@@ -845,16 +837,14 @@ static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name,
+ 	 */
+ 	if (is_known_namespace(name))
+ 		return -EOPNOTSUPP;
+-	xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN
+-		+ XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL);
+-	if (!xattr_name)
+-		return -ENOMEM;
+-	strcpy(xattr_name, XATTR_MAC_OSX_PREFIX);
+-	strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name);
+ 
+-	res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags);
+-	kfree(xattr_name);
+-	return res;
++	/*
++	 * osx is the namespace we use to indicate an unprefixed
++	 * attribute on the filesystem (like the ones that OS X
++	 * creates), so we pass the name through unmodified (after
++	 * ensuring it doesn't conflict with another namespace).
++	 */
++	return hfsplus_setxattr(dentry, name, buffer, size, flags);
+ }
+ 
+ static size_t hfsplus_osx_listxattr(struct dentry *dentry, char *list,
+diff --git a/include/linux/usb_usual.h b/include/linux/usb_usual.h
+index a7f2604c5f25..7f5f78bd15ad 100644
+--- a/include/linux/usb_usual.h
++++ b/include/linux/usb_usual.h
+@@ -77,6 +77,8 @@
+ 		/* Cannot handle ATA_12 or ATA_16 CDBs */	\
+ 	US_FLAG(NO_REPORT_OPCODES,	0x04000000)		\
+ 		/* Cannot handle MI_REPORT_SUPPORTED_OPERATION_CODES */	\
++	US_FLAG(MAX_SECTORS_240,	0x08000000)		\
++		/* Sets max_sectors to 240 */			\
+ 
+ #define US_FLAG(name, value)	US_FL_##name = value ,
+ enum { US_DO_ALL_FLAGS };
+diff --git a/include/scsi/scsi_devinfo.h b/include/scsi/scsi_devinfo.h
+index 183eaab7c380..96e3f56519e7 100644
+--- a/include/scsi/scsi_devinfo.h
++++ b/include/scsi/scsi_devinfo.h
+@@ -36,5 +36,6 @@
+ 					     for sequential scan */
+ #define BLIST_TRY_VPD_PAGES	0x10000000 /* Attempt to read VPD pages */
+ #define BLIST_NO_RSOC		0x20000000 /* don't try to issue RSOC */
++#define BLIST_MAX_1024		0x40000000 /* maximum 1024 sector cdb length */
+ 
+ #endif
+diff --git a/include/sound/emu10k1.h b/include/sound/emu10k1.h
+index 0de95ccb92cf..5bd134651f5e 100644
+--- a/include/sound/emu10k1.h
++++ b/include/sound/emu10k1.h
+@@ -41,7 +41,8 @@
+ 
+ #define EMUPAGESIZE     4096
+ #define MAXREQVOICES    8
+-#define MAXPAGES        8192
++#define MAXPAGES0       4096	/* 32 bit mode */
++#define MAXPAGES1       8192	/* 31 bit mode */
+ #define RESERVED        0
+ #define NUM_MIDI        16
+ #define NUM_G           64              /* use all channels */
+@@ -50,8 +51,7 @@
+ 
+ /* FIXME? - according to the OSS driver the EMU10K1 needs a 29 bit DMA mask */
+ #define EMU10K1_DMA_MASK	0x7fffffffUL	/* 31bit */
+-#define AUDIGY_DMA_MASK		0x7fffffffUL	/* 31bit FIXME - 32 should work? */
+-						/* See ALSA bug #1276 - rlrevell */
++#define AUDIGY_DMA_MASK		0xffffffffUL	/* 32bit mode */
+ 
+ #define TMEMSIZE        256*1024
+ #define TMEMSIZEREG     4
+@@ -466,8 +466,11 @@
+ 
+ #define MAPB			0x0d		/* Cache map B						*/
+ 
+-#define MAP_PTE_MASK		0xffffe000	/* The 19 MSBs of the PTE indexed by the PTI		*/
+-#define MAP_PTI_MASK		0x00001fff	/* The 13 bit index to one of the 8192 PTE dwords      	*/
++#define MAP_PTE_MASK0		0xfffff000	/* The 20 MSBs of the PTE indexed by the PTI		*/
++#define MAP_PTI_MASK0		0x00000fff	/* The 12 bit index to one of the 4096 PTE dwords      	*/
++
++#define MAP_PTE_MASK1		0xffffe000	/* The 19 MSBs of the PTE indexed by the PTI		*/
++#define MAP_PTI_MASK1		0x00001fff	/* The 13 bit index to one of the 8192 PTE dwords      	*/
+ 
+ /* 0x0e, 0x0f: Not used */
+ 
+@@ -1704,6 +1707,7 @@ struct snd_emu10k1 {
+ 	unsigned short model;			/* subsystem id */
+ 	unsigned int card_type;			/* EMU10K1_CARD_* */
+ 	unsigned int ecard_ctrl;		/* ecard control bits */
++	unsigned int address_mode;		/* address mode */
+ 	unsigned long dma_mask;			/* PCI DMA mask */
+ 	unsigned int delay_pcm_irq;		/* in samples */
+ 	int max_cache_pages;			/* max memory size / PAGE_SIZE */
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index 8d7416e46861..15355892a0ff 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -287,7 +287,7 @@ struct device;
+ 	.access = SNDRV_CTL_ELEM_ACCESS_TLV_READ | SNDRV_CTL_ELEM_ACCESS_READWRITE,\
+ 	.tlv.p = (tlv_array), \
+ 	.get = snd_soc_dapm_get_volsw, .put = snd_soc_dapm_put_volsw, \
+-	.private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 0) }
++	.private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 1) }
+ #define SOC_DAPM_SINGLE_TLV_VIRT(xname, max, tlv_array) \
+ 	SOC_DAPM_SINGLE(xname, SND_SOC_NOPM, 0, max, 0, tlv_array)
+ #define SOC_DAPM_ENUM(xname, xenum) \
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index a64e7a207d2b..0c5796eadae1 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -357,8 +357,8 @@ select_insn:
+ 	ALU64_MOD_X:
+ 		if (unlikely(SRC == 0))
+ 			return 0;
+-		tmp = DST;
+-		DST = do_div(tmp, SRC);
++		div64_u64_rem(DST, SRC, &tmp);
++		DST = tmp;
+ 		CONT;
+ 	ALU_MOD_X:
+ 		if (unlikely(SRC == 0))
+@@ -367,8 +367,8 @@ select_insn:
+ 		DST = do_div(tmp, (u32) SRC);
+ 		CONT;
+ 	ALU64_MOD_K:
+-		tmp = DST;
+-		DST = do_div(tmp, IMM);
++		div64_u64_rem(DST, IMM, &tmp);
++		DST = tmp;
+ 		CONT;
+ 	ALU_MOD_K:
+ 		tmp = (u32) DST;
+@@ -377,7 +377,7 @@ select_insn:
+ 	ALU64_DIV_X:
+ 		if (unlikely(SRC == 0))
+ 			return 0;
+-		do_div(DST, SRC);
++		DST = div64_u64(DST, SRC);
+ 		CONT;
+ 	ALU_DIV_X:
+ 		if (unlikely(SRC == 0))
+@@ -387,7 +387,7 @@ select_insn:
+ 		DST = (u32) tmp;
+ 		CONT;
+ 	ALU64_DIV_K:
+-		do_div(DST, IMM);
++		DST = div64_u64(DST, IMM);
+ 		CONT;
+ 	ALU_DIV_K:
+ 		tmp = (u32) DST;
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 208d5439e59b..787b0d699969 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -158,6 +158,7 @@ void ping_unhash(struct sock *sk)
+ 	if (sk_hashed(sk)) {
+ 		write_lock_bh(&ping_table.lock);
+ 		hlist_nulls_del(&sk->sk_nulls_node);
++		sk_nulls_node_init(&sk->sk_nulls_node);
+ 		sock_put(sk);
+ 		isk->inet_num = 0;
+ 		isk->inet_sport = 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index ad5064362c5c..20fc0202cbbe 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -963,10 +963,7 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ 	if (dst_metric_locked(dst, RTAX_MTU))
+ 		return;
+ 
+-	if (dst->dev->mtu < mtu)
+-		return;
+-
+-	if (rt->rt_pmtu && rt->rt_pmtu < mtu)
++	if (ipv4_mtu(dst) < mtu)
+ 		return;
+ 
+ 	if (mtu < ip_rt_min_pmtu)
+diff --git a/sound/pci/emu10k1/emu10k1.c b/sound/pci/emu10k1/emu10k1.c
+index 37d0220a094c..db7a2e5e4a14 100644
+--- a/sound/pci/emu10k1/emu10k1.c
++++ b/sound/pci/emu10k1/emu10k1.c
+@@ -183,8 +183,10 @@ static int snd_card_emu10k1_probe(struct pci_dev *pci,
+ 	}
+ #endif
+  
+-	strcpy(card->driver, emu->card_capabilities->driver);
+-	strcpy(card->shortname, emu->card_capabilities->name);
++	strlcpy(card->driver, emu->card_capabilities->driver,
++		sizeof(card->driver));
++	strlcpy(card->shortname, emu->card_capabilities->name,
++		sizeof(card->shortname));
+ 	snprintf(card->longname, sizeof(card->longname),
+ 		 "%s (rev.%d, serial:0x%x) at 0x%lx, irq %i",
+ 		 card->shortname, emu->revision, emu->serial, emu->port, emu->irq);
+diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c
+index 874cd76c7b7f..d2c7ea3a7610 100644
+--- a/sound/pci/emu10k1/emu10k1_callback.c
++++ b/sound/pci/emu10k1/emu10k1_callback.c
+@@ -415,7 +415,7 @@ start_voice(struct snd_emux_voice *vp)
+ 	snd_emu10k1_ptr_write(hw, Z2, ch, 0);
+ 
+ 	/* invalidate maps */
+-	temp = (hw->silent_page.addr << 1) | MAP_PTI_MASK;
++	temp = (hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	snd_emu10k1_ptr_write(hw, MAPA, ch, temp);
+ 	snd_emu10k1_ptr_write(hw, MAPB, ch, temp);
+ #if 0
+@@ -436,7 +436,7 @@ start_voice(struct snd_emux_voice *vp)
+ 		snd_emu10k1_ptr_write(hw, CDF, ch, sample);
+ 
+ 		/* invalidate maps */
+-		temp = ((unsigned int)hw->silent_page.addr << 1) | MAP_PTI_MASK;
++		temp = ((unsigned int)hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 		snd_emu10k1_ptr_write(hw, MAPA, ch, temp);
+ 		snd_emu10k1_ptr_write(hw, MAPB, ch, temp);
+ 		
+diff --git a/sound/pci/emu10k1/emu10k1_main.c b/sound/pci/emu10k1/emu10k1_main.c
+index b4458a630a7c..df9f5c7c9c77 100644
+--- a/sound/pci/emu10k1/emu10k1_main.c
++++ b/sound/pci/emu10k1/emu10k1_main.c
+@@ -282,7 +282,7 @@ static int snd_emu10k1_init(struct snd_emu10k1 *emu, int enable_ir, int resume)
+ 	snd_emu10k1_ptr_write(emu, TCB, 0, 0);	/* taken from original driver */
+ 	snd_emu10k1_ptr_write(emu, TCBS, 0, 4);	/* taken from original driver */
+ 
+-	silent_page = (emu->silent_page.addr << 1) | MAP_PTI_MASK;
++	silent_page = (emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	for (ch = 0; ch < NUM_G; ch++) {
+ 		snd_emu10k1_ptr_write(emu, MAPA, ch, silent_page);
+ 		snd_emu10k1_ptr_write(emu, MAPB, ch, silent_page);
+@@ -348,6 +348,11 @@ static int snd_emu10k1_init(struct snd_emu10k1 *emu, int enable_ir, int resume)
+ 		outl(reg | A_IOCFG_GPOUT0, emu->port + A_IOCFG);
+ 	}
+ 
++	if (emu->address_mode == 0) {
++		/* use 16M in 4G */
++		outl(inl(emu->port + HCFG) | HCFG_EXPANDED_MEM, emu->port + HCFG);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1421,7 +1426,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
+ 	 *
+ 	 */
+ 	{.vendor = 0x1102, .device = 0x0008, .subsystem = 0x20011102,
+-	 .driver = "Audigy2", .name = "SB Audigy 2 ZS Notebook [SB0530]",
++	 .driver = "Audigy2", .name = "Audigy 2 ZS Notebook [SB0530]",
+ 	 .id = "Audigy2",
+ 	 .emu10k2_chip = 1,
+ 	 .ca0108_chip = 1,
+@@ -1571,7 +1576,7 @@ static struct snd_emu_chip_details emu_chip_details[] = {
+ 	 .adc_1361t = 1,  /* 24 bit capture instead of 16bit */
+ 	 .ac97_chip = 1} ,
+ 	{.vendor = 0x1102, .device = 0x0004, .subsystem = 0x10051102,
+-	 .driver = "Audigy2", .name = "SB Audigy 2 Platinum EX [SB0280]",
++	 .driver = "Audigy2", .name = "Audigy 2 Platinum EX [SB0280]",
+ 	 .id = "Audigy2",
+ 	 .emu10k2_chip = 1,
+ 	 .ca0102_chip = 1,
+@@ -1877,8 +1882,10 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	is_audigy = emu->audigy = c->emu10k2_chip;
+ 
++	/* set addressing mode */
++	emu->address_mode = is_audigy ? 0 : 1;
+ 	/* set the DMA transfer mask */
+-	emu->dma_mask = is_audigy ? AUDIGY_DMA_MASK : EMU10K1_DMA_MASK;
++	emu->dma_mask = emu->address_mode ? EMU10K1_DMA_MASK : AUDIGY_DMA_MASK;
+ 	if (pci_set_dma_mask(pci, emu->dma_mask) < 0 ||
+ 	    pci_set_consistent_dma_mask(pci, emu->dma_mask) < 0) {
+ 		dev_err(card->dev,
+@@ -1903,7 +1910,7 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	emu->max_cache_pages = max_cache_bytes >> PAGE_SHIFT;
+ 	if (snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(pci),
+-				32 * 1024, &emu->ptb_pages) < 0) {
++				(emu->address_mode ? 32 : 16) * 1024, &emu->ptb_pages) < 0) {
+ 		err = -ENOMEM;
+ 		goto error;
+ 	}
+@@ -2002,8 +2009,8 @@ int snd_emu10k1_create(struct snd_card *card,
+ 
+ 	/* Clear silent pages and set up pointers */
+ 	memset(emu->silent_page.area, 0, PAGE_SIZE);
+-	silent_page = emu->silent_page.addr << 1;
+-	for (idx = 0; idx < MAXPAGES; idx++)
++	silent_page = emu->silent_page.addr << emu->address_mode;
++	for (idx = 0; idx < (emu->address_mode ? MAXPAGES1 : MAXPAGES0); idx++)
+ 		((u32 *)emu->ptb_pages.area)[idx] = cpu_to_le32(silent_page | idx);
+ 
+ 	/* set up voice indices */
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index 0dc07385af0e..14a305bd8a98 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -380,7 +380,7 @@ static void snd_emu10k1_pcm_init_voice(struct snd_emu10k1 *emu,
+ 	snd_emu10k1_ptr_write(emu, Z1, voice, 0);
+ 	snd_emu10k1_ptr_write(emu, Z2, voice, 0);
+ 	/* invalidate maps */
+-	silent_page = ((unsigned int)emu->silent_page.addr << 1) | MAP_PTI_MASK;
++	silent_page = ((unsigned int)emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0);
+ 	snd_emu10k1_ptr_write(emu, MAPA, voice, silent_page);
+ 	snd_emu10k1_ptr_write(emu, MAPB, voice, silent_page);
+ 	/* modulation envelope */
+diff --git a/sound/pci/emu10k1/memory.c b/sound/pci/emu10k1/memory.c
+index c68e6dd2fa67..4f1f69be1865 100644
+--- a/sound/pci/emu10k1/memory.c
++++ b/sound/pci/emu10k1/memory.c
+@@ -34,10 +34,11 @@
+  * aligned pages in others
+  */
+ #define __set_ptb_entry(emu,page,addr) \
+-	(((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << 1) | (page)))
++	(((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << (emu->address_mode)) | (page)))
+ 
+ #define UNIT_PAGES		(PAGE_SIZE / EMUPAGESIZE)
+-#define MAX_ALIGN_PAGES		(MAXPAGES / UNIT_PAGES)
++#define MAX_ALIGN_PAGES0		(MAXPAGES0 / UNIT_PAGES)
++#define MAX_ALIGN_PAGES1		(MAXPAGES1 / UNIT_PAGES)
+ /* get aligned page from offset address */
+ #define get_aligned_page(offset)	((offset) >> PAGE_SHIFT)
+ /* get offset address from aligned page */
+@@ -124,7 +125,7 @@ static int search_empty_map_area(struct snd_emu10k1 *emu, int npages, struct lis
+ 		}
+ 		page = blk->mapped_page + blk->pages;
+ 	}
+-	size = MAX_ALIGN_PAGES - page;
++	size = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0) - page;
+ 	if (size >= max_size) {
+ 		*nextp = pos;
+ 		return page;
+@@ -181,7 +182,7 @@ static int unmap_memblk(struct snd_emu10k1 *emu, struct snd_emu10k1_memblk *blk)
+ 		q = get_emu10k1_memblk(p, mapped_link);
+ 		end_page = q->mapped_page;
+ 	} else
+-		end_page = MAX_ALIGN_PAGES;
++		end_page = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0);
+ 
+ 	/* remove links */
+ 	list_del(&blk->mapped_link);
+@@ -307,7 +308,7 @@ snd_emu10k1_alloc_pages(struct snd_emu10k1 *emu, struct snd_pcm_substream *subst
+ 	if (snd_BUG_ON(!emu))
+ 		return NULL;
+ 	if (snd_BUG_ON(runtime->dma_bytes <= 0 ||
+-		       runtime->dma_bytes >= MAXPAGES * EMUPAGESIZE))
++		       runtime->dma_bytes >= (emu->address_mode ? MAXPAGES1 : MAXPAGES0) * EMUPAGESIZE))
+ 		return NULL;
+ 	hdr = emu->memhdr;
+ 	if (snd_BUG_ON(!hdr))
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 2fe86d2e1b09..a63a86332deb 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3027,6 +3027,16 @@ static struct snd_kcontrol_new vmaster_mute_mode = {
+ 	.put = vmaster_mute_mode_put,
+ };
+ 
++/* meta hook to call each driver's vmaster hook */
++static void vmaster_hook(void *private_data, int enabled)
++{
++	struct hda_vmaster_mute_hook *hook = private_data;
++
++	if (hook->mute_mode != HDA_VMUTE_FOLLOW_MASTER)
++		enabled = hook->mute_mode;
++	hook->hook(hook->codec, enabled);
++}
++
+ /**
+  * snd_hda_add_vmaster_hook - Add a vmaster hook for mute-LED
+  * @codec: the HDA codec
+@@ -3045,9 +3055,9 @@ int snd_hda_add_vmaster_hook(struct hda_codec *codec,
+ 
+ 	if (!hook->hook || !hook->sw_kctl)
+ 		return 0;
+-	snd_ctl_add_vmaster_hook(hook->sw_kctl, hook->hook, codec);
+ 	hook->codec = codec;
+ 	hook->mute_mode = HDA_VMUTE_FOLLOW_MASTER;
++	snd_ctl_add_vmaster_hook(hook->sw_kctl, vmaster_hook, hook);
+ 	if (!expose_enum_ctl)
+ 		return 0;
+ 	kctl = snd_ctl_new1(&vmaster_mute_mode, hook);
+@@ -3073,14 +3083,7 @@ void snd_hda_sync_vmaster_hook(struct hda_vmaster_mute_hook *hook)
+ 	 */
+ 	if (hook->codec->bus->shutdown)
+ 		return;
+-	switch (hook->mute_mode) {
+-	case HDA_VMUTE_FOLLOW_MASTER:
+-		snd_ctl_sync_vmaster_hook(hook->sw_kctl);
+-		break;
+-	default:
+-		hook->hook(hook->codec, hook->mute_mode);
+-		break;
+-	}
++	snd_ctl_sync_vmaster_hook(hook->sw_kctl);
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_sync_vmaster_hook);
+ 
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index 6ba0b5517c40..2341fc334163 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -72,6 +72,7 @@ static void hda_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 		if (led_set_func(TPACPI_LED_MUTE, false) >= 0) {
+ 			old_vmaster_hook = spec->vmaster_mute.hook;
+ 			spec->vmaster_mute.hook = update_tpacpi_mute_led;
++			spec->vmaster_mute_enum = 1;
+ 			removefunc = false;
+ 		}
+ 		if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
+index fb9c20eace3f..97b33e96439a 100644
+--- a/sound/soc/codecs/rt5677.c
++++ b/sound/soc/codecs/rt5677.c
+@@ -62,6 +62,9 @@ static const struct reg_default init_list[] = {
+ 	{RT5677_PR_BASE + 0x1e,	0x0000},
+ 	{RT5677_PR_BASE + 0x12,	0x0eaa},
+ 	{RT5677_PR_BASE + 0x14,	0x018a},
++	{RT5677_PR_BASE + 0x15,	0x0490},
++	{RT5677_PR_BASE + 0x38,	0x0f71},
++	{RT5677_PR_BASE + 0x39,	0x0f71},
+ };
+ #define RT5677_INIT_REG_LEN ARRAY_SIZE(init_list)
+ 
+@@ -901,7 +904,7 @@ static int set_dmic_clk(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm);
+ 	struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+-	int idx = rl6231_calc_dmic_clk(rt5677->sysclk);
++	int idx = rl6231_calc_dmic_clk(rt5677->lrck[RT5677_AIF1] << 8);
+ 
+ 	if (idx < 0)
+ 		dev_err(codec->dev, "Failed to set DMIC clock\n");
+diff --git a/sound/soc/codecs/tfa9879.c b/sound/soc/codecs/tfa9879.c
+index 16f1b71edb55..aab0af681e8c 100644
+--- a/sound/soc/codecs/tfa9879.c
++++ b/sound/soc/codecs/tfa9879.c
+@@ -280,8 +280,8 @@ static int tfa9879_i2c_probe(struct i2c_client *i2c,
+ 	int i;
+ 
+ 	tfa9879 = devm_kzalloc(&i2c->dev, sizeof(*tfa9879), GFP_KERNEL);
+-	if (IS_ERR(tfa9879))
+-		return PTR_ERR(tfa9879);
++	if (!tfa9879)
++		return -ENOMEM;
+ 
+ 	i2c_set_clientdata(i2c, tfa9879);
+ 
+diff --git a/sound/soc/samsung/s3c24xx-i2s.c b/sound/soc/samsung/s3c24xx-i2s.c
+index 326d3c3804e3..5bf723689692 100644
+--- a/sound/soc/samsung/s3c24xx-i2s.c
++++ b/sound/soc/samsung/s3c24xx-i2s.c
+@@ -461,8 +461,8 @@ static int s3c24xx_iis_dev_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 	s3c24xx_i2s.regs = devm_ioremap_resource(&pdev->dev, res);
+-	if (s3c24xx_i2s.regs == NULL)
+-		return -ENXIO;
++	if (IS_ERR(s3c24xx_i2s.regs))
++		return PTR_ERR(s3c24xx_i2s.regs);
+ 
+ 	s3c24xx_i2s_pcm_stereo_out.dma_addr = res->start + S3C2410_IISFIFO;
+ 	s3c24xx_i2s_pcm_stereo_in.dma_addr = res->start + S3C2410_IISFIFO;
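
The tfa9879 and s3c24xx fixes above are mirror images of the same API mix-up: devm_kzalloc() reports failure with NULL, so IS_ERR() never fires, while devm_ioremap_resource() reports failure with an encoded error pointer, so a NULL check never fires. A self-contained sketch using simplified re-definitions of the kernel's include/linux/err.h helpers:

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO	4095

/* simplified versions of the kernel's error-pointer helpers */
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

int main(void)
{
	void *from_kzalloc = NULL;		/* allocator-style failure */
	void *from_ioremap = ERR_PTR(-ENOMEM);	/* ERR_PTR-style failure */

	/* IS_ERR(NULL) is false, so the old tfa9879 check let a failed
	 * allocation through; NULL-returning APIs need a !ptr test */
	printf("IS_ERR(NULL)             = %d\n", IS_ERR(from_kzalloc));

	/* conversely, ERR_PTR values are non-NULL, so the old s3c24xx
	 * NULL test let a failed ioremap through; use IS_ERR/PTR_ERR */
	printf("IS_ERR(ERR_PTR(-ENOMEM)) = %d, err = %ld\n",
	       IS_ERR(from_ioremap), PTR_ERR(from_ioremap));
	return 0;
}
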
+diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
+index ab37add269ae..82e350e9501c 100644
+--- a/sound/synth/emux/emux_oss.c
++++ b/sound/synth/emux/emux_oss.c
+@@ -118,12 +118,8 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	if (snd_BUG_ON(!arg || !emu))
+ 		return -ENXIO;
+ 
+-	mutex_lock(&emu->register_mutex);
+-
+-	if (!snd_emux_inc_count(emu)) {
+-		mutex_unlock(&emu->register_mutex);
++	if (!snd_emux_inc_count(emu))
+ 		return -EFAULT;
+-	}
+ 
+ 	memset(&callback, 0, sizeof(callback));
+ 	callback.owner = THIS_MODULE;
+@@ -135,7 +131,6 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	if (p == NULL) {
+ 		snd_printk(KERN_ERR "can't create port\n");
+ 		snd_emux_dec_count(emu);
+-		mutex_unlock(&emu->register_mutex);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -148,8 +143,6 @@ snd_emux_open_seq_oss(struct snd_seq_oss_arg *arg, void *closure)
+ 	reset_port_mode(p, arg->seq_mode);
+ 
+ 	snd_emux_reset_port(p);
+-
+-	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+ 
+@@ -195,13 +188,11 @@ snd_emux_close_seq_oss(struct snd_seq_oss_arg *arg)
+ 	if (snd_BUG_ON(!emu))
+ 		return -ENXIO;
+ 
+-	mutex_lock(&emu->register_mutex);
+ 	snd_emux_sounds_off_all(p);
+ 	snd_soundfont_close_check(emu->sflist, SF_CLIENT_NO(p->chset.port));
+ 	snd_seq_event_port_detach(p->chset.client, p->chset.port);
+ 	snd_emux_dec_count(emu);
+ 
+-	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/sound/synth/emux/emux_seq.c b/sound/synth/emux/emux_seq.c
+index 7778b8e19782..a0209204ae48 100644
+--- a/sound/synth/emux/emux_seq.c
++++ b/sound/synth/emux/emux_seq.c
+@@ -124,12 +124,10 @@ snd_emux_detach_seq(struct snd_emux *emu)
+ 	if (emu->voices)
+ 		snd_emux_terminate_all(emu);
+ 		
+-	mutex_lock(&emu->register_mutex);
+ 	if (emu->client >= 0) {
+ 		snd_seq_delete_kernel_client(emu->client);
+ 		emu->client = -1;
+ 	}
+-	mutex_unlock(&emu->register_mutex);
+ }
+ 
+ 
+@@ -269,8 +267,8 @@ snd_emux_event_input(struct snd_seq_event *ev, int direct, void *private_data,
+ /*
+  * increment usage count
+  */
+-int
+-snd_emux_inc_count(struct snd_emux *emu)
++static int
++__snd_emux_inc_count(struct snd_emux *emu)
+ {
+ 	emu->used++;
+ 	if (!try_module_get(emu->ops.owner))
+@@ -284,12 +282,21 @@ snd_emux_inc_count(struct snd_emux *emu)
+ 	return 1;
+ }
+ 
++int snd_emux_inc_count(struct snd_emux *emu)
++{
++	int ret;
++
++	mutex_lock(&emu->register_mutex);
++	ret = __snd_emux_inc_count(emu);
++	mutex_unlock(&emu->register_mutex);
++	return ret;
++}
+ 
+ /*
+  * decrease usage count
+  */
+-void
+-snd_emux_dec_count(struct snd_emux *emu)
++static void
++__snd_emux_dec_count(struct snd_emux *emu)
+ {
+ 	module_put(emu->card->module);
+ 	emu->used--;
+@@ -298,6 +305,12 @@ snd_emux_dec_count(struct snd_emux *emu)
+ 	module_put(emu->ops.owner);
+ }
+ 
++void snd_emux_dec_count(struct snd_emux *emu)
++{
++	mutex_lock(&emu->register_mutex);
++	__snd_emux_dec_count(emu);
++	mutex_unlock(&emu->register_mutex);
++}
+ 
+ /*
+  * Routine that is called upon a first use of a particular port
+@@ -317,7 +330,7 @@ snd_emux_use(void *private_data, struct snd_seq_port_subscribe *info)
+ 
+ 	mutex_lock(&emu->register_mutex);
+ 	snd_emux_init_port(p);
+-	snd_emux_inc_count(emu);
++	__snd_emux_inc_count(emu);
+ 	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
+@@ -340,7 +353,7 @@ snd_emux_unuse(void *private_data, struct snd_seq_port_subscribe *info)
+ 
+ 	mutex_lock(&emu->register_mutex);
+ 	snd_emux_sounds_off_all(p);
+-	snd_emux_dec_count(emu);
++	__snd_emux_dec_count(emu);
+ 	mutex_unlock(&emu->register_mutex);
+ 	return 0;
+ }
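
The emux refactor above is the standard locked-wrapper split: the double-underscore helpers assume register_mutex is already held, while the public snd_emux_inc_count()/snd_emux_dec_count() take the lock themselves, so callers such as snd_emux_use(), which already hold the mutex, use the helper without self-deadlocking. A small pthread sketch of the same pattern (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t register_mutex = PTHREAD_MUTEX_INITIALIZER;
static int used;

/* callers must hold register_mutex */
static int __inc_count(void)
{
	return ++used;
}

/* public entry point: takes the lock itself */
static int inc_count(void)
{
	int ret;

	pthread_mutex_lock(&register_mutex);
	ret = __inc_count();
	pthread_mutex_unlock(&register_mutex);
	return ret;
}

static void use_port(void)
{
	pthread_mutex_lock(&register_mutex);
	/* already under the lock: calling inc_count() here would
	 * deadlock on a non-recursive mutex, so use the __ variant */
	__inc_count();
	pthread_mutex_unlock(&register_mutex);
}

int main(void)
{
	inc_count();
	use_port();
	printf("used = %d\n", used);	/* 2 */
	return 0;
}
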

diff --git a/1003_linux-4.0.4.patch b/1003_linux-4.0.4.patch
new file mode 100644
index 0000000..e5c793a
--- /dev/null
+++ b/1003_linux-4.0.4.patch
@@ -0,0 +1,2713 @@
+diff --git a/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt b/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
+index a4873e5e3e36..e30e184f50c7 100644
+--- a/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
++++ b/Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
+@@ -38,7 +38,7 @@ dma_apbx: dma-apbx@80024000 {
+ 		      80 81 68 69
+ 		      70 71 72 73
+ 		      74 75 76 77>;
+-	interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
++	interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
+ 			  "saif0", "saif1", "i2c0", "i2c1",
+ 			  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
+ 			  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+diff --git a/Makefile b/Makefile
+index dc9f43a019d6..3d16bcc87585 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
+index 0c76d9f05fd0..f4838ebd918b 100644
+--- a/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
++++ b/arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
+@@ -105,6 +105,10 @@
+ 		};
+ 
+ 		internal-regs {
++			rtc@10300 {
++				/* No crystal connected to the internal RTC */
++				status = "disabled";
++			};
+ 			serial@12000 {
+ 				status = "okay";
+ 			};
+diff --git a/arch/arm/boot/dts/imx23-olinuxino.dts b/arch/arm/boot/dts/imx23-olinuxino.dts
+index 7e6eef2488e8..82045398bf1f 100644
+--- a/arch/arm/boot/dts/imx23-olinuxino.dts
++++ b/arch/arm/boot/dts/imx23-olinuxino.dts
+@@ -12,6 +12,7 @@
+  */
+ 
+ /dts-v1/;
++#include <dt-bindings/gpio/gpio.h>
+ #include "imx23.dtsi"
+ 
+ / {
+@@ -93,6 +94,7 @@
+ 
+ 	ahb@80080000 {
+ 		usb0: usb@80080000 {
++			dr_mode = "host";
+ 			vbus-supply = <&reg_usb0_vbus>;
+ 			status = "okay";
+ 		};
+@@ -122,7 +124,7 @@
+ 
+ 		user {
+ 			label = "green";
+-			gpios = <&gpio2 1 1>;
++			gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
+index e4d3aecc4ed2..677f81d9dcd5 100644
+--- a/arch/arm/boot/dts/imx25.dtsi
++++ b/arch/arm/boot/dts/imx25.dtsi
+@@ -428,6 +428,7 @@
+ 
+ 			pwm4: pwm@53fc8000 {
+ 				compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
++				#pwm-cells = <2>;
+ 				reg = <0x53fc8000 0x4000>;
+ 				clocks = <&clks 108>, <&clks 52>;
+ 				clock-names = "ipg", "per";
+diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
+index 47f68ac868d4..5ed245a3f9ac 100644
+--- a/arch/arm/boot/dts/imx28.dtsi
++++ b/arch/arm/boot/dts/imx28.dtsi
+@@ -900,7 +900,7 @@
+ 					      80 81 68 69
+ 					      70 71 72 73
+ 					      74 75 76 77>;
+-				interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
++				interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
+ 						  "saif0", "saif1", "i2c0", "i2c1",
+ 						  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
+ 						  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index 19cc269a08d4..1ce6133b67f5 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -31,6 +31,7 @@
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+ 			gpio = <&gpio4 15 0>;
++			enable-active-high;
+ 		};
+ 
+ 		reg_usb_h1_vbus: regulator@1 {
+@@ -40,6 +41,7 @@
+ 			regulator-min-microvolt = <5000000>;
+ 			regulator-max-microvolt = <5000000>;
+ 			gpio = <&gpio1 0 0>;
++			enable-active-high;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index db80f9d376fa..9c8bdf2c93a1 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -484,6 +484,8 @@
+ 		DRVDD-supply = <&vmmc2>;
+ 		IOVDD-supply = <&vio>;
+ 		DVDD-supply = <&vio>;
++
++		ai3x-micbias-vg = <1>;
+ 	};
+ 
+ 	tlv320aic3x_aux: tlv320aic3x@19 {
+@@ -495,6 +497,8 @@
+ 		DRVDD-supply = <&vmmc2>;
+ 		IOVDD-supply = <&vio>;
+ 		DVDD-supply = <&vio>;
++
++		ai3x-micbias-vg = <2>;
+ 	};
+ 
+ 	tsl2563: tsl2563@29 {
+diff --git a/arch/arm/boot/dts/ste-dbx5x0.dtsi b/arch/arm/boot/dts/ste-dbx5x0.dtsi
+index bfd3f1c734b8..2201cd5da3bb 100644
+--- a/arch/arm/boot/dts/ste-dbx5x0.dtsi
++++ b/arch/arm/boot/dts/ste-dbx5x0.dtsi
+@@ -1017,23 +1017,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		vmmci: regulator-gpio {
+-			compatible = "regulator-gpio";
+-
+-			regulator-min-microvolt = <1800000>;
+-			regulator-max-microvolt = <2900000>;
+-			regulator-name = "mmci-reg";
+-			regulator-type = "voltage";
+-
+-			startup-delay-us = <100>;
+-			enable-active-high;
+-
+-			states = <1800000 0x1
+-				  2900000 0x0>;
+-
+-			status = "disabled";
+-		};
+-
+ 		mcde@a0350000 {
+ 			compatible = "stericsson,mcde";
+ 			reg = <0xa0350000 0x1000>, /* MCDE */
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index bf8f0eddc2c0..744c1e3a744d 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -111,6 +111,21 @@
+ 			pinctrl-1 = <&i2c3_sleep_mode>;
+ 		};
+ 
++		vmmci: regulator-gpio {
++			compatible = "regulator-gpio";
++
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <2900000>;
++			regulator-name = "mmci-reg";
++			regulator-type = "voltage";
++
++			startup-delay-us = <100>;
++			enable-active-high;
++
++			states = <1800000 0x1
++				  2900000 0x0>;
++		};
++
+ 		// External Micro SD slot
+ 		sdi0_per1@80126000 {
+ 			arm,primecell-periphid = <0x10480180>;
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index 206826a855c0..1bc84ebdccaa 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -146,8 +146,21 @@
+ 		};
+ 
+ 		vmmci: regulator-gpio {
++			compatible = "regulator-gpio";
++
+ 			gpios = <&gpio7 4 0x4>;
+ 			enable-gpio = <&gpio6 25 0x4>;
++
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <2900000>;
++			regulator-name = "mmci-reg";
++			regulator-type = "voltage";
++
++			startup-delay-us = <100>;
++			enable-active-high;
++
++			states = <1800000 0x1
++				  2900000 0x0>;
+ 		};
+ 
+ 		// External Micro SD slot
+diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
+index 902397dd1000..1c1cdfa566ac 100644
+--- a/arch/arm/kernel/Makefile
++++ b/arch/arm/kernel/Makefile
+@@ -86,7 +86,7 @@ obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+ 
+ obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
+ ifeq ($(CONFIG_ARM_PSCI),y)
+-obj-y				+= psci.o
++obj-y				+= psci.o psci-call.o
+ obj-$(CONFIG_SMP)		+= psci_smp.o
+ endif
+ 
+diff --git a/arch/arm/kernel/psci-call.S b/arch/arm/kernel/psci-call.S
+new file mode 100644
+index 000000000000..a78e9e1e206d
+--- /dev/null
++++ b/arch/arm/kernel/psci-call.S
+@@ -0,0 +1,31 @@
++/*
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * Copyright (C) 2015 ARM Limited
++ *
++ * Author: Mark Rutland <mark.rutland@arm.com>
++ */
++
++#include <linux/linkage.h>
++
++#include <asm/opcodes-sec.h>
++#include <asm/opcodes-virt.h>
++
++/* int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
++ENTRY(__invoke_psci_fn_hvc)
++	__HVC(0)
++	bx	lr
++ENDPROC(__invoke_psci_fn_hvc)
++
++/* int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
++ENTRY(__invoke_psci_fn_smc)
++	__SMC(0)
++	bx	lr
++ENDPROC(__invoke_psci_fn_smc)
+diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
+index f73891b6b730..f90fdf4ce7c7 100644
+--- a/arch/arm/kernel/psci.c
++++ b/arch/arm/kernel/psci.c
+@@ -23,8 +23,6 @@
+ 
+ #include <asm/compiler.h>
+ #include <asm/errno.h>
+-#include <asm/opcodes-sec.h>
+-#include <asm/opcodes-virt.h>
+ #include <asm/psci.h>
+ #include <asm/system_misc.h>
+ 
+@@ -33,6 +31,9 @@ struct psci_operations psci_ops;
+ static int (*invoke_psci_fn)(u32, u32, u32, u32);
+ typedef int (*psci_initcall_t)(const struct device_node *);
+ 
++asmlinkage int __invoke_psci_fn_hvc(u32, u32, u32, u32);
++asmlinkage int __invoke_psci_fn_smc(u32, u32, u32, u32);
++
+ enum psci_function {
+ 	PSCI_FN_CPU_SUSPEND,
+ 	PSCI_FN_CPU_ON,
+@@ -71,40 +72,6 @@ static u32 psci_power_state_pack(struct psci_power_state state)
+ 		 & PSCI_0_2_POWER_STATE_AFFL_MASK);
+ }
+ 
+-/*
+- * The following two functions are invoked via the invoke_psci_fn pointer
+- * and will not be inlined, allowing us to piggyback on the AAPCS.
+- */
+-static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
+-					 u32 arg2)
+-{
+-	asm volatile(
+-			__asmeq("%0", "r0")
+-			__asmeq("%1", "r1")
+-			__asmeq("%2", "r2")
+-			__asmeq("%3", "r3")
+-			__HVC(0)
+-		: "+r" (function_id)
+-		: "r" (arg0), "r" (arg1), "r" (arg2));
+-
+-	return function_id;
+-}
+-
+-static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
+-					 u32 arg2)
+-{
+-	asm volatile(
+-			__asmeq("%0", "r0")
+-			__asmeq("%1", "r1")
+-			__asmeq("%2", "r2")
+-			__asmeq("%3", "r3")
+-			__SMC(0)
+-		: "+r" (function_id)
+-		: "r" (arg0), "r" (arg1), "r" (arg2));
+-
+-	return function_id;
+-}
+-
+ static int psci_get_version(void)
+ {
+ 	int err;
+diff --git a/arch/arm/mach-omap2/prm-regbits-34xx.h b/arch/arm/mach-omap2/prm-regbits-34xx.h
+index cbefbd7cfdb5..661d753df584 100644
+--- a/arch/arm/mach-omap2/prm-regbits-34xx.h
++++ b/arch/arm/mach-omap2/prm-regbits-34xx.h
+@@ -112,6 +112,7 @@
+ #define OMAP3430_VC_CMD_ONLP_SHIFT			16
+ #define OMAP3430_VC_CMD_RET_SHIFT			8
+ #define OMAP3430_VC_CMD_OFF_SHIFT			0
++#define OMAP3430_SREN_MASK				(1 << 4)
+ #define OMAP3430_HSEN_MASK				(1 << 3)
+ #define OMAP3430_MCODE_MASK				(0x7 << 0)
+ #define OMAP3430_VALID_MASK				(1 << 24)
+diff --git a/arch/arm/mach-omap2/prm-regbits-44xx.h b/arch/arm/mach-omap2/prm-regbits-44xx.h
+index b1c7a33e00e7..e794828dee55 100644
+--- a/arch/arm/mach-omap2/prm-regbits-44xx.h
++++ b/arch/arm/mach-omap2/prm-regbits-44xx.h
+@@ -35,6 +35,7 @@
+ #define OMAP4430_GLOBAL_WARM_SW_RST_SHIFT				1
+ #define OMAP4430_GLOBAL_WUEN_MASK					(1 << 16)
+ #define OMAP4430_HSMCODE_MASK						(0x7 << 0)
++#define OMAP4430_SRMODEEN_MASK						(1 << 4)
+ #define OMAP4430_HSMODEEN_MASK						(1 << 3)
+ #define OMAP4430_HSSCLL_SHIFT						24
+ #define OMAP4430_ICEPICK_RST_SHIFT					9
+diff --git a/arch/arm/mach-omap2/vc.c b/arch/arm/mach-omap2/vc.c
+index be9ef834fa81..076fd20d7e5a 100644
+--- a/arch/arm/mach-omap2/vc.c
++++ b/arch/arm/mach-omap2/vc.c
+@@ -316,7 +316,8 @@ static void __init omap3_vc_init_pmic_signaling(struct voltagedomain *voltdm)
+ 	 * idle. And we can also scale voltages to zero for off-idle.
+ 	 * Note that no actual voltage scaling during off-idle will
+ 	 * happen unless the board specific twl4030 PMIC scripts are
+-	 * loaded.
++	 * loaded. See also omap_vc_i2c_init for comments regarding
++	 * erratum i531.
+ 	 */
+ 	val = voltdm->read(OMAP3_PRM_VOLTCTRL_OFFSET);
+ 	if (!(val & OMAP3430_PRM_VOLTCTRL_SEL_OFF)) {
+@@ -704,9 +705,16 @@ static void __init omap_vc_i2c_init(struct voltagedomain *voltdm)
+ 		return;
+ 	}
+ 
++	/*
++	 * Note that for omap3 OMAP3430_SREN_MASK clears SREN to work around
++	 * erratum i531 "Extra Power Consumed When Repeated Start Operation
++	 * Mode Is Enabled on I2C Interface Dedicated for Smart Reflex (I2C4)".
++	 * Otherwise I2C4 eventually leads into about 23mW extra power being
++	 * consumed even during off idle using VMODE.
++	 */
+ 	i2c_high_speed = voltdm->pmic->i2c_high_speed;
+ 	if (i2c_high_speed)
+-		voltdm->rmw(vc->common->i2c_cfg_hsen_mask,
++		voltdm->rmw(vc->common->i2c_cfg_clear_mask,
+ 			    vc->common->i2c_cfg_hsen_mask,
+ 			    vc->common->i2c_cfg_reg);
+ 
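
The vc.c hunk above widens the clear mask passed to voltdm->rmw() so that SREN is cleared while HSEN is (re)written; previously only the HSEN bit was masked, leaving a bootloader-set SREN enabled and triggering erratum i531. A sketch of the read-modify-write semantics assumed here, with the bit values taken from the headers above and rmw() itself a stand-in:

#include <stdint.h>
#include <stdio.h>

#define OMAP3430_SREN_MASK	(1 << 4)
#define OMAP3430_HSEN_MASK	(1 << 3)

/* stand-in for voltdm->rmw(mask, value, reg): clear 'mask', set 'value' */
static uint32_t rmw(uint32_t reg, uint32_t mask, uint32_t value)
{
	return (reg & ~mask) | value;
}

int main(void)
{
	uint32_t reg = OMAP3430_SREN_MASK | OMAP3430_HSEN_MASK;

	/* old call cleared only HSEN and so never touched SREN;
	 * the new clear mask covers SREN | HSEN while re-setting HSEN */
	reg = rmw(reg, OMAP3430_SREN_MASK | OMAP3430_HSEN_MASK,
		  OMAP3430_HSEN_MASK);

	printf("SREN=%u HSEN=%u\n",
	       !!(reg & OMAP3430_SREN_MASK), !!(reg & OMAP3430_HSEN_MASK));
	return 0;
}
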
+diff --git a/arch/arm/mach-omap2/vc.h b/arch/arm/mach-omap2/vc.h
+index cdbdd78e755e..89b83b7ff3ec 100644
+--- a/arch/arm/mach-omap2/vc.h
++++ b/arch/arm/mach-omap2/vc.h
+@@ -34,6 +34,7 @@ struct voltagedomain;
+  * @cmd_ret_shift: RET field shift in PRM_VC_CMD_VAL_* register
+  * @cmd_off_shift: OFF field shift in PRM_VC_CMD_VAL_* register
+  * @i2c_cfg_reg: I2C configuration register offset
++ * @i2c_cfg_clear_mask: high-speed mode bit clear mask in I2C config register
+  * @i2c_cfg_hsen_mask: high-speed mode bit field mask in I2C config register
+  * @i2c_mcode_mask: MCODE field mask for I2C config register
+  *
+@@ -52,6 +53,7 @@ struct omap_vc_common {
+ 	u8 cmd_ret_shift;
+ 	u8 cmd_off_shift;
+ 	u8 i2c_cfg_reg;
++	u8 i2c_cfg_clear_mask;
+ 	u8 i2c_cfg_hsen_mask;
+ 	u8 i2c_mcode_mask;
+ };
+diff --git a/arch/arm/mach-omap2/vc3xxx_data.c b/arch/arm/mach-omap2/vc3xxx_data.c
+index 75bc4aa22b3a..71d74c9172c1 100644
+--- a/arch/arm/mach-omap2/vc3xxx_data.c
++++ b/arch/arm/mach-omap2/vc3xxx_data.c
+@@ -40,6 +40,7 @@ static struct omap_vc_common omap3_vc_common = {
+ 	.cmd_onlp_shift	 = OMAP3430_VC_CMD_ONLP_SHIFT,
+ 	.cmd_ret_shift	 = OMAP3430_VC_CMD_RET_SHIFT,
+ 	.cmd_off_shift	 = OMAP3430_VC_CMD_OFF_SHIFT,
++	.i2c_cfg_clear_mask = OMAP3430_SREN_MASK | OMAP3430_HSEN_MASK,
+ 	.i2c_cfg_hsen_mask = OMAP3430_HSEN_MASK,
+ 	.i2c_cfg_reg	 = OMAP3_PRM_VC_I2C_CFG_OFFSET,
+ 	.i2c_mcode_mask	 = OMAP3430_MCODE_MASK,
+diff --git a/arch/arm/mach-omap2/vc44xx_data.c b/arch/arm/mach-omap2/vc44xx_data.c
+index 085e5d6a04fd..2abd5fa8a697 100644
+--- a/arch/arm/mach-omap2/vc44xx_data.c
++++ b/arch/arm/mach-omap2/vc44xx_data.c
+@@ -42,6 +42,7 @@ static const struct omap_vc_common omap4_vc_common = {
+ 	.cmd_ret_shift = OMAP4430_RET_SHIFT,
+ 	.cmd_off_shift = OMAP4430_OFF_SHIFT,
+ 	.i2c_cfg_reg = OMAP4_PRM_VC_CFG_I2C_MODE_OFFSET,
++	.i2c_cfg_clear_mask = OMAP4430_SRMODEEN_MASK | OMAP4430_HSMODEEN_MASK,
+ 	.i2c_cfg_hsen_mask = OMAP4430_HSMODEEN_MASK,
+ 	.i2c_mcode_mask	 = OMAP4430_HSMCODE_MASK,
+ };
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index e1268f905026..f412b53ed268 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -449,10 +449,21 @@ static inline void emit_udiv(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx)
+ 		return;
+ 	}
+ #endif
+-	if (rm != ARM_R0)
+-		emit(ARM_MOV_R(ARM_R0, rm), ctx);
++
++	/*
++	 * For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4
++	 * (r_A) and rn is ARM_R0 (r_scratch) so load rn first into
++	 * ARM_R1 to avoid accidentally overwriting ARM_R0 with rm
++	 * before using it as a source for ARM_R1.
++	 *
++	 * For BPF_ALU | BPF_DIV | BPF_X rm is ARM_R4 (r_A) and rn is
++	 * ARM_R5 (r_X) so there are no particular register overlap
++	 * issues.
++	 */
+ 	if (rn != ARM_R1)
+ 		emit(ARM_MOV_R(ARM_R1, rn), ctx);
++	if (rm != ARM_R0)
++		emit(ARM_MOV_R(ARM_R0, rm), ctx);
+ 
+ 	ctx->seen |= SEEN_CALL;
+ 	emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
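
To make the ordering constraint in the comment above concrete: for the BPF_DIV|BPF_K case the divisor already lives in ARM_R0, so emitting the rm -> R0 move first would destroy it before it could be copied into ARM_R1. A register-file toy model in plain C:

#include <stdio.h>

enum { R0, R1, R4, NR_REGS };

static void mov(int *regs, int dst, int src)
{
	regs[dst] = regs[src];
}

int main(void)
{
	int regs[NR_REGS] = { 0 };

	regs[R4] = 42;	/* rm: dividend (r_A) */
	regs[R0] = 7;	/* rn: divisor (r_scratch) */

	/* wrong order: mov(regs, R0, R4) first clobbers the divisor,
	 * and R1 would then receive 42 instead of 7 */

	mov(regs, R1, R0);	/* save rn into R1 first ... */
	mov(regs, R0, R4);	/* ... then rm may overwrite R0 */

	printf("R0=%d R1=%d -> R0/R1=%d\n",
	       regs[R0], regs[R1], regs[R0] / regs[R1]);	/* 42/7 = 6 */
	return 0;
}
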
+diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
+index cf87de3fc390..64b611782ef0 100644
+--- a/arch/x86/include/asm/spinlock.h
++++ b/arch/x86/include/asm/spinlock.h
+@@ -169,7 +169,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
+ 	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
+ 
+ 	tmp.head &= ~TICKET_SLOWPATH_FLAG;
+-	return (tmp.tail - tmp.head) > TICKET_LOCK_INC;
++	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
+ }
+ #define arch_spin_is_contended	arch_spin_is_contended
+ 
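
The one-cast spinlock fix above addresses an integer-promotion bug: __ticket_t is a narrow unsigned type, and once tail wraps past head, tail - head computed in int goes negative, so the lock looks uncontended. A runnable demonstration with 8-bit tickets (the real type is u8 or u16 depending on NR_CPUS):

#include <stdio.h>

typedef unsigned char __ticket_t;	/* 8-bit tickets for illustration */
#define TICKET_LOCK_INC 1

int main(void)
{
	__ticket_t head = 255, tail = 2;	/* tail has wrapped around */

	/* both operands promote to int: 2 - 255 == -253, "uncontended" */
	printf("promoted:  %d\n", (tail - head) > TICKET_LOCK_INC);

	/* cast back to the ticket type: (u8)(2 - 255) == 3, contended */
	printf("cast back: %d\n", (__ticket_t)(tail - head) > TICKET_LOCK_INC);
	return 0;
}
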
+diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
+index e4695985f9de..d93963340c3c 100644
+--- a/arch/x86/pci/acpi.c
++++ b/arch/x86/pci/acpi.c
+@@ -325,6 +325,26 @@ static void release_pci_root_info(struct pci_host_bridge *bridge)
+ 	kfree(info);
+ }
+ 
++/*
++ * An IO port or MMIO resource assigned to a PCI host bridge may be
++ * consumed by the host bridge itself or available to its child
++ * bus/devices. The ACPI specification defines a bit (Producer/Consumer)
++ * to tell whether the resource is consumed by the host bridge itself,
++ * but firmware hasn't used that bit consistently, so we can't rely on it.
++ *
++ * On x86 and IA64 platforms, all IO port and MMIO resources are assumed
++ * to be available to child bus/devices except one special case:
++ *     IO port [0xCF8-0xCFF] is consumed by the host bridge itself
++ *     to access PCI configuration space.
++ *
++ * So explicitly filter out PCI CFG IO ports[0xCF8-0xCFF].
++ */
++static bool resource_is_pcicfg_ioport(struct resource *res)
++{
++	return (res->flags & IORESOURCE_IO) &&
++		res->start == 0xCF8 && res->end == 0xCFF;
++}
++
+ static void probe_pci_root_info(struct pci_root_info *info,
+ 				struct acpi_device *device,
+ 				int busnum, int domain,
+@@ -346,8 +366,8 @@ static void probe_pci_root_info(struct pci_root_info *info,
+ 			"no IO and memory resources present in _CRS\n");
+ 	else
+ 		resource_list_for_each_entry_safe(entry, tmp, list) {
+-			if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
+-			    (entry->res->flags & IORESOURCE_DISABLED))
++			if ((entry->res->flags & IORESOURCE_DISABLED) ||
++			    resource_is_pcicfg_ioport(entry->res))
+ 				resource_list_destroy_entry(entry);
+ 			else
+ 				entry->res->name = info->name;
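
The filter added above is small enough to exercise on its own; the sketch below re-declares just enough of struct resource to run it in userspace (the flag value matches include/linux/ioport.h):

#include <stdbool.h>
#include <stdio.h>

#define IORESOURCE_IO	0x00000100	/* as in include/linux/ioport.h */

struct resource {
	unsigned long start, end, flags;
};

static bool resource_is_pcicfg_ioport(struct resource *res)
{
	return (res->flags & IORESOURCE_IO) &&
		res->start == 0xCF8 && res->end == 0xCFF;
}

int main(void)
{
	struct resource pcicfg = { 0xCF8, 0xCFF, IORESOURCE_IO };
	struct resource com1   = { 0x3F8, 0x3FF, IORESOURCE_IO };

	printf("0xCF8-0xCFF filtered: %d\n", resource_is_pcicfg_ioport(&pcicfg));
	printf("0x3F8-0x3FF filtered: %d\n", resource_is_pcicfg_ioport(&com1));
	return 0;
}
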
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 794c3e7f01cf..66406474f0c4 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -552,6 +552,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ 		q->queue_lock = &q->__queue_lock;
+ 	spin_unlock_irq(lock);
+ 
++	bdi_destroy(&q->backing_dev_info);
++
+ 	/* @q is and will stay empty, shutdown and put */
+ 	blk_put_queue(q);
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 33c428530193..5c39703e644f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -675,8 +675,11 @@ static void blk_mq_rq_timer(unsigned long priv)
+ 		data.next = blk_rq_timeout(round_jiffies_up(data.next));
+ 		mod_timer(&q->timeout, data.next);
+ 	} else {
+-		queue_for_each_hw_ctx(q, hctx, i)
+-			blk_mq_tag_idle(hctx);
++		queue_for_each_hw_ctx(q, hctx, i) {
++			/* the hctx may be unmapped, so check it here */
++			if (blk_mq_hw_queue_mapped(hctx))
++				blk_mq_tag_idle(hctx);
++		}
+ 	}
+ }
+ 
+@@ -1570,22 +1573,6 @@ static int blk_mq_hctx_cpu_offline(struct blk_mq_hw_ctx *hctx, int cpu)
+ 	return NOTIFY_OK;
+ }
+ 
+-static int blk_mq_hctx_cpu_online(struct blk_mq_hw_ctx *hctx, int cpu)
+-{
+-	struct request_queue *q = hctx->queue;
+-	struct blk_mq_tag_set *set = q->tag_set;
+-
+-	if (set->tags[hctx->queue_num])
+-		return NOTIFY_OK;
+-
+-	set->tags[hctx->queue_num] = blk_mq_init_rq_map(set, hctx->queue_num);
+-	if (!set->tags[hctx->queue_num])
+-		return NOTIFY_STOP;
+-
+-	hctx->tags = set->tags[hctx->queue_num];
+-	return NOTIFY_OK;
+-}
+-
+ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 			      unsigned int cpu)
+ {
+@@ -1593,8 +1580,11 @@ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 
+ 	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
+ 		return blk_mq_hctx_cpu_offline(hctx, cpu);
+-	else if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN)
+-		return blk_mq_hctx_cpu_online(hctx, cpu);
++
++	/*
++	 * In case of CPU online, tags may be reallocated
++	 * in blk_mq_map_swqueue() after mapping is updated.
++	 */
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -1776,6 +1766,7 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 	unsigned int i;
+ 	struct blk_mq_hw_ctx *hctx;
+ 	struct blk_mq_ctx *ctx;
++	struct blk_mq_tag_set *set = q->tag_set;
+ 
+ 	queue_for_each_hw_ctx(q, hctx, i) {
+ 		cpumask_clear(hctx->cpumask);
+@@ -1802,16 +1793,20 @@ static void blk_mq_map_swqueue(struct request_queue *q)
+ 		 * disable it and free the request entries.
+ 		 */
+ 		if (!hctx->nr_ctx) {
+-			struct blk_mq_tag_set *set = q->tag_set;
+-
+ 			if (set->tags[i]) {
+ 				blk_mq_free_rq_map(set, set->tags[i], i);
+ 				set->tags[i] = NULL;
+-				hctx->tags = NULL;
+ 			}
++			hctx->tags = NULL;
+ 			continue;
+ 		}
+ 
++		/* unmapped hw queue can be remapped after CPU topo changed */
++		if (!set->tags[i])
++			set->tags[i] = blk_mq_init_rq_map(set, i);
++		hctx->tags = set->tags[i];
++		WARN_ON(!hctx->tags);
++
+ 		/*
+ 		 * Initialize batch roundrobin counts
+ 		 */
+@@ -2075,9 +2070,16 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
+ 	 */
+ 	list_for_each_entry(q, &all_q_list, all_q_node)
+ 		blk_mq_freeze_queue_start(q);
+-	list_for_each_entry(q, &all_q_list, all_q_node)
++	list_for_each_entry(q, &all_q_list, all_q_node) {
+ 		blk_mq_freeze_queue_wait(q);
+ 
++		/*
++		 * timeout handler can't touch hw queue during the
++		 * reinitialization
++		 */
++		del_timer_sync(&q->timeout);
++	}
++
+ 	list_for_each_entry(q, &all_q_list, all_q_node)
+ 		blk_mq_queue_reinit(q);
+ 
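
The del_timer_sync() added above guarantees the timeout handler has finished, and cannot rearm, before the hw queues are reinitialized. A pthread analogue of "stop and wait for the handler before touching what it uses"; compile with -pthread, the kernel primitives are only modelled here:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int stop;
static int *hw_queue_state;	/* state the timeout handler inspects */

static void *timeout_handler(void *arg)
{
	while (!atomic_load(&stop)) {
		if (hw_queue_state)
			(void)*hw_queue_state;	/* races with reinit if unsynced */
		usleep(1000);
	}
	return NULL;
}

int main(void)
{
	int state = 0;
	pthread_t t;

	hw_queue_state = &state;
	pthread_create(&t, NULL, timeout_handler, NULL);

	/* del_timer_sync() analogue: stop the handler and wait until it
	 * has returned before reinitializing the state it touches */
	atomic_store(&stop, 1);
	pthread_join(t, NULL);

	hw_queue_state = NULL;	/* safe: no handler runs concurrently */
	printf("reinit done\n");
	return 0;
}
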
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index faaf36ade7eb..2b8fd302f677 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -522,8 +522,6 @@ static void blk_release_queue(struct kobject *kobj)
+ 
+ 	blk_trace_shutdown(q);
+ 
+-	bdi_destroy(&q->backing_dev_info);
+-
+ 	ida_simple_remove(&blk_queue_ida, q->id);
+ 	call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ }
+diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c
+index b193f8425999..ff6d8adc9cda 100644
+--- a/drivers/acpi/acpi_pnp.c
++++ b/drivers/acpi/acpi_pnp.c
+@@ -304,6 +304,8 @@ static const struct acpi_device_id acpi_pnp_device_ids[] = {
+ 	{"PNPb006"},
+ 	/* cs423x-pnpbios */
+ 	{"CSC0100"},
++	{"CSC0103"},
++	{"CSC0110"},
+ 	{"CSC0000"},
+ 	{"GIM0100"},		/* Guillemot Turtlebeach something appears to be cs4232 compatible */
+ 	/* es18xx-pnpbios */
+diff --git a/drivers/acpi/acpica/acmacros.h b/drivers/acpi/acpica/acmacros.h
+index cf607fe69dbd..c240bdf824f2 100644
+--- a/drivers/acpi/acpica/acmacros.h
++++ b/drivers/acpi/acpica/acmacros.h
+@@ -63,23 +63,12 @@
+ #define ACPI_SET64(ptr, val)            (*ACPI_CAST64 (ptr) = (u64) (val))
+ 
+ /*
+- * printf() format helpers. These macros are workarounds for the difficulties
++ * printf() format helper. This macro is a workaround for the difficulties
+  * with emitting 64-bit integers and 64-bit pointers with the same code
+  * for both 32-bit and 64-bit hosts.
+  */
+ #define ACPI_FORMAT_UINT64(i)           ACPI_HIDWORD(i), ACPI_LODWORD(i)
+ 
+-#if ACPI_MACHINE_WIDTH == 64
+-#define ACPI_FORMAT_NATIVE_UINT(i)      ACPI_FORMAT_UINT64(i)
+-#define ACPI_FORMAT_TO_UINT(i)          ACPI_FORMAT_UINT64(i)
+-#define ACPI_PRINTF_UINT                 "0x%8.8X%8.8X"
+-
+-#else
+-#define ACPI_FORMAT_NATIVE_UINT(i)      0, (u32) (i)
+-#define ACPI_FORMAT_TO_UINT(i)          (u32) (i)
+-#define ACPI_PRINTF_UINT                 "0x%8.8X"
+-#endif
+-
+ /*
+  * Macros for moving data around to/from buffers that are possibly unaligned.
+  * If the hardware supports the transfer of unaligned data, just do the store.
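
ACPI_FORMAT_UINT64, which the rest of this patch switches over to, expands a u64 into two 32-bit halves so a single "0x%8.8X%8.8X" format gives identical, zero-padded output on 32- and 64-bit hosts; that is precisely what %p could not guarantee. A standalone re-creation, with macro definitions simplified from ACPICA:

#include <stdint.h>
#include <stdio.h>

/* simplified from ACPICA's actypes.h/acmacros.h */
#define ACPI_LODWORD(u64)	((uint32_t)(u64))
#define ACPI_HIDWORD(u64)	((uint32_t)((uint64_t)(u64) >> 32))
#define ACPI_FORMAT_UINT64(i)	ACPI_HIDWORD(i), ACPI_LODWORD(i)

int main(void)
{
	uint64_t address = 0x00000000FEE00000ULL;	/* sample MMIO address */

	/* one format string, two 32-bit arguments, fixed-width output */
	printf("Address 0x%8.8X%8.8X\n", ACPI_FORMAT_UINT64(address));
	return 0;
}
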
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index 77244182ff02..ea0cc4e08f80 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -446,7 +446,7 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
+ 			  obj_desc,
+-			  ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
++			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
+ 	/* Now the address and length are valid for this opregion */
+@@ -539,13 +539,12 @@ acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
+ 		return_ACPI_STATUS(AE_NOT_EXIST);
+ 	}
+ 
+-	obj_desc->region.address =
+-	    (acpi_physical_address) ACPI_TO_INTEGER(table);
++	obj_desc->region.address = ACPI_PTR_TO_PHYSADDR(table);
+ 	obj_desc->region.length = table->length;
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
+ 			  obj_desc,
+-			  ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
++			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
+ 	/* Now the address and length are valid for this opregion */
+diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
+index 9abace3401f9..2ba28a63fb68 100644
+--- a/drivers/acpi/acpica/evregion.c
++++ b/drivers/acpi/acpica/evregion.c
+@@ -272,7 +272,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
+ 	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
+ 			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
+ 			  &region_obj->region.handler->address_space, handler,
+-			  ACPI_FORMAT_NATIVE_UINT(address),
++			  ACPI_FORMAT_UINT64(address),
+ 			  acpi_ut_get_region_name(region_obj->region.
+ 						  space_id)));
+ 
+diff --git a/drivers/acpi/acpica/exdump.c b/drivers/acpi/acpica/exdump.c
+index 7c213b6b6472..1da52bef632e 100644
+--- a/drivers/acpi/acpica/exdump.c
++++ b/drivers/acpi/acpica/exdump.c
+@@ -767,8 +767,8 @@ void acpi_ex_dump_operand(union acpi_operand_object *obj_desc, u32 depth)
+ 			acpi_os_printf("\n");
+ 		} else {
+ 			acpi_os_printf(" base %8.8X%8.8X Length %X\n",
+-				       ACPI_FORMAT_NATIVE_UINT(obj_desc->region.
+-							       address),
++				       ACPI_FORMAT_UINT64(obj_desc->region.
++							  address),
+ 				       obj_desc->region.length);
+ 		}
+ 		break;
+diff --git a/drivers/acpi/acpica/exfldio.c b/drivers/acpi/acpica/exfldio.c
+index 49479927e7f7..725a3746a2df 100644
+--- a/drivers/acpi/acpica/exfldio.c
++++ b/drivers/acpi/acpica/exfldio.c
+@@ -263,17 +263,15 @@ acpi_ex_access_region(union acpi_operand_object *obj_desc,
+ 	}
+ 
+ 	ACPI_DEBUG_PRINT_RAW((ACPI_DB_BFIELD,
+-			      " Region [%s:%X], Width %X, ByteBase %X, Offset %X at %p\n",
++			      " Region [%s:%X], Width %X, ByteBase %X, Offset %X at %8.8X%8.8X\n",
+ 			      acpi_ut_get_region_name(rgn_desc->region.
+ 						      space_id),
+ 			      rgn_desc->region.space_id,
+ 			      obj_desc->common_field.access_byte_width,
+ 			      obj_desc->common_field.base_byte_offset,
+-			      field_datum_byte_offset, ACPI_CAST_PTR(void,
+-								     (rgn_desc->
+-								      region.
+-								      address +
+-								      region_offset))));
++			      field_datum_byte_offset,
++			      ACPI_FORMAT_UINT64(rgn_desc->region.address +
++						 region_offset)));
+ 
+ 	/* Invoke the appropriate address_space/op_region handler */
+ 
+diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
+index 0fe188e238ef..b4bbf3150bc1 100644
+--- a/drivers/acpi/acpica/exregion.c
++++ b/drivers/acpi/acpica/exregion.c
+@@ -181,7 +181,7 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 		if (!mem_info->mapped_logical_address) {
+ 			ACPI_ERROR((AE_INFO,
+ 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
+-				    ACPI_FORMAT_NATIVE_UINT(address),
++				    ACPI_FORMAT_UINT64(address),
+ 				    (u32) map_length));
+ 			mem_info->mapped_length = 0;
+ 			return_ACPI_STATUS(AE_NO_MEMORY);
+@@ -202,8 +202,7 @@ acpi_ex_system_memory_space_handler(u32 function,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ 			  "System-Memory (width %u) R/W %u Address=%8.8X%8.8X\n",
+-			  bit_width, function,
+-			  ACPI_FORMAT_NATIVE_UINT(address)));
++			  bit_width, function, ACPI_FORMAT_UINT64(address)));
+ 
+ 	/*
+ 	 * Perform the memory read or write
+@@ -318,8 +317,7 @@ acpi_ex_system_io_space_handler(u32 function,
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ 			  "System-IO (width %u) R/W %u Address=%8.8X%8.8X\n",
+-			  bit_width, function,
+-			  ACPI_FORMAT_NATIVE_UINT(address)));
++			  bit_width, function, ACPI_FORMAT_UINT64(address)));
+ 
+ 	/* Decode the function parameter */
+ 
+diff --git a/drivers/acpi/acpica/hwvalid.c b/drivers/acpi/acpica/hwvalid.c
+index 2bd33fe56cb3..29033d71417b 100644
+--- a/drivers/acpi/acpica/hwvalid.c
++++ b/drivers/acpi/acpica/hwvalid.c
+@@ -142,17 +142,17 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+ 	byte_width = ACPI_DIV_8(bit_width);
+ 	last_address = address + byte_width - 1;
+ 
+-	ACPI_DEBUG_PRINT((ACPI_DB_IO, "Address %p LastAddress %p Length %X",
+-			  ACPI_CAST_PTR(void, address), ACPI_CAST_PTR(void,
+-								      last_address),
+-			  byte_width));
++	ACPI_DEBUG_PRINT((ACPI_DB_IO,
++			  "Address %8.8X%8.8X LastAddress %8.8X%8.8X Length %X",
++			  ACPI_FORMAT_UINT64(address),
++			  ACPI_FORMAT_UINT64(last_address), byte_width));
+ 
+ 	/* Maximum 16-bit address in I/O space */
+ 
+ 	if (last_address > ACPI_UINT16_MAX) {
+ 		ACPI_ERROR((AE_INFO,
+-			    "Illegal I/O port address/length above 64K: %p/0x%X",
+-			    ACPI_CAST_PTR(void, address), byte_width));
++			    "Illegal I/O port address/length above 64K: %8.8X%8.8X/0x%X",
++			    ACPI_FORMAT_UINT64(address), byte_width));
+ 		return_ACPI_STATUS(AE_LIMIT);
+ 	}
+ 
+@@ -181,8 +181,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+ 
+ 			if (acpi_gbl_osi_data >= port_info->osi_dependency) {
+ 				ACPI_DEBUG_PRINT((ACPI_DB_IO,
+-						  "Denied AML access to port 0x%p/%X (%s 0x%.4X-0x%.4X)",
+-						  ACPI_CAST_PTR(void, address),
++						  "Denied AML access to port 0x%8.8X%8.8X/%X (%s 0x%.4X-0x%.4X)",
++						  ACPI_FORMAT_UINT64(address),
+ 						  byte_width, port_info->name,
+ 						  port_info->start,
+ 						  port_info->end));
+diff --git a/drivers/acpi/acpica/nsdump.c b/drivers/acpi/acpica/nsdump.c
+index 80f097eb7381..d259393505fa 100644
+--- a/drivers/acpi/acpica/nsdump.c
++++ b/drivers/acpi/acpica/nsdump.c
+@@ -271,12 +271,11 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
+ 		switch (type) {
+ 		case ACPI_TYPE_PROCESSOR:
+ 
+-			acpi_os_printf("ID %02X Len %02X Addr %p\n",
++			acpi_os_printf("ID %02X Len %02X Addr %8.8X%8.8X\n",
+ 				       obj_desc->processor.proc_id,
+ 				       obj_desc->processor.length,
+-				       ACPI_CAST_PTR(void,
+-						     obj_desc->processor.
+-						     address));
++				       ACPI_FORMAT_UINT64(obj_desc->processor.
++							  address));
+ 			break;
+ 
+ 		case ACPI_TYPE_DEVICE:
+@@ -347,8 +346,9 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
+ 							       space_id));
+ 			if (obj_desc->region.flags & AOPOBJ_DATA_VALID) {
+ 				acpi_os_printf(" Addr %8.8X%8.8X Len %.4X\n",
+-					       ACPI_FORMAT_NATIVE_UINT
+-					       (obj_desc->region.address),
++					       ACPI_FORMAT_UINT64(obj_desc->
++								  region.
++								  address),
+ 					       obj_desc->region.length);
+ 			} else {
+ 				acpi_os_printf
+diff --git a/drivers/acpi/acpica/tbdata.c b/drivers/acpi/acpica/tbdata.c
+index 6a144957aadd..fd5998b2b46b 100644
+--- a/drivers/acpi/acpica/tbdata.c
++++ b/drivers/acpi/acpica/tbdata.c
+@@ -113,9 +113,9 @@ acpi_tb_acquire_table(struct acpi_table_desc *table_desc,
+ 	case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL:
+ 	case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL:
+ 
+-		table =
+-		    ACPI_CAST_PTR(struct acpi_table_header,
+-				  table_desc->address);
++		table = ACPI_CAST_PTR(struct acpi_table_header,
++				      ACPI_PHYSADDR_TO_PTR(table_desc->
++							   address));
+ 		break;
+ 
+ 	default:
+@@ -214,7 +214,8 @@ acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc,
+ 	case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL:
+ 	case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL:
+ 
+-		table_header = ACPI_CAST_PTR(struct acpi_table_header, address);
++		table_header = ACPI_CAST_PTR(struct acpi_table_header,
++					     ACPI_PHYSADDR_TO_PTR(address));
+ 		if (!table_header) {
+ 			return (AE_NO_MEMORY);
+ 		}
+@@ -398,14 +399,14 @@ acpi_tb_verify_temp_table(struct acpi_table_desc * table_desc, char *signature)
+ 					    table_desc->length);
+ 		if (ACPI_FAILURE(status)) {
+ 			ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY,
+-					"%4.4s " ACPI_PRINTF_UINT
++					"%4.4s 0x%8.8X%8.8X"
+ 					" Attempted table install failed",
+ 					acpi_ut_valid_acpi_name(table_desc->
+ 								signature.
+ 								ascii) ?
+ 					table_desc->signature.ascii : "????",
+-					ACPI_FORMAT_TO_UINT(table_desc->
+-							    address)));
++					ACPI_FORMAT_UINT64(table_desc->
++							   address)));
+ 			goto invalidate_and_exit;
+ 		}
+ 	}
+diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
+index 7fbc2b9dcbbb..7e69bc73bd16 100644
+--- a/drivers/acpi/acpica/tbinstal.c
++++ b/drivers/acpi/acpica/tbinstal.c
+@@ -187,8 +187,9 @@ acpi_tb_install_fixed_table(acpi_physical_address address,
+ 	status = acpi_tb_acquire_temp_table(&new_table_desc, address,
+ 					    ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL);
+ 	if (ACPI_FAILURE(status)) {
+-		ACPI_ERROR((AE_INFO, "Could not acquire table length at %p",
+-			    ACPI_CAST_PTR(void, address)));
++		ACPI_ERROR((AE_INFO,
++			    "Could not acquire table length at %8.8X%8.8X",
++			    ACPI_FORMAT_UINT64(address)));
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+@@ -246,8 +247,9 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 
+ 	status = acpi_tb_acquire_temp_table(&new_table_desc, address, flags);
+ 	if (ACPI_FAILURE(status)) {
+-		ACPI_ERROR((AE_INFO, "Could not acquire table length at %p",
+-			    ACPI_CAST_PTR(void, address)));
++		ACPI_ERROR((AE_INFO,
++			    "Could not acquire table length at %8.8X%8.8X",
++			    ACPI_FORMAT_UINT64(address)));
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+@@ -258,9 +260,10 @@ acpi_tb_install_standard_table(acpi_physical_address address,
+ 	if (!reload &&
+ 	    acpi_gbl_disable_ssdt_table_install &&
+ 	    ACPI_COMPARE_NAME(&new_table_desc.signature, ACPI_SIG_SSDT)) {
+-		ACPI_INFO((AE_INFO, "Ignoring installation of %4.4s at %p",
+-			   new_table_desc.signature.ascii, ACPI_CAST_PTR(void,
+-									 address)));
++		ACPI_INFO((AE_INFO,
++			   "Ignoring installation of %4.4s at %8.8X%8.8X",
++			   new_table_desc.signature.ascii,
++			   ACPI_FORMAT_UINT64(address)));
+ 		goto release_and_exit;
+ 	}
+ 
+@@ -428,11 +431,11 @@ finish_override:
+ 		return;
+ 	}
+ 
+-	ACPI_INFO((AE_INFO, "%4.4s " ACPI_PRINTF_UINT
+-		   " %s table override, new table: " ACPI_PRINTF_UINT,
++	ACPI_INFO((AE_INFO, "%4.4s 0x%8.8X%8.8X"
++		   " %s table override, new table: 0x%8.8X%8.8X",
+ 		   old_table_desc->signature.ascii,
+-		   ACPI_FORMAT_TO_UINT(old_table_desc->address),
+-		   override_type, ACPI_FORMAT_TO_UINT(new_table_desc.address)));
++		   ACPI_FORMAT_UINT64(old_table_desc->address),
++		   override_type, ACPI_FORMAT_UINT64(new_table_desc.address)));
+ 
+ 	/* We can now uninstall the original table */
+ 
+@@ -516,7 +519,7 @@ void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc)
+ 
+ 	if ((table_desc->flags & ACPI_TABLE_ORIGIN_MASK) ==
+ 	    ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL) {
+-		ACPI_FREE(ACPI_CAST_PTR(void, table_desc->address));
++		ACPI_FREE(ACPI_PHYSADDR_TO_PTR(table_desc->address));
+ 	}
+ 
+ 	table_desc->address = ACPI_PTR_TO_PHYSADDR(NULL);
+diff --git a/drivers/acpi/acpica/tbprint.c b/drivers/acpi/acpica/tbprint.c
+index ef16c06e5091..77ba5c71c6e7 100644
+--- a/drivers/acpi/acpica/tbprint.c
++++ b/drivers/acpi/acpica/tbprint.c
+@@ -127,18 +127,12 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ {
+ 	struct acpi_table_header local_header;
+ 
+-	/*
+-	 * The reason that we use ACPI_PRINTF_UINT and ACPI_FORMAT_TO_UINT is to
+-	 * support both 32-bit and 64-bit hosts/addresses in a consistent manner.
+-	 * The %p specifier does not emit uniform output on all hosts. On some,
+-	 * leading zeros are not supported.
+-	 */
+ 	if (ACPI_COMPARE_NAME(header->signature, ACPI_SIG_FACS)) {
+ 
+ 		/* FACS only has signature and length fields */
+ 
+-		ACPI_INFO((AE_INFO, "%-4.4s " ACPI_PRINTF_UINT " %06X",
+-			   header->signature, ACPI_FORMAT_TO_UINT(address),
++		ACPI_INFO((AE_INFO, "%-4.4s 0x%8.8X%8.8X %06X",
++			   header->signature, ACPI_FORMAT_UINT64(address),
+ 			   header->length));
+ 	} else if (ACPI_VALIDATE_RSDP_SIG(header->signature)) {
+ 
+@@ -149,9 +143,8 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ 					  header)->oem_id, ACPI_OEM_ID_SIZE);
+ 		acpi_tb_fix_string(local_header.oem_id, ACPI_OEM_ID_SIZE);
+ 
+-		ACPI_INFO((AE_INFO,
+-			   "RSDP " ACPI_PRINTF_UINT " %06X (v%.2d %-6.6s)",
+-			   ACPI_FORMAT_TO_UINT(address),
++		ACPI_INFO((AE_INFO, "RSDP 0x%8.8X%8.8X %06X (v%.2d %-6.6s)",
++			   ACPI_FORMAT_UINT64(address),
+ 			   (ACPI_CAST_PTR(struct acpi_table_rsdp, header)->
+ 			    revision >
+ 			    0) ? ACPI_CAST_PTR(struct acpi_table_rsdp,
+@@ -165,9 +158,9 @@ acpi_tb_print_table_header(acpi_physical_address address,
+ 		acpi_tb_cleanup_table_header(&local_header, header);
+ 
+ 		ACPI_INFO((AE_INFO,
+-			   "%-4.4s " ACPI_PRINTF_UINT
++			   "%-4.4s 0x%8.8X%8.8X"
+ 			   " %06X (v%.2d %-6.6s %-8.8s %08X %-4.4s %08X)",
+-			   local_header.signature, ACPI_FORMAT_TO_UINT(address),
++			   local_header.signature, ACPI_FORMAT_UINT64(address),
+ 			   local_header.length, local_header.revision,
+ 			   local_header.oem_id, local_header.oem_table_id,
+ 			   local_header.oem_revision,
+diff --git a/drivers/acpi/acpica/tbxfroot.c b/drivers/acpi/acpica/tbxfroot.c
+index eac52cf14f1a..fa76a3603aa1 100644
+--- a/drivers/acpi/acpica/tbxfroot.c
++++ b/drivers/acpi/acpica/tbxfroot.c
+@@ -142,7 +142,7 @@ acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp * rsdp)
+  *
+  ******************************************************************************/
+ 
+-acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
++acpi_status __init acpi_find_root_pointer(acpi_physical_address * table_address)
+ {
+ 	u8 *table_ptr;
+ 	u8 *mem_rover;
+@@ -200,7 +200,8 @@ acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
+ 			physical_address +=
+ 			    (u32) ACPI_PTR_DIFF(mem_rover, table_ptr);
+ 
+-			*table_address = physical_address;
++			*table_address =
++			    (acpi_physical_address) physical_address;
+ 			return_ACPI_STATUS(AE_OK);
+ 		}
+ 	}
+@@ -233,7 +234,7 @@ acpi_status __init acpi_find_root_pointer(acpi_size *table_address)
+ 		    (ACPI_HI_RSDP_WINDOW_BASE +
+ 		     ACPI_PTR_DIFF(mem_rover, table_ptr));
+ 
+-		*table_address = physical_address;
++		*table_address = (acpi_physical_address) physical_address;
+ 		return_ACPI_STATUS(AE_OK);
+ 	}
+ 
+diff --git a/drivers/acpi/acpica/utaddress.c b/drivers/acpi/acpica/utaddress.c
+index 1279f50da757..911ea8e7fe87 100644
+--- a/drivers/acpi/acpica/utaddress.c
++++ b/drivers/acpi/acpica/utaddress.c
+@@ -107,10 +107,10 @@ acpi_ut_add_address_range(acpi_adr_space_type space_id,
+ 	acpi_gbl_address_range_list[space_id] = range_info;
+ 
+ 	ACPI_DEBUG_PRINT((ACPI_DB_NAMES,
+-			  "\nAdded [%4.4s] address range: 0x%p-0x%p\n",
++			  "\nAdded [%4.4s] address range: 0x%8.8X%8.8X-0x%8.8X%8.8X\n",
+ 			  acpi_ut_get_node_name(range_info->region_node),
+-			  ACPI_CAST_PTR(void, address),
+-			  ACPI_CAST_PTR(void, range_info->end_address)));
++			  ACPI_FORMAT_UINT64(address),
++			  ACPI_FORMAT_UINT64(range_info->end_address)));
+ 
+ 	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+ 	return_ACPI_STATUS(AE_OK);
+@@ -160,15 +160,13 @@ acpi_ut_remove_address_range(acpi_adr_space_type space_id,
+ 			}
+ 
+ 			ACPI_DEBUG_PRINT((ACPI_DB_NAMES,
+-					  "\nRemoved [%4.4s] address range: 0x%p-0x%p\n",
++					  "\nRemoved [%4.4s] address range: 0x%8.8X%8.8X-0x%8.8X%8.8X\n",
+ 					  acpi_ut_get_node_name(range_info->
+ 								region_node),
+-					  ACPI_CAST_PTR(void,
+-							range_info->
+-							start_address),
+-					  ACPI_CAST_PTR(void,
+-							range_info->
+-							end_address)));
++					  ACPI_FORMAT_UINT64(range_info->
++							     start_address),
++					  ACPI_FORMAT_UINT64(range_info->
++							     end_address)));
+ 
+ 			ACPI_FREE(range_info);
+ 			return_VOID;
+@@ -245,16 +243,14 @@ acpi_ut_check_address_range(acpi_adr_space_type space_id,
+ 								  region_node);
+ 
+ 				ACPI_WARNING((AE_INFO,
+-					      "%s range 0x%p-0x%p conflicts with OpRegion 0x%p-0x%p (%s)",
++					      "%s range 0x%8.8X%8.8X-0x%8.8X%8.8X conflicts with OpRegion 0x%8.8X%8.8X-0x%8.8X%8.8X (%s)",
+ 					      acpi_ut_get_region_name(space_id),
+-					      ACPI_CAST_PTR(void, address),
+-					      ACPI_CAST_PTR(void, end_address),
+-					      ACPI_CAST_PTR(void,
+-							    range_info->
+-							    start_address),
+-					      ACPI_CAST_PTR(void,
+-							    range_info->
+-							    end_address),
++					      ACPI_FORMAT_UINT64(address),
++					      ACPI_FORMAT_UINT64(end_address),
++					      ACPI_FORMAT_UINT64(range_info->
++								 start_address),
++					      ACPI_FORMAT_UINT64(range_info->
++								 end_address),
+ 					      pathname));
+ 				ACPI_FREE(pathname);
+ 			}
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 5589a6e2a023..8244f013f210 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -573,7 +573,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_get_resources);
+  * @ares: Input ACPI resource object.
+  * @types: Valid resource types of IORESOURCE_XXX
+  *
+- * This is a hepler function to support acpi_dev_get_resources(), which filters
++ * This is a helper function to support acpi_dev_get_resources(), which filters
+  * ACPI resource objects according to resource types.
+  */
+ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+diff --git a/drivers/acpi/sbshc.c b/drivers/acpi/sbshc.c
+index 26e5b5060523..bf034f8b7c1a 100644
+--- a/drivers/acpi/sbshc.c
++++ b/drivers/acpi/sbshc.c
+@@ -14,6 +14,7 @@
+ #include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/interrupt.h>
++#include <linux/dmi.h>
+ #include "sbshc.h"
+ 
+ #define PREFIX "ACPI: "
+@@ -87,6 +88,8 @@ enum acpi_smb_offset {
+ 	ACPI_SMB_ALARM_DATA = 0x26,	/* 2 bytes alarm data */
+ };
+ 
++static bool macbook;
++
+ static inline int smb_hc_read(struct acpi_smb_hc *hc, u8 address, u8 *data)
+ {
+ 	return ec_read(hc->offset + address, data);
+@@ -132,6 +135,8 @@ static int acpi_smbus_transaction(struct acpi_smb_hc *hc, u8 protocol,
+ 	}
+ 
+ 	mutex_lock(&hc->lock);
++	if (macbook)
++		udelay(5);
+ 	if (smb_hc_read(hc, ACPI_SMB_PROTOCOL, &temp))
+ 		goto end;
+ 	if (temp) {
+@@ -257,12 +262,29 @@ extern int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
+ 			      acpi_handle handle, acpi_ec_query_func func,
+ 			      void *data);
+ 
++static int macbook_dmi_match(const struct dmi_system_id *d)
++{
++	pr_debug("Detected MacBook, enabling workaround\n");
++	macbook = true;
++	return 0;
++}
++
++static struct dmi_system_id acpi_smbus_dmi_table[] = {
++	{ macbook_dmi_match, "Apple MacBook", {
++	  DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
++	  DMI_MATCH(DMI_PRODUCT_NAME, "MacBook") },
++	},
++	{ },
++};
++
+ static int acpi_smbus_hc_add(struct acpi_device *device)
+ {
+ 	int status;
+ 	unsigned long long val;
+ 	struct acpi_smb_hc *hc;
+ 
++	dmi_check_system(acpi_smbus_dmi_table);
++
+ 	if (!device)
+ 		return -EINVAL;
+ 
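
The MacBook workaround above follows the usual DMI quirk shape: a match table consulted once at probe time, whose callback flips a flag that the hot path (acpi_smbus_transaction) then checks cheaply. A userspace sketch of the table-driven match, with the struct and matcher simplified from the dmi_system_id machinery:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct dmi_id {
	int (*callback)(const struct dmi_id *d);
	const char *ident, *board_vendor, *product_prefix;
};

static bool macbook;

static int macbook_match(const struct dmi_id *d)
{
	printf("Detected %s, enabling workaround\n", d->ident);
	macbook = true;
	return 0;
}

static const struct dmi_id quirk_table[] = {
	{ macbook_match, "Apple MacBook", "Apple", "MacBook" },
	{ NULL },	/* terminator, as in the real table */
};

static void check_system(const char *vendor, const char *product)
{
	for (const struct dmi_id *d = quirk_table; d->callback; d++)
		if (!strcmp(vendor, d->board_vendor) &&
		    !strncmp(product, d->product_prefix,
			     strlen(d->product_prefix)))
			d->callback(d);
}

int main(void)
{
	check_system("Apple", "MacBook2,1");
	printf("macbook = %d\n", macbook);	/* 1: the 5us delay applies */
	return 0;
}
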
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d1f168b73634..773e964f14d9 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1672,8 +1672,8 @@ out:
+ 
+ static void loop_remove(struct loop_device *lo)
+ {
+-	del_gendisk(lo->lo_disk);
+ 	blk_cleanup_queue(lo->lo_queue);
++	del_gendisk(lo->lo_disk);
+ 	blk_mq_free_tag_set(&lo->tag_set);
+ 	put_disk(lo->lo_disk);
+ 	kfree(lo);
+diff --git a/drivers/gpio/gpiolib-sysfs.c b/drivers/gpio/gpiolib-sysfs.c
+index 7722ed53bd65..af3bc7a8033b 100644
+--- a/drivers/gpio/gpiolib-sysfs.c
++++ b/drivers/gpio/gpiolib-sysfs.c
+@@ -551,6 +551,7 @@ static struct class gpio_class = {
+  */
+ int gpiod_export(struct gpio_desc *desc, bool direction_may_change)
+ {
++	struct gpio_chip	*chip;
+ 	unsigned long		flags;
+ 	int			status;
+ 	const char		*ioname = NULL;
+@@ -568,8 +569,16 @@ int gpiod_export(struct gpio_desc *desc, bool direction_may_change)
+ 		return -EINVAL;
+ 	}
+ 
++	chip = desc->chip;
++
+ 	mutex_lock(&sysfs_lock);
+ 
++	/* check if chip is being removed */
++	if (!chip || !chip->exported) {
++		status = -ENODEV;
++		goto fail_unlock;
++	}
++
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 	if (!test_bit(FLAG_REQUESTED, &desc->flags) ||
+ 	     test_bit(FLAG_EXPORT, &desc->flags)) {
+@@ -783,12 +792,15 @@ void gpiochip_unexport(struct gpio_chip *chip)
+ {
+ 	int			status;
+ 	struct device		*dev;
++	struct gpio_desc *desc;
++	unsigned int i;
+ 
+ 	mutex_lock(&sysfs_lock);
+ 	dev = class_find_device(&gpio_class, NULL, chip, match_export);
+ 	if (dev) {
+ 		put_device(dev);
+ 		device_unregister(dev);
++		/* prevent further gpiod exports */
+ 		chip->exported = false;
+ 		status = 0;
+ 	} else
+@@ -797,6 +809,13 @@ void gpiochip_unexport(struct gpio_chip *chip)
+ 
+ 	if (status)
+ 		chip_dbg(chip, "%s: status %d\n", __func__, status);
++
++	/* unregister gpiod class devices owned by sysfs */
++	for (i = 0; i < chip->ngpio; i++) {
++		desc = &chip->desc[i];
++		if (test_and_clear_bit(FLAG_SYSFS, &desc->flags))
++			gpiod_free(desc);
++	}
+ }
+ 
+ static int __init gpiolib_sysfs_init(void)
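
The gpiod_export() change closes a race with chip removal by re-checking chip->exported under sysfs_lock, the same lock gpiochip_unexport() holds while clearing it: either the export sees the flag cleared and bails with -ENODEV, or it completes before removal proceeds. A sketch of that check-under-lock handshake (compile with -pthread; 19 stands in for ENODEV):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define ENODEV 19

static pthread_mutex_t sysfs_lock = PTHREAD_MUTEX_INITIALIZER;
static bool chip_exported = true;	/* cleared by the removal path */

static int gpiod_export_sketch(void)
{
	int status = 0;

	pthread_mutex_lock(&sysfs_lock);
	if (!chip_exported) {		/* chip is being removed: bail out */
		status = -ENODEV;
		goto out;
	}
	/* ... safe to create the class device here ... */
out:
	pthread_mutex_unlock(&sysfs_lock);
	return status;
}

static void gpiochip_unexport_sketch(void)
{
	pthread_mutex_lock(&sysfs_lock);
	chip_exported = false;		/* prevent further exports */
	pthread_mutex_unlock(&sysfs_lock);
}

int main(void)
{
	printf("export while alive:   %d\n", gpiod_export_sketch());
	gpiochip_unexport_sketch();
	printf("export after removal: %d\n", gpiod_export_sketch());
	return 0;
}
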
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index d8135adb2238..39762a7d2ec7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -429,9 +429,10 @@ static int unregister_process_nocpsch(struct device_queue_manager *dqm,
+ 
+ 	BUG_ON(!dqm || !qpd);
+ 
+-	BUG_ON(!list_empty(&qpd->queues_list));
++	pr_debug("In func %s\n", __func__);
+ 
+-	pr_debug("kfd: In func %s\n", __func__);
++	pr_debug("qpd->queues_list is %s\n",
++			list_empty(&qpd->queues_list) ? "empty" : "not empty");
+ 
+ 	retval = 0;
+ 	mutex_lock(&dqm->lock);
+@@ -878,6 +879,8 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
+ 		return -ENOMEM;
+ 	}
+ 
++	init_sdma_vm(dqm, q, qpd);
++
+ 	retval = mqd->init_mqd(mqd, &q->mqd, &q->mqd_mem_obj,
+ 				&q->gart_mqd_addr, &q->properties);
+ 	if (retval != 0)
+diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
+index 10574a0c3a55..5769db4f51f3 100644
+--- a/drivers/gpu/drm/drm_irq.c
++++ b/drivers/gpu/drm/drm_irq.c
+@@ -131,12 +131,11 @@ static void drm_update_vblank_count(struct drm_device *dev, int crtc)
+ 
+ 	/* Reinitialize corresponding vblank timestamp if high-precision query
+ 	 * available. Skip this step if query unsupported or failed. Will
+-	 * reinitialize delayed at next vblank interrupt in that case.
++	 * reinitialize delayed at next vblank interrupt in that case and
++	 * assign 0 for now, to mark the vblanktimestamp as invalid.
+ 	 */
+-	if (rc) {
+-		tslot = atomic_read(&vblank->count) + diff;
+-		vblanktimestamp(dev, crtc, tslot) = t_vblank;
+-	}
++	tslot = atomic_read(&vblank->count) + diff;
++	vblanktimestamp(dev, crtc, tslot) = rc ? t_vblank : (struct timeval) {0, 0};
+ 
+ 	smp_mb__before_atomic();
+ 	atomic_add(diff, &vblank->count);
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index a74aaf9242b9..88b36a9173c9 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1176,7 +1176,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ 
+ 	pipe_config->has_dp_encoder = true;
+ 	pipe_config->has_drrs = false;
+-	pipe_config->has_audio = intel_dp->has_audio;
++	pipe_config->has_audio = intel_dp->has_audio && port != PORT_A;
+ 
+ 	if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) {
+ 		intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
+@@ -2026,8 +2026,8 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
+ 	int dotclock;
+ 
+ 	tmp = I915_READ(intel_dp->output_reg);
+-	if (tmp & DP_AUDIO_OUTPUT_ENABLE)
+-		pipe_config->has_audio = true;
++
++	pipe_config->has_audio = tmp & DP_AUDIO_OUTPUT_ENABLE && port != PORT_A;
+ 
+ 	if ((port == PORT_A) || !HAS_PCH_CPT(dev)) {
+ 		if (tmp & DP_SYNC_HS_HIGH)
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index 071b96d6e146..fbc2a83795fa 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -812,12 +812,28 @@ static int intel_dual_link_lvds_callback(const struct dmi_system_id *id)
+ static const struct dmi_system_id intel_dual_link_lvds[] = {
+ 	{
+ 		.callback = intel_dual_link_lvds_callback,
+-		.ident = "Apple MacBook Pro (Core i5/i7 Series)",
++		.ident = "Apple MacBook Pro 15\" (2010)",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro6,2"),
++		},
++	},
++	{
++		.callback = intel_dual_link_lvds_callback,
++		.ident = "Apple MacBook Pro 15\" (2011)",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro8,2"),
+ 		},
+ 	},
++	{
++		.callback = intel_dual_link_lvds_callback,
++		.ident = "Apple MacBook Pro 15\" (2012)",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,1"),
++		},
++	},
+ 	{ }	/* terminating entry */
+ };
+ 
+@@ -847,6 +863,11 @@ static bool compute_is_dual_link_lvds(struct intel_lvds_encoder *lvds_encoder)
+ 	if (i915.lvds_channel_mode > 0)
+ 		return i915.lvds_channel_mode == 2;
+ 
++	/* single channel LVDS is limited to 112 MHz */
++	if (lvds_encoder->attached_connector->base.panel.fixed_mode->clock
++	    > 112999)
++		return true;
++
+ 	if (dmi_check_system(intel_dual_link_lvds))
+ 		return true;
+ 
+@@ -1104,6 +1125,8 @@ void intel_lvds_init(struct drm_device *dev)
+ out:
+ 	mutex_unlock(&dev->mode_config.mutex);
+ 
++	intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
++
+ 	lvds_encoder->is_dual_link = compute_is_dual_link_lvds(lvds_encoder);
+ 	DRM_DEBUG_KMS("detected %s-link lvds configuration\n",
+ 		      lvds_encoder->is_dual_link ? "dual" : "single");
+@@ -1118,7 +1141,6 @@ out:
+ 	}
+ 	drm_connector_register(connector);
+ 
+-	intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode);
+ 	intel_panel_setup_backlight(connector, INVALID_PIPE);
+ 
+ 	return;
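
The new clock check above encodes a hardware limit: a single LVDS channel carries at most roughly 112 MHz of pixel clock, so any fixed mode faster than that must be driven dual-link even when DMI quirks and module parameters say nothing. The heuristic in isolation, with the threshold in kHz as in the hunk and typical CVT mode clocks as inputs:

#include <stdbool.h>
#include <stdio.h>

/* single-channel LVDS tops out around 112 MHz */
static bool needs_dual_link(int pixel_clock_khz)
{
	return pixel_clock_khz > 112999;
}

int main(void)
{
	printf("1440x900@60  (106500 kHz): %s-link\n",
	       needs_dual_link(106500) ? "dual" : "single");
	printf("1920x1200@60 (154000 kHz): %s-link\n",
	       needs_dual_link(154000) ? "dual" : "single");
	return 0;
}
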
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
+index c0ecd128b14b..7348f222684d 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.c
++++ b/drivers/gpu/drm/radeon/radeon_asic.c
+@@ -1180,7 +1180,7 @@ static struct radeon_asic rs780_asic = {
+ static struct radeon_asic_ring rv770_uvd_ring = {
+ 	.ib_execute = &uvd_v1_0_ib_execute,
+ 	.emit_fence = &uvd_v2_2_fence_emit,
+-	.emit_semaphore = &uvd_v1_0_semaphore_emit,
++	.emit_semaphore = &uvd_v2_2_semaphore_emit,
+ 	.cs_parse = &radeon_uvd_cs_parse,
+ 	.ring_test = &uvd_v1_0_ring_test,
+ 	.ib_test = &uvd_v1_0_ib_test,
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
+index 72bdd3bf0d8e..c2fd3a5e6c55 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.h
++++ b/drivers/gpu/drm/radeon/radeon_asic.h
+@@ -919,6 +919,10 @@ void uvd_v1_0_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
+ int uvd_v2_2_resume(struct radeon_device *rdev);
+ void uvd_v2_2_fence_emit(struct radeon_device *rdev,
+ 			 struct radeon_fence *fence);
++bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev,
++			     struct radeon_ring *ring,
++			     struct radeon_semaphore *semaphore,
++			     bool emit_wait);
+ 
+ /* uvd v3.1 */
+ bool uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b7d33a13db9f..b7c6bb69f3c7 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -464,6 +464,10 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 		return;
+ 
+ 	rdev = connector->encoder->dev->dev_private;
++
++	if (!radeon_audio_chipset_supported(rdev))
++		return;
++
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index b292aca0f342..edafd3c2b170 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -591,8 +591,7 @@ static void radeon_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
+ {
+ 	struct radeon_device *rdev = radeon_get_rdev(ttm->bdev);
+ 	struct radeon_ttm_tt *gtt = (void *)ttm;
+-	struct scatterlist *sg;
+-	int i;
++	struct sg_page_iter sg_iter;
+ 
+ 	int write = !(gtt->userflags & RADEON_GEM_USERPTR_READONLY);
+ 	enum dma_data_direction direction = write ?
+@@ -605,9 +604,8 @@ static void radeon_ttm_tt_unpin_userptr(struct ttm_tt *ttm)
+ 	/* free the sg table and pages again */
+ 	dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
+ 
+-	for_each_sg(ttm->sg->sgl, sg, ttm->sg->nents, i) {
+-		struct page *page = sg_page(sg);
+-
++	for_each_sg_page(ttm->sg->sgl, &sg_iter, ttm->sg->nents, 0) {
++		struct page *page = sg_page_iter_page(&sg_iter);
+ 		if (!(gtt->userflags & RADEON_GEM_USERPTR_READONLY))
+ 			set_page_dirty(page);
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
+index c10b2aec6450..cd630287cf0a 100644
+--- a/drivers/gpu/drm/radeon/radeon_uvd.c
++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
+@@ -396,6 +396,29 @@ static int radeon_uvd_cs_msg_decode(uint32_t *msg, unsigned buf_sizes[])
+ 	return 0;
+ }
+ 
++static int radeon_uvd_validate_codec(struct radeon_cs_parser *p,
++				     unsigned stream_type)
++{
++	switch (stream_type) {
++	case 0: /* H264 */
++	case 1: /* VC1 */
++		/* always supported */
++		return 0;
++
++	case 3: /* MPEG2 */
++	case 4: /* MPEG4 */
++		/* only since UVD 3 */
++		if (p->rdev->family >= CHIP_PALM)
++			return 0;
++
++		/* fall through */
++	default:
++		DRM_ERROR("UVD codec not supported by hardware %d!\n",
++			  stream_type);
++		return -EINVAL;
++	}
++}
++
+ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo,
+ 			     unsigned offset, unsigned buf_sizes[])
+ {
+@@ -436,50 +459,70 @@ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (msg_type == 1) {
+-		/* it's a decode msg, calc buffer sizes */
+-		r = radeon_uvd_cs_msg_decode(msg, buf_sizes);
+-		/* calc image size (width * height) */
+-		img_size = msg[6] * msg[7];
++	switch (msg_type) {
++	case 0:
++		/* it's a create msg, calc image size (width * height) */
++		img_size = msg[7] * msg[8];
++
++		r = radeon_uvd_validate_codec(p, msg[4]);
++		radeon_bo_kunmap(bo);
++		if (r)
++			return r;
++
++		/* try to alloc a new handle */
++		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&p->rdev->uvd.handles[i]) == handle) {
++				DRM_ERROR("Handle 0x%x already in use!\n", handle);
++				return -EINVAL;
++			}
++
++			if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) {
++				p->rdev->uvd.filp[i] = p->filp;
++				p->rdev->uvd.img_size[i] = img_size;
++				return 0;
++			}
++		}
++
++		DRM_ERROR("No more free UVD handles!\n");
++		return -EINVAL;
++
++	case 1:
++		/* it's a decode msg, validate codec and calc buffer sizes */
++		r = radeon_uvd_validate_codec(p, msg[4]);
++		if (!r)
++			r = radeon_uvd_cs_msg_decode(msg, buf_sizes);
+ 		radeon_bo_kunmap(bo);
+ 		if (r)
+ 			return r;
+ 
+-	} else if (msg_type == 2) {
++		/* validate the handle */
++		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
++			if (atomic_read(&p->rdev->uvd.handles[i]) == handle) {
++				if (p->rdev->uvd.filp[i] != p->filp) {
++					DRM_ERROR("UVD handle collision detected!\n");
++					return -EINVAL;
++				}
++				return 0;
++			}
++		}
++
++		DRM_ERROR("Invalid UVD handle 0x%x!\n", handle);
++		return -ENOENT;
++
++	case 2:
+ 		/* it's a destroy msg, free the handle */
+ 		for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i)
+ 			atomic_cmpxchg(&p->rdev->uvd.handles[i], handle, 0);
+ 		radeon_bo_kunmap(bo);
+ 		return 0;
+-	} else {
+-		/* it's a create msg, calc image size (width * height) */
+-		img_size = msg[7] * msg[8];
+-		radeon_bo_kunmap(bo);
+ 
+-		if (msg_type != 0) {
+-			DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
+-			return -EINVAL;
+-		}
+-
+-		/* it's a create msg, no special handling needed */
+-	}
+-
+-	/* create or decode, validate the handle */
+-	for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
+-		if (atomic_read(&p->rdev->uvd.handles[i]) == handle)
+-			return 0;
+-	}
++	default:
+ 
+-	/* handle not found try to alloc a new one */
+-	for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
+-		if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) {
+-			p->rdev->uvd.filp[i] = p->filp;
+-			p->rdev->uvd.img_size[i] = img_size;
+-			return 0;
+-		}
++		DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
++		return -EINVAL;
+ 	}
+ 
+-	DRM_ERROR("No more free UVD handles!\n");
++	BUG();
+ 	return -EINVAL;
+ }
+ 
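
The rewritten message handler above ties each UVD handle to the struct file that created it: create claims a free slot atomically, and decode only accepts a handle whose recorded owner matches the caller. A minimal userspace sketch of that ownership pattern (handle_claim/handle_lookup and the table layout are illustrative, not kernel APIs):

    #include <stdatomic.h>
    #include <stdio.h>

    #define MAX_HANDLES 10

    static atomic_uint handles[MAX_HANDLES];   /* 0 means "slot free" */
    static const void *owners[MAX_HANDLES];    /* who claimed each slot */

    /* Claim a nonzero handle for owner "who"; reject duplicates. */
    static int handle_claim(unsigned int handle, const void *who)
    {
        for (int i = 0; i < MAX_HANDLES; ++i)
            if (atomic_load(&handles[i]) == handle)
                return -1;                     /* already in use */
        for (int i = 0; i < MAX_HANDLES; ++i) {
            unsigned int free_slot = 0;
            if (atomic_compare_exchange_strong(&handles[i], &free_slot, handle)) {
                owners[i] = who;
                return i;
            }
        }
        return -1;                             /* table full */
    }

    /* Look up a handle, but only for the owner that created it. */
    static int handle_lookup(unsigned int handle, const void *who)
    {
        for (int i = 0; i < MAX_HANDLES; ++i)
            if (atomic_load(&handles[i]) == handle)
                return owners[i] == who ? i : -1;  /* -1: collision */
        return -1;                             /* not found */
    }

    int main(void)
    {
        int a, b;
        a = handle_claim(0x10, &a);            /* first owner */
        b = handle_lookup(0x10, &b);           /* a different owner */
        printf("claim=%d lookup-by-other=%d\n", a, b);
        return 0;
    }

The cmpxchg-on-zero idiom lets concurrent submitters race for slots without taking a lock, which is why both the duplicate scan and the claim operate on atomics.
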
+diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
+index 976fe432f4e2..7ed561225007 100644
+--- a/drivers/gpu/drm/radeon/radeon_vce.c
++++ b/drivers/gpu/drm/radeon/radeon_vce.c
+@@ -493,18 +493,27 @@ int radeon_vce_cs_reloc(struct radeon_cs_parser *p, int lo, int hi,
+  *
+  * @p: parser context
+  * @handle: handle to validate
++ * @allocated: allocated a new handle?
+  *
+  * Validates the handle and returns the found session index or -EINVAL
+  * if we don't have another free session index.
+  */
+-int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
++static int radeon_vce_validate_handle(struct radeon_cs_parser *p,
++				      uint32_t handle, bool *allocated)
+ {
+ 	unsigned i;
+ 
++	*allocated = false;
++
+ 	/* validate the handle */
+ 	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
+-		if (atomic_read(&p->rdev->vce.handles[i]) == handle)
++		if (atomic_read(&p->rdev->vce.handles[i]) == handle) {
++			if (p->rdev->vce.filp[i] != p->filp) {
++				DRM_ERROR("VCE handle collision detected!\n");
++				return -EINVAL;
++			}
+ 			return i;
++		}
+ 	}
+ 
+ 	/* handle not found try to alloc a new one */
+@@ -512,6 +521,7 @@ int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
+ 		if (!atomic_cmpxchg(&p->rdev->vce.handles[i], 0, handle)) {
+ 			p->rdev->vce.filp[i] = p->filp;
+ 			p->rdev->vce.img_size[i] = 0;
++			*allocated = true;
+ 			return i;
+ 		}
+ 	}
+@@ -529,10 +539,10 @@ int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
+ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ {
+ 	int session_idx = -1;
+-	bool destroyed = false;
++	bool destroyed = false, created = false, allocated = false;
+ 	uint32_t tmp, handle = 0;
+ 	uint32_t *size = &tmp;
+-	int i, r;
++	int i, r = 0;
+ 
+ 	while (p->idx < p->chunk_ib->length_dw) {
+ 		uint32_t len = radeon_get_ib_value(p, p->idx);
+@@ -540,18 +550,21 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 
+ 		if ((len < 8) || (len & 3)) {
+ 			DRM_ERROR("invalid VCE command length (%d)!\n", len);
+-                	return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		if (destroyed) {
+ 			DRM_ERROR("No other command allowed after destroy!\n");
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		switch (cmd) {
+ 		case 0x00000001: // session
+ 			handle = radeon_get_ib_value(p, p->idx + 2);
+-			session_idx = radeon_vce_validate_handle(p, handle);
++			session_idx = radeon_vce_validate_handle(p, handle,
++								 &allocated);
+ 			if (session_idx < 0)
+ 				return session_idx;
+ 			size = &p->rdev->vce.img_size[session_idx];
+@@ -561,6 +574,13 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			break;
+ 
+ 		case 0x01000001: // create
++			created = true;
++			if (!allocated) {
++				DRM_ERROR("Handle already in use!\n");
++				r = -EINVAL;
++				goto out;
++			}
++
+ 			*size = radeon_get_ib_value(p, p->idx + 8) *
+ 				radeon_get_ib_value(p, p->idx + 10) *
+ 				8 * 3 / 2;
+@@ -577,12 +597,12 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 10, p->idx + 9,
+ 						*size);
+ 			if (r)
+-				return r;
++				goto out;
+ 
+ 			r = radeon_vce_cs_reloc(p, p->idx + 12, p->idx + 11,
+ 						*size / 3);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x02000001: // destroy
+@@ -593,7 +613,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						*size * 2);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x05000004: // video bitstream buffer
+@@ -601,36 +621,47 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						tmp);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		case 0x05000005: // feedback buffer
+ 			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+ 						4096);
+ 			if (r)
+-				return r;
++				goto out;
+ 			break;
+ 
+ 		default:
+ 			DRM_ERROR("invalid VCE command (0x%x)!\n", cmd);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		if (session_idx == -1) {
+ 			DRM_ERROR("no session command at start of IB\n");
+-			return -EINVAL;
++			r = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		p->idx += len / 4;
+ 	}
+ 
+-	if (destroyed) {
+-		/* IB contains a destroy msg, free the handle */
++	if (allocated && !created) {
++		DRM_ERROR("New session without create command!\n");
++		r = -ENOENT;
++	}
++
++out:
++	if ((!r && destroyed) || (r && allocated)) {
++		/*
++		 * IB contains a destroy msg or we have allocated a
++		 * handle and got an error; either way, free the handle
++		 */
+ 		for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i)
+ 			atomic_cmpxchg(&p->rdev->vce.handles[i], handle, 0);
+ 	}
+ 
+-	return 0;
++	return r;
+ }
+ 
+ /**
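
The parser rewrite above funnels every failure through a single out: label so a handle claimed earlier in the IB is released both on error and after a successful destroy. A compilable reduction of that single-exit cleanup discipline (parse_ib and its steps are invented for illustration):

    #include <errno.h>
    #include <stdio.h>

    static unsigned int handle_slot;       /* stand-in for the handle table */

    static int parse_ib(int want_destroy, int fail_step)
    {
        int allocated = 0, created = 0, r = 0;

        handle_slot = 42;                  /* session cmd claimed a handle */
        allocated = 1;

        if (fail_step == 1) { r = -EINVAL; goto out; }
        created = 1;                       /* saw the create command */
        if (fail_step == 2) { r = -EINVAL; goto out; }

        if (allocated && !created)
            r = -ENOENT;                   /* new session without create */
    out:
        if ((!r && want_destroy) || (r && allocated))
            handle_slot = 0;               /* free the handle either way */
        return r;
    }

    int main(void)
    {
        int r = parse_ib(0, 1);
        printf("error path frees handle: r=%d slot=%u\n", r, handle_slot);
        return 0;
    }
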
+diff --git a/drivers/gpu/drm/radeon/rv770d.h b/drivers/gpu/drm/radeon/rv770d.h
+index 3cf1e2921545..9ef2064b1c9c 100644
+--- a/drivers/gpu/drm/radeon/rv770d.h
++++ b/drivers/gpu/drm/radeon/rv770d.h
+@@ -989,6 +989,9 @@
+ 			 ((n) & 0x3FFF) << 16)
+ 
+ /* UVD */
++#define UVD_SEMA_ADDR_LOW				0xef00
++#define UVD_SEMA_ADDR_HIGH				0xef04
++#define UVD_SEMA_CMD					0xef08
+ #define UVD_GPCOM_VCPU_CMD				0xef0c
+ #define UVD_GPCOM_VCPU_DATA0				0xef10
+ #define UVD_GPCOM_VCPU_DATA1				0xef14
+diff --git a/drivers/gpu/drm/radeon/uvd_v1_0.c b/drivers/gpu/drm/radeon/uvd_v1_0.c
+index e72b3cb59358..c6b1cbca47fc 100644
+--- a/drivers/gpu/drm/radeon/uvd_v1_0.c
++++ b/drivers/gpu/drm/radeon/uvd_v1_0.c
+@@ -466,18 +466,8 @@ bool uvd_v1_0_semaphore_emit(struct radeon_device *rdev,
+ 			     struct radeon_semaphore *semaphore,
+ 			     bool emit_wait)
+ {
+-	uint64_t addr = semaphore->gpu_addr;
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0));
+-	radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF);
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0));
+-	radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF);
+-
+-	radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
+-	radeon_ring_write(ring, emit_wait ? 1 : 0);
+-
+-	return true;
++	/* disable semaphores for UVD V1 hardware */
++	return false;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/radeon/uvd_v2_2.c b/drivers/gpu/drm/radeon/uvd_v2_2.c
+index 89193519f8a1..7ed778cec7c6 100644
+--- a/drivers/gpu/drm/radeon/uvd_v2_2.c
++++ b/drivers/gpu/drm/radeon/uvd_v2_2.c
+@@ -60,6 +60,35 @@ void uvd_v2_2_fence_emit(struct radeon_device *rdev,
+ }
+ 
+ /**
++ * uvd_v2_2_semaphore_emit - emit semaphore command
++ *
++ * @rdev: radeon_device pointer
++ * @ring: radeon_ring pointer
++ * @semaphore: semaphore to emit commands for
++ * @emit_wait: true if we should emit a wait command
++ *
++ * Emit a semaphore command (either wait or signal) to the UVD ring.
++ */
++bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev,
++			     struct radeon_ring *ring,
++			     struct radeon_semaphore *semaphore,
++			     bool emit_wait)
++{
++	uint64_t addr = semaphore->gpu_addr;
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0));
++	radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF);
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0));
++	radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF);
++
++	radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0));
++	radeon_ring_write(ring, emit_wait ? 1 : 0);
++
++	return true;
++}
++
++/**
+  * uvd_v2_2_resume - memory controller programming
+  *
+  * @rdev: radeon_device pointer
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index d570030d899c..06441a43c3aa 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -859,19 +859,27 @@ static void cma_save_ib_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id
+ 	memcpy(&ib->sib_addr, &path->dgid, 16);
+ }
+ 
++static __be16 ss_get_port(const struct sockaddr_storage *ss)
++{
++	if (ss->ss_family == AF_INET)
++		return ((struct sockaddr_in *)ss)->sin_port;
++	else if (ss->ss_family == AF_INET6)
++		return ((struct sockaddr_in6 *)ss)->sin6_port;
++	BUG();
++}
++
+ static void cma_save_ip4_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id,
+ 			      struct cma_hdr *hdr)
+ {
+-	struct sockaddr_in *listen4, *ip4;
++	struct sockaddr_in *ip4;
+ 
+-	listen4 = (struct sockaddr_in *) &listen_id->route.addr.src_addr;
+ 	ip4 = (struct sockaddr_in *) &id->route.addr.src_addr;
+-	ip4->sin_family = listen4->sin_family;
++	ip4->sin_family = AF_INET;
+ 	ip4->sin_addr.s_addr = hdr->dst_addr.ip4.addr;
+-	ip4->sin_port = listen4->sin_port;
++	ip4->sin_port = ss_get_port(&listen_id->route.addr.src_addr);
+ 
+ 	ip4 = (struct sockaddr_in *) &id->route.addr.dst_addr;
+-	ip4->sin_family = listen4->sin_family;
++	ip4->sin_family = AF_INET;
+ 	ip4->sin_addr.s_addr = hdr->src_addr.ip4.addr;
+ 	ip4->sin_port = hdr->port;
+ }
+@@ -879,16 +887,15 @@ static void cma_save_ip4_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_i
+ static void cma_save_ip6_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id,
+ 			      struct cma_hdr *hdr)
+ {
+-	struct sockaddr_in6 *listen6, *ip6;
++	struct sockaddr_in6 *ip6;
+ 
+-	listen6 = (struct sockaddr_in6 *) &listen_id->route.addr.src_addr;
+ 	ip6 = (struct sockaddr_in6 *) &id->route.addr.src_addr;
+-	ip6->sin6_family = listen6->sin6_family;
++	ip6->sin6_family = AF_INET6;
+ 	ip6->sin6_addr = hdr->dst_addr.ip6;
+-	ip6->sin6_port = listen6->sin6_port;
++	ip6->sin6_port = ss_get_port(&listen_id->route.addr.src_addr);
+ 
+ 	ip6 = (struct sockaddr_in6 *) &id->route.addr.dst_addr;
+-	ip6->sin6_family = listen6->sin6_family;
++	ip6->sin6_family = AF_INET6;
+ 	ip6->sin6_addr = hdr->src_addr.ip6;
+ 	ip6->sin6_port = hdr->port;
+ }
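
The ss_get_port() helper added above dispatches on ss_family instead of trusting the listener's sockaddr to be the same family as the incoming header; a listener bound to an IPv6 wildcard can legitimately receive an IPv4 request, so the two layouts cannot be assumed to match. The same shape in plain userspace C:

    #include <assert.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static in_port_t ss_port(const struct sockaddr_storage *ss)
    {
        if (ss->ss_family == AF_INET)
            return ((const struct sockaddr_in *)ss)->sin_port;
        if (ss->ss_family == AF_INET6)
            return ((const struct sockaddr_in6 *)ss)->sin6_port;
        assert(0 && "unexpected address family");
        return 0;
    }

    int main(void)
    {
        struct sockaddr_storage ss = { 0 };
        struct sockaddr_in *v4 = (struct sockaddr_in *)&ss;

        v4->sin_family = AF_INET;
        v4->sin_port = htons(4791);
        printf("port=%u\n", ntohs(ss_port(&ss)));
        return 0;
    }
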
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 414739295d04..713a96237a80 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -925,10 +925,11 @@ static int crypt_convert(struct crypt_config *cc,
+ 
+ 		switch (r) {
+ 		/* async */
+-		case -EINPROGRESS:
+ 		case -EBUSY:
+ 			wait_for_completion(&ctx->restart);
+ 			reinit_completion(&ctx->restart);
++			/* fall through */
++		case -EINPROGRESS:
+ 			ctx->req = NULL;
+ 			ctx->cc_sector++;
+ 			continue;
+@@ -1345,8 +1346,10 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
+ 	struct crypt_config *cc = io->cc;
+ 
+-	if (error == -EINPROGRESS)
++	if (error == -EINPROGRESS) {
++		complete(&ctx->restart);
+ 		return;
++	}
+ 
+ 	if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
+ 		error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
+@@ -1357,15 +1360,12 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
+ 	crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
+ 
+ 	if (!atomic_dec_and_test(&ctx->cc_pending))
+-		goto done;
++		return;
+ 
+ 	if (bio_data_dir(io->base_bio) == READ)
+ 		kcryptd_crypt_read_done(io);
+ 	else
+ 		kcryptd_crypt_write_io_submit(io, 1);
+-done:
+-	if (!completion_done(&ctx->restart))
+-		complete(&ctx->restart);
+ }
+ 
+ static void kcryptd_crypt(struct work_struct *work)
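
The reordered switch above encodes the crypto API contract: -EBUSY means the request was parked on a full backlog and the submitter must wait for the restart completion, after which it is handled exactly like an ordinary in-flight (-EINPROGRESS) request. A toy model of that fall-through (the enum values and helper are simulations, not the kernel API):

    #include <stdio.h>

    enum sim { SIM_BUSY, SIM_INPROGRESS };  /* stand-ins for -EBUSY/-EINPROGRESS */

    static void wait_for_restart(void)
    {
        puts("blocked until the backlog drains");
    }

    int main(void)
    {
        enum sim results[] = { SIM_BUSY, SIM_INPROGRESS };

        for (int i = 0; i < 2; ++i) {
            switch (results[i]) {
            case SIM_BUSY:
                wait_for_restart();
                /* fall through: from here on it is just in flight */
            case SIM_INPROGRESS:
                printf("request %d in flight\n", i);
                break;
            }
        }
        return 0;
    }
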
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index e6178787ce3d..e47d1dd046da 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4754,12 +4754,12 @@ static void md_free(struct kobject *ko)
+ 	if (mddev->sysfs_state)
+ 		sysfs_put(mddev->sysfs_state);
+ 
++	if (mddev->queue)
++		blk_cleanup_queue(mddev->queue);
+ 	if (mddev->gendisk) {
+ 		del_gendisk(mddev->gendisk);
+ 		put_disk(mddev->gendisk);
+ 	}
+-	if (mddev->queue)
+-		blk_cleanup_queue(mddev->queue);
+ 
+ 	kfree(mddev);
+ }
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index dd5b1415f974..f902eb4ee569 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -116,8 +116,8 @@ static struct mcam_format_struct {
+ 		.planar		= false,
+ 	},
+ 	{
+-		.desc		= "UYVY 4:2:2",
+-		.pixelformat	= V4L2_PIX_FMT_UYVY,
++		.desc		= "YVYU 4:2:2",
++		.pixelformat	= V4L2_PIX_FMT_YVYU,
+ 		.mbus_code	= MEDIA_BUS_FMT_YUYV8_2X8,
+ 		.bpp		= 2,
+ 		.planar		= false,
+@@ -748,7 +748,7 @@ static void mcam_ctlr_image(struct mcam_camera *cam)
+ 
+ 	switch (fmt->pixelformat) {
+ 	case V4L2_PIX_FMT_YUYV:
+-	case V4L2_PIX_FMT_UYVY:
++	case V4L2_PIX_FMT_YVYU:
+ 		widthy = fmt->width * 2;
+ 		widthuv = 0;
+ 		break;
+@@ -784,15 +784,15 @@ static void mcam_ctlr_image(struct mcam_camera *cam)
+ 	case V4L2_PIX_FMT_YUV420:
+ 	case V4L2_PIX_FMT_YVU420:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_420PL | C0_YUVE_YVYU, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_420PL | C0_YUVE_VYUY, C0_DF_MASK);
+ 		break;
+ 	case V4L2_PIX_FMT_YUYV:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_UYVY, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_NOSWAP, C0_DF_MASK);
+ 		break;
+-	case V4L2_PIX_FMT_UYVY:
++	case V4L2_PIX_FMT_YVYU:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+-			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_YUYV, C0_DF_MASK);
++			C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_SWAP24, C0_DF_MASK);
+ 		break;
+ 	case V4L2_PIX_FMT_JPEG:
+ 		mcam_reg_write_mask(cam, REG_CTRL0,
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.h b/drivers/media/platform/marvell-ccic/mcam-core.h
+index aa0c6eac254a..7ffdf4dbaf8c 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.h
++++ b/drivers/media/platform/marvell-ccic/mcam-core.h
+@@ -330,10 +330,10 @@ int mccic_resume(struct mcam_camera *cam);
+ #define	  C0_YUVE_YVYU	  0x00010000	/* Y1CrY0Cb		*/
+ #define	  C0_YUVE_VYUY	  0x00020000	/* CrY1CbY0		*/
+ #define	  C0_YUVE_UYVY	  0x00030000	/* CbY1CrY0		*/
+-#define	  C0_YUVE_XYUV	  0x00000000	/* 420: .YUV		*/
+-#define	  C0_YUVE_XYVU	  0x00010000	/* 420: .YVU		*/
+-#define	  C0_YUVE_XUVY	  0x00020000	/* 420: .UVY		*/
+-#define	  C0_YUVE_XVUY	  0x00030000	/* 420: .VUY		*/
++#define	  C0_YUVE_NOSWAP  0x00000000	/* no bytes swapping	*/
++#define	  C0_YUVE_SWAP13  0x00010000	/* swap byte 1 and 3	*/
++#define	  C0_YUVE_SWAP24  0x00020000	/* swap byte 2 and 4	*/
++#define	  C0_YUVE_SWAP1324 0x00030000	/* swap bytes 1&3 and 2&4 */
+ /* Bayer bits 18,19 if needed */
+ #define	  C0_EOF_VSYNC	  0x00400000	/* Generate EOF by VSYNC */
+ #define	  C0_VEDGE_CTRL   0x00800000	/* Detect falling edge of VSYNC */
+diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
+index c69afb5e264e..ed2e71a74a58 100644
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -1029,6 +1029,18 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
+ 	md->reset_done &= ~type;
+ }
+ 
++int mmc_access_rpmb(struct mmc_queue *mq)
++{
++	struct mmc_blk_data *md = mq->data;
++	/*
++	 * If this is an RPMB partition access, return true
++	 */
++	if (md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
++		return true;
++
++	return false;
++}
++
+ static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ {
+ 	struct mmc_blk_data *md = mq->data;
+diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
+index 236d194c2883..8efa3684aef8 100644
+--- a/drivers/mmc/card/queue.c
++++ b/drivers/mmc/card/queue.c
+@@ -38,7 +38,7 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
+ 		return BLKPREP_KILL;
+ 	}
+ 
+-	if (mq && mmc_card_removed(mq->card))
++	if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq)))
+ 		return BLKPREP_KILL;
+ 
+ 	req->cmd_flags |= REQ_DONTPREP;
+diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
+index 5752d50049a3..99e6521e6169 100644
+--- a/drivers/mmc/card/queue.h
++++ b/drivers/mmc/card/queue.h
+@@ -73,4 +73,6 @@ extern void mmc_queue_bounce_post(struct mmc_queue_req *);
+ extern int mmc_packed_init(struct mmc_queue *, struct mmc_card *);
+ extern void mmc_packed_clean(struct mmc_queue *);
+ 
++extern int mmc_access_rpmb(struct mmc_queue *);
++
+ #endif
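
mmc_access_rpmb() above lets the prepare hook kill ordinary block requests while the device is switched to the RPMB partition, which is meant to be driven only through the ioctl path. The gating logic, reduced to a compilable sketch (the types and BLKPREP strings are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    enum part_type { PART_MAIN, PART_BOOT, PART_RPMB };

    struct queue { enum part_type part; bool card_removed; };

    static bool access_rpmb(const struct queue *q)
    {
        return q->part == PART_RPMB;
    }

    static const char *prep_request(const struct queue *q)
    {
        if (q->card_removed || access_rpmb(q))
            return "BLKPREP_KILL";         /* reject the request */
        return "BLKPREP_OK";
    }

    int main(void)
    {
        struct queue q = { .part = PART_RPMB, .card_removed = false };
        printf("%s\n", prep_request(&q));
        return 0;
    }
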
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 23f10f72e5f3..57a8d00672d3 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2648,6 +2648,7 @@ int mmc_pm_notify(struct notifier_block *notify_block,
+ 	switch (mode) {
+ 	case PM_HIBERNATION_PREPARE:
+ 	case PM_SUSPEND_PREPARE:
++	case PM_RESTORE_PREPARE:
+ 		spin_lock_irqsave(&host->lock, flags);
+ 		host->rescan_disable = 1;
+ 		spin_unlock_irqrestore(&host->lock, flags);
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index 7d9d6a321521..5165ae75d540 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -1402,7 +1402,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
+ 	host		= mmc_priv(mmc);
+ 	host->mmc	= mmc;
+ 	host->addr	= reg;
+-	host->timeout	= msecs_to_jiffies(1000);
++	host->timeout	= msecs_to_jiffies(10000);
+ 	host->ccs_enable = !pd || !pd->ccs_unsupported;
+ 	host->clk_ctrl2_enable = pd && pd->clk_ctrl2_present;
+ 
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 89dca77ca038..18ee2089df4a 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1110,7 +1110,7 @@ void devm_pinctrl_put(struct pinctrl *p)
+ EXPORT_SYMBOL_GPL(devm_pinctrl_put);
+ 
+ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+-			 bool dup, bool locked)
++			 bool dup)
+ {
+ 	int i, ret;
+ 	struct pinctrl_maps *maps_node;
+@@ -1178,11 +1178,9 @@ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+ 		maps_node->maps = maps;
+ 	}
+ 
+-	if (!locked)
+-		mutex_lock(&pinctrl_maps_mutex);
++	mutex_lock(&pinctrl_maps_mutex);
+ 	list_add_tail(&maps_node->node, &pinctrl_maps);
+-	if (!locked)
+-		mutex_unlock(&pinctrl_maps_mutex);
++	mutex_unlock(&pinctrl_maps_mutex);
+ 
+ 	return 0;
+ }
+@@ -1197,7 +1195,7 @@ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+ int pinctrl_register_mappings(struct pinctrl_map const *maps,
+ 			      unsigned num_maps)
+ {
+-	return pinctrl_register_map(maps, num_maps, true, false);
++	return pinctrl_register_map(maps, num_maps, true);
+ }
+ 
+ void pinctrl_unregister_map(struct pinctrl_map const *map)
+diff --git a/drivers/pinctrl/core.h b/drivers/pinctrl/core.h
+index 75476b3d87da..b24ea846c867 100644
+--- a/drivers/pinctrl/core.h
++++ b/drivers/pinctrl/core.h
+@@ -183,7 +183,7 @@ static inline struct pin_desc *pin_desc_get(struct pinctrl_dev *pctldev,
+ }
+ 
+ int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps,
+-			 bool dup, bool locked);
++			 bool dup);
+ void pinctrl_unregister_map(struct pinctrl_map const *map);
+ 
+ extern int pinctrl_force_sleep(struct pinctrl_dev *pctldev);
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index eda13de2e7c0..0bbf7d71b281 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -92,7 +92,7 @@ static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
+ 	dt_map->num_maps = num_maps;
+ 	list_add_tail(&dt_map->node, &p->dt_maps);
+ 
+-	return pinctrl_register_map(map, num_maps, false, true);
++	return pinctrl_register_map(map, num_maps, false);
+ }
+ 
+ struct pinctrl_dev *of_pinctrl_get(struct device_node *np)
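
Dropping the locked parameter above removes conditional locking: rather than trusting each caller's claim about whether pinctrl_maps_mutex is already held, the function now always takes the mutex itself. A userspace analogue of the resulting shape (pthread stand-ins, illustrative names):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t maps_lock = PTHREAD_MUTEX_INITIALIZER;

    struct map_node { struct map_node *next; };
    static struct map_node *maps_head;

    /* No "bool locked": the callee owns its locking unconditionally. */
    static void register_map(struct map_node *n)
    {
        pthread_mutex_lock(&maps_lock);
        n->next = maps_head;
        maps_head = n;
        pthread_mutex_unlock(&maps_lock);
    }

    int main(void)
    {
        struct map_node a = { 0 }, b = { 0 };
        register_map(&a);
        register_map(&b);
        printf("head=%p\n", (void *)maps_head);
        return 0;
    }
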
+diff --git a/drivers/rtc/rtc-armada38x.c b/drivers/rtc/rtc-armada38x.c
+index 43e04af39e09..cb70ced7e0db 100644
+--- a/drivers/rtc/rtc-armada38x.c
++++ b/drivers/rtc/rtc-armada38x.c
+@@ -40,6 +40,13 @@ struct armada38x_rtc {
+ 	void __iomem	    *regs;
+ 	void __iomem	    *regs_soc;
+ 	spinlock_t	    lock;
++	/*
++	 * While setting the time, the RTC TIME register should not be
++	 * accessed. Setting the RTC time involves sleeping for
++	 * 100 ms, so a mutex instead of a spinlock is used to
++	 * protect it.
++	 */
++	struct mutex	    mutex_time;
+ 	int		    irq;
+ };
+ 
+@@ -59,8 +66,7 @@ static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	struct armada38x_rtc *rtc = dev_get_drvdata(dev);
+ 	unsigned long time, time_check, flags;
+ 
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
++	mutex_lock(&rtc->mutex_time);
+ 	time = readl(rtc->regs + RTC_TIME);
+ 	/*
+ 	 * WA for failing time set attempts. As stated in HW ERRATA if
+@@ -71,7 +77,7 @@ static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ 	if ((time_check - time) > 1)
+ 		time_check = readl(rtc->regs + RTC_TIME);
+ 
+-	spin_unlock_irqrestore(&rtc->lock, flags);
++	mutex_unlock(&rtc->mutex_time);
+ 
+ 	rtc_time_to_tm(time_check, tm);
+ 
+@@ -94,19 +100,12 @@ static int armada38x_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ 	 * then wait for 100ms before writing to the time register to be
+ 	 * sure that the data will be taken into account.
+ 	 */
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
++	mutex_lock(&rtc->mutex_time);
+ 	rtc_delayed_write(0, rtc, RTC_STATUS);
+-
+-	spin_unlock_irqrestore(&rtc->lock, flags);
+-
+ 	msleep(100);
+-
+-	spin_lock_irqsave(&rtc->lock, flags);
+-
+ 	rtc_delayed_write(time, rtc, RTC_TIME);
++	mutex_unlock(&rtc->mutex_time);
+ 
+-	spin_unlock_irqrestore(&rtc->lock, flags);
+ out:
+ 	return ret;
+ }
+@@ -230,6 +229,7 @@ static __init int armada38x_rtc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	spin_lock_init(&rtc->lock);
++	mutex_init(&rtc->mutex_time);
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rtc");
+ 	rtc->regs = devm_ioremap_resource(&pdev->dev, res);
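
The switch from a spinlock to a mutex above exists because armada38x_rtc_set_time() sleeps for 100 ms between the status and time writes; sleeping is illegal while holding a spinlock with interrupts disabled, but fine under a mutex. A userspace analogue of the protected sequence (plain variables stand in for the registers):

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t time_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long rtc_status, rtc_time;

    static void rtc_set(unsigned long t)
    {
        pthread_mutex_lock(&time_lock);    /* a mutex may be held across sleep */
        rtc_status = 0;                    /* clear status first */
        nanosleep(&(struct timespec){ .tv_nsec = 100 * 1000 * 1000 }, NULL);
        rtc_time = t;                      /* now the write will stick */
        pthread_mutex_unlock(&time_lock);
    }

    int main(void)
    {
        rtc_set(1433116800UL);
        printf("time=%lu status=%lu\n", rtc_time, rtc_status);
        return 0;
    }
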
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index f1e57425e39f..5bab1c684bb1 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -299,11 +299,27 @@ static int xen_initial_domain_console_init(void)
+ 	return 0;
+ }
+ 
++static void xen_console_update_evtchn(struct xencons_info *info)
++{
++	if (xen_hvm_domain()) {
++		uint64_t v;
++		int err;
++
++		err = hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v);
++		if (!err && v)
++			info->evtchn = v;
++	} else
++		info->evtchn = xen_start_info->console.domU.evtchn;
++}
++
+ void xen_console_resume(void)
+ {
+ 	struct xencons_info *info = vtermno_to_xencons(HVC_COOKIE);
+-	if (info != NULL && info->irq)
++	if (info != NULL && info->irq) {
++		if (!xen_initial_domain())
++			xen_console_update_evtchn(info);
+ 		rebind_evtchn_irq(info->evtchn, info->irq);
++	}
+ }
+ 
+ static void xencons_disconnect_backend(struct xencons_info *info)
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 4cde85501444..837d1778970b 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -711,6 +711,8 @@ void *vfio_del_group_dev(struct device *dev)
+ 	void *device_data = device->device_data;
+ 	struct vfio_unbound_dev *unbound;
+ 	unsigned int i = 0;
++	long ret;
++	bool interrupted = false;
+ 
+ 	/*
+ 	 * The group exists so long as we have a device reference.  Get
+@@ -756,9 +758,22 @@ void *vfio_del_group_dev(struct device *dev)
+ 
+ 		vfio_device_put(device);
+ 
+-	} while (wait_event_interruptible_timeout(vfio.release_q,
+-						  !vfio_dev_present(group, dev),
+-						  HZ * 10) <= 0);
++		if (interrupted) {
++			ret = wait_event_timeout(vfio.release_q,
++					!vfio_dev_present(group, dev), HZ * 10);
++		} else {
++			ret = wait_event_interruptible_timeout(vfio.release_q,
++					!vfio_dev_present(group, dev), HZ * 10);
++			if (ret == -ERESTARTSYS) {
++				interrupted = true;
++				dev_warn(dev,
++					 "Device is currently in use, task"
++					 " \"%s\" (%d) "
++					 "blocked until device is released",
++					 current->comm, task_pid_nr(current));
++			}
++		}
++	} while (ret <= 0);
+ 
+ 	vfio_group_put(group);
+ 
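
The loop restructuring above handles a signal arriving while waiting for the device to be released: after one -ERESTARTSYS the code warns and falls back to an uninterruptible wait, so the loop cannot spin forever re-taking the same pending signal. The control flow, with stub waits standing in for the kernel wait_event helpers:

    #include <stdbool.h>
    #include <stdio.h>

    #define ERESTARTSYS 512                /* kernel-internal value */

    static int attempts;

    static long wait_interruptible(void) { return -ERESTARTSYS; } /* always signalled */
    static long wait_uninterruptible(void) { return ++attempts >= 2; } /* then done */

    int main(void)
    {
        bool interrupted = false;
        long ret;

        do {
            if (interrupted) {
                ret = wait_uninterruptible();
            } else {
                ret = wait_interruptible();
                if (ret == -ERESTARTSYS) {
                    interrupted = true;    /* warn once, then block for real */
                    puts("device busy, blocking until released");
                }
            }
        } while (ret <= 0);
        printf("released after %d uninterruptible waits\n", attempts);
        return 0;
    }
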
+diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
+index 5db43fc100a4..7dd46312c180 100644
+--- a/drivers/xen/events/events_2l.c
++++ b/drivers/xen/events/events_2l.c
+@@ -345,6 +345,15 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void evtchn_2l_resume(void)
++{
++	int i;
++
++	for_each_online_cpu(i)
++		memset(per_cpu(cpu_evtchn_mask, i), 0, sizeof(xen_ulong_t) *
++				EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
++}
++
+ static const struct evtchn_ops evtchn_ops_2l = {
+ 	.max_channels      = evtchn_2l_max_channels,
+ 	.nr_channels       = evtchn_2l_max_channels,
+@@ -356,6 +365,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
+ 	.mask              = evtchn_2l_mask,
+ 	.unmask            = evtchn_2l_unmask,
+ 	.handle_events     = evtchn_2l_handle_events,
++	.resume	           = evtchn_2l_resume,
+ };
+ 
+ void __init xen_evtchn_2l_init(void)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 70fba973a107..2b8553bd8715 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -529,8 +529,8 @@ static unsigned int __startup_pirq(unsigned int irq)
+ 	if (rc)
+ 		goto err;
+ 
+-	bind_evtchn_to_cpu(evtchn, 0);
+ 	info->evtchn = evtchn;
++	bind_evtchn_to_cpu(evtchn, 0);
+ 
+ 	rc = xen_evtchn_port_setup(info);
+ 	if (rc)
+@@ -1279,8 +1279,9 @@ void rebind_evtchn_irq(int evtchn, int irq)
+ 
+ 	mutex_unlock(&irq_mapping_update_lock);
+ 
+-	/* new event channels are always bound to cpu 0 */
+-	irq_set_affinity(irq, cpumask_of(0));
++	bind_evtchn_to_cpu(evtchn, info->cpu);
++	/* This will be deferred until interrupt is processed */
++	irq_set_affinity(irq, cpumask_of(info->cpu));
+ 
+ 	/* Unmask the event channel. */
+ 	enable_irq(irq);
+diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
+index 75fe3d466515..9c234209d8b5 100644
+--- a/drivers/xen/xen-pciback/conf_space.c
++++ b/drivers/xen/xen-pciback/conf_space.c
+@@ -16,8 +16,8 @@
+ #include "conf_space.h"
+ #include "conf_space_quirks.h"
+ 
+-bool permissive;
+-module_param(permissive, bool, 0644);
++bool xen_pcibk_permissive;
++module_param_named(permissive, xen_pcibk_permissive, bool, 0644);
+ 
+ /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+  * xen_pcibk_write_config_word, and xen_pcibk_write_config_byte are created. */
+@@ -262,7 +262,7 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value)
+ 		 * This means that some fields may still be read-only because
+ 		 * they have entries in the config_field list that intercept
+ 		 * the write and do nothing. */
+-		if (dev_data->permissive || permissive) {
++		if (dev_data->permissive || xen_pcibk_permissive) {
+ 			switch (size) {
+ 			case 1:
+ 				err = pci_write_config_byte(dev, offset,
+diff --git a/drivers/xen/xen-pciback/conf_space.h b/drivers/xen/xen-pciback/conf_space.h
+index 2e1d73d1d5d0..62461a8ba1d6 100644
+--- a/drivers/xen/xen-pciback/conf_space.h
++++ b/drivers/xen/xen-pciback/conf_space.h
+@@ -64,7 +64,7 @@ struct config_field_entry {
+ 	void *data;
+ };
+ 
+-extern bool permissive;
++extern bool xen_pcibk_permissive;
+ 
+ #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset)
+ 
+diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c
+index 2d7369391472..f8baf463dd35 100644
+--- a/drivers/xen/xen-pciback/conf_space_header.c
++++ b/drivers/xen/xen-pciback/conf_space_header.c
+@@ -105,7 +105,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
+ 
+ 	cmd->val = value;
+ 
+-	if (!permissive && (!dev_data || !dev_data->permissive))
++	if (!xen_pcibk_permissive && (!dev_data || !dev_data->permissive))
+ 		return 0;
+ 
+ 	/* Only allow the guest to control certain bits. */
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 564b31584860..5390a674b5e3 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -57,6 +57,7 @@
+ #include <xen/xen.h>
+ #include <xen/xenbus.h>
+ #include <xen/events.h>
++#include <xen/xen-ops.h>
+ #include <xen/page.h>
+ 
+ #include <xen/hvm.h>
+@@ -735,6 +736,30 @@ static int __init xenstored_local_init(void)
+ 	return err;
+ }
+ 
++static int xenbus_resume_cb(struct notifier_block *nb,
++			    unsigned long action, void *data)
++{
++	int err = 0;
++
++	if (xen_hvm_domain()) {
++		uint64_t v;
++
++		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
++		if (!err && v)
++			xen_store_evtchn = v;
++		else
++			pr_warn("Cannot update xenstore event channel: %d\n",
++				err);
++	} else
++		xen_store_evtchn = xen_start_info->store_evtchn;
++
++	return err;
++}
++
++static struct notifier_block xenbus_resume_nb = {
++	.notifier_call = xenbus_resume_cb,
++};
++
+ static int __init xenbus_init(void)
+ {
+ 	int err = 0;
+@@ -793,6 +818,10 @@ static int __init xenbus_init(void)
+ 		goto out_error;
+ 	}
+ 
++	if ((xen_store_domain_type != XS_LOCAL) &&
++	    (xen_store_domain_type != XS_UNKNOWN))
++		xen_resume_notifier_register(&xenbus_resume_nb);
++
+ #ifdef CONFIG_XEN_COMPAT_XENFS
+ 	/*
+ 	 * Create xenfs mountpoint in /proc for compatibility with
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f319926ddf8c..bbbe139ab280 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -657,7 +657,7 @@ void do_coredump(const siginfo_t *siginfo)
+ 		 */
+ 		if (!uid_eq(inode->i_uid, current_fsuid()))
+ 			goto close_fail;
+-		if (!cprm.file->f_op->write)
++		if (!(cprm.file->f_mode & FMODE_CAN_WRITE))
+ 			goto close_fail;
+ 		if (do_truncate(cprm.file->f_path.dentry, 0, 0, cprm.file))
+ 			goto close_fail;
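
The coredump hunk above replaces a check of the ->write method pointer with FMODE_CAN_WRITE, because a file can also be writable through ->write_iter; testing a single method pointer wrongly rejects such files, while the mode bit is set at open time for both cases. Reduced to a sketch with illustrative structs:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define FMODE_CAN_WRITE 0x1

    struct fops { int (*write)(void); int (*write_iter)(void); };
    struct file { const struct fops *f_op; unsigned int f_mode; };

    static int do_write_iter(void) { return 0; }

    int main(void)
    {
        /* A file type that only implements write_iter. */
        static const struct fops iter_only = { .write_iter = do_write_iter };
        struct file f = { .f_op = &iter_only, .f_mode = FMODE_CAN_WRITE };

        bool old_check = f.f_op->write != NULL;             /* wrongly false */
        bool new_check = (f.f_mode & FMODE_CAN_WRITE) != 0; /* correctly true */
        printf("old=%d new=%d\n", old_check, new_check);
        return 0;
    }
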
+diff --git a/fs/namei.c b/fs/namei.c
+index caa38a24e1f7..50a8583e8156 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3228,7 +3228,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+ 
+ 	if (unlikely(file->f_flags & __O_TMPFILE)) {
+ 		error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
+-		goto out;
++		goto out2;
+ 	}
+ 
+ 	error = path_init(dfd, pathname->name, flags, nd);
+@@ -3258,6 +3258,7 @@ static struct file *path_openat(int dfd, struct filename *pathname,
+ 	}
+ out:
+ 	path_cleanup(nd);
++out2:
+ 	if (!(opened & FILE_OPENED)) {
+ 		BUG_ON(!error);
+ 		put_filp(file);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 4622ee32a5e2..38ed1e1bed41 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -3178,6 +3178,12 @@ bool fs_fully_visible(struct file_system_type *type)
+ 		if (mnt->mnt.mnt_sb->s_type != type)
+ 			continue;
+ 
++		/* This mount is not fully visible if its root directory
++		 * is not the root directory of the filesystem.
++		 */
++		if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
++			continue;
++
+ 		/* This mount is not fully visible if there are any child mounts
+ 		 * that cover anything except for empty directories.
+ 		 */
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index ecdbae19a766..090d8ce25bd1 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -388,7 +388,7 @@ static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
+ 	nchildren = nilfs_btree_node_get_nchildren(node);
+ 
+ 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
+-		     level > NILFS_BTREE_LEVEL_MAX ||
++		     level >= NILFS_BTREE_LEVEL_MAX ||
+ 		     nchildren < 0 ||
+ 		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) {
+ 		pr_crit("NILFS: bad btree root (inode number=%lu): level = %d, flags = 0x%x, nchildren = %d\n",
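
The one-character nilfs change above matters because NILFS_BTREE_LEVEL_MAX is an exclusive bound: the level is ultimately used to index arrays with MAX entries, so level == MAX is already one past the end and must be rejected with >=, not >. In miniature:

    #include <stdbool.h>
    #include <stdio.h>

    #define LEVEL_MAX 14                   /* exclusive bound */

    static bool level_ok(int level)
    {
        return level >= 1 && level < LEVEL_MAX;
    }

    int main(void)
    {
        int path[LEVEL_MAX];               /* indexed by level */
        printf("level 13 ok=%d, level 14 ok=%d (would index path[%d])\n",
               level_ok(13), level_ok(14), LEVEL_MAX);
        (void)path;
        return 0;
    }
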
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index a6944b25fd5b..fdf4b41d0609 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -757,6 +757,19 @@ lookup:
+ 	if (tmpres) {
+ 		spin_unlock(&dlm->spinlock);
+ 		spin_lock(&tmpres->spinlock);
++
++		/*
++		 * Right after dlm spinlock was released, dlm_thread could have
++		 * purged the lockres. Check if lockres got unhashed. If so
++		 * start over.
++		 */
++		if (hlist_unhashed(&tmpres->hash_node)) {
++			spin_unlock(&tmpres->spinlock);
++			dlm_lockres_put(tmpres);
++			tmpres = NULL;
++			goto lookup;
++		}
++
+ 		/* Wait on the thread that is mastering the resource */
+ 		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
+ 			__dlm_wait_on_lockres(tmpres);
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index d56f5d722138..65aa4fa0ae4e 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -431,13 +431,13 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_load_tables(void))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_reallocate_root_table(void))
+ 
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
+-			    acpi_find_root_pointer(acpi_size * rsdp_address))
+-
++			    acpi_find_root_pointer(acpi_physical_address *
++						   rsdp_address))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+-			    acpi_get_table_header(acpi_string signature,
+-						  u32 instance,
+-						  struct acpi_table_header
+-						  *out_table_header))
++			     acpi_get_table_header(acpi_string signature,
++						   u32 instance,
++						   struct acpi_table_header
++						   *out_table_header))
+ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
+ 			     acpi_get_table(acpi_string signature, u32 instance,
+ 					    struct acpi_table_header
+diff --git a/include/linux/nilfs2_fs.h b/include/linux/nilfs2_fs.h
+index ff3fea3194c6..9abb763e4b86 100644
+--- a/include/linux/nilfs2_fs.h
++++ b/include/linux/nilfs2_fs.h
+@@ -460,7 +460,7 @@ struct nilfs_btree_node {
+ /* level */
+ #define NILFS_BTREE_LEVEL_DATA          0
+ #define NILFS_BTREE_LEVEL_NODE_MIN      (NILFS_BTREE_LEVEL_DATA + 1)
+-#define NILFS_BTREE_LEVEL_MAX           14
++#define NILFS_BTREE_LEVEL_MAX           14	/* Max level (exclusive) */
+ 
+ /**
+  * struct nilfs_palloc_group_desc - block group descriptor
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index d487f8dc6d39..72a5224c8084 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1141,10 +1141,10 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ 	 * The check (unnecessarily) ignores LRU pages being isolated and
+ 	 * walked by the page reclaim code, however that's not a big loss.
+ 	 */
+-	if (!PageHuge(p) && !PageTransTail(p)) {
+-		if (!PageLRU(p))
+-			shake_page(p, 0);
+-		if (!PageLRU(p)) {
++	if (!PageHuge(p)) {
++		if (!PageLRU(hpage))
++			shake_page(hpage, 0);
++		if (!PageLRU(hpage)) {
+ 			/*
+ 			 * shake_page could have turned it free.
+ 			 */
+@@ -1721,12 +1721,12 @@ int soft_offline_page(struct page *page, int flags)
+ 	} else if (ret == 0) { /* for free pages */
+ 		if (PageHuge(page)) {
+ 			set_page_hwpoison_huge_page(hpage);
+-			dequeue_hwpoisoned_huge_page(hpage);
+-			atomic_long_add(1 << compound_order(hpage),
++			if (!dequeue_hwpoisoned_huge_page(hpage))
++				atomic_long_add(1 << compound_order(hpage),
+ 					&num_poisoned_pages);
+ 		} else {
+-			SetPageHWPoison(page);
+-			atomic_long_inc(&num_poisoned_pages);
++			if (!TestSetPageHWPoison(page))
++				atomic_long_inc(&num_poisoned_pages);
+ 		}
+ 	}
+ 	unset_migratetype_isolate(page, MIGRATE_MOVABLE);
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 644bcb665773..ad05f2f7bb65 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -580,7 +580,7 @@ static long long pos_ratio_polynom(unsigned long setpoint,
+ 	long x;
+ 
+ 	x = div64_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT,
+-		    limit - setpoint + 1);
++		      (limit - setpoint) | 1);
+ 	pos_ratio = x;
+ 	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
+ 	pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT;
+@@ -807,7 +807,7 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi,
+ 	 * scale global setpoint to bdi's:
+ 	 *	bdi_setpoint = setpoint * bdi_thresh / thresh
+ 	 */
+-	x = div_u64((u64)bdi_thresh << 16, thresh + 1);
++	x = div_u64((u64)bdi_thresh << 16, thresh | 1);
+ 	bdi_setpoint = setpoint * (u64)x >> 16;
+ 	/*
+ 	 * Use span=(8*write_bw) in single bdi case as indicated by
+@@ -822,7 +822,7 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi,
+ 
+ 	if (bdi_dirty < x_intercept - span / 4) {
+ 		pos_ratio = div64_u64(pos_ratio * (x_intercept - bdi_dirty),
+-				    x_intercept - bdi_setpoint + 1);
++				      (x_intercept - bdi_setpoint) | 1);
+ 	} else
+ 		pos_ratio /= 4;
+ 
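
The three writeback hunks above all swap an "x + 1" divisor guard for "x | 1". Both keep a zero divisor nonzero, but with unsigned arithmetic "x + 1" wraps back to 0 when x == ULONG_MAX, which can happen here when setpoint exceeds limit and the subtraction underflows; OR-ing in the low bit cannot wrap. A two-line demonstration:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long x = ULONG_MAX;       /* e.g. limit - setpoint underflowed */
        printf("x+1 = %lu   x|1 = %lu\n", x + 1, x | 1);  /* 0 vs ULONG_MAX */
        return 0;
    }
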
+diff --git a/sound/oss/sequencer.c b/sound/oss/sequencer.c
+index c0eea1dfe90f..f19da4b47c1d 100644
+--- a/sound/oss/sequencer.c
++++ b/sound/oss/sequencer.c
+@@ -681,13 +681,8 @@ static int seq_timing_event(unsigned char *event_rec)
+ 			break;
+ 
+ 		case TMR_ECHO:
+-			if (seq_mode == SEQ_2)
+-				seq_copy_to_input(event_rec, 8);
+-			else
+-			{
+-				parm = (parm << 8 | SEQ_ECHO);
+-				seq_copy_to_input((unsigned char *) &parm, 4);
+-			}
++			parm = (parm << 8 | SEQ_ECHO);
++			seq_copy_to_input((unsigned char *) &parm, 4);
+ 			break;
+ 
+ 		default:;
+@@ -1324,7 +1319,6 @@ int sequencer_ioctl(int dev, struct file *file, unsigned int cmd, void __user *a
+ 	int mode = translate_mode(file);
+ 	struct synth_info inf;
+ 	struct seq_event_rec event_rec;
+-	unsigned long flags;
+ 	int __user *p = arg;
+ 
+ 	orig_dev = dev = dev >> 4;
+@@ -1479,9 +1473,7 @@ int sequencer_ioctl(int dev, struct file *file, unsigned int cmd, void __user *a
+ 		case SNDCTL_SEQ_OUTOFBAND:
+ 			if (copy_from_user(&event_rec, arg, sizeof(event_rec)))
+ 				return -EFAULT;
+-			spin_lock_irqsave(&lock,flags);
+ 			play_event(event_rec.arr);
+-			spin_unlock_irqrestore(&lock,flags);
+ 			return 0;
+ 
+ 		case SNDCTL_MIDI_INFO:

diff --git a/1004_linux-4.0.5.patch b/1004_linux-4.0.5.patch
new file mode 100644
index 0000000..84509c0
--- /dev/null
+++ b/1004_linux-4.0.5.patch
@@ -0,0 +1,4937 @@
+diff --git a/Documentation/hwmon/tmp401 b/Documentation/hwmon/tmp401
+index 8eb88e974055..711f75e189eb 100644
+--- a/Documentation/hwmon/tmp401
++++ b/Documentation/hwmon/tmp401
+@@ -20,7 +20,7 @@ Supported chips:
+     Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp432.html
+   * Texas Instruments TMP435
+     Prefix: 'tmp435'
+-    Addresses scanned: I2C 0x37, 0x48 - 0x4f
++    Addresses scanned: I2C 0x48 - 0x4f
+     Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp435.html
+ 
+ Authors:
+diff --git a/Documentation/serial/tty.txt b/Documentation/serial/tty.txt
+index 1e52d67d0abf..dbe6623fed1c 100644
+--- a/Documentation/serial/tty.txt
++++ b/Documentation/serial/tty.txt
+@@ -198,6 +198,9 @@ TTY_IO_ERROR		If set, causes all subsequent userspace read/write
+ 
+ TTY_OTHER_CLOSED	Device is a pty and the other side has closed.
+ 
++TTY_OTHER_DONE		Device is a pty and the other side has closed and
++			all pending input processing has been completed.
++
+ TTY_NO_WRITE_SPLIT	Prevent driver from splitting up writes into
+ 			smaller chunks.
+ 
+diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
+index 53838d9c6295..c59bd9bc41ef 100644
+--- a/Documentation/virtual/kvm/mmu.txt
++++ b/Documentation/virtual/kvm/mmu.txt
+@@ -169,6 +169,10 @@ Shadow pages contain the following information:
+     Contains the value of cr4.smep && !cr0.wp for which the page is valid
+     (pages for which this is true are different from other pages; see the
+     treatment of cr0.wp=0 below).
++  role.smap_andnot_wp:
++    Contains the value of cr4.smap && !cr0.wp for which the page is valid
++    (pages for which this is true are different from other pages; see the
++    treatment of cr0.wp=0 below).
+   gfn:
+     Either the guest page table containing the translations shadowed by this
+     page, or the base page frame for linear translations.  See role.direct.
+@@ -344,10 +348,16 @@ on fault type:
+ 
+ (user write faults generate a #PF)
+ 
+-In the first case there is an additional complication if CR4.SMEP is
+-enabled: since we've turned the page into a kernel page, the kernel may now
+-execute it.  We handle this by also setting spte.nx.  If we get a user
+-fetch or read fault, we'll change spte.u=1 and spte.nx=gpte.nx back.
++In the first case there are two additional complications:
++- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
++  the kernel may now execute it.  We handle this by also setting spte.nx.
++  If we get a user fetch or read fault, we'll change spte.u=1 and
++  spte.nx=gpte.nx back.
++- if CR4.SMAP is disabled: since the page has been changed to a kernel
++  page, it cannot be reused when CR4.SMAP is enabled. We set
++  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case.
++  Note that we do not care about the case where CR4.SMAP is enabled,
++  since KVM will directly inject a #PF into the guest due to the failed
++  permission check.
+ 
+ To prevent an spte that was converted into a kernel page with cr0.wp=0
+ from being written by the kernel after cr0.wp has changed to 1, we make
+diff --git a/Makefile b/Makefile
+index 3d16bcc87585..1880cf77059b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 067551b6920a..9917a45fc430 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -99,7 +99,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
+ 	atomic_ops_unlock(flags);					\
+ }
+ 
+-#define ATOMIC_OP_RETURN(op, c_op)					\
++#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ {									\
+ 	unsigned long flags;						\
+diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
+index a1c776b8dcec..992ea0b063d5 100644
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -215,7 +215,7 @@ dtb-$(CONFIG_SOC_IMX25) += \
+ 	imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dtb \
+ 	imx25-karo-tx25.dtb \
+ 	imx25-pdk.dtb
+-dtb-$(CONFIG_SOC_IMX31) += \
++dtb-$(CONFIG_SOC_IMX27) += \
+ 	imx27-apf27.dtb \
+ 	imx27-apf27dev.dtb \
+ 	imx27-eukrea-mbimxsd27-baseboard.dtb \
+diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
+index 173ffa479ad3..792394dd0f2a 100644
+--- a/arch/arm/boot/dts/exynos4412-trats2.dts
++++ b/arch/arm/boot/dts/exynos4412-trats2.dts
+@@ -736,7 +736,7 @@
+ 
+ 			display-timings {
+ 				timing-0 {
+-					clock-frequency = <0>;
++					clock-frequency = <57153600>;
+ 					hactive = <720>;
+ 					vactive = <1280>;
+ 					hfront-porch = <5>;
+diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi
+index 4b063b68db44..9ce1d2128749 100644
+--- a/arch/arm/boot/dts/imx27.dtsi
++++ b/arch/arm/boot/dts/imx27.dtsi
+@@ -531,7 +531,7 @@
+ 
+ 			fec: ethernet@1002b000 {
+ 				compatible = "fsl,imx27-fec";
+-				reg = <0x1002b000 0x4000>;
++				reg = <0x1002b000 0x1000>;
+ 				interrupts = <50>;
+ 				clocks = <&clks IMX27_CLK_FEC_IPG_GATE>,
+ 					 <&clks IMX27_CLK_FEC_AHB_GATE>;
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index f8ccc21fa032..4e7f40c577e6 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -33,7 +33,9 @@ ret_fast_syscall:
+  UNWIND(.fnstart	)
+  UNWIND(.cantunwind	)
+ 	disable_irq				@ disable interrupts
+-	ldr	r1, [tsk, #TI_FLAGS]
++	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
++	tst	r1, #_TIF_SYSCALL_WORK
++	bne	__sys_trace_return
+ 	tst	r1, #_TIF_WORK_MASK
+ 	bne	fast_work_pending
+ 	asm_trace_hardirqs_on
+diff --git a/arch/arm/mach-exynos/pm_domains.c b/arch/arm/mach-exynos/pm_domains.c
+index 37266a826437..1f02bcb350e5 100644
+--- a/arch/arm/mach-exynos/pm_domains.c
++++ b/arch/arm/mach-exynos/pm_domains.c
+@@ -169,7 +169,7 @@ no_clk:
+ 		args.np = np;
+ 		args.args_count = 0;
+ 		child_domain = of_genpd_get_from_provider(&args);
+-		if (!child_domain)
++		if (IS_ERR(child_domain))
+ 			continue;
+ 
+ 		if (of_parse_phandle_with_args(np, "power-domains",
+@@ -177,7 +177,7 @@ no_clk:
+ 			continue;
+ 
+ 		parent_domain = of_genpd_get_from_provider(&args);
+-		if (!parent_domain)
++		if (IS_ERR(parent_domain))
+ 			continue;
+ 
+ 		if (pm_genpd_add_subdomain(parent_domain, child_domain))
+diff --git a/arch/arm/mach-exynos/sleep.S b/arch/arm/mach-exynos/sleep.S
+index 31d25834b9c4..cf950790fbdc 100644
+--- a/arch/arm/mach-exynos/sleep.S
++++ b/arch/arm/mach-exynos/sleep.S
+@@ -23,14 +23,7 @@
+ #define CPU_MASK	0xff0ffff0
+ #define CPU_CORTEX_A9	0x410fc090
+ 
+-	/*
+-	 * The following code is located into the .data section. This is to
+-	 * allow l2x0_regs_phys to be accessed with a relative load while we
+-	 * can't rely on any MMU translation. We could have put l2x0_regs_phys
+-	 * in the .text section as well, but some setups might insist on it to
+-	 * be truly read-only. (Reference from: arch/arm/kernel/sleep.S)
+-	 */
+-	.data
++	.text
+ 	.align
+ 
+ 	/*
+@@ -69,10 +62,12 @@ ENTRY(exynos_cpu_resume_ns)
+ 	cmp	r0, r1
+ 	bne	skip_cp15
+ 
+-	adr	r0, cp15_save_power
++	adr	r0, _cp15_save_power
+ 	ldr	r1, [r0]
+-	adr	r0, cp15_save_diag
++	ldr	r1, [r0, r1]
++	adr	r0, _cp15_save_diag
+ 	ldr	r2, [r0]
++	ldr	r2, [r0, r2]
+ 	mov	r0, #SMC_CMD_C15RESUME
+ 	dsb
+ 	smc	#0
+@@ -118,14 +113,20 @@ skip_l2x0:
+ skip_cp15:
+ 	b	cpu_resume
+ ENDPROC(exynos_cpu_resume_ns)
++
++	.align
++_cp15_save_power:
++	.long	cp15_save_power - .
++_cp15_save_diag:
++	.long	cp15_save_diag - .
++#ifdef CONFIG_CACHE_L2X0
++1:	.long	l2x0_saved_regs - .
++#endif /* CONFIG_CACHE_L2X0 */
++
++	.data
+ 	.globl cp15_save_diag
+ cp15_save_diag:
+ 	.long	0	@ cp15 diagnostic
+ 	.globl cp15_save_power
+ cp15_save_power:
+ 	.long	0	@ cp15 power control
+-
+-#ifdef CONFIG_CACHE_L2X0
+-	.align
+-1:	.long	l2x0_saved_regs - .
+-#endif /* CONFIG_CACHE_L2X0 */
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index 4e6ef896c619..7186382672b5 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
+ 			}
+ 
+ 			/*
+-			 * Find the first non-section-aligned page, and point
++			 * Find the first non-pmd-aligned page, and point
+ 			 * memblock_limit at it. This relies on rounding the
+-			 * limit down to be section-aligned, which happens at
+-			 * the end of this function.
++			 * limit down to be pmd-aligned, which happens at the
++			 * end of this function.
+ 			 *
+ 			 * With this algorithm, the start or end of almost any
+-			 * bank can be non-section-aligned. The only exception
+-			 * is that the start of the bank 0 must be section-
++			 * bank can be non-pmd-aligned. The only exception is
++			 * that the start of the bank 0 must be section-
+ 			 * aligned, since otherwise memory would need to be
+ 			 * allocated when mapping the start of bank 0, which
+ 			 * occurs before any free memory is mapped.
+ 			 */
+ 			if (!memblock_limit) {
+-				if (!IS_ALIGNED(block_start, SECTION_SIZE))
++				if (!IS_ALIGNED(block_start, PMD_SIZE))
+ 					memblock_limit = block_start;
+-				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
++				else if (!IS_ALIGNED(block_end, PMD_SIZE))
+ 					memblock_limit = arm_lowmem_limit;
+ 			}
+ 
+@@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
+ 	high_memory = __va(arm_lowmem_limit - 1) + 1;
+ 
+ 	/*
+-	 * Round the memblock limit down to a section size.  This
++	 * Round the memblock limit down to a pmd size.  This
+ 	 * helps to ensure that we will allocate memory from the
+-	 * last full section, which should be mapped.
++	 * last full pmd, which should be mapped.
+ 	 */
+ 	if (memblock_limit)
+-		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
++		memblock_limit = round_down(memblock_limit, PMD_SIZE);
+ 	if (!memblock_limit)
+ 		memblock_limit = arm_lowmem_limit;
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index edba042b2325..dc6a4842683a 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -487,7 +487,7 @@ emit_cond_jmp:
+ 			return -EINVAL;
+ 		}
+ 
+-		imm64 = (u64)insn1.imm << 32 | imm;
++		imm64 = (u64)insn1.imm << 32 | (u32)imm;
+ 		emit_a64_mov_i64(dst, imm64, ctx);
+ 
+ 		return 1;
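
The one-cast arm64 JIT fix above is a classic sign-extension bug: OR-ing a negative int into a u64 first sign-extends it, smearing ones over the upper 32 bits that were just loaded from insn1.imm. Casting to u32 keeps the low half clean:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t hi = 1, lo = -2;
        uint64_t buggy = (uint64_t)hi << 32 | lo;           /* lo sign-extends */
        uint64_t fixed = (uint64_t)hi << 32 | (uint32_t)lo;
        printf("buggy=%016llx\nfixed=%016llx\n",
               (unsigned long long)buggy, (unsigned long long)fixed);
        return 0;
    }
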
+diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
+index d2c09f6475c5..f20cedcb50f1 100644
+--- a/arch/mips/kernel/elf.c
++++ b/arch/mips/kernel/elf.c
+@@ -76,14 +76,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
+ 
+ 	/* Lets see if this is an O32 ELF */
+ 	if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) {
+-		/* FR = 1 for N32 */
+-		if (ehdr32->e_flags & EF_MIPS_ABI2)
+-			state->overall_fp_mode = FP_FR1;
+-		else
+-			/* Set a good default FPU mode for O32 */
+-			state->overall_fp_mode = cpu_has_mips_r6 ?
+-				FP_FRE : FP_FR0;
+-
+ 		if (ehdr32->e_flags & EF_MIPS_FP64) {
+ 			/*
+ 			 * Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it
+@@ -104,9 +96,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
+ 				  (char *)&abiflags,
+ 				  sizeof(abiflags));
+ 	} else {
+-		/* FR=1 is really the only option for 64-bit */
+-		state->overall_fp_mode = FP_FR1;
+-
+ 		if (phdr64->p_type != PT_MIPS_ABIFLAGS)
+ 			return 0;
+ 		if (phdr64->p_filesz < sizeof(abiflags))
+@@ -147,6 +136,7 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
+ 	struct elf32_hdr *ehdr = _ehdr;
+ 	struct mode_req prog_req, interp_req;
+ 	int fp_abi, interp_fp_abi, abi0, abi1, max_abi;
++	bool is_mips64;
+ 
+ 	if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT))
+ 		return 0;
+@@ -162,10 +152,22 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
+ 		abi0 = abi1 = fp_abi;
+ 	}
+ 
+-	/* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */
+-	max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) &&
+-		   (!(ehdr->e_flags & EF_MIPS_ABI2))) ?
+-		MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT;
++	is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) ||
++		    (ehdr->e_flags & EF_MIPS_ABI2);
++
++	if (is_mips64) {
++		/* MIPS64 code always uses FR=1, thus the default is easy */
++		state->overall_fp_mode = FP_FR1;
++
++		/* Disallow access to the various FPXX & FP64 ABIs */
++		max_abi = MIPS_ABI_FP_SOFT;
++	} else {
++		/* Default to a mode capable of running code expecting FR=0 */
++		state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0;
++
++		/* Allow all ABIs we know about */
++		max_abi = MIPS_ABI_FP_64A;
++	}
+ 
+ 	if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) ||
+ 	    (abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))
+diff --git a/arch/parisc/include/asm/elf.h b/arch/parisc/include/asm/elf.h
+index 3391d061eccc..78c9fd32c554 100644
+--- a/arch/parisc/include/asm/elf.h
++++ b/arch/parisc/include/asm/elf.h
+@@ -348,6 +348,10 @@ struct pt_regs;	/* forward declaration... */
+ 
+ #define ELF_HWCAP	0
+ 
++#define STACK_RND_MASK	(is_32bit_task() ? \
++				0x7ff >> (PAGE_SHIFT - 12) : \
++				0x3ffff >> (PAGE_SHIFT - 12))
++
+ struct mm_struct;
+ extern unsigned long arch_randomize_brk(struct mm_struct *);
+ #define arch_randomize_brk arch_randomize_brk
+diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
+index e1ffea2f9a0b..5aba01ac457f 100644
+--- a/arch/parisc/kernel/sys_parisc.c
++++ b/arch/parisc/kernel/sys_parisc.c
+@@ -77,6 +77,9 @@ static unsigned long mmap_upper_limit(void)
+ 	if (stack_base > STACK_SIZE_MAX)
+ 		stack_base = STACK_SIZE_MAX;
+ 
++	/* Add space for stack randomization. */
++	stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++
+ 	return PAGE_ALIGN(STACK_TOP - stack_base);
+ }
+ 
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index 15c99b649b04..b2eb4686bd8f 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -73,7 +73,7 @@ void save_mce_event(struct pt_regs *regs, long handled,
+ 		    uint64_t nip, uint64_t addr)
+ {
+ 	uint64_t srr1;
+-	int index = __this_cpu_inc_return(mce_nest_count);
++	int index = __this_cpu_inc_return(mce_nest_count) - 1;
+ 	struct machine_check_event *mce = this_cpu_ptr(&mce_event[index]);
+ 
+ 	/*
+@@ -184,7 +184,7 @@ void machine_check_queue_event(void)
+ 	if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
+ 		return;
+ 
+-	index = __this_cpu_inc_return(mce_queue_count);
++	index = __this_cpu_inc_return(mce_queue_count) - 1;
+ 	/* If queue is full, just return for now. */
+ 	if (index >= MAX_MC_EVT) {
+ 		__this_cpu_dec(mce_queue_count);
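
Both powerpc MCE hunks above subtract one for the same reason: __this_cpu_inc_return() returns the value after the increment, so the first event sees 1 and, used directly as an index, would skip slot 0 and eventually run one past the end of the array. Miniature version:

    #include <stdio.h>

    #define MAX_EVT 4

    static int nest_count;                 /* per-CPU in the kernel */
    static int events[MAX_EVT];

    static int inc_return(int *c) { return ++*c; }

    int main(void)
    {
        int index = inc_return(&nest_count) - 1;  /* first event -> slot 0 */
        events[index] = 42;
        printf("first event stored at slot %d\n", index);
        return 0;
    }
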
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index f096e72262f4..1db685104ffc 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -213,6 +213,7 @@ SECTIONS
+ 		*(.opd)
+ 	}
+ 
++	. = ALIGN(256);
+ 	.got : AT(ADDR(.got) - LOAD_OFFSET) {
+ 		__toc_start = .;
+ #ifndef CONFIG_RELOCATABLE
+diff --git a/arch/s390/crypto/ghash_s390.c b/arch/s390/crypto/ghash_s390.c
+index 7940dc90e80b..b258110da952 100644
+--- a/arch/s390/crypto/ghash_s390.c
++++ b/arch/s390/crypto/ghash_s390.c
+@@ -16,11 +16,12 @@
+ #define GHASH_DIGEST_SIZE	16
+ 
+ struct ghash_ctx {
+-	u8 icv[16];
+-	u8 key[16];
++	u8 key[GHASH_BLOCK_SIZE];
+ };
+ 
+ struct ghash_desc_ctx {
++	u8 icv[GHASH_BLOCK_SIZE];
++	u8 key[GHASH_BLOCK_SIZE];
+ 	u8 buffer[GHASH_BLOCK_SIZE];
+ 	u32 bytes;
+ };
+@@ -28,8 +29,10 @@ struct ghash_desc_ctx {
+ static int ghash_init(struct shash_desc *desc)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
++	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 
+ 	memset(dctx, 0, sizeof(*dctx));
++	memcpy(dctx->key, ctx->key, GHASH_BLOCK_SIZE);
+ 
+ 	return 0;
+ }
+@@ -45,7 +48,6 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ 	}
+ 
+ 	memcpy(ctx->key, key, GHASH_BLOCK_SIZE);
+-	memset(ctx->icv, 0, GHASH_BLOCK_SIZE);
+ 
+ 	return 0;
+ }
+@@ -54,7 +56,6 @@ static int ghash_update(struct shash_desc *desc,
+ 			 const u8 *src, unsigned int srclen)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 	unsigned int n;
+ 	u8 *buf = dctx->buffer;
+ 	int ret;
+@@ -70,7 +71,7 @@ static int ghash_update(struct shash_desc *desc,
+ 		src += n;
+ 
+ 		if (!dctx->bytes) {
+-			ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf,
++			ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf,
+ 					      GHASH_BLOCK_SIZE);
+ 			if (ret != GHASH_BLOCK_SIZE)
+ 				return -EIO;
+@@ -79,7 +80,7 @@ static int ghash_update(struct shash_desc *desc,
+ 
+ 	n = srclen & ~(GHASH_BLOCK_SIZE - 1);
+ 	if (n) {
+-		ret = crypt_s390_kimd(KIMD_GHASH, ctx, src, n);
++		ret = crypt_s390_kimd(KIMD_GHASH, dctx, src, n);
+ 		if (ret != n)
+ 			return -EIO;
+ 		src += n;
+@@ -94,7 +95,7 @@ static int ghash_update(struct shash_desc *desc,
+ 	return 0;
+ }
+ 
+-static int ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
++static int ghash_flush(struct ghash_desc_ctx *dctx)
+ {
+ 	u8 *buf = dctx->buffer;
+ 	int ret;
+@@ -104,24 +105,24 @@ static int ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
+ 
+ 		memset(pos, 0, dctx->bytes);
+ 
+-		ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf, GHASH_BLOCK_SIZE);
++		ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf, GHASH_BLOCK_SIZE);
+ 		if (ret != GHASH_BLOCK_SIZE)
+ 			return -EIO;
++
++		dctx->bytes = 0;
+ 	}
+ 
+-	dctx->bytes = 0;
+ 	return 0;
+ }
+ 
+ static int ghash_final(struct shash_desc *desc, u8 *dst)
+ {
+ 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ 	int ret;
+ 
+-	ret = ghash_flush(ctx, dctx);
++	ret = ghash_flush(dctx);
+ 	if (!ret)
+-		memcpy(dst, ctx->icv, GHASH_BLOCK_SIZE);
++		memcpy(dst, dctx->icv, GHASH_BLOCK_SIZE);
+ 	return ret;
+ }
+ 
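
[Editor's note: the structural point of the s390 GHASH fix is that the chaining value (icv) used to live in the shared transform context, so two concurrent hash requests on the same tfm would trample each other's state. A minimal sketch of the split, with illustrative names rather than the real crypto API: the long-lived key stays in the tfm context, and each request snapshots it alongside its own icv.]

#include <string.h>

#define GHASH_BLOCK_SIZE 16

struct tfm_ctx {			/* long-lived, written by setkey() */
	unsigned char key[GHASH_BLOCK_SIZE];
};

struct req_ctx {			/* one per in-flight hash request */
	unsigned char icv[GHASH_BLOCK_SIZE];
	unsigned char key[GHASH_BLOCK_SIZE];
	unsigned char buffer[GHASH_BLOCK_SIZE];
	unsigned int bytes;
};

static void req_init(struct req_ctx *req, const struct tfm_ctx *tfm)
{
	memset(req, 0, sizeof(*req));		/* fresh icv per request */
	memcpy(req->key, tfm->key, GHASH_BLOCK_SIZE);
}

int main(void)
{
	struct tfm_ctx tfm = { .key = { 0xaa } };
	struct req_ctx a, b;

	req_init(&a, &tfm);
	req_init(&b, &tfm);	/* b gets its own chaining state */
	return 0;
}
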
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index e08ec38f8c6e..e10112da008d 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -600,7 +600,7 @@ static inline int pmd_large(pmd_t pmd)
+ 	return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
+ }
+ 
+-static inline int pmd_pfn(pmd_t pmd)
++static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+ 	unsigned long origin_mask;
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index a236e39cc385..1c0fb570b5c2 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -212,6 +212,7 @@ union kvm_mmu_page_role {
+ 		unsigned nxe:1;
+ 		unsigned cr0_wp:1;
+ 		unsigned smep_andnot_wp:1;
++		unsigned smap_andnot_wp:1;
+ 	};
+ };
+ 
+@@ -404,6 +405,7 @@ struct kvm_vcpu_arch {
+ 	struct kvm_mmu_memory_cache mmu_page_header_cache;
+ 
+ 	struct fpu guest_fpu;
++	bool eager_fpu;
+ 	u64 xcr0;
+ 	u64 guest_supported_xcr0;
+ 	u32 guest_xstate_size;
+@@ -735,6 +737,7 @@ struct kvm_x86_ops {
+ 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+ 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
+ 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
++	void (*fpu_activate)(struct kvm_vcpu *vcpu);
+ 	void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
+ 
+ 	void (*tlb_flush)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 3c036cb4a370..11dd8f23fcea 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -705,6 +705,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ 			  struct pt_regs *regs)
+ {
+ 	int i, ret = 0;
++	char *tmp;
+ 
+ 	for (i = 0; i < mca_cfg.banks; i++) {
+ 		m->status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
+@@ -713,9 +714,11 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
+ 			if (quirk_no_way_out)
+ 				quirk_no_way_out(i, m, regs);
+ 		}
+-		if (mce_severity(m, mca_cfg.tolerant, msg, true) >=
+-		    MCE_PANIC_SEVERITY)
++
++		if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) {
++			*msg = tmp;
+ 			ret = 1;
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+index c4bb8b8e5017..76d8cbe5a10f 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+@@ -680,6 +680,7 @@ static int __init rapl_pmu_init(void)
+ 		break;
+ 	case 60: /* Haswell */
+ 	case 69: /* Haswell-Celeron */
++	case 61: /* Broadwell */
+ 		rapl_cntr_mask = RAPL_IDX_HSW;
+ 		rapl_pmu_events_group.attrs = rapl_events_hsw_attr;
+ 		break;
+diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
+index d5651fce0b71..f341d56b7883 100644
+--- a/arch/x86/kernel/i387.c
++++ b/arch/x86/kernel/i387.c
+@@ -169,6 +169,21 @@ static void init_thread_xstate(void)
+ 		xstate_size = sizeof(struct i387_fxsave_struct);
+ 	else
+ 		xstate_size = sizeof(struct i387_fsave_struct);
++
++	/*
++	 * Quirk: we don't yet handle the XSAVES* instructions
++	 * correctly, as we don't correctly convert between
++	 * standard and compacted format when interfacing
++	 * with user-space - so disable it for now.
++	 *
++	 * The difference is small: with recent CPUs the
++	 * compacted format is only marginally smaller than
++	 * the standard FPU state format.
++	 *
++	 * ( This is easy to backport while we are fixing
++	 *   XSAVES* support. )
++	 */
++	setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 8a80737ee6e6..307f9ec28e08 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -16,6 +16,8 @@
+ #include <linux/module.h>
+ #include <linux/vmalloc.h>
+ #include <linux/uaccess.h>
++#include <asm/i387.h> /* For use_eager_fpu.  Ugh! */
++#include <asm/fpu-internal.h> /* For use_eager_fpu.  Ugh! */
+ #include <asm/user.h>
+ #include <asm/xsave.h>
+ #include "cpuid.h"
+@@ -95,6 +97,8 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
+ 	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
+ 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+ 
++	vcpu->arch.eager_fpu = guest_cpuid_has_mpx(vcpu);
++
+ 	/*
+ 	 * The existing code assumes virtual address is 48-bit in the canonical
+ 	 * address checks; exit if it is ever changed.
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index 4452eedfaedd..9bec2b8cdced 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -111,4 +111,12 @@ static inline bool guest_cpuid_has_rtm(struct kvm_vcpu *vcpu)
+ 	best = kvm_find_cpuid_entry(vcpu, 7, 0);
+ 	return best && (best->ebx & bit(X86_FEATURE_RTM));
+ }
++
++static inline bool guest_cpuid_has_mpx(struct kvm_vcpu *vcpu)
++{
++	struct kvm_cpuid_entry2 *best;
++
++	best = kvm_find_cpuid_entry(vcpu, 7, 0);
++	return best && (best->ebx & bit(X86_FEATURE_MPX));
++}
+ #endif
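
[Editor's note: guest_cpuid_has_mpx() is the usual CPUID feature test, leaf 7, subleaf 0, EBX bit 14; the cpuid.c hunk then keys eager FPU activation off it, the idea being that MPX state lives in the XSAVE area and can be touched implicitly, which lazy FPU switching risks corrupting. A host-side analogue of the bit test (checks the host CPU, not a guest's CPUID state, and assumes a GCC/Clang toolchain whose cpuid.h provides __get_cpuid_count):]

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 7, subleaf 0; MPX is EBX bit 14 */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 1;
	printf("MPX supported: %s\n", (ebx & (1u << 14)) ? "yes" : "no");
	return 0;
}
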
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index cee759299a35..88ee9282a57e 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3736,8 +3736,8 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
+ 	}
+ }
+ 
+-void update_permission_bitmask(struct kvm_vcpu *vcpu,
+-		struct kvm_mmu *mmu, bool ept)
++static void update_permission_bitmask(struct kvm_vcpu *vcpu,
++				      struct kvm_mmu *mmu, bool ept)
+ {
+ 	unsigned bit, byte, pfec;
+ 	u8 map;
+@@ -3918,6 +3918,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
+ {
+ 	bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
++	bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
+ 	struct kvm_mmu *context = &vcpu->arch.mmu;
+ 
+ 	MMU_WARN_ON(VALID_PAGE(context->root_hpa));
+@@ -3936,6 +3937,8 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
+ 	context->base_role.cr0_wp  = is_write_protection(vcpu);
+ 	context->base_role.smep_andnot_wp
+ 		= smep && !is_write_protection(vcpu);
++	context->base_role.smap_andnot_wp
++		= smap && !is_write_protection(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
+ 
+@@ -4207,12 +4210,18 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 		       const u8 *new, int bytes)
+ {
+ 	gfn_t gfn = gpa >> PAGE_SHIFT;
+-	union kvm_mmu_page_role mask = { .word = 0 };
+ 	struct kvm_mmu_page *sp;
+ 	LIST_HEAD(invalid_list);
+ 	u64 entry, gentry, *spte;
+ 	int npte;
+ 	bool remote_flush, local_flush, zap_page;
++	union kvm_mmu_page_role mask = (union kvm_mmu_page_role) {
++		.cr0_wp = 1,
++		.cr4_pae = 1,
++		.nxe = 1,
++		.smep_andnot_wp = 1,
++		.smap_andnot_wp = 1,
++	};
+ 
+ 	/*
+ 	 * If we don't have indirect shadow pages, it means no page is
+@@ -4238,7 +4247,6 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ 	++vcpu->kvm->stat.mmu_pte_write;
+ 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
+ 
+-	mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
+ 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
+ 		if (detect_write_misaligned(sp, gpa, bytes) ||
+ 		      detect_write_flooding(sp)) {
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index c7d65637c851..0ada65ecddcf 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -71,8 +71,6 @@ enum {
+ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct);
+ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu);
+ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly);
+-void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+-		bool ept);
+ 
+ static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
+ {
+@@ -166,6 +164,8 @@ static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 	int index = (pfec >> 1) +
+ 		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+ 
++	WARN_ON(pfec & PFERR_RSVD_MASK);
++
+ 	return (mmu->permissions[index] >> pte_access) & 1;
+ }
+ 
+diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
+index fd49c867b25a..6e6d115fe9b5 100644
+--- a/arch/x86/kvm/paging_tmpl.h
++++ b/arch/x86/kvm/paging_tmpl.h
+@@ -718,6 +718,13 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
+ 					      mmu_is_nested(vcpu));
+ 		if (likely(r != RET_MMIO_PF_INVALID))
+ 			return r;
++
++		/*
++		 * A page fault with PFEC.RSVD = 1 is caused by a shadow
++		 * page fault and must not be used to walk the guest
++		 * page table.
++		 */
++		error_code &= ~PFERR_RSVD_MASK;
+ 	};
+ 
+ 	r = mmu_topup_memory_caches(vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index cc618c882f90..a4e62fcfabcb 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -4374,6 +4374,7 @@ static struct kvm_x86_ops svm_x86_ops = {
+ 	.cache_reg = svm_cache_reg,
+ 	.get_rflags = svm_get_rflags,
+ 	.set_rflags = svm_set_rflags,
++	.fpu_activate = svm_fpu_activate,
+ 	.fpu_deactivate = svm_fpu_deactivate,
+ 
+ 	.tlb_flush = svm_flush_tlb,
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index a60bd3aa0965..5318d64674b0 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -10179,6 +10179,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
+ 	.cache_reg = vmx_cache_reg,
+ 	.get_rflags = vmx_get_rflags,
+ 	.set_rflags = vmx_set_rflags,
++	.fpu_activate = vmx_fpu_activate,
+ 	.fpu_deactivate = vmx_fpu_deactivate,
+ 
+ 	.tlb_flush = vmx_flush_tlb,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e222ba5d2beb..8838057da9c3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -702,8 +702,9 @@ EXPORT_SYMBOL_GPL(kvm_set_xcr);
+ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ {
+ 	unsigned long old_cr4 = kvm_read_cr4(vcpu);
+-	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE |
+-				   X86_CR4_PAE | X86_CR4_SMEP;
++	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
++				   X86_CR4_SMEP | X86_CR4_SMAP;
++
+ 	if (cr4 & CR4_RESERVED_BITS)
+ 		return 1;
+ 
+@@ -744,9 +745,6 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+ 		kvm_mmu_reset_context(vcpu);
+ 
+-	if ((cr4 ^ old_cr4) & X86_CR4_SMAP)
+-		update_permission_bitmask(vcpu, vcpu->arch.walk_mmu, false);
+-
+ 	if ((cr4 ^ old_cr4) & X86_CR4_OSXSAVE)
+ 		kvm_update_cpuid(vcpu);
+ 
+@@ -6141,6 +6139,8 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+ 		return;
+ 
+ 	page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
++	if (is_error_page(page))
++		return;
+ 	kvm_x86_ops->set_apic_access_page_addr(vcpu, page_to_phys(page));
+ 
+ 	/*
+@@ -6996,7 +6996,9 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
+ 	fpu_save_init(&vcpu->arch.guest_fpu);
+ 	__kernel_fpu_end();
+ 	++vcpu->stat.fpu_reload;
+-	kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
++	if (!vcpu->arch.eager_fpu)
++		kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
++
+ 	trace_kvm_fpu(0);
+ }
+ 
+@@ -7012,11 +7014,21 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+ 						unsigned int id)
+ {
++	struct kvm_vcpu *vcpu;
++
+ 	if (check_tsc_unstable() && atomic_read(&kvm->online_vcpus) != 0)
+ 		printk_once(KERN_WARNING
+ 		"kvm: SMP vm created on host with unstable TSC; "
+ 		"guest TSC will not be reliable\n");
+-	return kvm_x86_ops->vcpu_create(kvm, id);
++
++	vcpu = kvm_x86_ops->vcpu_create(kvm, id);
++
++	/*
++	 * Activate fpu unconditionally in case the guest needs eager FPU.  It will be
++	 * deactivated soon if it doesn't.
++	 */
++	kvm_x86_ops->fpu_activate(vcpu);
++	return vcpu;
+ }
+ 
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index f9eeae871593..5aa1f6e281d2 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -182,7 +182,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
+ 		request_mem_region(addr, length, desc);
+ }
+ 
+-static int __init acpi_reserve_resources(void)
++static void __init acpi_reserve_resources(void)
+ {
+ 	acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
+ 		"ACPI PM1a_EVT_BLK");
+@@ -211,10 +211,7 @@ static int __init acpi_reserve_resources(void)
+ 	if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
+ 		acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
+ 			       acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
+-
+-	return 0;
+ }
+-device_initcall(acpi_reserve_resources);
+ 
+ void acpi_os_printf(const char *fmt, ...)
+ {
+@@ -1845,6 +1842,7 @@ acpi_status __init acpi_os_initialize(void)
+ 
+ acpi_status __init acpi_os_initialize1(void)
+ {
++	acpi_reserve_resources();
+ 	kacpid_wq = alloc_workqueue("kacpid", 0, 1);
+ 	kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
+ 	kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 33bb06e006c9..adce56fa9cef 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -66,6 +66,7 @@ enum board_ids {
+ 	board_ahci_yes_fbs,
+ 
+ 	/* board IDs for specific chipsets in alphabetical order */
++	board_ahci_avn,
+ 	board_ahci_mcp65,
+ 	board_ahci_mcp77,
+ 	board_ahci_mcp89,
+@@ -84,6 +85,8 @@ enum board_ids {
+ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+ static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
+ 				 unsigned long deadline);
++static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
++			      unsigned long deadline);
+ static void ahci_mcp89_apple_enable(struct pci_dev *pdev);
+ static bool is_mcp89_apple(struct pci_dev *pdev);
+ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
+@@ -107,6 +110,11 @@ static struct ata_port_operations ahci_p5wdh_ops = {
+ 	.hardreset		= ahci_p5wdh_hardreset,
+ };
+ 
++static struct ata_port_operations ahci_avn_ops = {
++	.inherits		= &ahci_ops,
++	.hardreset		= ahci_avn_hardreset,
++};
++
+ static const struct ata_port_info ahci_port_info[] = {
+ 	/* by features */
+ 	[board_ahci] = {
+@@ -151,6 +159,12 @@ static const struct ata_port_info ahci_port_info[] = {
+ 		.port_ops	= &ahci_ops,
+ 	},
+ 	/* by chipsets */
++	[board_ahci_avn] = {
++		.flags		= AHCI_FLAG_COMMON,
++		.pio_mask	= ATA_PIO4,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &ahci_avn_ops,
++	},
+ 	[board_ahci_mcp65] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP |
+ 				 AHCI_HFLAG_YES_NCQ),
+@@ -290,14 +304,14 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */
+-	{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
+-	{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */
++	{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */
++	{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
+@@ -670,6 +684,79 @@ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
+ 	return rc;
+ }
+ 
++/*
++ * ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports.
++ *
++ * It has been observed with some SSDs that the timing of events in the
++ * link synchronization phase can leave the port in a state that cannot
++ * be recovered by a SATA hard reset alone.  The failing signature is
++ * SStatus.DET stuck at 1 ("Device presence detected but Phy
++ * communication not established").  It was found that unloading and
++ * reloading the driver when this problem occurs allows the drive
++ * connection to be recovered (DET advanced to 0x3).  The critical
++ * component of reloading the driver is that the port state machines are
++ * reset by bouncing "port enable" in the AHCI PCS configuration
++ * register.  So, reproduce that effect by bouncing a port whenever we
++ * see DET==1 after a reset.
++ */
++static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
++			      unsigned long deadline)
++{
++	const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
++	struct ata_port *ap = link->ap;
++	struct ahci_port_priv *pp = ap->private_data;
++	struct ahci_host_priv *hpriv = ap->host->private_data;
++	u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
++	unsigned long tmo = deadline - jiffies;
++	struct ata_taskfile tf;
++	bool online;
++	int rc, i;
++
++	DPRINTK("ENTER\n");
++
++	ahci_stop_engine(ap);
++
++	for (i = 0; i < 2; i++) {
++		u16 val;
++		u32 sstatus;
++		int port = ap->port_no;
++		struct ata_host *host = ap->host;
++		struct pci_dev *pdev = to_pci_dev(host->dev);
++
++		/* clear D2H reception area to properly wait for D2H FIS */
++		ata_tf_init(link->device, &tf);
++		tf.command = ATA_BUSY;
++		ata_tf_to_fis(&tf, 0, 0, d2h_fis);
++
++		rc = sata_link_hardreset(link, timing, deadline, &online,
++				ahci_check_ready);
++
++		if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 ||
++				(sstatus & 0xf) != 1)
++			break;
++
++		ata_link_printk(link, KERN_INFO, "avn bounce port%d\n",
++				port);
++
++		pci_read_config_word(pdev, 0x92, &val);
++		val &= ~(1 << port);
++		pci_write_config_word(pdev, 0x92, val);
++		ata_msleep(ap, 1000);
++		val |= 1 << port;
++		pci_write_config_word(pdev, 0x92, val);
++		deadline += tmo;
++	}
++
++	hpriv->start_engine(ap);
++
++	if (online)
++		*class = ahci_dev_classify(ap);
++
++	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
++	return rc;
++}
++
++
+ #ifdef CONFIG_PM
+ static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
+ {
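
[Editor's note: the recovery trick documented in the new comment block is a disable/re-enable pulse of the per-port bit in the Avoton PCS register (PCI config offset 0x92, per the hunk), which resets the port state machines the way a driver reload would. An illustrative model with stand-in register accessors in place of pci_read_config_word()/pci_write_config_word():]

#include <stdint.h>
#include <stdio.h>

static uint16_t pcs = 0x00ff;	/* pretend all 8 ports enabled */

static uint16_t read_pcs(void)    { return pcs; }
static void write_pcs(uint16_t v) { pcs = v; }

static void bounce_port(int port)
{
	uint16_t val = read_pcs();

	write_pcs(val & ~(1u << port));	/* disable: resets port state machine */
	/* ...wait ~1s here (ata_msleep() in the real code)... */
	write_pcs(val | (1u << port));	/* re-enable */
}

int main(void)
{
	bounce_port(2);
	printf("PCS after bounce: 0x%04x\n", (unsigned)read_pcs());
	return 0;
}
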
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 61a9c07e0dff..287c4ba0219f 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -1707,8 +1707,7 @@ static void ahci_handle_port_interrupt(struct ata_port *ap,
+ 	if (unlikely(resetting))
+ 		status &= ~PORT_IRQ_BAD_PMP;
+ 
+-	/* if LPM is enabled, PHYRDY doesn't mean anything */
+-	if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) {
++	if (sata_lpm_ignore_phy_events(&ap->link)) {
+ 		status &= ~PORT_IRQ_PHYRDY;
+ 		ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG);
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 23dac3babfe3..87b4b7f9fdc6 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4214,7 +4214,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Crucial_CT*MX100*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+-	{ "Samsung SSD 850 PRO*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++	{ "Samsung SSD 8*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+ 	/*
+@@ -6728,6 +6728,38 @@ u32 ata_wait_register(struct ata_port *ap, void __iomem *reg, u32 mask, u32 val,
+ 	return tmp;
+ }
+ 
++/**
++ *	sata_lpm_ignore_phy_events - test if PHY event should be ignored
++ *	@link: Link receiving the event
++ *
++ *	Test whether the received PHY event has to be ignored or not.
++ *
++ *	LOCKING:
++ *	None.
++ *
++ *	RETURNS:
++ *	True if the event has to be ignored.
++ */
++bool sata_lpm_ignore_phy_events(struct ata_link *link)
++{
++	unsigned long lpm_timeout = link->last_lpm_change +
++				    msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY);
++
++	/* if LPM is enabled, PHYRDY doesn't mean anything */
++	if (link->lpm_policy > ATA_LPM_MAX_POWER)
++		return true;
++
++	/* ignore the first PHY event after the LPM policy changed
++	 * as it might be spurious
++	 */
++	if ((link->flags & ATA_LFLAG_CHANGED) &&
++	    time_before(jiffies, lpm_timeout))
++		return true;
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);
++
+ /*
+  * Dummy port_ops
+  */
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index d2029a462e2c..89c3d83e1ca7 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -3489,6 +3489,9 @@ static int ata_eh_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
+ 		}
+ 	}
+ 
++	link->last_lpm_change = jiffies;
++	link->flags |= ATA_LFLAG_CHANGED;
++
+ 	return 0;
+ 
+ fail:
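
[Editor's note: taken together with the libata-core hunk, this is a simple event debounce: after an LPM policy change, PHY events are ignored for a grace window (ATA_TMOUT_SPURIOUS_PHY). A minimal model that mirrors the wrap-safe signed-difference comparison time_before() performs; the constant below is a made-up stand-in:]

#include <stdbool.h>
#include <stdio.h>

#define SPURIOUS_WINDOW 10	/* ticks; stand-in for ATA_TMOUT_SPURIOUS_PHY */

static unsigned long now;		/* stand-in for jiffies */
static unsigned long last_change;
static bool changed;

static bool ignore_event(void)
{
	/* signed difference stays correct across counter wraparound */
	return changed && (long)(now - last_change) < SPURIOUS_WINDOW;
}

int main(void)
{
	changed = true;
	last_change = 100;

	now = 105;
	printf("event at t=105 ignored: %d\n", ignore_event());	/* 1 */
	now = 120;
	printf("event at t=120 ignored: %d\n", ignore_event());	/* 0 */
	return 0;
}
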
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 237f23f68bfc..1daa0ea2f1ac 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1443,8 +1443,10 @@ static struct clk_core *__clk_set_parent_before(struct clk_core *clk,
+ 	 */
+ 	if (clk->prepare_count) {
+ 		clk_core_prepare(parent);
++		flags = clk_enable_lock();
+ 		clk_core_enable(parent);
+ 		clk_core_enable(clk);
++		clk_enable_unlock(flags);
+ 	}
+ 
+ 	/* update the clk tree topology */
+@@ -1459,13 +1461,17 @@ static void __clk_set_parent_after(struct clk_core *core,
+ 				   struct clk_core *parent,
+ 				   struct clk_core *old_parent)
+ {
++	unsigned long flags;
++
+ 	/*
+ 	 * Finish the migration of prepare state and undo the changes done
+ 	 * for preventing a race with clk_enable().
+ 	 */
+ 	if (core->prepare_count) {
++		flags = clk_enable_lock();
+ 		clk_core_disable(core);
+ 		clk_core_disable(old_parent);
++		clk_enable_unlock(flags);
+ 		clk_core_unprepare(old_parent);
+ 	}
+ }
+@@ -1489,8 +1495,10 @@ static int __clk_set_parent(struct clk_core *clk, struct clk_core *parent,
+ 		clk_enable_unlock(flags);
+ 
+ 		if (clk->prepare_count) {
++			flags = clk_enable_lock();
+ 			clk_core_disable(clk);
+ 			clk_core_disable(parent);
++			clk_enable_unlock(flags);
+ 			clk_core_unprepare(parent);
+ 		}
+ 		return ret;
+diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
+index 07d666cc6a29..bea4a173eef5 100644
+--- a/drivers/clk/samsung/clk-exynos5420.c
++++ b/drivers/clk/samsung/clk-exynos5420.c
+@@ -271,6 +271,7 @@ static const struct samsung_clk_reg_dump exynos5420_set_clksrc[] = {
+ 	{ .offset = SRC_MASK_PERIC0,		.value = 0x11111110, },
+ 	{ .offset = SRC_MASK_PERIC1,		.value = 0x11111100, },
+ 	{ .offset = SRC_MASK_ISP,		.value = 0x11111000, },
++	{ .offset = GATE_BUS_TOP,		.value = 0xffffffff, },
+ 	{ .offset = GATE_BUS_DISP1,		.value = 0xffffffff, },
+ 	{ .offset = GATE_IP_PERIC,		.value = 0xffffffff, },
+ };
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index 2eebd28b4c40..ccc20188f00c 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -499,18 +499,19 @@ static int __init dmi_present(const u8 *buf)
+ 	buf += 16;
+ 
+ 	if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf, 15)) {
++		if (smbios_ver)
++			dmi_ver = smbios_ver;
++		else
++			dmi_ver = (buf[14] & 0xF0) << 4 | (buf[14] & 0x0F);
+ 		dmi_num = get_unaligned_le16(buf + 12);
+ 		dmi_len = get_unaligned_le16(buf + 6);
+ 		dmi_base = get_unaligned_le32(buf + 8);
+ 
+ 		if (dmi_walk_early(dmi_decode) == 0) {
+ 			if (smbios_ver) {
+-				dmi_ver = smbios_ver;
+ 				pr_info("SMBIOS %d.%d present.\n",
+ 				       dmi_ver >> 8, dmi_ver & 0xFF);
+ 			} else {
+-				dmi_ver = (buf[14] & 0xF0) << 4 |
+-					   (buf[14] & 0x0F);
+ 				pr_info("Legacy DMI %d.%d present.\n",
+ 				       dmi_ver >> 8, dmi_ver & 0xFF);
+ 			}
+diff --git a/drivers/gpio/gpio-kempld.c b/drivers/gpio/gpio-kempld.c
+index 443518f63f15..a6b0def4bd7b 100644
+--- a/drivers/gpio/gpio-kempld.c
++++ b/drivers/gpio/gpio-kempld.c
+@@ -117,7 +117,7 @@ static int kempld_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
+ 		= container_of(chip, struct kempld_gpio_data, chip);
+ 	struct kempld_device_data *pld = gpio->pld;
+ 
+-	return kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
++	return !kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
+ }
+ 
+ static int kempld_gpio_pincount(struct kempld_device_data *pld)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 498399323a8c..406624a0b201 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -729,7 +729,7 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 				kfd2kgd->get_max_engine_clock_in_mhz(
+ 					dev->gpu->kgd));
+ 		sysfs_show_64bit_prop(buffer, "local_mem_size",
+-				kfd2kgd->get_vmem_size(dev->gpu->kgd));
++				(unsigned long long int) 0);
+ 
+ 		sysfs_show_32bit_prop(buffer, "fw_version",
+ 				kfd2kgd->get_fw_version(
+diff --git a/drivers/gpu/drm/drm_plane_helper.c b/drivers/gpu/drm/drm_plane_helper.c
+index 5ba5792bfdba..98b125763ecd 100644
+--- a/drivers/gpu/drm/drm_plane_helper.c
++++ b/drivers/gpu/drm/drm_plane_helper.c
+@@ -476,6 +476,9 @@ int drm_plane_helper_commit(struct drm_plane *plane,
+ 		if (!crtc[i])
+ 			continue;
+ 
++		if (crtc[i]->cursor == plane)
++			continue;
++
+ 		/* There's no other way to figure out whether the crtc is running. */
+ 		ret = drm_crtc_vblank_get(crtc[i]);
+ 		if (ret == 0) {
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 1afc0b419da2..965a45619f6b 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -1789,7 +1789,9 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
+ 			if ((crtc->mode.clock == test_crtc->mode.clock) &&
+ 			    (adjusted_clock == test_adjusted_clock) &&
+ 			    (radeon_crtc->ss_enabled == test_radeon_crtc->ss_enabled) &&
+-			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID))
++			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID) &&
++			    (drm_detect_monitor_audio(radeon_connector_edid(test_radeon_crtc->connector)) ==
++			     drm_detect_monitor_audio(radeon_connector_edid(radeon_crtc->connector))))
+ 				return test_radeon_crtc->pll_id;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
+index 8d74de82456e..8b2c4c890507 100644
+--- a/drivers/gpu/drm/radeon/atombios_dp.c
++++ b/drivers/gpu/drm/radeon/atombios_dp.c
+@@ -412,19 +412,21 @@ bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector)
+ {
+ 	struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ 	u8 msg[DP_DPCD_SIZE];
+-	int ret;
++	int ret, i;
+ 
+-	ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
+-			       DP_DPCD_SIZE);
+-	if (ret > 0) {
+-		memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
++	for (i = 0; i < 7; i++) {
++		ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
++				       DP_DPCD_SIZE);
++		if (ret == DP_DPCD_SIZE) {
++			memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
+ 
+-		DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+-			      dig_connector->dpcd);
++			DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
++				      dig_connector->dpcd);
+ 
+-		radeon_dp_probe_oui(radeon_connector);
++			radeon_dp_probe_oui(radeon_connector);
+ 
+-		return true;
++			return true;
++		}
+ 	}
+ 	dig_connector->dpcd[0] = 0;
+ 	return false;
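
[Editor's note: the radeon DPCD change converts a single read attempt into a bounded retry loop and tightens the success test from "ret > 0" to "exactly DP_DPCD_SIZE bytes". A sketch of that pattern with a fake aux-channel reader; the size and helpers are illustrative, not the drm API:]

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define DPCD_SIZE 15	/* assumed receiver-caps size for illustration */

static int attempts;

/* stand-in for drm_dp_dpcd_read(): fails twice, then a full read */
static int try_read(unsigned char *buf, int len)
{
	if (++attempts < 3)
		return -1;
	memset(buf, 0x11, len);
	return len;
}

static bool read_dpcd(unsigned char *dpcd)
{
	unsigned char msg[DPCD_SIZE];
	int i;

	for (i = 0; i < 7; i++) {
		if (try_read(msg, DPCD_SIZE) == DPCD_SIZE) {
			memcpy(dpcd, msg, DPCD_SIZE);
			return true;
		}
	}
	dpcd[0] = 0;	/* mark the DPCD invalid on persistent failure */
	return false;
}

int main(void)
{
	unsigned char dpcd[DPCD_SIZE];

	printf("dpcd read ok: %d (after %d attempts)\n",
	       read_dpcd(dpcd), attempts);
	return 0;
}
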
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index 3e670d344a20..19aafb71fd8e 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -5804,7 +5804,7 @@ static int cik_pcie_gart_enable(struct radeon_device *rdev)
+ 	/* restore context1-15 */
+ 	/* set vm size, must be a multiple of 4 */
+ 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
+-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
++	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
+ 	for (i = 1; i < 16; i++) {
+ 		if (i < 8)
+ 			WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+index 0926739c9fa7..9953356fe263 100644
+--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
+@@ -400,7 +400,7 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
+ 	if (enable) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 
+-		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++		if (connector && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 			WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
+ 			       HDMI_AVI_INFO_SEND | /* enable AVI info frames */
+ 			       HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
+@@ -438,7 +438,8 @@ void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
+ 	if (!dig || !dig->afmt)
+ 		return;
+ 
+-	if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++	if (enable && connector &&
++	    drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+ 		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 		struct radeon_connector_atom_dig *dig_connector;
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index dab00812abaa..02d585455f49 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -1272,7 +1272,8 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
+ 	 */
+ 	for (i = 1; i < 8; i++) {
+ 		WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
+-		WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
++		WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2),
++			rdev->vm_manager.max_pfn - 1);
+ 		WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+ 		       rdev->vm_manager.saved_table_addr[i]);
+ 	}
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index b7c6bb69f3c7..88c04bc0a7f6 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -460,9 +460,6 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	if (!connector || !connector->encoder)
+ 		return;
+ 
+-	if (!radeon_encoder_is_digital(connector->encoder))
+-		return;
+-
+ 	rdev = connector->encoder->dev->dev_private;
+ 
+ 	if (!radeon_audio_chipset_supported(rdev))
+@@ -471,26 +468,26 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	radeon_encoder = to_radeon_encoder(connector->encoder);
+ 	dig = radeon_encoder->enc_priv;
+ 
+-	if (!dig->afmt)
+-		return;
+-
+ 	if (status == connector_status_connected) {
+-		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
++		struct radeon_connector *radeon_connector;
++		int sink_type;
++
++		if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
++			radeon_encoder->audio = NULL;
++			return;
++		}
++
++		radeon_connector = to_radeon_connector(connector);
++		sink_type = radeon_dp_getsinktype(radeon_connector);
+ 
+ 		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
+-		    radeon_dp_getsinktype(radeon_connector) ==
+-		    CONNECTOR_OBJECT_ID_DISPLAYPORT)
++			sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ 			radeon_encoder->audio = rdev->audio.dp_funcs;
+ 		else
+ 			radeon_encoder->audio = rdev->audio.hdmi_funcs;
+ 
+ 		dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
+-		if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+-			radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
+-		} else {
+-			radeon_audio_enable(rdev, dig->afmt->pin, 0);
+-			dig->afmt->pin = NULL;
+-		}
++		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
+ 	} else {
+ 		radeon_audio_enable(rdev, dig->afmt->pin, 0);
+ 		dig->afmt->pin = NULL;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 27973e3faf0e..27def67cb6be 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1333,10 +1333,8 @@ out:
+ 	/* updated in get modes as well since we need to know if it's analog or digital */
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0) {
+-		radeon_connector_get_edid(connector);
++	if (radeon_audio != 0)
+ 		radeon_audio_detect(connector, ret);
+-	}
+ 
+ exit:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+@@ -1661,10 +1659,8 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ 
+ 	radeon_connector_update_scratch_regs(connector, ret);
+ 
+-	if (radeon_audio != 0) {
+-		radeon_connector_get_edid(connector);
++	if (radeon_audio != 0)
+ 		radeon_audio_detect(connector, ret);
+-	}
+ 
+ out:
+ 	pm_runtime_mark_last_busy(connector->dev->dev);
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index a7fb2735d4a9..f433491fab6f 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -4288,7 +4288,7 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
+ 	/* empty context1-15 */
+ 	/* set vm size, must be a multiple of 4 */
+ 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
+-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
++	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
+ 	/* Assign the pt base to something valid for now; the pts used for
+ 	 * the VMs are determined by the application and setup and assigned
+ 	 * on the fly in the vm part of radeon_gart.c
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index e77658cd037c..2caf5b2f3446 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -39,7 +39,6 @@ MODULE_AUTHOR("Nestor Lopez Casado <nlopezcasad@logitech.com>");
+ /* bits 1..20 are reserved for classes */
+ #define HIDPP_QUIRK_DELAYED_INIT		BIT(21)
+ #define HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS	BIT(22)
+-#define HIDPP_QUIRK_MULTI_INPUT			BIT(23)
+ 
+ /*
+  * There are two hidpp protocols in use, the first version hidpp10 is known
+@@ -701,12 +700,6 @@ static int wtp_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		struct hid_field *field, struct hid_usage *usage,
+ 		unsigned long **bit, int *max)
+ {
+-	struct hidpp_device *hidpp = hid_get_drvdata(hdev);
+-
+-	if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) &&
+-	    (field->application == HID_GD_KEYBOARD))
+-		return 0;
+-
+ 	return -1;
+ }
+ 
+@@ -715,10 +708,6 @@ static void wtp_populate_input(struct hidpp_device *hidpp,
+ {
+ 	struct wtp_data *wd = hidpp->private_data;
+ 
+-	if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) && origin_is_hid_core)
+-		/* this is the generic hid-input call */
+-		return;
+-
+ 	__set_bit(EV_ABS, input_dev->evbit);
+ 	__set_bit(EV_KEY, input_dev->evbit);
+ 	__clear_bit(EV_REL, input_dev->evbit);
+@@ -1234,10 +1223,6 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT)
+ 		connect_mask &= ~HID_CONNECT_HIDINPUT;
+ 
+-	/* Re-enable hidinput for multi-input devices */
+-	if (hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT)
+-		connect_mask |= HID_CONNECT_HIDINPUT;
+-
+ 	ret = hid_hw_start(hdev, connect_mask);
+ 	if (ret) {
+ 		hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+@@ -1285,11 +1270,6 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_T651),
+ 	  .driver_data = HIDPP_QUIRK_CLASS_WTP },
+-	{ /* Keyboard TK820 */
+-	  HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+-		USB_VENDOR_ID_LOGITECH, 0x4102),
+-	  .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_MULTI_INPUT |
+-			 HIDPP_QUIRK_CLASS_WTP },
+ 
+ 	{ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
+ 		USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},
+diff --git a/drivers/hwmon/nct6683.c b/drivers/hwmon/nct6683.c
+index f3830db02d46..37f01702d081 100644
+--- a/drivers/hwmon/nct6683.c
++++ b/drivers/hwmon/nct6683.c
+@@ -439,6 +439,7 @@ nct6683_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				 (*t)->dev_attr.attr.name, tg->base + i);
+ 			if ((*t)->s2) {
+ 				a2 = &su->u.a2;
++				sysfs_attr_init(&a2->dev_attr.attr);
+ 				a2->dev_attr.attr.name = su->name;
+ 				a2->nr = (*t)->u.s.nr + i;
+ 				a2->index = (*t)->u.s.index;
+@@ -449,6 +450,7 @@ nct6683_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				*attrs = &a2->dev_attr.attr;
+ 			} else {
+ 				a = &su->u.a1;
++				sysfs_attr_init(&a->dev_attr.attr);
+ 				a->dev_attr.attr.name = su->name;
+ 				a->index = (*t)->u.index + i;
+ 				a->dev_attr.attr.mode =
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index 1be41177b620..0773930c110e 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -994,6 +994,7 @@ nct6775_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				 (*t)->dev_attr.attr.name, tg->base + i);
+ 			if ((*t)->s2) {
+ 				a2 = &su->u.a2;
++				sysfs_attr_init(&a2->dev_attr.attr);
+ 				a2->dev_attr.attr.name = su->name;
+ 				a2->nr = (*t)->u.s.nr + i;
+ 				a2->index = (*t)->u.s.index;
+@@ -1004,6 +1005,7 @@ nct6775_create_attr_group(struct device *dev, struct sensor_template_group *tg,
+ 				*attrs = &a2->dev_attr.attr;
+ 			} else {
+ 				a = &su->u.a1;
++				sysfs_attr_init(&a->dev_attr.attr);
+ 				a->dev_attr.attr.name = su->name;
+ 				a->index = (*t)->u.index + i;
+ 				a->dev_attr.attr.mode =
+diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
+index 112e4d45e4a0..68800115876b 100644
+--- a/drivers/hwmon/ntc_thermistor.c
++++ b/drivers/hwmon/ntc_thermistor.c
+@@ -239,8 +239,10 @@ static struct ntc_thermistor_platform_data *
+ ntc_thermistor_parse_dt(struct platform_device *pdev)
+ {
+ 	struct iio_channel *chan;
++	enum iio_chan_type type;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct ntc_thermistor_platform_data *pdata;
++	int ret;
+ 
+ 	if (!np)
+ 		return NULL;
+@@ -253,6 +255,13 @@ ntc_thermistor_parse_dt(struct platform_device *pdev)
+ 	if (IS_ERR(chan))
+ 		return ERR_CAST(chan);
+ 
++	ret = iio_get_channel_type(chan, &type);
++	if (ret < 0)
++		return ERR_PTR(ret);
++
++	if (type != IIO_VOLTAGE)
++		return ERR_PTR(-EINVAL);
++
+ 	if (of_property_read_u32(np, "pullup-uv", &pdata->pullup_uv))
+ 		return ERR_PTR(-ENODEV);
+ 	if (of_property_read_u32(np, "pullup-ohm", &pdata->pullup_ohm))
+diff --git a/drivers/hwmon/tmp401.c b/drivers/hwmon/tmp401.c
+index 99664ebc738d..ccf4cffe0ee1 100644
+--- a/drivers/hwmon/tmp401.c
++++ b/drivers/hwmon/tmp401.c
+@@ -44,7 +44,7 @@
+ #include <linux/sysfs.h>
+ 
+ /* Addresses to scan */
+-static const unsigned short normal_i2c[] = { 0x37, 0x48, 0x49, 0x4a, 0x4c, 0x4d,
++static const unsigned short normal_i2c[] = { 0x48, 0x49, 0x4a, 0x4c, 0x4d,
+ 	0x4e, 0x4f, I2C_CLIENT_END };
+ 
+ enum chips { tmp401, tmp411, tmp431, tmp432, tmp435 };
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 53f32629283a..6805db0e4f07 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -465,6 +465,7 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &accel_info;
++	mutex_init(&adata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/adc/axp288_adc.c b/drivers/iio/adc/axp288_adc.c
+index 08bcfb061ca5..56008a86b78f 100644
+--- a/drivers/iio/adc/axp288_adc.c
++++ b/drivers/iio/adc/axp288_adc.c
+@@ -53,39 +53,42 @@ static const struct iio_chan_spec const axp288_adc_channels[] = {
+ 		.channel = 0,
+ 		.address = AXP288_TS_ADC_H,
+ 		.datasheet_name = "TS_PIN",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_TEMP,
+ 		.channel = 1,
+ 		.address = AXP288_PMIC_ADC_H,
+ 		.datasheet_name = "PMIC_TEMP",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_TEMP,
+ 		.channel = 2,
+ 		.address = AXP288_GP_ADC_H,
+ 		.datasheet_name = "GPADC",
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_CURRENT,
+ 		.channel = 3,
+ 		.address = AXP20X_BATT_CHRG_I_H,
+ 		.datasheet_name = "BATT_CHG_I",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_CURRENT,
+ 		.channel = 4,
+ 		.address = AXP20X_BATT_DISCHRG_I_H,
+ 		.datasheet_name = "BATT_DISCHRG_I",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	}, {
+ 		.indexed = 1,
+ 		.type = IIO_VOLTAGE,
+ 		.channel = 5,
+ 		.address = AXP20X_BATT_V_H,
+ 		.datasheet_name = "BATT_V",
+-		.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 	},
+ };
+ 
+@@ -151,9 +154,6 @@ static int axp288_adc_read_raw(struct iio_dev *indio_dev,
+ 						chan->address))
+ 			dev_err(&indio_dev->dev, "TS pin restore\n");
+ 		break;
+-	case IIO_CHAN_INFO_PROCESSED:
+-		ret = axp288_adc_read_channel(val, chan->address, info->regmap);
+-		break;
+ 	default:
+ 		ret = -EINVAL;
+ 	}
+diff --git a/drivers/iio/adc/cc10001_adc.c b/drivers/iio/adc/cc10001_adc.c
+index 51e2a83c9404..115f6e99a7fa 100644
+--- a/drivers/iio/adc/cc10001_adc.c
++++ b/drivers/iio/adc/cc10001_adc.c
+@@ -35,8 +35,9 @@
+ #define CC10001_ADC_EOC_SET		BIT(0)
+ 
+ #define CC10001_ADC_CHSEL_SAMPLED	0x0c
+-#define CC10001_ADC_POWER_UP		0x10
+-#define CC10001_ADC_POWER_UP_SET	BIT(0)
++#define CC10001_ADC_POWER_DOWN		0x10
++#define CC10001_ADC_POWER_DOWN_SET	BIT(0)
++
+ #define CC10001_ADC_DEBUG		0x14
+ #define CC10001_ADC_DATA_COUNT		0x20
+ 
+@@ -62,7 +63,6 @@ struct cc10001_adc_device {
+ 	u16 *buf;
+ 
+ 	struct mutex lock;
+-	unsigned long channel_map;
+ 	unsigned int start_delay_ns;
+ 	unsigned int eoc_delay_ns;
+ };
+@@ -79,6 +79,18 @@ static inline u32 cc10001_adc_read_reg(struct cc10001_adc_device *adc_dev,
+ 	return readl(adc_dev->reg_base + reg);
+ }
+ 
++static void cc10001_adc_power_up(struct cc10001_adc_device *adc_dev)
++{
++	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 0);
++	ndelay(adc_dev->start_delay_ns);
++}
++
++static void cc10001_adc_power_down(struct cc10001_adc_device *adc_dev)
++{
++	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN,
++			      CC10001_ADC_POWER_DOWN_SET);
++}
++
+ static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
+ 			      unsigned int channel)
+ {
+@@ -88,6 +100,7 @@ static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
+ 	val = (channel & CC10001_ADC_CH_MASK) | CC10001_ADC_MODE_SINGLE_CONV;
+ 	cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
+ 
++	udelay(1);
+ 	val = cc10001_adc_read_reg(adc_dev, CC10001_ADC_CONFIG);
+ 	val = val | CC10001_ADC_START_CONV;
+ 	cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
+@@ -129,6 +142,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 	struct iio_dev *indio_dev;
+ 	unsigned int delay_ns;
+ 	unsigned int channel;
++	unsigned int scan_idx;
+ 	bool sample_invalid;
+ 	u16 *data;
+ 	int i;
+@@ -139,20 +153,17 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 
+ 	mutex_lock(&adc_dev->lock);
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
+-			      CC10001_ADC_POWER_UP_SET);
+-
+-	/* Wait for 8 (6+2) clock cycles before activating START */
+-	ndelay(adc_dev->start_delay_ns);
++	cc10001_adc_power_up(adc_dev);
+ 
+ 	/* Calculate delay step for eoc and sampled data */
+ 	delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
+ 
+ 	i = 0;
+ 	sample_invalid = false;
+-	for_each_set_bit(channel, indio_dev->active_scan_mask,
++	for_each_set_bit(scan_idx, indio_dev->active_scan_mask,
+ 				  indio_dev->masklength) {
+ 
++		channel = indio_dev->channels[scan_idx].channel;
+ 		cc10001_adc_start(adc_dev, channel);
+ 
+ 		data[i] = cc10001_adc_poll_done(indio_dev, channel, delay_ns);
+@@ -166,7 +177,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
+ 	}
+ 
+ done:
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
++	cc10001_adc_power_down(adc_dev);
+ 
+ 	mutex_unlock(&adc_dev->lock);
+ 
+@@ -185,11 +196,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
+ 	unsigned int delay_ns;
+ 	u16 val;
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
+-			      CC10001_ADC_POWER_UP_SET);
+-
+-	/* Wait for 8 (6+2) clock cycles before activating START */
+-	ndelay(adc_dev->start_delay_ns);
++	cc10001_adc_power_up(adc_dev);
+ 
+ 	/* Calculate delay step for eoc and sampled data */
+ 	delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
+@@ -198,7 +205,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
+ 
+ 	val = cc10001_adc_poll_done(indio_dev, chan->channel, delay_ns);
+ 
+-	cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
++	cc10001_adc_power_down(adc_dev);
+ 
+ 	return val;
+ }
+@@ -224,7 +231,7 @@ static int cc10001_adc_read_raw(struct iio_dev *indio_dev,
+ 
+ 	case IIO_CHAN_INFO_SCALE:
+ 		ret = regulator_get_voltage(adc_dev->reg);
+-		if (ret)
++		if (ret < 0)
+ 			return ret;
+ 
+ 		*val = ret / 1000;
+@@ -255,22 +262,22 @@ static const struct iio_info cc10001_adc_info = {
+ 	.update_scan_mode = &cc10001_update_scan_mode,
+ };
+ 
+-static int cc10001_adc_channel_init(struct iio_dev *indio_dev)
++static int cc10001_adc_channel_init(struct iio_dev *indio_dev,
++				    unsigned long channel_map)
+ {
+-	struct cc10001_adc_device *adc_dev = iio_priv(indio_dev);
+ 	struct iio_chan_spec *chan_array, *timestamp;
+ 	unsigned int bit, idx = 0;
+ 
+-	indio_dev->num_channels = bitmap_weight(&adc_dev->channel_map,
+-						CC10001_ADC_NUM_CHANNELS);
++	indio_dev->num_channels = bitmap_weight(&channel_map,
++						CC10001_ADC_NUM_CHANNELS) + 1;
+ 
+-	chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels + 1,
++	chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels,
+ 				  sizeof(struct iio_chan_spec),
+ 				  GFP_KERNEL);
+ 	if (!chan_array)
+ 		return -ENOMEM;
+ 
+-	for_each_set_bit(bit, &adc_dev->channel_map, CC10001_ADC_NUM_CHANNELS) {
++	for_each_set_bit(bit, &channel_map, CC10001_ADC_NUM_CHANNELS) {
+ 		struct iio_chan_spec *chan = &chan_array[idx];
+ 
+ 		chan->type = IIO_VOLTAGE;
+@@ -305,6 +312,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 	unsigned long adc_clk_rate;
+ 	struct resource *res;
+ 	struct iio_dev *indio_dev;
++	unsigned long channel_map;
+ 	int ret;
+ 
+ 	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc_dev));
+@@ -313,9 +321,9 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 
+ 	adc_dev = iio_priv(indio_dev);
+ 
+-	adc_dev->channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
++	channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
+ 	if (!of_property_read_u32(node, "adc-reserved-channels", &ret))
+-		adc_dev->channel_map &= ~ret;
++		channel_map &= ~ret;
+ 
+ 	adc_dev->reg = devm_regulator_get(&pdev->dev, "vref");
+ 	if (IS_ERR(adc_dev->reg))
+@@ -361,7 +369,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
+ 	adc_dev->start_delay_ns = adc_dev->eoc_delay_ns * CC10001_WAIT_CYCLES;
+ 
+ 	/* Setup the ADC channels available on the device */
+-	ret = cc10001_adc_channel_init(indio_dev);
++	ret = cc10001_adc_channel_init(indio_dev, channel_map);
+ 	if (ret < 0)
+ 		goto err_disable_clk;
+ 
+diff --git a/drivers/iio/adc/qcom-spmi-vadc.c b/drivers/iio/adc/qcom-spmi-vadc.c
+index 3211729bcb0b..0c4618b4d515 100644
+--- a/drivers/iio/adc/qcom-spmi-vadc.c
++++ b/drivers/iio/adc/qcom-spmi-vadc.c
+@@ -18,6 +18,7 @@
+ #include <linux/iio/iio.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
++#include <linux/math64.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -471,11 +472,11 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
+ 			  const struct vadc_channel_prop *prop, u16 adc_code)
+ {
+ 	const struct vadc_prescale_ratio *prescale;
+-	s32 voltage;
++	s64 voltage;
+ 
+ 	voltage = adc_code - vadc->graph[prop->calibration].gnd;
+ 	voltage *= vadc->graph[prop->calibration].dx;
+-	voltage = voltage / vadc->graph[prop->calibration].dy;
++	voltage = div64_s64(voltage, vadc->graph[prop->calibration].dy);
+ 
+ 	if (prop->calibration == VADC_CALIB_ABSOLUTE)
+ 		voltage += vadc->graph[prop->calibration].dx;
+@@ -487,7 +488,7 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
+ 
+ 	voltage = voltage * prescale->den;
+ 
+-	return voltage / prescale->num;
++	return div64_s64(voltage, prescale->num);
+ }
+ 
+ static int vadc_decimation_from_dt(u32 value)
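
[Editor's note: the vadc change is overflow hygiene. The intermediate (adc_code - gnd) * dx can exceed what an s32 holds, so it is widened to s64 and divided with div64_s64(), since plain 64-bit division is not available on all 32-bit kernels. A userspace illustration with made-up calibration numbers:]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int32_t code_minus_gnd = 30000;	/* plausible ADC delta */
	int32_t dx = 625000;		/* hypothetical slope, in uV */

	int64_t wide = (int64_t)code_minus_gnd * dx;	/* 18750000000 */
	int32_t narrow = (int32_t)wide;	/* what s32 arithmetic would keep */

	printf("64-bit: %lld, truncated to 32-bit: %d\n",
	       (long long)wide, narrow);
	return 0;
}
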
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index a221f7329b79..ce93bd8e3f68 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -856,6 +856,7 @@ static int xadc_read_raw(struct iio_dev *indio_dev,
+ 			switch (chan->address) {
+ 			case XADC_REG_VCCINT:
+ 			case XADC_REG_VCCAUX:
++			case XADC_REG_VREFP:
+ 			case XADC_REG_VCCBRAM:
+ 			case XADC_REG_VCCPINT:
+ 			case XADC_REG_VCCPAUX:
+@@ -996,7 +997,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
+ 	.num_event_specs = (_alarm) ? ARRAY_SIZE(xadc_voltage_events) : 0, \
+ 	.scan_index = (_scan_index), \
+ 	.scan_type = { \
+-		.sign = 'u', \
++		.sign = ((_addr) == XADC_REG_VREFN) ? 's' : 'u', \
+ 		.realbits = 12, \
+ 		.storagebits = 16, \
+ 		.shift = 4, \
+@@ -1008,7 +1009,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
+ static const struct iio_chan_spec xadc_channels[] = {
+ 	XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP),
+ 	XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true),
+-	XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCINT, "vccaux", true),
++	XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCAUX, "vccaux", true),
+ 	XADC_CHAN_VOLTAGE(2, 14, XADC_REG_VCCBRAM, "vccbram", true),
+ 	XADC_CHAN_VOLTAGE(3, 5, XADC_REG_VCCPINT, "vccpint", true),
+ 	XADC_CHAN_VOLTAGE(4, 6, XADC_REG_VCCPAUX, "vccpaux", true),
+diff --git a/drivers/iio/adc/xilinx-xadc.h b/drivers/iio/adc/xilinx-xadc.h
+index c7487e8d7f80..54adc5087210 100644
+--- a/drivers/iio/adc/xilinx-xadc.h
++++ b/drivers/iio/adc/xilinx-xadc.h
+@@ -145,9 +145,9 @@ static inline int xadc_write_adc_reg(struct xadc *xadc, unsigned int reg,
+ #define XADC_REG_MAX_VCCPINT	0x28
+ #define XADC_REG_MAX_VCCPAUX	0x29
+ #define XADC_REG_MAX_VCCO_DDR	0x2a
+-#define XADC_REG_MIN_VCCPINT	0x2b
+-#define XADC_REG_MIN_VCCPAUX	0x2c
+-#define XADC_REG_MIN_VCCO_DDR	0x2d
++#define XADC_REG_MIN_VCCPINT	0x2c
++#define XADC_REG_MIN_VCCPAUX	0x2d
++#define XADC_REG_MIN_VCCO_DDR	0x2e
+ 
+ #define XADC_REG_CONF0		0x40
+ #define XADC_REG_CONF1		0x41
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index edd13d2b4121..8dd0477e201c 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -304,8 +304,6 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 	struct st_sensors_platform_data *of_pdata;
+ 	int err = 0;
+ 
+-	mutex_init(&sdata->tb.buf_lock);
+-
+ 	/* If OF/DT pdata exists, it will take precedence of anything else */
+ 	of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata);
+ 	if (of_pdata)
+diff --git a/drivers/iio/gyro/st_gyro_core.c b/drivers/iio/gyro/st_gyro_core.c
+index f07a2336f7dc..566f7d2df031 100644
+--- a/drivers/iio/gyro/st_gyro_core.c
++++ b/drivers/iio/gyro/st_gyro_core.c
+@@ -317,6 +317,7 @@ int st_gyro_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &gyro_info;
++	mutex_init(&gdata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index 3ecf79ed08ac..88f21bbe947c 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -43,8 +43,6 @@ struct prox_state {
+ static const struct iio_chan_spec prox_channels[] = {
+ 	{
+ 		.type = IIO_PROXIMITY,
+-		.modified = 1,
+-		.channel2 = IIO_NO_MOD,
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
+ 		BIT(IIO_CHAN_INFO_SCALE) |
+diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c
+index 8ade473f99fe..2e56f812a644 100644
+--- a/drivers/iio/magnetometer/st_magn_core.c
++++ b/drivers/iio/magnetometer/st_magn_core.c
+@@ -369,6 +369,7 @@ int st_magn_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &magn_info;
++	mutex_init(&mdata->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/iio/pressure/hid-sensor-press.c b/drivers/iio/pressure/hid-sensor-press.c
+index 1af314926ebd..476a7d03d2ce 100644
+--- a/drivers/iio/pressure/hid-sensor-press.c
++++ b/drivers/iio/pressure/hid-sensor-press.c
+@@ -47,8 +47,6 @@ struct press_state {
+ static const struct iio_chan_spec press_channels[] = {
+ 	{
+ 		.type = IIO_PRESSURE,
+-		.modified = 1,
+-		.channel2 = IIO_NO_MOD,
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
+ 		BIT(IIO_CHAN_INFO_SCALE) |
+diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
+index 97baf40d424b..e881fa6291e9 100644
+--- a/drivers/iio/pressure/st_pressure_core.c
++++ b/drivers/iio/pressure/st_pressure_core.c
+@@ -417,6 +417,7 @@ int st_press_common_probe(struct iio_dev *indio_dev)
+ 
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 	indio_dev->info = &press_info;
++	mutex_init(&press_data->tb.buf_lock);
+ 
+ 	st_sensors_power_enable(indio_dev);
+ 
+diff --git a/drivers/infiniband/core/iwpm_msg.c b/drivers/infiniband/core/iwpm_msg.c
+index b85ddbc979e0..e5558b2660f2 100644
+--- a/drivers/infiniband/core/iwpm_msg.c
++++ b/drivers/infiniband/core/iwpm_msg.c
+@@ -33,7 +33,7 @@
+ 
+ #include "iwpm_util.h"
+ 
+-static const char iwpm_ulib_name[] = "iWarpPortMapperUser";
++static const char iwpm_ulib_name[IWPM_ULIBNAME_SIZE] = "iWarpPortMapperUser";
+ static int iwpm_ulib_version = 3;
+ static int iwpm_user_pid = IWPM_PID_UNDEFINED;
+ static atomic_t echo_nlmsg_seq;
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 991dc6b20a58..79363b687195 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -315,7 +315,7 @@ static void elantech_report_semi_mt_data(struct input_dev *dev,
+ 					 unsigned int x2, unsigned int y2)
+ {
+ 	elantech_set_slot(dev, 0, num_fingers != 0, x1, y1);
+-	elantech_set_slot(dev, 1, num_fingers == 2, x2, y2);
++	elantech_set_slot(dev, 1, num_fingers >= 2, x2, y2);
+ }
+ 
+ /*
+diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
+index 6d5a5c44453b..173e70dbf61b 100644
+--- a/drivers/iommu/amd_iommu_v2.c
++++ b/drivers/iommu/amd_iommu_v2.c
+@@ -266,6 +266,7 @@ static void put_pasid_state(struct pasid_state *pasid_state)
+ 
+ static void put_pasid_state_wait(struct pasid_state *pasid_state)
+ {
++	atomic_dec(&pasid_state->count);
+ 	wait_event(pasid_state->wq, !atomic_read(&pasid_state->count));
+ 	free_pasid_state(pasid_state);
+ }
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index a3adde6519f0..bd6252b01510 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -224,14 +224,7 @@
+ #define RESUME_TERMINATE		(1 << 0)
+ 
+ #define TTBCR2_SEP_SHIFT		15
+-#define TTBCR2_SEP_MASK			0x7
+-
+-#define TTBCR2_ADDR_32			0
+-#define TTBCR2_ADDR_36			1
+-#define TTBCR2_ADDR_40			2
+-#define TTBCR2_ADDR_42			3
+-#define TTBCR2_ADDR_44			4
+-#define TTBCR2_ADDR_48			5
++#define TTBCR2_SEP_UPSTREAM		(0x7 << TTBCR2_SEP_SHIFT)
+ 
+ #define TTBRn_HI_ASID_SHIFT            16
+ 
+@@ -783,26 +776,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
+ 		writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
+ 		if (smmu->version > ARM_SMMU_V1) {
+ 			reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32;
+-			switch (smmu->va_size) {
+-			case 32:
+-				reg |= (TTBCR2_ADDR_32 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 36:
+-				reg |= (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 40:
+-				reg |= (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 42:
+-				reg |= (TTBCR2_ADDR_42 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 44:
+-				reg |= (TTBCR2_ADDR_44 << TTBCR2_SEP_SHIFT);
+-				break;
+-			case 48:
+-				reg |= (TTBCR2_ADDR_48 << TTBCR2_SEP_SHIFT);
+-				break;
+-			}
++			reg |= TTBCR2_SEP_UPSTREAM;
+ 			writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR2);
+ 		}
+ 	} else {
+diff --git a/drivers/lguest/core.c b/drivers/lguest/core.c
+index 7dc93aa004c8..312ffd3d0017 100644
+--- a/drivers/lguest/core.c
++++ b/drivers/lguest/core.c
+@@ -173,7 +173,7 @@ static void unmap_switcher(void)
+ bool lguest_address_ok(const struct lguest *lg,
+ 		       unsigned long addr, unsigned long len)
+ {
+-	return (addr+len) / PAGE_SIZE < lg->pfn_limit && (addr+len >= addr);
++	return addr+len <= lg->pfn_limit * PAGE_SIZE && (addr+len >= addr);
+ }
+ 
+ /*
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 6554d9148927..757f1ba34c4d 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -823,6 +823,12 @@ void dm_consume_args(struct dm_arg_set *as, unsigned num_args)
+ }
+ EXPORT_SYMBOL(dm_consume_args);
+ 
++static bool __table_type_request_based(unsigned table_type)
++{
++	return (table_type == DM_TYPE_REQUEST_BASED ||
++		table_type == DM_TYPE_MQ_REQUEST_BASED);
++}
++
+ static int dm_table_set_type(struct dm_table *t)
+ {
+ 	unsigned i;
+@@ -855,8 +861,7 @@ static int dm_table_set_type(struct dm_table *t)
+ 		 * Determine the type from the live device.
+ 		 * Default to bio-based if device is new.
+ 		 */
+-		if (live_md_type == DM_TYPE_REQUEST_BASED ||
+-		    live_md_type == DM_TYPE_MQ_REQUEST_BASED)
++		if (__table_type_request_based(live_md_type))
+ 			request_based = 1;
+ 		else
+ 			bio_based = 1;
+@@ -906,7 +911,7 @@ static int dm_table_set_type(struct dm_table *t)
+ 			}
+ 		t->type = DM_TYPE_MQ_REQUEST_BASED;
+ 
+-	} else if (hybrid && list_empty(devices) && live_md_type != DM_TYPE_NONE) {
++	} else if (list_empty(devices) && __table_type_request_based(live_md_type)) {
+ 		/* inherit live MD type */
+ 		t->type = live_md_type;
+ 
+@@ -928,10 +933,7 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
+ 
+ bool dm_table_request_based(struct dm_table *t)
+ {
+-	unsigned table_type = dm_table_get_type(t);
+-
+-	return (table_type == DM_TYPE_REQUEST_BASED ||
+-		table_type == DM_TYPE_MQ_REQUEST_BASED);
++	return __table_type_request_based(dm_table_get_type(t));
+ }
+ 
+ bool dm_table_mq_request_based(struct dm_table *t)
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 8001fe9e3434..9b4e30a82e4a 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1642,8 +1642,7 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	struct mapped_device *md = q->queuedata;
+ 	struct dm_table *map = dm_get_live_table_fast(md);
+ 	struct dm_target *ti;
+-	sector_t max_sectors;
+-	int max_size = 0;
++	sector_t max_sectors, max_size = 0;
+ 
+ 	if (unlikely(!map))
+ 		goto out;
+@@ -1658,8 +1657,16 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	max_sectors = min(max_io_len(bvm->bi_sector, ti),
+ 			  (sector_t) queue_max_sectors(q));
+ 	max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
+-	if (unlikely(max_size < 0)) /* this shouldn't _ever_ happen */
+-		max_size = 0;
++
++	/*
++	 * FIXME: this stop-gap fix _must_ be cleaned up (by passing a sector_t
++	 * to the targets' merge function since it holds sectors not bytes).
++	 * Just doing this as an interim fix for stable@ because the more
++	 * comprehensive cleanup of switching to sector_t will impact every
++	 * DM target that implements a ->merge hook.
++	 */
++	if (max_size > INT_MAX)
++		max_size = INT_MAX;
+ 
+ 	/*
+ 	 * merge_bvec_fn() returns number of bytes
+@@ -1667,7 +1674,7 @@ static int dm_merge_bvec(struct request_queue *q,
+ 	 * max is precomputed maximal io size
+ 	 */
+ 	if (max_size && ti->type->merge)
+-		max_size = ti->type->merge(ti, bvm, biovec, max_size);
++		max_size = ti->type->merge(ti, bvm, biovec, (int) max_size);
+ 	/*
+ 	 * If the target doesn't support merge method and some of the devices
+ 	 * provided their merge_bvec method (we know this by looking for the
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index e47d1dd046da..907534b7f40d 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4138,12 +4138,12 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 	if (!mddev->pers || !mddev->pers->sync_request)
+ 		return -EINVAL;
+ 
+-	if (cmd_match(page, "frozen"))
+-		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-	else
+-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 
+ 	if (cmd_match(page, "idle") || cmd_match(page, "frozen")) {
++		if (cmd_match(page, "frozen"))
++			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++		else
++			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		flush_workqueue(md_misc_wq);
+ 		if (mddev->sync_thread) {
+ 			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+@@ -4156,16 +4156,17 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 		   test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
+ 		return -EBUSY;
+ 	else if (cmd_match(page, "resync"))
+-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 	else if (cmd_match(page, "recover")) {
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	} else if (cmd_match(page, "reshape")) {
+ 		int err;
+ 		if (mddev->pers->start_reshape == NULL)
+ 			return -EINVAL;
+ 		err = mddev_lock(mddev);
+ 		if (!err) {
++			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 			err = mddev->pers->start_reshape(mddev);
+ 			mddev_unlock(mddev);
+ 		}
+@@ -4177,6 +4178,7 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 		else if (!cmd_match(page, "repair"))
+ 			return -EINVAL;
++		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 	}
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 3b5d7f704aa3..903391ce9353 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -517,6 +517,9 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			 ? (sector & (chunk_sects-1))
+ 			 : sector_div(sector, chunk_sects));
+ 
++		/* Restore due to sector_div */
++		sector = bio->bi_iter.bi_sector;
++
+ 		if (sectors < bio_sectors(bio)) {
+ 			split = bio_split(bio, sectors, GFP_NOIO, fs_bio_set);
+ 			bio_chain(split, bio);
+@@ -524,7 +527,6 @@ static void raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 			split = bio;
+ 		}
+ 
+-		sector = bio->bi_iter.bi_sector;
+ 		zone = find_zone(mddev->private, &sector);
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+ 		split->bi_bdev = tmp_dev->bdev;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cd2f96b2c572..007ab861eca0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -1933,7 +1933,8 @@ static int resize_stripes(struct r5conf *conf, int newsize)
+ 
+ 	conf->slab_cache = sc;
+ 	conf->active_name = 1-conf->active_name;
+-	conf->pool_size = newsize;
++	if (!err)
++		conf->pool_size = newsize;
+ 	return err;
+ }
+ 
+diff --git a/drivers/mfd/da9052-core.c b/drivers/mfd/da9052-core.c
+index ae498b53ee40..46e3840c7a37 100644
+--- a/drivers/mfd/da9052-core.c
++++ b/drivers/mfd/da9052-core.c
+@@ -433,6 +433,10 @@ EXPORT_SYMBOL_GPL(da9052_adc_read_temp);
+ static const struct mfd_cell da9052_subdev_info[] = {
+ 	{
+ 		.name = "da9052-regulator",
++		.id = 0,
++	},
++	{
++		.name = "da9052-regulator",
+ 		.id = 1,
+ 	},
+ 	{
+@@ -484,10 +488,6 @@ static const struct mfd_cell da9052_subdev_info[] = {
+ 		.id = 13,
+ 	},
+ 	{
+-		.name = "da9052-regulator",
+-		.id = 14,
+-	},
+-	{
+ 		.name = "da9052-onkey",
+ 	},
+ 	{
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 03d7c7521d97..9a39e0b7e583 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1304,7 +1304,7 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 
+ 	if (ios->clock) {
+ 		unsigned int clock_min = ~0U;
+-		u32 clkdiv;
++		int clkdiv;
+ 
+ 		spin_lock_bh(&host->lock);
+ 		if (!host->mode_reg) {
+@@ -1328,7 +1328,12 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		/* Calculate clock divider */
+ 		if (host->caps.has_odd_clk_div) {
+ 			clkdiv = DIV_ROUND_UP(host->bus_hz, clock_min) - 2;
+-			if (clkdiv > 511) {
++			if (clkdiv < 0) {
++				dev_warn(&mmc->class_dev,
++					 "clock %u too fast; using %lu\n",
++					 clock_min, host->bus_hz / 2);
++				clkdiv = 0;
++			} else if (clkdiv > 511) {
+ 				dev_warn(&mmc->class_dev,
+ 				         "clock %u too slow; using %lu\n",
+ 				         clock_min, host->bus_hz / (511 + 2));
+diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
+index db2c05b6fe7f..c9eb78f10a0d 100644
+--- a/drivers/mtd/ubi/block.c
++++ b/drivers/mtd/ubi/block.c
+@@ -310,6 +310,8 @@ static void ubiblock_do_work(struct work_struct *work)
+ 	blk_rq_map_sg(req->q, req, pdu->usgl.sg);
+ 
+ 	ret = ubiblock_read(pdu);
++	rq_flush_dcache_pages(req);
++
+ 	blk_mq_end_request(req, ret);
+ }
+ 
+diff --git a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+index 6262612dec45..7a3231d8b933 100644
+--- a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+@@ -512,11 +512,9 @@ static int brcmf_msgbuf_query_dcmd(struct brcmf_pub *drvr, int ifidx,
+ 				     msgbuf->rx_pktids,
+ 				     msgbuf->ioctl_resp_pktid);
+ 	if (msgbuf->ioctl_resp_ret_len != 0) {
+-		if (!skb) {
+-			brcmf_err("Invalid packet id idx recv'd %d\n",
+-				  msgbuf->ioctl_resp_pktid);
++		if (!skb)
+ 			return -EBADF;
+-		}
++
+ 		memcpy(buf, skb->data, (len < msgbuf->ioctl_resp_ret_len) ?
+ 				       len : msgbuf->ioctl_resp_ret_len);
+ 	}
+@@ -875,10 +873,8 @@ brcmf_msgbuf_process_txstatus(struct brcmf_msgbuf *msgbuf, void *buf)
+ 	flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ 	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
+ 				     msgbuf->tx_pktids, idx);
+-	if (!skb) {
+-		brcmf_err("Invalid packet id idx recv'd %d\n", idx);
++	if (!skb)
+ 		return;
+-	}
+ 
+ 	set_bit(flowid, msgbuf->txstatus_done_map);
+ 	commonring = msgbuf->flowrings[flowid];
+@@ -1157,6 +1153,8 @@ brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf)
+ 
+ 	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
+ 				     msgbuf->rx_pktids, idx);
++	if (!skb)
++		return;
+ 
+ 	if (data_offset)
+ 		skb_pull(skb, data_offset);
+diff --git a/drivers/net/wireless/iwlwifi/mvm/d3.c b/drivers/net/wireless/iwlwifi/mvm/d3.c
+index 14e8fd661889..fd5a0bb1493f 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/iwlwifi/mvm/d3.c
+@@ -1742,8 +1742,10 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ 	int i, j, n_matches, ret;
+ 
+ 	fw_status = iwl_mvm_get_wakeup_status(mvm, vif);
+-	if (!IS_ERR_OR_NULL(fw_status))
++	if (!IS_ERR_OR_NULL(fw_status)) {
+ 		reasons = le32_to_cpu(fw_status->wakeup_reasons);
++		kfree(fw_status);
++	}
+ 
+ 	if (reasons & IWL_WOWLAN_WAKEUP_BY_RFKILL_DEASSERTED)
+ 		wakeup.rfkill_release = true;
+@@ -1860,15 +1862,15 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 	/* get the BSS vif pointer again */
+ 	vif = iwl_mvm_get_bss_vif(mvm);
+ 	if (IS_ERR_OR_NULL(vif))
+-		goto out_unlock;
++		goto err;
+ 
+ 	ret = iwl_trans_d3_resume(mvm->trans, &d3_status, test);
+ 	if (ret)
+-		goto out_unlock;
++		goto err;
+ 
+ 	if (d3_status != IWL_D3_STATUS_ALIVE) {
+ 		IWL_INFO(mvm, "Device was reset during suspend\n");
+-		goto out_unlock;
++		goto err;
+ 	}
+ 
+ 	/* query SRAM first in case we want event logging */
+@@ -1886,7 +1888,8 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 	/* has unlocked the mutex, so skip that */
+ 	goto out;
+ 
+- out_unlock:
++err:
++	iwl_mvm_free_nd(mvm);
+ 	mutex_unlock(&mvm->mutex);
+ 
+  out:
+diff --git a/drivers/net/wireless/iwlwifi/pcie/trans.c b/drivers/net/wireless/iwlwifi/pcie/trans.c
+index 69935aa5a1b3..cb72edb3d16a 100644
+--- a/drivers/net/wireless/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/iwlwifi/pcie/trans.c
+@@ -5,8 +5,8 @@
+  *
+  * GPL LICENSE SUMMARY
+  *
+- * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2007 - 2015 Intel Corporation. All rights reserved.
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,8 +31,8 @@
+  *
+  * BSD LICENSE
+  *
+- * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2005 - 2015 Intel Corporation. All rights reserved.
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -104,7 +104,7 @@ static void iwl_pcie_free_fw_monitor(struct iwl_trans *trans)
+ static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	struct page *page;
++	struct page *page = NULL;
+ 	dma_addr_t phys;
+ 	u32 size;
+ 	u8 power;
+@@ -131,6 +131,7 @@ static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
+ 				    DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(trans->dev, phys)) {
+ 			__free_pages(page, order);
++			page = NULL;
+ 			continue;
+ 		}
+ 		IWL_INFO(trans,
+diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c
+index 8444313eabe2..8694dddcce9a 100644
+--- a/drivers/net/wireless/rt2x00/rt2800usb.c
++++ b/drivers/net/wireless/rt2x00/rt2800usb.c
+@@ -1040,6 +1040,7 @@ static struct usb_device_id rt2800usb_device_table[] = {
+ 	{ USB_DEVICE(0x07d1, 0x3c17) },
+ 	{ USB_DEVICE(0x2001, 0x3317) },
+ 	{ USB_DEVICE(0x2001, 0x3c1b) },
++	{ USB_DEVICE(0x2001, 0x3c25) },
+ 	/* Draytek */
+ 	{ USB_DEVICE(0x07fa, 0x7712) },
+ 	/* DVICO */
+diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
+index 46ee956d0235..27cd6cabf6c5 100644
+--- a/drivers/net/wireless/rtlwifi/usb.c
++++ b/drivers/net/wireless/rtlwifi/usb.c
+@@ -126,7 +126,7 @@ static int _usbctrl_vendorreq_sync_read(struct usb_device *udev, u8 request,
+ 
+ 	do {
+ 		status = usb_control_msg(udev, pipe, request, reqtype, value,
+-					 index, pdata, len, 0); /*max. timeout*/
++					 index, pdata, len, 1000);
+ 		if (status < 0) {
+ 			/* firmware download is checksumed, don't retry */
+ 			if ((value >= FW_8192C_START_ADDRESS &&
+diff --git a/drivers/power/reset/at91-reset.c b/drivers/power/reset/at91-reset.c
+index 13584e24736a..4d7d60e593b8 100644
+--- a/drivers/power/reset/at91-reset.c
++++ b/drivers/power/reset/at91-reset.c
+@@ -212,9 +212,9 @@ static int at91_reset_platform_probe(struct platform_device *pdev)
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, idx + 1 );
+ 		at91_ramc_base[idx] = devm_ioremap(&pdev->dev, res->start,
+ 						   resource_size(res));
+-		if (IS_ERR(at91_ramc_base[idx])) {
++		if (!at91_ramc_base[idx]) {
+ 			dev_err(&pdev->dev, "Could not map ram controller address\n");
+-			return PTR_ERR(at91_ramc_base[idx]);
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 476171a768d6..8a029f9bc18c 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -16,6 +16,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pwm.h>
+ #include <linux/regmap.h>
+@@ -38,7 +39,22 @@
+ #define PERIP_PWM_PDM_CONTROL_CH_MASK		0x1
+ #define PERIP_PWM_PDM_CONTROL_CH_SHIFT(ch)	((ch) * 4)
+ 
+-#define MAX_TMBASE_STEPS			65536
++/*
++ * PWM period is specified with a timebase register,
++ * in number of step periods. The PWM duty cycle is also
++ * specified in step periods, in the [0, $timebase] range.
++ * In other words, the timebase imposes the duty cycle
++ * resolution. Therefore, let's constrain the timebase to
++ * a minimum value to allow a sane range of duty cycle values.
++ * Imposing a minimum timebase will impose a maximum PWM frequency.
++ *
++ * The value chosen is completely arbitrary.
++ */
++#define MIN_TMBASE_STEPS			16
++
++struct img_pwm_soc_data {
++	u32 max_timebase;
++};
+ 
+ struct img_pwm_chip {
+ 	struct device	*dev;
+@@ -47,6 +63,9 @@ struct img_pwm_chip {
+ 	struct clk	*sys_clk;
+ 	void __iomem	*base;
+ 	struct regmap	*periph_regs;
++	int		max_period_ns;
++	int		min_period_ns;
++	const struct img_pwm_soc_data   *data;
+ };
+ 
+ static inline struct img_pwm_chip *to_img_pwm_chip(struct pwm_chip *chip)
+@@ -72,24 +91,31 @@ static int img_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	u32 val, div, duty, timebase;
+ 	unsigned long mul, output_clk_hz, input_clk_hz;
+ 	struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
++	unsigned int max_timebase = pwm_chip->data->max_timebase;
++
++	if (period_ns < pwm_chip->min_period_ns ||
++	    period_ns > pwm_chip->max_period_ns) {
++		dev_err(chip->dev, "configured period not in range\n");
++		return -ERANGE;
++	}
+ 
+ 	input_clk_hz = clk_get_rate(pwm_chip->pwm_clk);
+ 	output_clk_hz = DIV_ROUND_UP(NSEC_PER_SEC, period_ns);
+ 
+ 	mul = DIV_ROUND_UP(input_clk_hz, output_clk_hz);
+-	if (mul <= MAX_TMBASE_STEPS) {
++	if (mul <= max_timebase) {
+ 		div = PWM_CTRL_CFG_NO_SUB_DIV;
+ 		timebase = DIV_ROUND_UP(mul, 1);
+-	} else if (mul <= MAX_TMBASE_STEPS * 8) {
++	} else if (mul <= max_timebase * 8) {
+ 		div = PWM_CTRL_CFG_SUB_DIV0;
+ 		timebase = DIV_ROUND_UP(mul, 8);
+-	} else if (mul <= MAX_TMBASE_STEPS * 64) {
++	} else if (mul <= max_timebase * 64) {
+ 		div = PWM_CTRL_CFG_SUB_DIV1;
+ 		timebase = DIV_ROUND_UP(mul, 64);
+-	} else if (mul <= MAX_TMBASE_STEPS * 512) {
++	} else if (mul <= max_timebase * 512) {
+ 		div = PWM_CTRL_CFG_SUB_DIV0_DIV1;
+ 		timebase = DIV_ROUND_UP(mul, 512);
+-	} else if (mul > MAX_TMBASE_STEPS * 512) {
++	} else if (mul > max_timebase * 512) {
+ 		dev_err(chip->dev,
+ 			"failed to configure timebase steps/divider value\n");
+ 		return -EINVAL;
+@@ -143,11 +169,27 @@ static const struct pwm_ops img_pwm_ops = {
+ 	.owner = THIS_MODULE,
+ };
+ 
++static const struct img_pwm_soc_data pistachio_pwm = {
++	.max_timebase = 255,
++};
++
++static const struct of_device_id img_pwm_of_match[] = {
++	{
++		.compatible = "img,pistachio-pwm",
++		.data = &pistachio_pwm,
++	},
++	{ }
++};
++MODULE_DEVICE_TABLE(of, img_pwm_of_match);
++
+ static int img_pwm_probe(struct platform_device *pdev)
+ {
+ 	int ret;
++	u64 val;
++	unsigned long clk_rate;
+ 	struct resource *res;
+ 	struct img_pwm_chip *pwm;
++	const struct of_device_id *of_dev_id;
+ 
+ 	pwm = devm_kzalloc(&pdev->dev, sizeof(*pwm), GFP_KERNEL);
+ 	if (!pwm)
+@@ -160,6 +202,11 @@ static int img_pwm_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pwm->base))
+ 		return PTR_ERR(pwm->base);
+ 
++	of_dev_id = of_match_device(img_pwm_of_match, &pdev->dev);
++	if (!of_dev_id)
++		return -ENODEV;
++	pwm->data = of_dev_id->data;
++
+ 	pwm->periph_regs = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ 							   "img,cr-periph");
+ 	if (IS_ERR(pwm->periph_regs))
+@@ -189,6 +236,17 @@ static int img_pwm_probe(struct platform_device *pdev)
+ 		goto disable_sysclk;
+ 	}
+ 
++	clk_rate = clk_get_rate(pwm->pwm_clk);
++
++	/* The maximum input clock divider is 512 */
++	val = (u64)NSEC_PER_SEC * 512 * pwm->data->max_timebase;
++	do_div(val, clk_rate);
++	pwm->max_period_ns = val;
++
++	val = (u64)NSEC_PER_SEC * MIN_TMBASE_STEPS;
++	do_div(val, clk_rate);
++	pwm->min_period_ns = val;
++
+ 	pwm->chip.dev = &pdev->dev;
+ 	pwm->chip.ops = &img_pwm_ops;
+ 	pwm->chip.base = -1;
+@@ -228,12 +286,6 @@ static int img_pwm_remove(struct platform_device *pdev)
+ 	return pwmchip_remove(&pwm_chip->chip);
+ }
+ 
+-static const struct of_device_id img_pwm_of_match[] = {
+-	{ .compatible = "img,pistachio-pwm", },
+-	{ }
+-};
+-MODULE_DEVICE_TABLE(of, img_pwm_of_match);
+-
+ static struct platform_driver img_pwm_driver = {
+ 	.driver = {
+ 		.name = "img-pwm",
+diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
+index 8a4df7a1f2ee..e628d4c2f2ae 100644
+--- a/drivers/regulator/da9052-regulator.c
++++ b/drivers/regulator/da9052-regulator.c
+@@ -394,6 +394,7 @@ static inline struct da9052_regulator_info *find_regulator_info(u8 chip_id,
+ 
+ static int da9052_regulator_probe(struct platform_device *pdev)
+ {
++	const struct mfd_cell *cell = mfd_get_cell(pdev);
+ 	struct regulator_config config = { };
+ 	struct da9052_regulator *regulator;
+ 	struct da9052 *da9052;
+@@ -409,7 +410,7 @@ static int da9052_regulator_probe(struct platform_device *pdev)
+ 	regulator->da9052 = da9052;
+ 
+ 	regulator->info = find_regulator_info(regulator->da9052->chip_id,
+-					      pdev->id);
++					      cell->id);
+ 	if (regulator->info == NULL) {
+ 		dev_err(&pdev->dev, "invalid regulator ID specified\n");
+ 		return -EINVAL;
+@@ -419,7 +420,7 @@ static int da9052_regulator_probe(struct platform_device *pdev)
+ 	config.driver_data = regulator;
+ 	config.regmap = da9052->regmap;
+ 	if (pdata && pdata->regulators) {
+-		config.init_data = pdata->regulators[pdev->id];
++		config.init_data = pdata->regulators[cell->id];
+ 	} else {
+ #ifdef CONFIG_OF
+ 		struct device_node *nproot = da9052->dev->of_node;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 3290a3ed5b31..a661d339adf7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1624,6 +1624,7 @@ static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
+ {
+ 	u64 start_lba = blk_rq_pos(scmd->request);
+ 	u64 end_lba = blk_rq_pos(scmd->request) + (scsi_bufflen(scmd) / 512);
++	u64 factor = scmd->device->sector_size / 512;
+ 	u64 bad_lba;
+ 	int info_valid;
+ 	/*
+@@ -1645,16 +1646,9 @@ static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
+ 	if (scsi_bufflen(scmd) <= scmd->device->sector_size)
+ 		return 0;
+ 
+-	if (scmd->device->sector_size < 512) {
+-		/* only legitimate sector_size here is 256 */
+-		start_lba <<= 1;
+-		end_lba <<= 1;
+-	} else {
+-		/* be careful ... don't want any overflows */
+-		unsigned int factor = scmd->device->sector_size / 512;
+-		do_div(start_lba, factor);
+-		do_div(end_lba, factor);
+-	}
++	/* be careful ... don't want any overflows */
++	do_div(start_lba, factor);
++	do_div(end_lba, factor);
+ 
+ 	/* The bad lba was reported incorrectly, we have no idea where
+ 	 * the error is.
+@@ -2212,8 +2206,7 @@ got_data:
+ 	if (sector_size != 512 &&
+ 	    sector_size != 1024 &&
+ 	    sector_size != 2048 &&
+-	    sector_size != 4096 &&
+-	    sector_size != 256) {
++	    sector_size != 4096) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Unsupported sector size %d.\n",
+ 			  sector_size);
+ 		/*
+@@ -2268,8 +2261,6 @@ got_data:
+ 		sdkp->capacity <<= 2;
+ 	else if (sector_size == 1024)
+ 		sdkp->capacity <<= 1;
+-	else if (sector_size == 256)
+-		sdkp->capacity >>= 1;
+ 
+ 	blk_queue_physical_block_size(sdp->request_queue,
+ 				      sdkp->physical_block_size);
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index bf8c5c1e254e..75efaaeb0eca 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1565,8 +1565,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ 		break;
+ 	default:
+ 		vm_srb->data_in = UNKNOWN_TYPE;
+-		vm_srb->win8_extension.srb_flags |= (SRB_FLAGS_DATA_IN |
+-						     SRB_FLAGS_DATA_OUT);
++		vm_srb->win8_extension.srb_flags |= SRB_FLAGS_NO_DATA_TRANSFER;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/staging/gdm724x/gdm_mux.c b/drivers/staging/gdm724x/gdm_mux.c
+index d1ab996b3305..a21a51efaad0 100644
+--- a/drivers/staging/gdm724x/gdm_mux.c
++++ b/drivers/staging/gdm724x/gdm_mux.c
+@@ -158,7 +158,7 @@ static int up_to_host(struct mux_rx *r)
+ 	unsigned int start_flag;
+ 	unsigned int payload_size;
+ 	unsigned short packet_type;
+-	int dummy_cnt;
++	int total_len;
+ 	u32 packet_size_sum = r->offset;
+ 	int index;
+ 	int ret = TO_HOST_INVALID_PACKET;
+@@ -176,10 +176,10 @@ static int up_to_host(struct mux_rx *r)
+ 			break;
+ 		}
+ 
+-		dummy_cnt = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
++		total_len = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
+ 
+ 		if (len - packet_size_sum <
+-			MUX_HEADER_SIZE + payload_size + dummy_cnt) {
++			total_len) {
+ 			pr_err("invalid payload : %d %d %04x\n",
+ 			       payload_size, len, packet_type);
+ 			break;
+@@ -202,7 +202,7 @@ static int up_to_host(struct mux_rx *r)
+ 			break;
+ 		}
+ 
+-		packet_size_sum += MUX_HEADER_SIZE + payload_size + dummy_cnt;
++		packet_size_sum += total_len;
+ 		if (len - packet_size_sum <= MUX_HEADER_SIZE + 2) {
+ 			ret = r->callback(NULL,
+ 					0,
+@@ -361,7 +361,6 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 	struct mux_pkt_header *mux_header;
+ 	struct mux_tx *t = NULL;
+ 	static u32 seq_num = 1;
+-	int dummy_cnt;
+ 	int total_len;
+ 	int ret;
+ 	unsigned long flags;
+@@ -374,9 +373,7 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 
+ 	spin_lock_irqsave(&mux_dev->write_lock, flags);
+ 
+-	dummy_cnt = ALIGN(MUX_HEADER_SIZE + len, 4);
+-
+-	total_len = len + MUX_HEADER_SIZE + dummy_cnt;
++	total_len = ALIGN(MUX_HEADER_SIZE + len, 4);
+ 
+ 	t = alloc_mux_tx(total_len);
+ 	if (!t) {
+@@ -392,7 +389,8 @@ static int gdm_mux_send(void *priv_dev, void *data, int len, int tty_index,
+ 	mux_header->packet_type = __cpu_to_le16(packet_type[tty_index]);
+ 
+ 	memcpy(t->buf+MUX_HEADER_SIZE, data, len);
+-	memset(t->buf+MUX_HEADER_SIZE+len, 0, dummy_cnt);
++	memset(t->buf+MUX_HEADER_SIZE+len, 0, total_len - MUX_HEADER_SIZE -
++	       len);
+ 
+ 	t->len = total_len;
+ 	t->callback = cb;
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index 03b2a90b9ac0..992236f605d8 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -911,7 +911,11 @@ static int vnt_int_report_rate(struct vnt_private *priv,
+ 
+ 	if (!(tsr1 & TSR1_TERR)) {
+ 		info->status.rates[0].idx = idx;
+-		info->flags |= IEEE80211_TX_STAT_ACK;
++
++		if (info->flags & IEEE80211_TX_CTL_NO_ACK)
++			info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++		else
++			info->flags |= IEEE80211_TX_STAT_ACK;
+ 	}
+ 
+ 	return 0;
+@@ -936,9 +940,6 @@ static int device_tx_srv(struct vnt_private *pDevice, unsigned int uIdx)
+ 		//Only the status of first TD in the chain is correct
+ 		if (pTD->m_td1TD1.byTCR & TCR_STP) {
+ 			if ((pTD->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) != 0) {
+-
+-				vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
+-
+ 				if (!(byTsr1 & TSR1_TERR)) {
+ 					if (byTsr0 != 0) {
+ 						pr_debug(" Tx[%d] OK but has error. tsr1[%02X] tsr0[%02X]\n",
+@@ -957,6 +958,9 @@ static int device_tx_srv(struct vnt_private *pDevice, unsigned int uIdx)
+ 						 (int)uIdx, byTsr1, byTsr0);
+ 				}
+ 			}
++
++			vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
++
+ 			device_free_tx_buf(pDevice, pTD);
+ 			pDevice->iTDUsed[uIdx]--;
+ 		}
+@@ -988,10 +992,8 @@ static void device_free_tx_buf(struct vnt_private *pDevice, PSTxDesc pDesc)
+ 				 PCI_DMA_TODEVICE);
+ 	}
+ 
+-	if (pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
++	if (skb)
+ 		ieee80211_tx_status_irqsafe(pDevice->hw, skb);
+-	else
+-		dev_kfree_skb_irq(skb);
+ 
+ 	pTDInfo->skb_dma = 0;
+ 	pTDInfo->skb = NULL;
+@@ -1201,14 +1203,6 @@ static int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 	if (dma_idx == TYPE_AC0DMA)
+ 		head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
+ 
+-	priv->iTDUsed[dma_idx]++;
+-
+-	/* Take ownership */
+-	wmb();
+-	head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
+-
+-	/* get Next */
+-	wmb();
+ 	priv->apCurrTD[dma_idx] = head_td->next;
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+@@ -1229,11 +1223,18 @@ static int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 
+ 	head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma);
+ 
++	/* Poll Transmit the adapter */
++	wmb();
++	head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
++	wmb(); /* second memory barrier */
++
+ 	if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
+ 		MACvTransmitAC0(priv->PortOffset);
+ 	else
+ 		MACvTransmit0(priv->PortOffset);
+ 
++	priv->iTDUsed[dma_idx]++;
++
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	return 0;
+@@ -1413,9 +1414,16 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 
+ 	priv->current_aid = conf->aid;
+ 
+-	if (changed & BSS_CHANGED_BSSID)
++	if (changed & BSS_CHANGED_BSSID) {
++		unsigned long flags;
++
++		spin_lock_irqsave(&priv->lock, flags);
++
+ 		MACvWriteBSSIDAddress(priv->PortOffset, (u8 *)conf->bssid);
+ 
++		spin_unlock_irqrestore(&priv->lock, flags);
++	}
++
+ 	if (changed & BSS_CHANGED_BASIC_RATES) {
+ 		priv->basic_rates = conf->basic_rates;
+ 
+diff --git a/drivers/staging/vt6656/rxtx.c b/drivers/staging/vt6656/rxtx.c
+index 33baf26de4b5..ee9ce165dcde 100644
+--- a/drivers/staging/vt6656/rxtx.c
++++ b/drivers/staging/vt6656/rxtx.c
+@@ -805,10 +805,18 @@ int vnt_tx_packet(struct vnt_private *priv, struct sk_buff *skb)
+ 		vnt_schedule_command(priv, WLAN_CMD_SETPOWER);
+ 	}
+ 
+-	if (current_rate > RATE_11M)
+-		pkt_type = priv->packet_type;
+-	else
++	if (current_rate > RATE_11M) {
++		if (info->band == IEEE80211_BAND_5GHZ) {
++			pkt_type = PK_TYPE_11A;
++		} else {
++			if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
++				pkt_type = PK_TYPE_11GB;
++			else
++				pkt_type = PK_TYPE_11GA;
++		}
++	} else {
+ 		pkt_type = PK_TYPE_11B;
++	}
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index f6c954c4635f..4073869d2090 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -521,6 +521,7 @@ static int pscsi_configure_device(struct se_device *dev)
+ 					" pdv_host_id: %d\n", pdv->pdv_host_id);
+ 				return -EINVAL;
+ 			}
++			pdv->pdv_lld_host = sh;
+ 		}
+ 	} else {
+ 		if (phv->phv_mode == PHV_VIRTUAL_HOST_ID) {
+@@ -603,6 +604,8 @@ static void pscsi_free_device(struct se_device *dev)
+ 		if ((phv->phv_mode == PHV_LLD_SCSI_HOST_NO) &&
+ 		    (phv->phv_lld_host != NULL))
+ 			scsi_host_put(phv->phv_lld_host);
++		else if (pdv->pdv_lld_host)
++			scsi_host_put(pdv->pdv_lld_host);
+ 
+ 		if ((sd->type == TYPE_DISK) || (sd->type == TYPE_ROM))
+ 			scsi_device_put(sd);
+diff --git a/drivers/target/target_core_pscsi.h b/drivers/target/target_core_pscsi.h
+index 1bd757dff8ee..820d3052b775 100644
+--- a/drivers/target/target_core_pscsi.h
++++ b/drivers/target/target_core_pscsi.h
+@@ -45,6 +45,7 @@ struct pscsi_dev_virt {
+ 	int	pdv_lun_id;
+ 	struct block_device *pdv_bd;
+ 	struct scsi_device *pdv_sd;
++	struct Scsi_Host *pdv_lld_host;
+ } ____cacheline_aligned;
+ 
+ typedef enum phv_modes {
+diff --git a/drivers/thermal/armada_thermal.c b/drivers/thermal/armada_thermal.c
+index c2556cf5186b..01255fd65135 100644
+--- a/drivers/thermal/armada_thermal.c
++++ b/drivers/thermal/armada_thermal.c
+@@ -224,9 +224,9 @@ static const struct armada_thermal_data armada380_data = {
+ 	.is_valid_shift = 10,
+ 	.temp_shift = 0,
+ 	.temp_mask = 0x3ff,
+-	.coef_b = 1169498786UL,
+-	.coef_m = 2000000UL,
+-	.coef_div = 4289,
++	.coef_b = 2931108200UL,
++	.coef_m = 5000000UL,
++	.coef_div = 10502,
+ 	.inverted = true,
+ };
+ 
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 5bab1c684bb1..7a3d146a5f0e 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -289,7 +289,7 @@ static int xen_initial_domain_console_init(void)
+ 			return -ENOMEM;
+ 	}
+ 
+-	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0);
++	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
+ 	info->vtermno = HVC_COOKIE;
+ 
+ 	spin_lock(&xencons_lock);
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index c4343764cc5b..bce16e405d59 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -3170,7 +3170,7 @@ static int gsmtty_break_ctl(struct tty_struct *tty, int state)
+ 	return gsmtty_modem_update(dlci, encode);
+ }
+ 
+-static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty)
++static void gsmtty_cleanup(struct tty_struct *tty)
+ {
+ 	struct gsm_dlci *dlci = tty->driver_data;
+ 	struct gsm_mux *gsm = dlci->gsm;
+@@ -3178,7 +3178,6 @@ static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty)
+ 	dlci_put(dlci);
+ 	dlci_put(gsm->dlci[0]);
+ 	mux_put(gsm);
+-	driver->ttys[tty->index] = NULL;
+ }
+ 
+ /* Virtual ttys for the demux */
+@@ -3199,7 +3198,7 @@ static const struct tty_operations gsmtty_ops = {
+ 	.tiocmget		= gsmtty_tiocmget,
+ 	.tiocmset		= gsmtty_tiocmset,
+ 	.break_ctl		= gsmtty_break_ctl,
+-	.remove			= gsmtty_remove,
++	.cleanup		= gsmtty_cleanup,
+ };
+ 
+ 
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 644ddb841d9f..bbc4ce66c2c1 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -600,7 +600,7 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file,
+ 	add_wait_queue(&tty->read_wait, &wait);
+ 
+ 	for (;;) {
+-		if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
++		if (test_bit(TTY_OTHER_DONE, &tty->flags)) {
+ 			ret = -EIO;
+ 			break;
+ 		}
+@@ -828,7 +828,7 @@ static unsigned int n_hdlc_tty_poll(struct tty_struct *tty, struct file *filp,
+ 		/* set bits for operations that won't block */
+ 		if (n_hdlc->rx_buf_list.head)
+ 			mask |= POLLIN | POLLRDNORM;	/* readable */
+-		if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
++		if (test_bit(TTY_OTHER_DONE, &tty->flags))
+ 			mask |= POLLHUP;
+ 		if (tty_hung_up_p(filp))
+ 			mask |= POLLHUP;
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index cf6e0f2e1331..cc57a3a6b02b 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1949,6 +1949,18 @@ static inline int input_available_p(struct tty_struct *tty, int poll)
+ 		return ldata->commit_head - ldata->read_tail >= amt;
+ }
+ 
++static inline int check_other_done(struct tty_struct *tty)
++{
++	int done = test_bit(TTY_OTHER_DONE, &tty->flags);
++	if (done) {
++		/* paired with cmpxchg() in check_other_closed(); ensures
++		 * read buffer head index is not stale
++		 */
++		smp_mb__after_atomic();
++	}
++	return done;
++}
++
+ /**
+  *	copy_from_read_buf	-	copy read data directly
+  *	@tty: terminal device
+@@ -2167,7 +2179,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 	struct n_tty_data *ldata = tty->disc_data;
+ 	unsigned char __user *b = buf;
+ 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	int c;
++	int c, done;
+ 	int minimum, time;
+ 	ssize_t retval = 0;
+ 	long timeout;
+@@ -2235,8 +2247,10 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ 		    ((minimum - (b - buf)) >= 1))
+ 			ldata->minimum_to_wake = (minimum - (b - buf));
+ 
++		done = check_other_done(tty);
++
+ 		if (!input_available_p(tty, 0)) {
+-			if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
++			if (done) {
+ 				retval = -EIO;
+ 				break;
+ 			}
+@@ -2443,12 +2457,12 @@ static unsigned int n_tty_poll(struct tty_struct *tty, struct file *file,
+ 
+ 	poll_wait(file, &tty->read_wait, wait);
+ 	poll_wait(file, &tty->write_wait, wait);
++	if (check_other_done(tty))
++		mask |= POLLHUP;
+ 	if (input_available_p(tty, 1))
+ 		mask |= POLLIN | POLLRDNORM;
+ 	if (tty->packet && tty->link->ctrl_status)
+ 		mask |= POLLPRI | POLLIN | POLLRDNORM;
+-	if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+-		mask |= POLLHUP;
+ 	if (tty_hung_up_p(file))
+ 		mask |= POLLHUP;
+ 	if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index e72ee629cead..4d5e8409769c 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -53,9 +53,8 @@ static void pty_close(struct tty_struct *tty, struct file *filp)
+ 	/* Review - krefs on tty_link ?? */
+ 	if (!tty->link)
+ 		return;
+-	tty_flush_to_ldisc(tty->link);
+ 	set_bit(TTY_OTHER_CLOSED, &tty->link->flags);
+-	wake_up_interruptible(&tty->link->read_wait);
++	tty_flip_buffer_push(tty->link->port);
+ 	wake_up_interruptible(&tty->link->write_wait);
+ 	if (tty->driver->subtype == PTY_TYPE_MASTER) {
+ 		set_bit(TTY_OTHER_CLOSED, &tty->flags);
+@@ -243,7 +242,9 @@ static int pty_open(struct tty_struct *tty, struct file *filp)
+ 		goto out;
+ 
+ 	clear_bit(TTY_IO_ERROR, &tty->flags);
++	/* TTY_OTHER_CLOSED must be cleared before TTY_OTHER_DONE */
+ 	clear_bit(TTY_OTHER_CLOSED, &tty->link->flags);
++	clear_bit(TTY_OTHER_DONE, &tty->link->flags);
+ 	set_bit(TTY_THROTTLED, &tty->flags);
+ 	return 0;
+ 
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 75661641f5fe..2f78b77f0f81 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -37,6 +37,28 @@
+ 
+ #define TTY_BUFFER_PAGE	(((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
+ 
++/*
++ * If all tty flip buffers have been processed by flush_to_ldisc() or
++ * dropped by tty_buffer_flush(), check if the linked pty has been closed.
++ * If so, wake the reader/poll to process
++ */
++static inline void check_other_closed(struct tty_struct *tty)
++{
++	unsigned long flags, old;
++
++	/* transition from TTY_OTHER_CLOSED => TTY_OTHER_DONE must be atomic */
++	for (flags = ACCESS_ONCE(tty->flags);
++	     test_bit(TTY_OTHER_CLOSED, &flags);
++	     ) {
++		old = flags;
++		__set_bit(TTY_OTHER_DONE, &flags);
++		flags = cmpxchg(&tty->flags, old, flags);
++		if (old == flags) {
++			wake_up_interruptible(&tty->read_wait);
++			break;
++		}
++	}
++}
+ 
+ /**
+  *	tty_buffer_lock_exclusive	-	gain exclusive access to buffer
+@@ -229,6 +251,8 @@ void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld)
+ 	if (ld && ld->ops->flush_buffer)
+ 		ld->ops->flush_buffer(tty);
+ 
++	check_other_closed(tty);
++
+ 	atomic_dec(&buf->priority);
+ 	mutex_unlock(&buf->lock);
+ }
+@@ -471,8 +495,10 @@ static void flush_to_ldisc(struct work_struct *work)
+ 		smp_rmb();
+ 		count = head->commit - head->read;
+ 		if (!count) {
+-			if (next == NULL)
++			if (next == NULL) {
++				check_other_closed(tty);
+ 				break;
++			}
+ 			buf->head = next;
+ 			tty_buffer_free(port, head);
+ 			continue;
+@@ -489,19 +515,6 @@ static void flush_to_ldisc(struct work_struct *work)
+ }
+ 
+ /**
+- *	tty_flush_to_ldisc
+- *	@tty: tty to push
+- *
+- *	Push the terminal flip buffers to the line discipline.
+- *
+- *	Must not be called from IRQ context.
+- */
+-void tty_flush_to_ldisc(struct tty_struct *tty)
+-{
+-	flush_work(&tty->port->buf.work);
+-}
+-
+-/**
+  *	tty_flip_buffer_push	-	terminal
+  *	@port: tty port to push
+  *
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index c42765b3a060..0495c94a23d7 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1295,6 +1295,7 @@ static void purge_configs_funcs(struct gadget_info *gi)
+ 			}
+ 		}
+ 		c->next_interface_id = 0;
++		memset(c->interface, 0, sizeof(c->interface));
+ 		c->superspeed = 0;
+ 		c->highspeed = 0;
+ 		c->fullspeed = 0;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index eeedde8c435a..6994c99e58a6 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2026,8 +2026,13 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 		break;
+ 	case COMP_DEV_ERR:
+ 	case COMP_STALL:
++		frame->status = -EPROTO;
++		skip_td = true;
++		break;
+ 	case COMP_TX_ERR:
+ 		frame->status = -EPROTO;
++		if (event_trb != td->last_trb)
++			return 0;
+ 		skip_td = true;
+ 		break;
+ 	case COMP_STOP:
+@@ -2640,7 +2645,7 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
+ 		xhci_halt(xhci);
+ hw_died:
+ 		spin_unlock(&xhci->lock);
+-		return -ESHUTDOWN;
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 8e421b89632d..ea75e8ccd3c1 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1267,7 +1267,7 @@ union xhci_trb {
+  * since the command ring is 64-byte aligned.
+  * It must also be greater than 16.
+  */
+-#define TRBS_PER_SEGMENT	64
++#define TRBS_PER_SEGMENT	256
+ /* Allow two commands + a link TRB, along with any reserved command TRBs */
+ #define MAX_RSVD_CMD_TRBS	(TRBS_PER_SEGMENT - 3)
+ #define TRB_SEGMENT_SIZE	(TRBS_PER_SEGMENT*16)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 84ce2d74894c..9031750e7404 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -127,6 +127,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+ 	{ USB_DEVICE(0x10C4, 0x8977) },	/* CEL MeshWorks DevKit Device */
++	{ USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 829604d11f3f..f5257af33ecf 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -61,7 +61,6 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(DCU10_VENDOR_ID, DCU10_PRODUCT_ID) },
+ 	{ USB_DEVICE(SITECOM_VENDOR_ID, SITECOM_PRODUCT_ID) },
+ 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_ID) },
+-	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_ID) },
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_SX1),
+ 		.driver_info = PL2303_QUIRK_UART_STATE_IDX0 },
+ 	{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_X65),
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 71fd9da1d6e7..e3b7af8adfb7 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -62,10 +62,6 @@
+ #define ALCATEL_VENDOR_ID	0x11f7
+ #define ALCATEL_PRODUCT_ID	0x02df
+ 
+-/* Samsung I330 phone cradle */
+-#define SAMSUNG_VENDOR_ID	0x04e8
+-#define SAMSUNG_PRODUCT_ID	0x8001
+-
+ #define SIEMENS_VENDOR_ID	0x11f5
+ #define SIEMENS_PRODUCT_ID_SX1	0x0001
+ #define SIEMENS_PRODUCT_ID_X65	0x0003
+diff --git a/drivers/usb/serial/visor.c b/drivers/usb/serial/visor.c
+index bf2bd40e5f2a..60afb39eb73c 100644
+--- a/drivers/usb/serial/visor.c
++++ b/drivers/usb/serial/visor.c
+@@ -95,7 +95,7 @@ static const struct usb_device_id id_table[] = {
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+ 	{ USB_DEVICE(ACER_VENDOR_ID, ACER_S10_ID),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+-	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID),
++	{ USB_DEVICE_INTERFACE_CLASS(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID, 0xff),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+ 	{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SPH_I500_ID),
+ 		.driver_info = (kernel_ulong_t)&palm_os_4_probe },
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index d684b4b8108f..caf188800c67 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -766,6 +766,13 @@ UNUSUAL_DEV(  0x059f, 0x0643, 0x0000, 0x0000,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_GO_SLOW ),
+ 
++/* Reported by Christian Schaller <cschalle@redhat.com> */
++UNUSUAL_DEV(  0x059f, 0x0651, 0x0000, 0x0000,
++		"LaCie",
++		"External HDD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_WP_DETECT ),
++
+ /* Submitted by Joel Bourquard <numlock@freesurf.ch>
+  * Some versions of this device need the SubClass and Protocol overrides
+  * while others don't.
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 2b8553bd8715..38387950490e 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -957,7 +957,7 @@ unsigned xen_evtchn_nr_channels(void)
+ }
+ EXPORT_SYMBOL_GPL(xen_evtchn_nr_channels);
+ 
+-int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
++int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu)
+ {
+ 	struct evtchn_bind_virq bind_virq;
+ 	int evtchn, irq, ret;
+@@ -971,8 +971,12 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
+ 		if (irq < 0)
+ 			goto out;
+ 
+-		irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
+-					      handle_percpu_irq, "virq");
++		if (percpu)
++			irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
++						      handle_percpu_irq, "virq");
++		else
++			irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,
++						      handle_edge_irq, "virq");
+ 
+ 		bind_virq.virq = virq;
+ 		bind_virq.vcpu = cpu;
+@@ -1062,7 +1066,7 @@ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ {
+ 	int irq, retval;
+ 
+-	irq = bind_virq_to_irq(virq, cpu);
++	irq = bind_virq_to_irq(virq, cpu, irqflags & IRQF_PERCPU);
+ 	if (irq < 0)
+ 		return irq;
+ 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index d925f55e4857..8081aba116a7 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -928,7 +928,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ 			total_size = total_mapping_size(elf_phdata,
+ 							loc->elf_ex.e_phnum);
+ 			if (!total_size) {
+-				error = -EINVAL;
++				retval = -EINVAL;
+ 				goto out_free_dentry;
+ 			}
+ 		}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 0a795c969c78..8b33da6ec3dd 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8548,7 +8548,9 @@ int btrfs_set_block_group_ro(struct btrfs_root *root,
+ out:
+ 	if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
+ 		alloc_flags = update_block_group_flags(root, cache->flags);
++		lock_chunks(root->fs_info->chunk_root);
+ 		check_system_chunk(trans, root, alloc_flags);
++		unlock_chunks(root->fs_info->chunk_root);
+ 	}
+ 
+ 	btrfs_end_transaction(trans, root);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 8222f6f74147..44a7e0398d97 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4626,6 +4626,7 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ {
+ 	u64 chunk_offset;
+ 
++	ASSERT(mutex_is_locked(&extent_root->fs_info->chunk_mutex));
+ 	chunk_offset = find_next_chunk(extent_root->fs_info);
+ 	return __btrfs_alloc_chunk(trans, extent_root, chunk_offset, type);
+ }
+diff --git a/fs/dcache.c b/fs/dcache.c
+index c71e3732e53b..922f23ef6041 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -1205,13 +1205,13 @@ ascend:
+ 		/* might go back up the wrong parent if we have had a rename. */
+ 		if (need_seqretry(&rename_lock, seq))
+ 			goto rename_retry;
+-		next = child->d_child.next;
+-		while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED)) {
++		/* go into the first sibling still alive */
++		do {
++			next = child->d_child.next;
+ 			if (next == &this_parent->d_subdirs)
+ 				goto ascend;
+ 			child = list_entry(next, struct dentry, d_child);
+-			next = next->next;
+-		}
++		} while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
+ 		rcu_read_unlock();
+ 		goto resume;
+ 	}
+diff --git a/fs/exec.c b/fs/exec.c
+index 00400cf522dc..120244523647 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -659,6 +659,9 @@ int setup_arg_pages(struct linux_binprm *bprm,
+ 	if (stack_base > STACK_SIZE_MAX)
+ 		stack_base = STACK_SIZE_MAX;
+ 
++	/* Add space for stack randomization. */
++	stack_base += (STACK_RND_MASK << PAGE_SHIFT);
++
+ 	/* Make sure we didn't let the argument array grow too large. */
+ 	if (vma->vm_end - vma->vm_start > stack_base)
+ 		return -ENOMEM;
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index 3445035c7e01..d41843181818 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -87,6 +87,12 @@ int __ext4_journal_stop(const char *where, unsigned int line, handle_t *handle)
+ 		ext4_put_nojournal(handle);
+ 		return 0;
+ 	}
++
++	if (!handle->h_transaction) {
++		err = jbd2_journal_stop(handle);
++		return handle->h_err ? handle->h_err : err;
++	}
++
+ 	sb = handle->h_transaction->t_journal->j_private;
+ 	err = handle->h_err;
+ 	rc = jbd2_journal_stop(handle);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 16f6365f65e7..ea4ee1732143 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -377,7 +377,7 @@ static int ext4_valid_extent(struct inode *inode, struct ext4_extent *ext)
+ 	ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
+ 	ext4_lblk_t last = lblock + len - 1;
+ 
+-	if (lblock > last)
++	if (len == 0 || lblock > last)
+ 		return 0;
+ 	return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 852cc521f327..1f252b4e0f51 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4233,7 +4233,7 @@ static void ext4_update_other_inodes_time(struct super_block *sb,
+ 	int inode_size = EXT4_INODE_SIZE(sb);
+ 
+ 	oi.orig_ino = orig_ino;
+-	ino = orig_ino & ~(inodes_per_block - 1);
++	ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
+ 	for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
+ 		if (ino == orig_ino)
+ 			continue;
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 999ff5c3cab0..d59712dfa3e7 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -195,8 +195,9 @@ static int handle_to_path(int mountdirfd, struct file_handle __user *ufh,
+ 		goto out_err;
+ 	}
+ 	/* copy the full handle */
+-	if (copy_from_user(handle, ufh,
+-			   sizeof(struct file_handle) +
++	*handle = f_handle;
++	if (copy_from_user(&handle->f_handle,
++			   &ufh->f_handle,
+ 			   f_handle.handle_bytes)) {
+ 		retval = -EFAULT;
+ 		goto out_handle;
+diff --git a/fs/fs_pin.c b/fs/fs_pin.c
+index b06c98796afb..611b5408f6ec 100644
+--- a/fs/fs_pin.c
++++ b/fs/fs_pin.c
+@@ -9,8 +9,8 @@ static DEFINE_SPINLOCK(pin_lock);
+ void pin_remove(struct fs_pin *pin)
+ {
+ 	spin_lock(&pin_lock);
+-	hlist_del(&pin->m_list);
+-	hlist_del(&pin->s_list);
++	hlist_del_init(&pin->m_list);
++	hlist_del_init(&pin->s_list);
+ 	spin_unlock(&pin_lock);
+ 	spin_lock_irq(&pin->wait.lock);
+ 	pin->done = 1;
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index b5128c6e63ad..a9079d035ae5 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -842,15 +842,23 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+ {
+ 	jbd2_journal_revoke_header_t *header;
+ 	int offset, max;
++	int csum_size = 0;
++	__u32 rcount;
+ 	int record_len = 4;
+ 
+ 	header = (jbd2_journal_revoke_header_t *) bh->b_data;
+ 	offset = sizeof(jbd2_journal_revoke_header_t);
+-	max = be32_to_cpu(header->r_count);
++	rcount = be32_to_cpu(header->r_count);
+ 
+ 	if (!jbd2_revoke_block_csum_verify(journal, header))
+ 		return -EINVAL;
+ 
++	if (jbd2_journal_has_csum_v2or3(journal))
++		csum_size = sizeof(struct jbd2_journal_revoke_tail);
++	if (rcount > journal->j_blocksize - csum_size)
++		return -EINVAL;
++	max = rcount;
++
+ 	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+ 		record_len = 8;
+ 
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index c6cbaef2bda1..14214da80eb8 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -577,7 +577,7 @@ static void write_one_revoke_record(journal_t *journal,
+ {
+ 	int csum_size = 0;
+ 	struct buffer_head *descriptor;
+-	int offset;
++	int sz, offset;
+ 	journal_header_t *header;
+ 
+ 	/* If we are already aborting, this all becomes a noop.  We
+@@ -594,9 +594,14 @@ static void write_one_revoke_record(journal_t *journal,
+ 	if (jbd2_journal_has_csum_v2or3(journal))
+ 		csum_size = sizeof(struct jbd2_journal_revoke_tail);
+ 
++	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
++		sz = 8;
++	else
++		sz = 4;
++
+ 	/* Make sure we have a descriptor with space left for the record */
+ 	if (descriptor) {
+-		if (offset >= journal->j_blocksize - csum_size) {
++		if (offset + sz > journal->j_blocksize - csum_size) {
+ 			flush_descriptor(journal, descriptor, offset, write_op);
+ 			descriptor = NULL;
+ 		}
+@@ -619,16 +624,13 @@ static void write_one_revoke_record(journal_t *journal,
+ 		*descriptorp = descriptor;
+ 	}
+ 
+-	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) {
++	if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+ 		* ((__be64 *)(&descriptor->b_data[offset])) =
+ 			cpu_to_be64(record->blocknr);
+-		offset += 8;
+-
+-	} else {
++	else
+ 		* ((__be32 *)(&descriptor->b_data[offset])) =
+ 			cpu_to_be32(record->blocknr);
+-		offset += 4;
+-	}
++	offset += sz;
+ 
+ 	*offsetp = offset;
+ }
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 5f09370c90a8..ff2f2e6ad311 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -551,7 +551,6 @@ int jbd2_journal_extend(handle_t *handle, int nblocks)
+ 	int result;
+ 	int wanted;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -627,7 +626,6 @@ int jbd2__journal_restart(handle_t *handle, int nblocks, gfp_t gfp_mask)
+ 	tid_t		tid;
+ 	int		need_to_start, ret;
+ 
+-	WARN_ON(!transaction);
+ 	/* If we've had an abort of any type, don't even think about
+ 	 * actually doing the restart! */
+ 	if (is_handle_aborted(handle))
+@@ -785,7 +783,6 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
+ 	int need_copy = 0;
+ 	unsigned long start_lock, time_lock;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1051,7 +1048,6 @@ int jbd2_journal_get_create_access(handle_t *handle, struct buffer_head *bh)
+ 	int err;
+ 
+ 	jbd_debug(5, "journal_head %p\n", jh);
+-	WARN_ON(!transaction);
+ 	err = -EROFS;
+ 	if (is_handle_aborted(handle))
+ 		goto out;
+@@ -1266,7 +1262,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	struct journal_head *jh;
+ 	int ret = 0;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1397,7 +1392,6 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
+ 	int err = 0;
+ 	int was_modified = 0;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
+@@ -1530,8 +1524,22 @@ int jbd2_journal_stop(handle_t *handle)
+ 	tid_t tid;
+ 	pid_t pid;
+ 
+-	if (!transaction)
+-		goto free_and_exit;
++	if (!transaction) {
++		/*
++		 * Handle is already detached from the transaction so
++		 * there is nothing to do other than decrease a refcount,
++		 * or free the handle if refcount drops to zero
++		 */
++		if (--handle->h_ref > 0) {
++			jbd_debug(4, "h_ref %d -> %d\n", handle->h_ref + 1,
++							 handle->h_ref);
++			return err;
++		} else {
++			if (handle->h_rsv_handle)
++				jbd2_free_handle(handle->h_rsv_handle);
++			goto free_and_exit;
++		}
++	}
+ 	journal = transaction->t_journal;
+ 
+ 	J_ASSERT(journal_current_handle() == handle);
+@@ -2373,7 +2381,6 @@ int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *jinode)
+ 	transaction_t *transaction = handle->h_transaction;
+ 	journal_t *journal;
+ 
+-	WARN_ON(!transaction);
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+ 	journal = transaction->t_journal;
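+
The detached-handle branch added to jbd2_journal_stop() above follows a
common refcounting shape; a minimal standalone sketch (hypothetical names,
with a plain malloc'd handle standing in for the jbd2 one) looks like this:

	#include <stdlib.h>

	struct handle { int h_ref; };

	static int stop_detached_handle(struct handle *h)
	{
		if (--h->h_ref > 0)
			return 0;	/* other users still hold it */
		free(h);		/* last reference: release */
		return 0;
	}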
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 6acc9648f986..345b35fd329d 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -518,7 +518,14 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	if (!kn)
+ 		goto err_out1;
+ 
+-	ret = ida_simple_get(&root->ino_ida, 1, 0, GFP_KERNEL);
++	/*
++	 * If the ino of the sysfs entry created for a kmem cache gets
++	 * allocated from an ida layer, which is accounted to the memcg that
++	 * owns the cache, the memcg will get pinned forever. So do not account
++	 * ino ida allocations.
++	 */
++	ret = ida_simple_get(&root->ino_ida, 1, 0,
++			     GFP_KERNEL | __GFP_NOACCOUNT);
+ 	if (ret < 0)
+ 		goto err_out2;
+ 	kn->ino = ret;
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 38ed1e1bed41..13b0f7bfc096 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1709,8 +1709,11 @@ struct vfsmount *collect_mounts(struct path *path)
+ {
+ 	struct mount *tree;
+ 	namespace_lock();
+-	tree = copy_tree(real_mount(path->mnt), path->dentry,
+-			 CL_COPY_ALL | CL_PRIVATE);
++	if (!check_mnt(real_mount(path->mnt)))
++		tree = ERR_PTR(-EINVAL);
++	else
++		tree = copy_tree(real_mount(path->mnt), path->dentry,
++				 CL_COPY_ALL | CL_PRIVATE);
+ 	namespace_unlock();
+ 	if (IS_ERR(tree))
+ 		return ERR_CAST(tree);
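+
The collect_mounts() change relies on the kernel's ERR_PTR convention, where
an errno is encoded into a pointer so a single return value can carry either
a result or an error. A simplified standalone sketch of the idea (not the
kernel's actual headers):

	/* Errnos live in the top 4095 values of the address space. */
	static inline void *err_ptr(long error)
	{
		return (void *)error;
	}

	static inline int is_err(const void *ptr)
	{
		return (unsigned long)ptr >= (unsigned long)-4095;
	}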
+diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
+index 03d647bf195d..cdefaa331a07 100644
+--- a/fs/nfsd/blocklayout.c
++++ b/fs/nfsd/blocklayout.c
+@@ -181,6 +181,17 @@ nfsd4_block_proc_layoutcommit(struct inode *inode,
+ }
+ 
+ const struct nfsd4_layout_ops bl_layout_ops = {
++	/*
++	 * Pretend that we send notification to the client.  This is a blatant
++	 * lie to force recent Linux clients to cache our device IDs.
++	 * We rarely ever change the device ID, so the harm of leaking deviceids
++	 * for a while isn't too bad.  Unfortunately RFC5661 is a complete mess
++	 * in this regard, but I filed errata 4119 for this a while ago, and
++	 * hopefully the Linux client will eventually start caching deviceids
++	 * without this again.
++	 */
++	.notify_types		=
++			NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE,
+ 	.proc_getdeviceinfo	= nfsd4_block_proc_getdeviceinfo,
+ 	.encode_getdeviceinfo	= nfsd4_block_encode_getdeviceinfo,
+ 	.proc_layoutget		= nfsd4_block_proc_layoutget,
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ee1cccdb083a..b4541ede7cb8 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4386,10 +4386,17 @@ static __be32 check_stateid_generation(stateid_t *in, stateid_t *ref, bool has_s
+ 	return nfserr_old_stateid;
+ }
+ 
++static __be32 nfsd4_check_openowner_confirmed(struct nfs4_ol_stateid *ols)
++{
++	if (ols->st_stateowner->so_is_open_owner &&
++	    !(openowner(ols->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
++		return nfserr_bad_stateid;
++	return nfs_ok;
++}
++
+ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
+ {
+ 	struct nfs4_stid *s;
+-	struct nfs4_ol_stateid *ols;
+ 	__be32 status = nfserr_bad_stateid;
+ 
+ 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid))
+@@ -4419,13 +4426,7 @@ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
+ 		break;
+ 	case NFS4_OPEN_STID:
+ 	case NFS4_LOCK_STID:
+-		ols = openlockstateid(s);
+-		if (ols->st_stateowner->so_is_open_owner
+-	    			&& !(openowner(ols->st_stateowner)->oo_flags
+-						& NFS4_OO_CONFIRMED))
+-			status = nfserr_bad_stateid;
+-		else
+-			status = nfs_ok;
++		status = nfsd4_check_openowner_confirmed(openlockstateid(s));
+ 		break;
+ 	default:
+ 		printk("unknown stateid type %x\n", s->sc_type);
+@@ -4517,8 +4518,8 @@ nfs4_preprocess_stateid_op(struct net *net, struct nfsd4_compound_state *cstate,
+ 		status = nfs4_check_fh(current_fh, stp);
+ 		if (status)
+ 			goto out;
+-		if (stp->st_stateowner->so_is_open_owner
+-		    && !(openowner(stp->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
++		status = nfsd4_check_openowner_confirmed(stp);
++		if (status)
+ 			goto out;
+ 		status = nfs4_check_openmode(stp, flags);
+ 		if (status)
+diff --git a/fs/omfs/inode.c b/fs/omfs/inode.c
+index 138321b0c6c2..454111a3308e 100644
+--- a/fs/omfs/inode.c
++++ b/fs/omfs/inode.c
+@@ -306,7 +306,8 @@ static const struct super_operations omfs_sops = {
+  */
+ static int omfs_get_imap(struct super_block *sb)
+ {
+-	unsigned int bitmap_size, count, array_size;
++	unsigned int bitmap_size, array_size;
++	int count;
+ 	struct omfs_sb_info *sbi = OMFS_SB(sb);
+ 	struct buffer_head *bh;
+ 	unsigned long **ptr;
+@@ -359,7 +360,7 @@ nomem:
+ }
+ 
+ enum {
+-	Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask
++	Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask, Opt_err
+ };
+ 
+ static const match_table_t tokens = {
+@@ -368,6 +369,7 @@ static const match_table_t tokens = {
+ 	{Opt_umask, "umask=%o"},
+ 	{Opt_dmask, "dmask=%o"},
+ 	{Opt_fmask, "fmask=%o"},
++	{Opt_err, NULL},
+ };
+ 
+ static int parse_options(char *options, struct omfs_sb_info *sbi)
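+
The omfs hunk above fixes a missing table terminator: match_token() scans a
match_table_t until it hits an entry whose pattern is NULL, so every table
needs a sentinel. A minimal kernel-style sketch (made-up option names,
assuming <linux/parser.h>):

	enum { Opt_example, Opt_err };

	static const match_table_t tokens = {
		{Opt_example, "example=%u"},
		{Opt_err, NULL},	/* mandatory terminator */
	};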
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 24f640441bd9..84d693d37428 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -299,6 +299,9 @@ int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ 	struct cred *override_cred;
+ 	char *link = NULL;
+ 
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
+ 	ovl_path_upper(parent, &parentpath);
+ 	upperdir = parentpath.dentry;
+ 
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index d139405d2bfa..692ceda3bc21 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -222,6 +222,9 @@ static struct dentry *ovl_clear_empty(struct dentry *dentry,
+ 	struct kstat stat;
+ 	int err;
+ 
++	if (WARN_ON(!workdir))
++		return ERR_PTR(-EROFS);
++
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
+ 	if (err)
+ 		goto out;
+@@ -322,6 +325,9 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 	struct dentry *newdentry;
+ 	int err;
+ 
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
+ 	if (err)
+ 		goto out;
+@@ -506,11 +512,28 @@ static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir)
+ 	struct dentry *opaquedir = NULL;
+ 	int err;
+ 
+-	if (is_dir && OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
+-		opaquedir = ovl_check_empty_and_clear(dentry);
+-		err = PTR_ERR(opaquedir);
+-		if (IS_ERR(opaquedir))
+-			goto out;
++	if (WARN_ON(!workdir))
++		return -EROFS;
++
++	if (is_dir) {
++		if (OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
++			opaquedir = ovl_check_empty_and_clear(dentry);
++			err = PTR_ERR(opaquedir);
++			if (IS_ERR(opaquedir))
++				goto out;
++		} else {
++			LIST_HEAD(list);
++
++			/*
++			 * When removing an empty opaque directory, it
++			 * makes no sense to replace it with an exact replica of
++			 * itself.  But emptiness still needs to be checked.
++			 */
++			err = ovl_check_empty_dir(dentry, &list);
++			ovl_cache_free(&list);
++			if (err)
++				goto out;
++		}
+ 	}
+ 
+ 	err = ovl_lock_rename_workdir(workdir, upperdir);
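+
The guard repeated across the overlayfs hunks above follows the kernel's
warn-and-degrade idiom: WARN_ON() both reports the condition and evaluates
to it, so a missing workdir turns write operations into -EROFS instead of a
crash. A standalone sketch of the shape (the macro is illustrative):

	#include <stdio.h>

	#define WARN_ON_SKETCH(cond) \
		((cond) ? (fprintf(stderr, "warning: %s\n", #cond), 1) : 0)

	static int do_write_op(void *workdir)
	{
		if (WARN_ON_SKETCH(!workdir))
			return -30;	/* -EROFS in the kernel */
		return 0;
	}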
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 5f0d1993e6e3..bf8537c7f455 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -529,7 +529,7 @@ static int ovl_remount(struct super_block *sb, int *flags, char *data)
+ {
+ 	struct ovl_fs *ufs = sb->s_fs_info;
+ 
+-	if (!(*flags & MS_RDONLY) && !ufs->upper_mnt)
++	if (!(*flags & MS_RDONLY) && (!ufs->upper_mnt || !ufs->workdir))
+ 		return -EROFS;
+ 
+ 	return 0;
+@@ -925,9 +925,10 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ 		ufs->workdir = ovl_workdir_create(ufs->upper_mnt, workpath.dentry);
+ 		err = PTR_ERR(ufs->workdir);
+ 		if (IS_ERR(ufs->workdir)) {
+-			pr_err("overlayfs: failed to create directory %s/%s\n",
+-			       ufs->config.workdir, OVL_WORKDIR_NAME);
+-			goto out_put_upper_mnt;
++			pr_warn("overlayfs: failed to create directory %s/%s (errno: %i); mounting read-only\n",
++				ufs->config.workdir, OVL_WORKDIR_NAME, -err);
++			sb->s_flags |= MS_RDONLY;
++			ufs->workdir = NULL;
+ 		}
+ 	}
+ 
+@@ -997,7 +998,6 @@ out_put_lower_mnt:
+ 	kfree(ufs->lower_mnt);
+ out_put_workdir:
+ 	dput(ufs->workdir);
+-out_put_upper_mnt:
+ 	mntput(ufs->upper_mnt);
+ out_put_lowerpath:
+ 	for (i = 0; i < numlower; i++)
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
+index 15105dbc9e28..0166e7e829a7 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.c
++++ b/fs/xfs/libxfs/xfs_attr_leaf.c
+@@ -498,8 +498,8 @@ xfs_attr_shortform_add(xfs_da_args_t *args, int forkoff)
+  * After the last attribute is removed revert to original inode format,
+  * making all literal area available to the data fork once more.
+  */
+-STATIC void
+-xfs_attr_fork_reset(
++void
++xfs_attr_fork_remove(
+ 	struct xfs_inode	*ip,
+ 	struct xfs_trans	*tp)
+ {
+@@ -565,7 +565,7 @@ xfs_attr_shortform_remove(xfs_da_args_t *args)
+ 	    (mp->m_flags & XFS_MOUNT_ATTR2) &&
+ 	    (dp->i_d.di_format != XFS_DINODE_FMT_BTREE) &&
+ 	    !(args->op_flags & XFS_DA_OP_ADDNAME)) {
+-		xfs_attr_fork_reset(dp, args->trans);
++		xfs_attr_fork_remove(dp, args->trans);
+ 	} else {
+ 		xfs_idata_realloc(dp, -size, XFS_ATTR_FORK);
+ 		dp->i_d.di_forkoff = xfs_attr_shortform_bytesfit(dp, totsize);
+@@ -828,7 +828,7 @@ xfs_attr3_leaf_to_shortform(
+ 	if (forkoff == -1) {
+ 		ASSERT(dp->i_mount->m_flags & XFS_MOUNT_ATTR2);
+ 		ASSERT(dp->i_d.di_format != XFS_DINODE_FMT_BTREE);
+-		xfs_attr_fork_reset(dp, args->trans);
++		xfs_attr_fork_remove(dp, args->trans);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/xfs/libxfs/xfs_attr_leaf.h b/fs/xfs/libxfs/xfs_attr_leaf.h
+index e2929da7c3ba..4f3a60aa93d4 100644
+--- a/fs/xfs/libxfs/xfs_attr_leaf.h
++++ b/fs/xfs/libxfs/xfs_attr_leaf.h
+@@ -53,7 +53,7 @@ int	xfs_attr_shortform_remove(struct xfs_da_args *args);
+ int	xfs_attr_shortform_list(struct xfs_attr_list_context *context);
+ int	xfs_attr_shortform_allfit(struct xfs_buf *bp, struct xfs_inode *dp);
+ int	xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes);
+-
++void	xfs_attr_fork_remove(struct xfs_inode *ip, struct xfs_trans *tp);
+ 
+ /*
+  * Internal routines when attribute fork size == XFS_LBSIZE(mp).
+diff --git a/fs/xfs/xfs_attr_inactive.c b/fs/xfs/xfs_attr_inactive.c
+index 83af4c149635..487c8374a1e0 100644
+--- a/fs/xfs/xfs_attr_inactive.c
++++ b/fs/xfs/xfs_attr_inactive.c
+@@ -379,23 +379,31 @@ xfs_attr3_root_inactive(
+ 	return error;
+ }
+ 
++/*
++ * xfs_attr_inactive kills all traces of an attribute fork on an inode. It
++ * removes both the on-disk and in-memory inode fork. Note that this also has to
++ * handle the condition of inodes without attributes but with an attribute fork
++ * configured, so we can't use xfs_inode_hasattr() here.
++ *
++ * The in-memory attribute fork is removed even on error.
++ */
+ int
+-xfs_attr_inactive(xfs_inode_t *dp)
++xfs_attr_inactive(
++	struct xfs_inode	*dp)
+ {
+-	xfs_trans_t *trans;
+-	xfs_mount_t *mp;
+-	int error;
++	struct xfs_trans	*trans;
++	struct xfs_mount	*mp;
++	int			cancel_flags = 0;
++	int			lock_mode = XFS_ILOCK_SHARED;
++	int			error = 0;
+ 
+ 	mp = dp->i_mount;
+ 	ASSERT(! XFS_NOT_DQATTACHED(mp, dp));
+ 
+-	xfs_ilock(dp, XFS_ILOCK_SHARED);
+-	if (!xfs_inode_hasattr(dp) ||
+-	    dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
+-		xfs_iunlock(dp, XFS_ILOCK_SHARED);
+-		return 0;
+-	}
+-	xfs_iunlock(dp, XFS_ILOCK_SHARED);
++	xfs_ilock(dp, lock_mode);
++	if (!XFS_IFORK_Q(dp))
++		goto out_destroy_fork;
++	xfs_iunlock(dp, lock_mode);
+ 
+ 	/*
+ 	 * Start our first transaction of the day.
+@@ -407,13 +415,18 @@ xfs_attr_inactive(xfs_inode_t *dp)
+ 	 * the inode in every transaction to let it float upward through
+ 	 * the log.
+ 	 */
++	lock_mode = 0;
+ 	trans = xfs_trans_alloc(mp, XFS_TRANS_ATTRINVAL);
+ 	error = xfs_trans_reserve(trans, &M_RES(mp)->tr_attrinval, 0, 0);
+-	if (error) {
+-		xfs_trans_cancel(trans, 0);
+-		return error;
+-	}
+-	xfs_ilock(dp, XFS_ILOCK_EXCL);
++	if (error)
++		goto out_cancel;
++
++	lock_mode = XFS_ILOCK_EXCL;
++	cancel_flags = XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT;
++	xfs_ilock(dp, lock_mode);
++
++	if (!XFS_IFORK_Q(dp))
++		goto out_cancel;
+ 
+ 	/*
+ 	 * No need to make quota reservations here. We expect to release some
+@@ -421,29 +434,31 @@ xfs_attr_inactive(xfs_inode_t *dp)
+ 	 */
+ 	xfs_trans_ijoin(trans, dp, 0);
+ 
+-	/*
+-	 * Decide on what work routines to call based on the inode size.
+-	 */
+-	if (!xfs_inode_hasattr(dp) ||
+-	    dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
+-		error = 0;
+-		goto out;
++	/* invalidate and truncate the attribute fork extents */
++	if (dp->i_d.di_aformat != XFS_DINODE_FMT_LOCAL) {
++		error = xfs_attr3_root_inactive(&trans, dp);
++		if (error)
++			goto out_cancel;
++
++		error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
++		if (error)
++			goto out_cancel;
+ 	}
+-	error = xfs_attr3_root_inactive(&trans, dp);
+-	if (error)
+-		goto out;
+ 
+-	error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
+-	if (error)
+-		goto out;
++	/* Reset the attribute fork - this also destroys the in-core fork */
++	xfs_attr_fork_remove(dp, trans);
+ 
+ 	error = xfs_trans_commit(trans, XFS_TRANS_RELEASE_LOG_RES);
+-	xfs_iunlock(dp, XFS_ILOCK_EXCL);
+-
++	xfs_iunlock(dp, lock_mode);
+ 	return error;
+ 
+-out:
+-	xfs_trans_cancel(trans, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT);
+-	xfs_iunlock(dp, XFS_ILOCK_EXCL);
++out_cancel:
++	xfs_trans_cancel(trans, cancel_flags);
++out_destroy_fork:
++	/* kill the in-core attr fork before we drop the inode lock */
++	if (dp->i_afp)
++		xfs_idestroy_fork(dp, XFS_ATTR_FORK);
++	if (lock_mode)
++		xfs_iunlock(dp, lock_mode);
+ 	return error;
+ }
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index a2e1cb8a568b..f3ba637a8ece 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -125,7 +125,7 @@ xfs_iozero(
+ 		status = 0;
+ 	} while (count);
+ 
+-	return (-status);
++	return status;
+ }
+ 
+ int
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 6163767aa856..b1edda7890f4 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1889,21 +1889,17 @@ xfs_inactive(
+ 	/*
+ 	 * If there are attributes associated with the file then blow them away
+ 	 * now.  The code calls a routine that recursively deconstructs the
+-	 * attribute fork.  We need to just commit the current transaction
+-	 * because we can't use it for xfs_attr_inactive().
++	 * attribute fork. It also blows away the in-core attribute fork.
+ 	 */
+-	if (ip->i_d.di_anextents > 0) {
+-		ASSERT(ip->i_d.di_forkoff != 0);
+-
++	if (XFS_IFORK_Q(ip)) {
+ 		error = xfs_attr_inactive(ip);
+ 		if (error)
+ 			return;
+ 	}
+ 
+-	if (ip->i_afp)
+-		xfs_idestroy_fork(ip, XFS_ATTR_FORK);
+-
++	ASSERT(!ip->i_afp);
+ 	ASSERT(ip->i_d.di_anextents == 0);
++	ASSERT(ip->i_d.di_forkoff == 0);
+ 
+ 	/*
+ 	 * Free the inode.
+diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h
+index 2dd405c9be78..45c39a37f924 100644
+--- a/include/drm/drm_pciids.h
++++ b/include/drm/drm_pciids.h
+@@ -186,6 +186,7 @@
+ 	{0x1002, 0x6658, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x665c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x665d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
++	{0x1002, 0x665f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6660, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6663, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ 	{0x1002, 0x6664, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+diff --git a/include/linux/fs_pin.h b/include/linux/fs_pin.h
+index 9dc4e0384bfb..3886b3bffd7f 100644
+--- a/include/linux/fs_pin.h
++++ b/include/linux/fs_pin.h
+@@ -13,6 +13,8 @@ struct vfsmount;
+ static inline void init_fs_pin(struct fs_pin *p, void (*kill)(struct fs_pin *))
+ {
+ 	init_waitqueue_head(&p->wait);
++	INIT_HLIST_NODE(&p->s_list);
++	INIT_HLIST_NODE(&p->m_list);
+ 	p->kill = kill;
+ }
+ 
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index 51bd1e72a917..eb6fafe66bec 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -30,6 +30,7 @@ struct vm_area_struct;
+ #define ___GFP_HARDWALL		0x20000u
+ #define ___GFP_THISNODE		0x40000u
+ #define ___GFP_RECLAIMABLE	0x80000u
++#define ___GFP_NOACCOUNT	0x100000u
+ #define ___GFP_NOTRACK		0x200000u
+ #define ___GFP_NO_KSWAPD	0x400000u
+ #define ___GFP_OTHER_NODE	0x800000u
+@@ -85,6 +86,7 @@ struct vm_area_struct;
+ #define __GFP_HARDWALL   ((__force gfp_t)___GFP_HARDWALL) /* Enforce hardwall cpuset memory allocs */
+ #define __GFP_THISNODE	((__force gfp_t)___GFP_THISNODE)/* No fallback, no policies */
+ #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) /* Page is reclaimable */
++#define __GFP_NOACCOUNT	((__force gfp_t)___GFP_NOACCOUNT) /* Don't account to kmemcg */
+ #define __GFP_NOTRACK	((__force gfp_t)___GFP_NOTRACK)  /* Don't track with kmemcheck */
+ 
+ #define __GFP_NO_KSWAPD	((__force gfp_t)___GFP_NO_KSWAPD)
+diff --git a/include/linux/ktime.h b/include/linux/ktime.h
+index 5fc3d1083071..2b6a204bd8d4 100644
+--- a/include/linux/ktime.h
++++ b/include/linux/ktime.h
+@@ -166,19 +166,34 @@ static inline bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)
+ }
+ 
+ #if BITS_PER_LONG < 64
+-extern u64 __ktime_divns(const ktime_t kt, s64 div);
+-static inline u64 ktime_divns(const ktime_t kt, s64 div)
++extern s64 __ktime_divns(const ktime_t kt, s64 div);
++static inline s64 ktime_divns(const ktime_t kt, s64 div)
+ {
++	/*
++	 * Negative divisors could cause an infinite loop,
++	 * so bug out here.
++	 */
++	BUG_ON(div < 0);
+ 	if (__builtin_constant_p(div) && !(div >> 32)) {
+-		u64 ns = kt.tv64;
+-		do_div(ns, div);
+-		return ns;
++		s64 ns = kt.tv64;
++		u64 tmp = ns < 0 ? -ns : ns;
++
++		do_div(tmp, div);
++		return ns < 0 ? -tmp : tmp;
+ 	} else {
+ 		return __ktime_divns(kt, div);
+ 	}
+ }
+ #else /* BITS_PER_LONG < 64 */
+-# define ktime_divns(kt, div)		(u64)((kt).tv64 / (div))
++static inline s64 ktime_divns(const ktime_t kt, s64 div)
++{
++	/*
++	 * The 32-bit implementation cannot handle negative divisors,
++	 * so catch them on 64-bit as well.
++	 */
++	WARN_ON(div < 0);
++	return kt.tv64 / div;
++}
+ #endif
+ 
+ static inline s64 ktime_to_us(const ktime_t kt)
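+
Both the inline and out-of-line ktime division paths above use the same
idiom, because do_div() performs an unsigned 64-by-32 division in place:
take the magnitude, divide, then restore the sign. A compact standalone
sketch of the pattern (plain C division stands in for do_div):

	typedef long long s64;
	typedef unsigned long long u64;

	static s64 div_s64_by_u32(s64 ns, unsigned int div)
	{
		u64 tmp = ns < 0 ? -(u64)ns : (u64)ns;	/* magnitude */

		tmp /= div;		/* do_div(tmp, div) in the kernel */
		return ns < 0 ? -(s64)tmp : (s64)tmp;	/* restore sign */
	}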
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 6b08cc106c21..f8994b4b122c 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -205,6 +205,7 @@ enum {
+ 	ATA_LFLAG_SW_ACTIVITY	= (1 << 7), /* keep activity stats */
+ 	ATA_LFLAG_NO_LPM	= (1 << 8), /* disable LPM on this link */
+ 	ATA_LFLAG_RST_ONCE	= (1 << 9), /* limit recovery to one reset */
++	ATA_LFLAG_CHANGED	= (1 << 10), /* LPM state changed on this link */
+ 
+ 	/* struct ata_port flags */
+ 	ATA_FLAG_SLAVE_POSS	= (1 << 0), /* host supports slave dev */
+@@ -310,6 +311,12 @@ enum {
+ 	 */
+ 	ATA_TMOUT_PMP_SRST_WAIT	= 5000,
+ 
++	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
++	 * be a spurious PHY event, so ignore the first PHY event that
++	 * occurs within 10s after the policy change.
++	 */
++	ATA_TMOUT_SPURIOUS_PHY	= 10000,
++
+ 	/* ATA bus states */
+ 	BUS_UNKNOWN		= 0,
+ 	BUS_DMA			= 1,
+@@ -789,6 +796,8 @@ struct ata_link {
+ 	struct ata_eh_context	eh_context;
+ 
+ 	struct ata_device	device[ATA_MAX_DEVICES];
++
++	unsigned long		last_lpm_change; /* when last LPM change happened */
+ };
+ #define ATA_LINK_CLEAR_BEGIN		offsetof(struct ata_link, active_tag)
+ #define ATA_LINK_CLEAR_END		offsetof(struct ata_link, device[0])
+@@ -1202,6 +1211,7 @@ extern struct ata_device *ata_dev_pair(struct ata_device *adev);
+ extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
+ extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
+ extern void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, struct list_head *eh_q);
++extern bool sata_lpm_ignore_phy_events(struct ata_link *link);
+ 
+ extern int ata_cable_40wire(struct ata_port *ap);
+ extern int ata_cable_80wire(struct ata_port *ap);
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 72dff5fb0d0c..6c8918114804 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -463,6 +463,8 @@ memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
+ 	if (!memcg_kmem_enabled())
+ 		return true;
+ 
++	if (gfp & __GFP_NOACCOUNT)
++		return true;
+ 	/*
+ 	 * __GFP_NOFAIL allocations will move on even if charging is not
+ 	 * possible. Therefore we don't even try, and have this allocation
+@@ -522,6 +524,8 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
+ {
+ 	if (!memcg_kmem_enabled())
+ 		return cachep;
++	if (gfp & __GFP_NOACCOUNT)
++		return cachep;
+ 	if (gfp & __GFP_NOFAIL)
+ 		return cachep;
+ 	if (in_interrupt() || (!current->mm) || (current->flags & PF_KTHREAD))
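+
From a caller's perspective, the new flag simply makes the two memcg hooks
above return early, so the allocation is never charged to a cgroup. A
hypothetical caller-side sketch (kernel context assumed):

	#include <linux/slab.h>
	#include <linux/gfp.h>

	static void *alloc_unaccounted(size_t size)
	{
		/* Never charged to the current task's memcg. */
		return kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);
	}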
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 6341f5be6e24..a30b172df6e1 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -18,7 +18,7 @@ static inline int rt_task(struct task_struct *p)
+ #ifdef CONFIG_RT_MUTEXES
+ extern int rt_mutex_getprio(struct task_struct *p);
+ extern void rt_mutex_setprio(struct task_struct *p, int prio);
+-extern int rt_mutex_check_prio(struct task_struct *task, int newprio);
++extern int rt_mutex_get_effective_prio(struct task_struct *task, int newprio);
+ extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task);
+ extern void rt_mutex_adjust_pi(struct task_struct *p);
+ static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
+@@ -31,9 +31,10 @@ static inline int rt_mutex_getprio(struct task_struct *p)
+ 	return p->normal_prio;
+ }
+ 
+-static inline int rt_mutex_check_prio(struct task_struct *task, int newprio)
++static inline int rt_mutex_get_effective_prio(struct task_struct *task,
++					      int newprio)
+ {
+-	return 0;
++	return newprio;
+ }
+ 
+ static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index 358a337af598..790752ac074a 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -339,6 +339,7 @@ struct tty_file_private {
+ #define TTY_EXCLUSIVE 		3	/* Exclusive open mode */
+ #define TTY_DEBUG 		4	/* Debugging */
+ #define TTY_DO_WRITE_WAKEUP 	5	/* Call write_wakeup after queuing new */
++#define TTY_OTHER_DONE		6	/* Closed pty has completed input processing */
+ #define TTY_LDISC_OPEN	 	11	/* Line discipline is open */
+ #define TTY_PTY_LOCK 		16	/* pty private */
+ #define TTY_NO_WRITE_SPLIT 	17	/* Preserve write boundaries to driver */
+@@ -462,7 +463,6 @@ extern int tty_hung_up_p(struct file *filp);
+ extern void do_SAK(struct tty_struct *tty);
+ extern void __do_SAK(struct tty_struct *tty);
+ extern void no_tty(void);
+-extern void tty_flush_to_ldisc(struct tty_struct *tty);
+ extern void tty_buffer_free_all(struct tty_port *port);
+ extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
+ extern void tty_buffer_init(struct tty_port *port);
+diff --git a/include/xen/events.h b/include/xen/events.h
+index 5321cd9636e6..7d95fdf9cf3e 100644
+--- a/include/xen/events.h
++++ b/include/xen/events.h
+@@ -17,7 +17,7 @@ int bind_evtchn_to_irqhandler(unsigned int evtchn,
+ 			      irq_handler_t handler,
+ 			      unsigned long irqflags, const char *devname,
+ 			      void *dev_id);
+-int bind_virq_to_irq(unsigned int virq, unsigned int cpu);
++int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu);
+ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ 			    irq_handler_t handler,
+ 			    unsigned long irqflags, const char *devname,
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 6357265a31ad..ce9108c059fb 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -265,15 +265,17 @@ struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+ }
+ 
+ /*
+- * Called by sched_setscheduler() to check whether the priority change
+- * is overruled by a possible priority boosting.
++ * Called by sched_setscheduler() to get the priority which will be
++ * effective after the change.
+  */
+-int rt_mutex_check_prio(struct task_struct *task, int newprio)
++int rt_mutex_get_effective_prio(struct task_struct *task, int newprio)
+ {
+ 	if (!task_has_pi_waiters(task))
+-		return 0;
++		return newprio;
+ 
+-	return task_top_pi_waiter(task)->task->prio <= newprio;
++	if (task_top_pi_waiter(task)->task->prio <= newprio)
++		return task_top_pi_waiter(task)->task->prio;
++	return newprio;
+ }
+ 
+ /*
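+
The rtmutex change above replaces a boolean "is the change overruled?" test
with a computation of the resulting priority: a task runs at the more urgent
(numerically lower) of the requested priority and its top PI waiter's
priority. A standalone sketch of that rule (illustrative parameters):

	static int effective_prio(int newprio, int top_waiter_prio,
				  int has_pi_waiters)
	{
		if (!has_pi_waiters)
			return newprio;
		return top_waiter_prio <= newprio ? top_waiter_prio : newprio;
	}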
+diff --git a/kernel/module.c b/kernel/module.c
+index ec53f594e9c9..538794ce3cc7 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3366,6 +3366,9 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 	module_bug_cleanup(mod);
+ 	mutex_unlock(&module_mutex);
+ 
++	blocking_notifier_call_chain(&module_notify_list,
++				     MODULE_STATE_GOING, mod);
++
+ 	/* we can't deallocate the module until we clear memory protection */
+ 	unset_module_init_ro_nx(mod);
+ 	unset_module_core_ro_nx(mod);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 3d5f6f6d14c2..f4da2cbbfd7f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3295,15 +3295,18 @@ static void __setscheduler_params(struct task_struct *p,
+ 
+ /* Actually do priority change: must hold pi & rq lock. */
+ static void __setscheduler(struct rq *rq, struct task_struct *p,
+-			   const struct sched_attr *attr)
++			   const struct sched_attr *attr, bool keep_boost)
+ {
+ 	__setscheduler_params(p, attr);
+ 
+ 	/*
+-	 * If we get here, there was no pi waiters boosting the
+-	 * task. It is safe to use the normal prio.
++	 * Keep a potential priority boosting if called from
++	 * sched_setscheduler().
+ 	 */
+-	p->prio = normal_prio(p);
++	if (keep_boost)
++		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
++	else
++		p->prio = normal_prio(p);
+ 
+ 	if (dl_prio(p->prio))
+ 		p->sched_class = &dl_sched_class;
+@@ -3403,7 +3406,7 @@ static int __sched_setscheduler(struct task_struct *p,
+ 	int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
+ 		      MAX_RT_PRIO - 1 - attr->sched_priority;
+ 	int retval, oldprio, oldpolicy = -1, queued, running;
+-	int policy = attr->sched_policy;
++	int new_effective_prio, policy = attr->sched_policy;
+ 	unsigned long flags;
+ 	const struct sched_class *prev_class;
+ 	struct rq *rq;
+@@ -3585,15 +3588,14 @@ change:
+ 	oldprio = p->prio;
+ 
+ 	/*
+-	 * Special case for priority boosted tasks.
+-	 *
+-	 * If the new priority is lower or equal (user space view)
+-	 * than the current (boosted) priority, we just store the new
++	 * Take priority boosted tasks into account. If the new
++	 * effective priority is unchanged, we just store the new
+ 	 * normal parameters and do not touch the scheduler class and
+ 	 * the runqueue. This will be done when the task deboost
+ 	 * itself.
+ 	 */
+-	if (rt_mutex_check_prio(p, newprio)) {
++	new_effective_prio = rt_mutex_get_effective_prio(p, newprio);
++	if (new_effective_prio == oldprio) {
+ 		__setscheduler_params(p, attr);
+ 		task_rq_unlock(rq, p, &flags);
+ 		return 0;
+@@ -3607,7 +3609,7 @@ change:
+ 		put_prev_task(rq, p);
+ 
+ 	prev_class = p->sched_class;
+-	__setscheduler(rq, p, attr);
++	__setscheduler(rq, p, attr, true);
+ 
+ 	if (running)
+ 		p->sched_class->set_curr_task(rq);
+@@ -4382,10 +4384,7 @@ long __sched io_schedule_timeout(long timeout)
+ 	long ret;
+ 
+ 	current->in_iowait = 1;
+-	if (old_iowait)
+-		blk_schedule_flush_plug(current);
+-	else
+-		blk_flush_plug(current);
++	blk_schedule_flush_plug(current);
+ 
+ 	delayacct_blkio_start();
+ 	rq = raw_rq();
+@@ -7357,7 +7356,7 @@ static void normalize_task(struct rq *rq, struct task_struct *p)
+ 	queued = task_on_rq_queued(p);
+ 	if (queued)
+ 		dequeue_task(rq, p, 0);
+-	__setscheduler(rq, p, &attr);
++	__setscheduler(rq, p, &attr, false);
+ 	if (queued) {
+ 		enqueue_task(rq, p, 0);
+ 		resched_curr(rq);
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index bee0c1f78091..38f586c076fe 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -266,21 +266,23 @@ lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
+ /*
+  * Divide a ktime value by a nanosecond value
+  */
+-u64 __ktime_divns(const ktime_t kt, s64 div)
++s64 __ktime_divns(const ktime_t kt, s64 div)
+ {
+-	u64 dclc;
+ 	int sft = 0;
++	s64 dclc;
++	u64 tmp;
+ 
+ 	dclc = ktime_to_ns(kt);
++	tmp = dclc < 0 ? -dclc : dclc;
++
+ 	/* Make sure the divisor is less than 2^32: */
+ 	while (div >> 32) {
+ 		sft++;
+ 		div >>= 1;
+ 	}
+-	dclc >>= sft;
+-	do_div(dclc, (unsigned long) div);
+-
+-	return dclc;
++	tmp >>= sft;
++	do_div(tmp, (unsigned long) div);
++	return dclc < 0 ? -tmp : tmp;
+ }
+ EXPORT_SYMBOL_GPL(__ktime_divns);
+ #endif /* BITS_PER_LONG >= 64 */
+diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
+index a28df5206d95..11649615c505 100644
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -57,7 +57,8 @@ static inline long do_strnlen_user(const char __user *src, unsigned long count,
+ 			return res + find_zero(data) + 1 - align;
+ 		}
+ 		res += sizeof(unsigned long);
+-		if (unlikely(max < sizeof(unsigned long)))
++		/* We already handled 'unsigned long' bytes. Did we do it all? */
++		if (unlikely(max <= sizeof(unsigned long)))
+ 			break;
+ 		max -= sizeof(unsigned long);
+ 		if (unlikely(__get_user(c,(unsigned long __user *)(src+res))))
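+
The one-character change above fixes an off-by-one: with max equal to
sizeof(unsigned long), the old '<' test did not break, so max was
decremented to zero and one word past the permitted range was read.
Concretely, on a 64-bit machine with max = 8, only the corrected test in
this sketch reports that the loop must stop:

	#include <stdbool.h>
	#include <stddef.h>

	/* true means this word was the last one allowed; stop scanning. */
	static bool word_was_last(size_t max)
	{
		return max <= sizeof(unsigned long);	/* old code used '<' */
	}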
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 5405aff5a590..f0fe4f2c1fa7 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -115,7 +115,8 @@
+ #define BYTES_PER_POINTER	sizeof(void *)
+ 
+ /* GFP bitmask for kmemleak internal allocations */
+-#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
++#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC | \
++					   __GFP_NOACCOUNT)) | \
+ 				 __GFP_NORETRY | __GFP_NOMEMALLOC | \
+ 				 __GFP_NOWARN)
+ 
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index de5dc5e12691..0f7d73b3e4b1 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2517,7 +2517,7 @@ static void __init check_numabalancing_enable(void)
+ 	if (numabalancing_override)
+ 		set_numabalancing_state(numabalancing_override == 1);
+ 
+-	if (nr_node_ids > 1 && !numabalancing_override) {
++	if (num_online_nodes() > 1 && !numabalancing_override) {
+ 		pr_info("%s automatic NUMA balancing. "
+ 			"Configure with numa_balancing= or the "
+ 			"kernel.numa_balancing sysctl",
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 41a4abc7e98e..c4ec9239249a 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -1306,8 +1306,6 @@ static void __unregister_linger_request(struct ceph_osd_client *osdc,
+ 		if (list_empty(&req->r_osd_item))
+ 			req->r_osd = NULL;
+ 	}
+-
+-	list_del_init(&req->r_req_lru_item); /* can be on notarget */
+ 	ceph_osdc_put_request(req);
+ }
+ 
+@@ -2017,20 +2015,29 @@ static void kick_requests(struct ceph_osd_client *osdc, bool force_resend,
+ 		err = __map_request(osdc, req,
+ 				    force_resend || force_resend_writes);
+ 		dout("__map_request returned %d\n", err);
+-		if (err == 0)
+-			continue;  /* no change and no osd was specified */
+ 		if (err < 0)
+ 			continue;  /* hrm! */
+-		if (req->r_osd == NULL) {
+-			dout("tid %llu maps to no valid osd\n", req->r_tid);
+-			needmap++;  /* request a newer map */
+-			continue;
+-		}
++		if (req->r_osd == NULL || err > 0) {
++			if (req->r_osd == NULL) {
++				dout("lingering %p tid %llu maps to no osd\n",
++				     req, req->r_tid);
++				/*
++				 * A homeless lingering request makes
++				 * no sense, as its job is to keep
++				 * a particular OSD connection open.
++				 * Request a newer map and kick the
++				 * request, knowing that it won't be
++				 * resent until we actually get a map
++				 * that can tell us where to send it.
++				 */
++				needmap++;
++			}
+ 
+-		dout("kicking lingering %p tid %llu osd%d\n", req, req->r_tid,
+-		     req->r_osd ? req->r_osd->o_osd : -1);
+-		__register_request(osdc, req);
+-		__unregister_linger_request(osdc, req);
++			dout("kicking lingering %p tid %llu osd%d\n", req,
++			     req->r_tid, req->r_osd ? req->r_osd->o_osd : -1);
++			__register_request(osdc, req);
++			__unregister_linger_request(osdc, req);
++		}
+ 	}
+ 	reset_changed_osds(osdc);
+ 	mutex_unlock(&osdc->request_mutex);
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 8d53d65bd2ab..81e8dc5cb7f9 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -204,6 +204,8 @@ enum ieee80211_packet_rx_flags {
+  * @IEEE80211_RX_CMNTR: received on cooked monitor already
+  * @IEEE80211_RX_BEACON_REPORTED: This frame was already reported
+  *	to cfg80211_report_obss_beacon().
++ * @IEEE80211_RX_REORDER_TIMER: this frame is released by the
++ *	reorder buffer timeout timer, not the normal RX path
+  *
+  * These flags are used across handling multiple interfaces
+  * for a single frame.
+@@ -211,6 +213,7 @@ enum ieee80211_packet_rx_flags {
+ enum ieee80211_rx_flags {
+ 	IEEE80211_RX_CMNTR		= BIT(0),
+ 	IEEE80211_RX_BEACON_REPORTED	= BIT(1),
++	IEEE80211_RX_REORDER_TIMER	= BIT(2),
+ };
+ 
+ struct ieee80211_rx_data {
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1eb730bf8752..4c887d053333 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2106,7 +2106,8 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
+ 		/* deliver to local stack */
+ 		skb->protocol = eth_type_trans(skb, dev);
+ 		memset(skb->cb, 0, sizeof(skb->cb));
+-		if (rx->local->napi)
++		if (!(rx->flags & IEEE80211_RX_REORDER_TIMER) &&
++		    rx->local->napi)
+ 			napi_gro_receive(rx->local->napi, skb);
+ 		else
+ 			netif_receive_skb(skb);
+@@ -3215,7 +3216,7 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
+ 		/* This is OK -- must be QoS data frame */
+ 		.security_idx = tid,
+ 		.seqno_idx = tid,
+-		.flags = 0,
++		.flags = IEEE80211_RX_REORDER_TIMER,
+ 	};
+ 	struct tid_ampdu_rx *tid_agg_rx;
+ 
+diff --git a/net/mac80211/wep.c b/net/mac80211/wep.c
+index a4220e92f0cc..efa3f48f1ec5 100644
+--- a/net/mac80211/wep.c
++++ b/net/mac80211/wep.c
+@@ -98,8 +98,7 @@ static u8 *ieee80211_wep_add_iv(struct ieee80211_local *local,
+ 
+ 	hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ 
+-	if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN ||
+-		    skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
++	if (WARN_ON(skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
+ 		return NULL;
+ 
+ 	hdrlen = ieee80211_hdrlen(hdr->frame_control);
+@@ -167,6 +166,9 @@ int ieee80211_wep_encrypt(struct ieee80211_local *local,
+ 	size_t len;
+ 	u8 rc4key[3 + WLAN_KEY_LEN_WEP104];
+ 
++	if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN))
++		return -1;
++
+ 	iv = ieee80211_wep_add_iv(local, skb, keylen, keyidx);
+ 	if (!iv)
+ 		return -1;
+diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+index 1ec19f6f0c2b..eeeba5adee6d 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+@@ -793,20 +793,26 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ {
+ 	u32 value_follows;
+ 	int err;
++	struct page *scratch;
++
++	scratch = alloc_page(GFP_KERNEL);
++	if (!scratch)
++		return -ENOMEM;
++	xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE);
+ 
+ 	/* res->status */
+ 	err = gssx_dec_status(xdr, &res->status);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 
+ 	/* res->context_handle */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		err = gssx_dec_ctx(xdr, res->context_handle);
+ 		if (err)
+-			return err;
++			goto out_free;
+ 	} else {
+ 		res->context_handle = NULL;
+ 	}
+@@ -814,11 +820,11 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ 	/* res->output_token */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		err = gssx_dec_buffer(xdr, res->output_token);
+ 		if (err)
+-			return err;
++			goto out_free;
+ 	} else {
+ 		res->output_token = NULL;
+ 	}
+@@ -826,14 +832,17 @@ int gssx_dec_accept_sec_context(struct rpc_rqst *rqstp,
+ 	/* res->delegated_cred_handle */
+ 	err = gssx_dec_bool(xdr, &value_follows);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 	if (value_follows) {
+ 		/* we do not support upcall servers sending this data. */
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out_free;
+ 	}
+ 
+ 	/* res->options */
+ 	err = gssx_dec_option_array(xdr, &res->options);
+ 
++out_free:
++	__free_page(scratch);
+ 	return err;
+ }
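+
The gssx decoder rework above is the classic allocate-up-front, single-exit
cleanup shape: every failure path funnels through one label that releases
the scratch page. A standalone sketch (illustrative, with malloc standing in
for alloc_page):

	#include <stdlib.h>

	static int decode_with_scratch(int simulate_failure)
	{
		int err = 0;
		void *scratch = malloc(4096);

		if (!scratch)
			return -1;	/* -ENOMEM in the kernel */
		if (simulate_failure) {
			err = -1;
			goto out_free;
		}
		/* further decoding steps would also jump to out_free */
	out_free:
		free(scratch);
		return err;
	}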
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index a8a1e14272a1..a002a6d1e6da 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2108,6 +2108,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0xaab0),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
++	{ PCI_DEVICE(0x1002, 0xaac8),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	/* VIA VT8251/VT8237A */
+ 	{ PCI_DEVICE(0x1106, 0x3288),
+ 	  .driver_data = AZX_DRIVER_VIA | AZX_DCAPS_POSFIX_VIA },
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index da67ea8645a6..e27298bdcd6d 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -973,6 +973,14 @@ static const struct hda_codec_preset snd_hda_preset_conexant[] = {
+ 	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f150b9, .name = "CX20665",
+ 	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f1, .name = "CX20721",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f2, .name = "CX20722",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f3, .name = "CX20723",
++	  .patch = patch_conexant_auto },
++	{ .id = 0x14f150f4, .name = "CX20724",
++	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f1510f, .name = "CX20751/2",
+ 	  .patch = patch_conexant_auto },
+ 	{ .id = 0x14f15110, .name = "CX20751/2",
+@@ -1007,6 +1015,10 @@ MODULE_ALIAS("snd-hda-codec-id:14f150ab");
+ MODULE_ALIAS("snd-hda-codec-id:14f150ac");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b8");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b9");
++MODULE_ALIAS("snd-hda-codec-id:14f150f1");
++MODULE_ALIAS("snd-hda-codec-id:14f150f2");
++MODULE_ALIAS("snd-hda-codec-id:14f150f3");
++MODULE_ALIAS("snd-hda-codec-id:14f150f4");
+ MODULE_ALIAS("snd-hda-codec-id:14f1510f");
+ MODULE_ALIAS("snd-hda-codec-id:14f15110");
+ MODULE_ALIAS("snd-hda-codec-id:14f15111");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2fd490b1764b..93c78c3c4b95 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5027,6 +5027,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
++	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_BXBT2807_MIC),
+@@ -5056,6 +5057,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
++	SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+@@ -5246,6 +5248,13 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x17, 0x40000000},
+ 		{0x1d, 0x40700001},
+ 		{0x21, 0x02211050}),
++	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell Inspiron 5548", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		ALC255_STANDARD_PINS,
++		{0x12, 0x90a60180},
++		{0x14, 0x90170130},
++		{0x17, 0x40000000},
++		{0x1d, 0x40700001},
++		{0x21, 0x02211040}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC256_STANDARD_PINS,
+ 		{0x13, 0x40000000}),
+diff --git a/sound/pci/hda/thinkpad_helper.c b/sound/pci/hda/thinkpad_helper.c
+index 2341fc334163..6ba0b5517c40 100644
+--- a/sound/pci/hda/thinkpad_helper.c
++++ b/sound/pci/hda/thinkpad_helper.c
+@@ -72,7 +72,6 @@ static void hda_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 		if (led_set_func(TPACPI_LED_MUTE, false) >= 0) {
+ 			old_vmaster_hook = spec->vmaster_mute.hook;
+ 			spec->vmaster_mute.hook = update_tpacpi_mute_led;
+-			spec->vmaster_mute_enum = 1;
+ 			removefunc = false;
+ 		}
+ 		if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+diff --git a/sound/soc/codecs/mc13783.c b/sound/soc/codecs/mc13783.c
+index 2ffb9a0570dc..3d44fc50e4d0 100644
+--- a/sound/soc/codecs/mc13783.c
++++ b/sound/soc/codecs/mc13783.c
+@@ -623,14 +623,14 @@ static int mc13783_probe(struct snd_soc_codec *codec)
+ 				AUDIO_SSI_SEL, 0);
+ 	else
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_CODEC,
+-				0, AUDIO_SSI_SEL);
++				AUDIO_SSI_SEL, AUDIO_SSI_SEL);
+ 
+ 	if (priv->dac_ssi_port == MC13783_SSI1_PORT)
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
+ 				AUDIO_SSI_SEL, 0);
+ 	else
+ 		mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
+-				0, AUDIO_SSI_SEL);
++				AUDIO_SSI_SEL, AUDIO_SSI_SEL);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/uda1380.c b/sound/soc/codecs/uda1380.c
+index dc7778b6dd7f..c3c33bd0df1c 100644
+--- a/sound/soc/codecs/uda1380.c
++++ b/sound/soc/codecs/uda1380.c
+@@ -437,7 +437,7 @@ static int uda1380_set_dai_fmt_both(struct snd_soc_dai *codec_dai,
+ 	if ((fmt & SND_SOC_DAIFMT_MASTER_MASK) != SND_SOC_DAIFMT_CBS_CFS)
+ 		return -EINVAL;
+ 
+-	uda1380_write(codec, UDA1380_IFACE, iface);
++	uda1380_write_reg_cache(codec, UDA1380_IFACE, iface);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 3035d9856415..e97a7615df85 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -395,7 +395,7 @@ static const struct snd_soc_dapm_route audio_paths[] = {
+ 	{ "Right Input Mixer", "Boost Switch", "Right Boost Mixer", },
+ 	{ "Right Input Mixer", NULL, "RINPUT1", },  /* Really Boost Switch */
+ 	{ "Right Input Mixer", NULL, "RINPUT2" },
+-	{ "Right Input Mixer", NULL, "LINPUT3" },
++	{ "Right Input Mixer", NULL, "RINPUT3" },
+ 
+ 	{ "Left ADC", NULL, "Left Input Mixer" },
+ 	{ "Right ADC", NULL, "Right Input Mixer" },
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 4fbc7689339a..a1c04dab6684 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -2754,7 +2754,7 @@ static struct {
+ };
+ 
+ static int fs_ratios[] = {
+-	64, 128, 192, 256, 348, 512, 768, 1024, 1408, 1536
++	64, 128, 192, 256, 384, 512, 768, 1024, 1408, 1536
+ };
+ 
+ static int bclk_divs[] = {
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index b6f88202b8c9..e19a6765bd8a 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3074,11 +3074,16 @@ snd_soc_dapm_new_control(struct snd_soc_dapm_context *dapm,
+ 	}
+ 
+ 	prefix = soc_dapm_prefix(dapm);
+-	if (prefix)
++	if (prefix) {
+ 		w->name = kasprintf(GFP_KERNEL, "%s %s", prefix, widget->name);
+-	else
++		if (widget->sname)
++			w->sname = kasprintf(GFP_KERNEL, "%s %s", prefix,
++					     widget->sname);
++	} else {
+ 		w->name = kasprintf(GFP_KERNEL, "%s", widget->name);
+-
++		if (widget->sname)
++			w->sname = kasprintf(GFP_KERNEL, "%s", widget->sname);
++	}
+ 	if (w->name == NULL) {
+ 		kfree(w);
+ 		return NULL;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 32631a86078b..e21ec5abcc3a 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1117,6 +1117,8 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	switch (chip->usb_id) {
+ 	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
++	case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
++	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ 		return true;
+ 	}
+diff --git a/tools/vm/Makefile b/tools/vm/Makefile
+index ac884b65a072..93aadaf7ff63 100644
+--- a/tools/vm/Makefile
++++ b/tools/vm/Makefile
+@@ -3,7 +3,7 @@
+ TARGETS=page-types slabinfo page_owner_sort
+ 
+ LIB_DIR = ../lib/api
+-LIBS = $(LIB_DIR)/libapikfs.a
++LIBS = $(LIB_DIR)/libapi.a
+ 
+ CC = $(CROSS_COMPILE)gcc
+ CFLAGS = -Wall -Wextra -I../lib/

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..cc15cd5
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,54 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on the
+tmpfs filesystem, used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index e4629b9..6958086 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -63,5 +63,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 1c44af7..f23bb1b 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2201,6 +2201,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ static int shmem_xattr_validate(const char *name)
+ {
+ 	struct { const char *prefix; size_t len; } arr[] = {
++		{ XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN},
+ 		{ XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN },
+ 		{ XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN }
+ 	};
+@@ -2256,6 +2257,12 @@ static int shmem_setxattr(struct dentry *dentry, const char *name,
+ 	if (err)
+ 		return err;
+ 
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+ 
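From userspace, the restriction the patch enforces is straightforward: on a
tmpfs mount, only user.pax.flags is accepted within the user.* prefix, and
the value may be at most 8 bytes. A hypothetical usage sketch (the flag
string is illustrative):

	#include <sys/xattr.h>

	static int set_pax_flags(const char *path)
	{
		/* Accepted: name is user.pax.flags, value is <= 8 bytes. */
		return setxattr(path, "user.pax.flags", "pemrs", 5, 0);
	}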

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ 	path_put(link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ 
+ /**
+  * may_follow_link - Check symlink following for unsafe situations

diff --git a/2600_select-REGMAP_IRQ-for-rt5033.patch b/2600_select-REGMAP_IRQ-for-rt5033.patch
new file mode 100644
index 0000000..92fb2e0
--- /dev/null
+++ b/2600_select-REGMAP_IRQ-for-rt5033.patch
@@ -0,0 +1,30 @@
+From 23a2a22a3f3f17de094f386a893f7047c10e44a0 Mon Sep 17 00:00:00 2001
+From: Artem Savkov <asavkov@redhat.com>
+Date: Thu, 5 Mar 2015 12:42:27 +0100
+Subject: mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ
+
+Since commit 0b2712585 (linux-next.git) this driver uses regmap_irq and so needs
+to select REGMAP_IRQ.
+
+This fixes the following compilation errors:
+ERROR: "regmap_irq_get_domain" [drivers/mfd/rt5033.ko] undefined!
+ERROR: "regmap_add_irq_chip" [drivers/mfd/rt5033.ko] undefined!
+
+Signed-off-by: Artem Savkov <asavkov@redhat.com>
+Signed-off-by: Lee Jones <lee.jones@linaro.org>
+
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index f8ef77d9a..f49f404 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -680,6 +680,7 @@ config MFD_RT5033
+ 	depends on I2C=y
+ 	select MFD_CORE
+ 	select REGMAP_I2C
++	select REGMAP_IRQ
+ 	help
+ 	  This driver provides for the Richtek RT5033 Power Management IC,
+ 	  which includes the I2C driver and the Core APIs. This driver provides
+-- 
+cgit v0.10.2
+

diff --git a/2700_ThinkPad-30-brightness-control-fix.patch b/2700_ThinkPad-30-brightness-control-fix.patch
new file mode 100644
index 0000000..b548c6d
--- /dev/null
+++ b/2700_ThinkPad-30-brightness-control-fix.patch
@@ -0,0 +1,67 @@
+diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
+index cb96296..6c242ed 100644
+--- a/drivers/acpi/blacklist.c
++++ b/drivers/acpi/blacklist.c
+@@ -269,6 +276,61 @@  static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ 	},
+ 
+ 	/*
++	 * The following Lenovo models have a broken workaround in the
++	 * acpi_video backlight implementation to meet the Windows 8
++	 * requirement of 101 backlight levels. Reverting to pre-Win8
++	 * behavior fixes the problem.
++	 */
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad L430",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L430"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad T430s",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430s"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad T530",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T530"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad W530",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad X1 Carbon",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
++		},
++	},
++	{
++	.callback = dmi_disable_osi_win8,
++	.ident = "Lenovo ThinkPad X230",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
++		},
++	},
++
++	/*
+ 	 * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
+ 	 * Linux ignores it, except for the machines enumerated below.
+ 	 */
+

diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..6ea86e2
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,30 @@
+--- a/init/do_mounts.c	2014-08-26 08:03:30.000013100 -0400
++++ b/init/do_mounts.c	2014-08-26 08:11:19.720014712 -0400
+@@ -484,7 +484,10 @@ void __init change_floppy(char *fmt, ...
+ 	va_start(args, fmt);
+ 	vsprintf(buf, fmt, args);
+ 	va_end(args);
+-	fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++	if (saved_root_name[0])
++		fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++	else
++		fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
+ 	if (fd >= 0) {
+ 		sys_ioctl(fd, FDEJECT, 0);
+ 		sys_close(fd);
+@@ -527,8 +530,13 @@ void __init mount_root(void)
+ 	}
+ #endif
+ #ifdef CONFIG_BLOCK
+-	create_dev("/dev/root", ROOT_DEV);
+-	mount_block_root("/dev/root", root_mountflags);
++	if (saved_root_name[0]) {
++		create_dev(saved_root_name, ROOT_DEV);
++		mount_block_root(saved_root_name, root_mountflags);
++	} else {
++		create_dev("/dev/root", ROOT_DEV);
++		mount_block_root("/dev/root", root_mountflags);
++	}
+ #endif
+ }
+ 

diff --git a/2905_2disk-resume-image-fix.patch b/2905_2disk-resume-image-fix.patch
new file mode 100644
index 0000000..7e95d29
--- /dev/null
+++ b/2905_2disk-resume-image-fix.patch
@@ -0,0 +1,24 @@
+diff --git a/kernel/kmod.c b/kernel/kmod.c
+index fb32636..d968882 100644
+--- a/kernel/kmod.c
++++ b/kernel/kmod.c
+@@ -575,7 +575,8 @@
+ 		call_usermodehelper_freeinfo(sub_info);
+ 		return -EINVAL;
+ 	}
+-	helper_lock();
++	if (!(current->flags & PF_FREEZER_SKIP))
++		helper_lock();
+ 	if (!khelper_wq || usermodehelper_disabled) {
+ 		retval = -EBUSY;
+ 		goto out;
+@@ -611,7 +612,8 @@ wait_done:
+ out:
+ 	call_usermodehelper_freeinfo(sub_info);
+ unlock:
+-	helper_unlock();
++	if (!(current->flags & PF_FREEZER_SKIP))
++		helper_unlock();
+ 	return retval;
+ }
+ EXPORT_SYMBOL(call_usermodehelper_exec);

diff --git a/2910_lz4-compression-fix.patch b/2910_lz4-compression-fix.patch
new file mode 100644
index 0000000..1c55f32
--- /dev/null
+++ b/2910_lz4-compression-fix.patch
@@ -0,0 +1,30 @@
+--- a/lib/lz4/lz4_decompress.c	2015-04-13 16:20:04.896315560 +0800
++++ b/lib/lz4/lz4_decompress.c	2015-04-13 16:27:08.929317053 +0800
+@@ -139,8 +139,12 @@
+ 			/* Error: request to write beyond destination buffer */
+ 			if (cpy > oend)
+ 				goto _output_error;
++#if LZ4_ARCH64
++			if ((ref + COPYLENGTH) > oend)
++#else
+ 			if ((ref + COPYLENGTH) > oend ||
+ 					(op + COPYLENGTH) > oend)
++#endif
+ 				goto _output_error;
+ 			LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH));
+ 			while (op < cpy)
+@@ -270,7 +274,13 @@
+ 		if (cpy > oend - COPYLENGTH) {
+ 			if (cpy > oend)
+ 				goto _output_error; /* write outside of buf */
+-
++#if LZ4_ARCH64
++			if ((ref + COPYLENGTH) > oend)
++#else
++			if ((ref + COPYLENGTH) > oend ||
++			    (op + COPYLENGTH) > oend)
++#endif
++				goto _output_error;
+ 			LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH));
+ 			while (op < cpy)
+ 				*op++ = *ref++;

diff --git a/4200_fbcondecor-3.19.patch b/4200_fbcondecor-3.19.patch
new file mode 100644
index 0000000..29c379f
--- /dev/null
+++ b/4200_fbcondecor-3.19.patch
@@ -0,0 +1,2119 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c..2230930 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ 	- info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ 	- intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++	- info on the Framebuffer Console Decoration
+ framebuffer.txt
+ 	- introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 0000000..3388c61
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a 
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper in your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++    http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called
++by the kernel whenever an important event occurs and a job needs to be carried
++out in userspace. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of the
++   standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem 
++   is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both ways:
++kernel->userspace and userspace->kernel.
++  
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in 
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes: 
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in an 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (the
++first 16 colors are reserved for fbcon), len to a value <= 240, and red, green
++and blue should point to valid cmap data. The transp field is ignored. The
++fields dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc: 
++Virtual console number.
++
++origin: 
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data: 
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++  Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++  Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++  Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++  Michal Januszewski <michalj+fbcondecor@gmail.com>
++
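
To make the calling convention concrete: under protocol v2 the helper is
invoked as <helper> 2 <command> <console> <framebuffer> <theme>. A small
userspace sketch that assembles such an argument line, modeled on the
fbcon_decor_call_helper() added later in this patch (the theme name is an
illustrative value, not a kernel default):

#include <stdio.h>

int main(void)
{
        const char *helper = "/sbin/fbcondecor_helper"; /* default path */
        const char *cmd = "init";
        unsigned int console = 0;       /* virtual console number */
        unsigned int fb = 0;            /* framebuffer number (con2fb_map) */
        const char *theme = "default";  /* illustrative theme name */

        /* Argument order follows the v2 protocol described above. */
        printf("%s 2 %s %u %u %s\n", helper, cmd, console, fb, theme);
        return 0;
}
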
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 7183b6a..d576148 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -17,6 +17,10 @@ obj-y				+= pwm/
+ obj-$(CONFIG_PCI)		+= pci/
+ obj-$(CONFIG_PARISC)		+= parisc/
+ obj-$(CONFIG_RAPIDIO)		+= rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y				+= tty/
++obj-y				+= char/
+ obj-y				+= video/
+ obj-y				+= idle/
+ 
+@@ -42,11 +46,6 @@ obj-$(CONFIG_REGULATOR)		+= regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER)	+= reset/
+ 
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y				+= tty/
+-obj-y				+= char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index fe1cd01..6d2e87a 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -126,6 +126,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+          such that other users of the framebuffer will remain normally
+          oriented.
+ 
++config FB_CON_DECOR
++	bool "Support for the Framebuffer Console Decorations"
++	depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++	default n
++	---help---
++	  This option enables support for framebuffer console decorations which
++	  makes it possible to display images in the background of the system
++	  consoles.  Note that userspace utilities are necessary in order to take 
++	  advantage of these features. Refer to Documentation/fb/fbcondecor.txt 
++	  for more information.
++
++	  If unsure, say N.
++
+ config STI_CONSOLE
+         bool "STI text console"
+         depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index 43bfa48..cc104b6f 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -16,4 +16,5 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE)     += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
+                                          fbcon_ccw.o
+ endif
+ 
++obj-$(CONFIG_FB_CON_DECOR)     	  += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI)              += sticore.o
+diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
+index 61b182b..984384b 100644
+--- a/drivers/video/console/bitblit.c
++++ b/drivers/video/console/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "fbcondecor.h"
+ 
+ /*
+  * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ 	area.height = height * vc->vc_font.height;
+ 	area.width = width * vc->vc_font.width;
+ 
++	if (fbcon_decor_active(info, vc)) {
++ 		area.sx += vc->vc_decor.tx;
++ 		area.sy += vc->vc_decor.ty;
++ 		area.dx += vc->vc_decor.tx;
++ 		area.dy += vc->vc_decor.ty;
++ 	}
++
+ 	info->fbops->fb_copyarea(info, &area);
+ }
+ 
+@@ -380,11 +388,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	cursor.image.depth = 1;
+ 	cursor.rop = ROP_XOR;
+ 
+-	if (info->fbops->fb_cursor)
+-		err = info->fbops->fb_cursor(info, &cursor);
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_cursor(info, &cursor);
++	} else {
++		if (info->fbops->fb_cursor)
++			err = info->fbops->fb_cursor(info, &cursor);
+ 
+-	if (err)
+-		soft_cursor(info, &cursor);
++		if (err)
++			soft_cursor(info, &cursor);
++	}
+ 
+ 	ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 0000000..a2b4497
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,471 @@
++/*
++ *  linux/drivers/video/console/cfbcondecor.c -- Framebuffer decor render functions
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootdecor" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift,bpp,type)						\
++	do {									\
++		if (d & (0x80 >> (shift)))					\
++			dd2[(shift)] = fgx;					\
++		else								\
++			dd2[(shift)] = transparent ? *(type *)decor_src : bgx;	\
++		decor_src += (bpp);						\
++	} while (0)								\
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++		     u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++	int i, j, k;
++	int minlen = min(min(info->var.red.length, info->var.green.length),
++			     info->var.blue.length);
++	u32 col;
++
++	for (j = i = 0; i < 16; i++) {
++		k = color_table[i];
++
++		col = ((vc->vc_palette[j++]  >> (8-minlen))
++			<< info->var.red.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.green.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.blue.offset);
++			((u32 *)info->pseudo_palette)[k] = col;
++	}
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++		      int width, u8* src, u32 fgx, u32 bgx, u8 transparent)
++{
++	unsigned int x, y;
++	u32 dd;
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++	unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++	u16 dd2[4];
++
++	u8* decor_src = (u8 *)(info->bgdecor.data + ds);
++	u8* dst = (u8 *)(info->screen_base + d);
++
++	if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++		return;
++
++	for (y = 0; y < height; y++) {
++		switch (info->var.bits_per_pixel) {
++
++		case 32:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     *(u32 *)decor_src : bgx;
++
++				d <<= 1;
++				decor_src += 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++		case 24:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     (*(u32 *)decor_src & 0xffffff) : bgx;
++
++				d <<= 1;
++				decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++				fb_writew(dd & 0xffff, dst);
++				dst += 2;
++				fb_writeb((dd >> 16), dst);
++#else
++				fb_writew(dd >> 8, dst);
++				dst += 2;
++				fb_writeb(dd & 0xff, dst);
++#endif
++				dst++;
++			}
++			break;
++		case 16:
++			for (x = 0; x < width; x += 2) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 2, u16);
++				parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 16);
++#else
++				dd = dd2[1] | (dd2[0] << 16);
++#endif
++				d <<= 2;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++
++		case 8:
++			for (x = 0; x < width; x += 4) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 1, u8);
++				parse_pixel(1, 1, u8);
++				parse_pixel(2, 1, u8);
++				parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++				dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++				d <<= 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++		}
++
++		dst += info->fix.line_length - width * bytespp;
++		decor_src += (info->var.xres - width) * bytespp;
++	}
++}
++
++#define cc2cx(a) 						\
++	((info->fix.visual == FB_VISUAL_TRUECOLOR || 		\
++	  info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? 		\
++	 ((u32*)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++		   const unsigned short *s, int count, int yy, int xx)
++{
++	unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++	struct fbcon_ops *ops = info->fbcon_par;
++	int fg_color, bg_color, transparent;
++	u8 *src;
++	u32 bgx, fgx;
++	u16 c = scr_readw(s);
++
++	fg_color = get_color(vc, info, c, 1);
++        bg_color = get_color(vc, info, c, 0);
++
++	/* Don't paint the background image if console is blanked */
++	transparent = ops->blank_state ? 0 :
++		(vc->vc_decor.bg_color == bg_color);
++
++	xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++	yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++	fgx = cc2cx(fg_color);
++	bgx = cc2cx(bg_color);
++
++	while (count--) {
++		c = scr_readw(s++);
++		src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++		      ((vc->vc_font.width + 7) >> 3);
++
++		fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++			       vc->vc_font.width, src, fgx, bgx, transparent);
++		xx += vc->vc_font.width;
++	}
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++	int i;
++	unsigned int dsize, s_pitch;
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct vc_data* vc;
++	u8 *src;
++
++	/* we really don't need any cursors while the console is blanked */
++	if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++		return;
++
++	vc = vc_cons[ops->currcon].d;
++
++	src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++	if (!src)
++		return;
++
++	s_pitch = (cursor->image.width + 7) >> 3;
++	dsize = s_pitch * cursor->image.height;
++	if (cursor->enable) {
++		switch (cursor->rop) {
++		case ROP_XOR:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] ^ cursor->mask[i];
++                        break;
++		case ROP_COPY:
++		default:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] & cursor->mask[i];
++			break;
++		}
++	} else
++		memcpy(src, cursor->image.data, dsize);
++
++	fbcon_decor_renderc(info,
++			cursor->image.dy + vc->vc_decor.ty,
++			cursor->image.dx + vc->vc_decor.tx,
++			cursor->image.height,
++			cursor->image.width,
++			(u8*)src,
++			cc2cx(cursor->image.fg_color),
++			cc2cx(cursor->image.bg_color),
++			cursor->image.bg_color == vc->vc_decor.bg_color);
++
++	kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++		        u32 bgx, int bpp)
++{
++	int i;
++
++	if (bpp == 8)
++		bgx |= bgx << 8;
++	if (bpp == 16 || bpp == 8)
++		bgx |= bgx << 16;
++
++	while (height-- > 0) {
++		u8 *p = dst;
++
++		switch (bpp) {
++
++		case 32:
++			for (i=0; i < width; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++			break;
++		case 24:
++			for (i=0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++				fb_writew((bgx & 0xffff),(u16*)p); p += 2;
++				fb_writeb((bgx >> 16),p++);
++#else
++				fb_writew((bgx >> 8),(u16*)p); p += 2;
++				fb_writeb((bgx & 0xff),p++);
++#endif
++			}
++			break;
++		case 16:
++			for (i=0; i < width/4; i++) {
++				fb_writel(bgx,p); p += 4;
++				fb_writel(bgx,p); p += 4;
++			}
++			if (width & 2) {
++				fb_writel(bgx,p); p += 4;
++			}
++			if (width & 1)
++				fb_writew(bgx,(u16*)p);
++			break;
++		case 8:
++			for (i=0; i < width/4; i++) {
++				fb_writel(bgx,p); p += 4;
++			}
++
++			if (width & 2) {
++				fb_writew(bgx,p); p += 2;
++			}
++			if (width & 1)
++				fb_writeb(bgx,(u8*)p);
++			break;
++
++		}
++		dst += dstbytes;
++	}
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++		   int srclinebytes, int bpp)
++{
++	int i;
++
++	while (height-- > 0) {
++		u32 *p = (u32 *)dst;
++		u32 *q = (u32 *)src;
++
++		switch (bpp) {
++
++		case 32:
++			for (i=0; i < width; i++)
++				fb_writel(*q++, p++);
++			break;
++		case 24:
++			for (i=0; i < (width*3/4); i++)
++				fb_writel(*q++, p++);
++			if ((width*3) % 4) {
++				if (width & 2) {
++					fb_writeb(*(u8*)q, (u8*)p);
++				} else if (width & 1) {
++					fb_writew(*(u16*)q, (u16*)p);
++					fb_writeb(*(u8*)((u16*)q+1),(u8*)((u16*)p+2));
++				}
++			}
++			break;
++		case 16:
++			for (i=0; i < width/4; i++) {
++				fb_writel(*q++, p++);
++				fb_writel(*q++, p++);
++			}
++			if (width & 2)
++				fb_writel(*q++, p++);
++			if (width & 1)
++				fb_writew(*(u16*)q, (u16*)p);
++			break;
++		case 8:
++			for (i=0; i < width/4; i++)
++				fb_writel(*q++, p++);
++
++			if (width & 2) {
++				fb_writew(*(u16*)q, (u16*)p);
++				q = (u32*) ((u16*)q + 1);
++				p = (u32*) ((u16*)p + 1);
++			}
++			if (width & 1)
++				fb_writeb(*(u8*)q, (u8*)p);
++			break;
++		}
++
++		dst += linebytes;
++		src += srclinebytes;
++	}
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++		       int width)
++{
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	int d  = sy * info->fix.line_length + sx * bytespp;
++	int ds = (sy * info->var.xres + sx) * bytespp;
++
++	fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++		    height, width, info->fix.line_length, info->var.xres * bytespp,
++		    info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++		    int height, int width)
++{
++	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++	struct fbcon_ops *ops = info->fbcon_par;
++	u8 *dst;
++	int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++	transparent = (vc->vc_decor.bg_color == bg_color);
++	sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++	sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++	height *= vc->vc_font.height;
++	width *= vc->vc_font.width;
++
++	/* Don't paint the background image if console is blanked */
++	if (transparent && !ops->blank_state) {
++		decorfill(info, sy, sx, height, width);
++	} else {
++		dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++			     sx * ((info->var.bits_per_pixel + 7) >> 3));
++		decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++			  info->var.bits_per_pixel);
++	}
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++			    int bottom_only)
++{
++	unsigned int tw = vc->vc_cols*vc->vc_font.width;
++	unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++	if (!bottom_only) {
++		/* top margin */
++		decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++		/* left margin */
++		decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++		/* right margin */
++		decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th, 
++			   info->var.xres - vc->vc_decor.tx - tw);
++	}
++	decorfill(info, vc->vc_decor.ty + th, 0, 
++		   info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, 
++			   int sx, int dx, int width)
++{
++	u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++	u16 *s = d + (dx - sx);
++	u16 *start = d;
++	u16 *ls = d;
++	u16 *le = d + width;
++	u16 c;
++	int x = dx;
++	u16 attr = 1;
++
++	do {
++		c = scr_readw(d);
++		if (attr != (c & 0xff00)) {
++			attr = c & 0xff00;
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start;
++				start = d;
++			}
++		}
++		if (s >= ls && s < le && c == scr_readw(s)) {
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start + 1;
++				start = d + 1;
++			} else {
++				x++;
++				start++;
++			}
++		}
++		s++;
++		d++;
++	} while (d < le);
++	if (d > start)
++		fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++	if (blank) {
++		decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++			  info->fix.line_length, 0, info->var.bits_per_pixel);
++	} else {
++		update_screen(vc);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++}
++
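
The rendering core above makes the same per-pixel decision at every depth: a
set font bit selects the foreground color, and a clear bit selects either the
background picture pixel (when the cell's background equals the theme's
transparent color) or the plain background color. A depth-agnostic sketch of
that decision (decor_pixel is an illustrative name):

#include <stdint.h>

/* One pixel of decorated text, as in fbcon_decor_renderc()'s inner
 * loops: fontbit comes from the glyph bitmap, decor_px from bgdecor. */
static uint32_t decor_pixel(int fontbit, int transparent,
                            uint32_t fgx, uint32_t bgx, uint32_t decor_px)
{
        if (fontbit)
                return fgx;
        return transparent ? decor_px : bgx;
}

int main(void)
{
        /* A clear bit on a transparent cell shows the background picture. */
        return decor_pixel(0, 1, 0xffffffu, 0x0u, 0x123456u) == 0x123456u ? 0 : 1;
}
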
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index f447734..da50d61 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -79,6 +79,7 @@
+ #include <asm/irq.h>
+ 
+ #include "fbcon.h"
++#include "../console/fbcondecor.h"
+ 
+ #ifdef FBCONDEBUG
+ #  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -94,7 +95,7 @@ enum {
+ 
+ static struct display fb_display[MAX_NR_CONSOLES];
+ 
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+ 
+ static int logo_lines;
+@@ -286,7 +287,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ 		!vt_force_oops_output(vc);
+ }
+ 
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ 	      u16 c, int is_fg)
+ {
+ 	int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ 		info_idx = -1;
+ 	} else {
+ 		fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++		fbcon_decor_init();
++#endif
+ 	}
+ 
+ 	return err;
+@@ -1007,6 +1011,12 @@ static const char *fbcon_startup(void)
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
++
++	if (fbcon_decor_active(info, vc)) {
++		cols = vc->vc_decor.twidth / vc->vc_font.width;
++		rows = vc->vc_decor.theight / vc->vc_font.height;
++	}
++
+ 	vc_resize(vc, cols, rows);
+ 
+ 	DPRINTK("mode:   %s\n", info->fix.id);
+@@ -1036,7 +1046,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	cap = info->flags;
+ 
+ 	if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+-	    (info->fix.type == FB_TYPE_TEXT))
++	    (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ 		logo = 0;
+ 
+ 	if (var_to_display(p, &info->var, info))
+@@ -1260,6 +1270,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ 		fbcon_clear_margins(vc, 0);
+ 	}
+ 
++ 	if (fbcon_decor_active(info, vc)) {
++ 		fbcon_decor_clear(vc, info, sy, sx, height, width);
++ 		return;
++ 	}
++
+ 	/* Split blits that cross physical y_wrap boundary */
+ 
+ 	y_break = p->vrows - p->yscroll;
+@@ -1279,10 +1294,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ 	struct display *p = &fb_display[vc->vc_num];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+-			   get_color(vc, info, scr_readw(s), 1),
+-			   get_color(vc, info, scr_readw(s), 0));
++	if (!fbcon_is_inactive(vc, info)) {
++
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++		else
++			ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++				   get_color(vc, info, scr_readw(s), 1),
++				   get_color(vc, info, scr_readw(s), 0));
++	}
+ }
+ 
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1298,8 +1318,13 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ 	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->clear_margins(vc, info, bottom_only);
++	if (!fbcon_is_inactive(vc, info)) {
++	 	if (fbcon_decor_active(info, vc)) {
++	 		fbcon_decor_clear_margins(vc, info, bottom_only);
++ 		} else {
++			ops->clear_margins(vc, info, bottom_only);
++		}
++	}
+ }
+ 
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1819,7 +1844,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (softback_top)
+ 			fbcon_softback_note(vc, t, count);
+-		if (logo_shown >= 0)
++		if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ 			goto redraw_up;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+@@ -1912,6 +1937,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_down;
++		if (fbcon_decor_active(info, vc))
++			goto redraw_down;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2060,6 +2087,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ 		}
+ 		return;
+ 	}
++
++	if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ 		/* must use slower redraw bmove to keep background pic intact */
++ 		fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ 		return;
++ 	}
++
+ 	ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ 		   height, width);
+ }
+@@ -2130,8 +2164,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ 	var.yres = virt_h * virt_fh;
+ 	x_diff = info->var.xres - var.xres;
+ 	y_diff = info->var.yres - var.yres;
+-	if (x_diff < 0 || x_diff > virt_fw ||
+-	    y_diff < 0 || y_diff > virt_fh) {
++	if ((x_diff < 0 || x_diff > virt_fw ||
++		y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ 		const struct fb_videomode *mode;
+ 
+ 		DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2167,6 +2201,21 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+ 	ops = info->fbcon_par;
++	prev_console = ops->currcon;
++	if (prev_console != -1)
++		old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++	if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++		if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++			/* Clear the screen to avoid displaying funky colors during
++			 * palette updates. */
++			memset((u8*)info->screen_base + info->fix.line_length * info->var.yoffset,
++			       0, info->var.yres * info->fix.line_length);
++		}
++	}
++#endif
+ 
+ 	if (softback_top) {
+ 		if (softback_lines)
+@@ -2185,9 +2234,6 @@ static int fbcon_switch(struct vc_data *vc)
+ 		logo_shown = FBCON_LOGO_CANSHOW;
+ 	}
+ 
+-	prev_console = ops->currcon;
+-	if (prev_console != -1)
+-		old_info = registered_fb[con2fb_map[prev_console]];
+ 	/*
+ 	 * FIXME: If we have multiple fbdev's loaded, we need to
+ 	 * update all info->currcon.  Perhaps, we can place this
+@@ -2231,6 +2277,18 @@ static int fbcon_switch(struct vc_data *vc)
+ 			fbcon_del_cursor_timer(old_info);
+ 	}
+ 
++	if (fbcon_decor_active_vc(vc)) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (!vc_curr->vc_decor.theme ||
++			strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++			(fbcon_decor_active_nores(info, vc_curr) &&
++			 !fbcon_decor_active(info, vc_curr))) {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++	}
++
+ 	if (fbcon_is_inactive(vc, info) ||
+ 	    ops->blank_state != FB_BLANK_UNBLANK)
+ 		fbcon_del_cursor_timer(info);
+@@ -2339,15 +2397,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ 		}
+ 	}
+ 
+- 	if (!fbcon_is_inactive(vc, info)) {
++	if (!fbcon_is_inactive(vc, info)) {
+ 		if (ops->blank_state != blank) {
+ 			ops->blank_state = blank;
+ 			fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ 			ops->cursor_flash = (!blank);
+ 
+-			if (!(info->flags & FBINFO_MISC_USEREVENT))
+-				if (fb_blank(info, blank))
+-					fbcon_generic_blank(vc, info, blank);
++			if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++				if (fb_blank(info, blank)) {
++					if (fbcon_decor_active(info, vc))
++						fbcon_decor_blank(vc, info, blank);
++					else
++						fbcon_generic_blank(vc, info, blank);
++				}
++			}
+ 		}
+ 
+ 		if (!blank)
+@@ -2522,13 +2585,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ 	}
+ 
+ 	if (resize) {
++		/* reset wrap/pan */
+ 		int cols, rows;
+ 
+ 		cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++		if (fbcon_decor_active(info, vc)) {
++			info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++			cols = vc->vc_decor.twidth;
++			rows = vc->vc_decor.theight;
++		}
+ 		cols /= w;
+ 		rows /= h;
++
+ 		vc_resize(vc, cols, rows);
++
+ 		if (CON_IS_VISIBLE(vc) && softback_buf)
+ 			fbcon_update_softback(vc);
+ 	} else if (CON_IS_VISIBLE(vc)
+@@ -2657,7 +2729,11 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ 	int i, j, k, depth;
+ 	u8 val;
+ 
+-	if (fbcon_is_inactive(vc, info))
++	if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++			|| vc->vc_num != fg_console
++#endif
++		)
+ 		return -EINVAL;
+ 
+ 	if (!CON_IS_VISIBLE(vc))
+@@ -2683,14 +2759,56 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ 	} else
+ 		fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+ 
+-	return fb_set_cmap(&palette_cmap, info);
++	if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++	    info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++		u16 *red, *green, *blue;
++		int minlen = min(min(info->var.red.length, info->var.green.length),
++				     info->var.blue.length);
++		int h;
++
++		struct fb_cmap cmap = {
++			.start = 0,
++			.len = (1 << minlen),
++			.red = NULL,
++			.green = NULL,
++			.blue = NULL,
++			.transp = NULL
++		};
++
++		red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++		if (!red)
++			goto out;
++
++		green = red + 256;
++		blue = green + 256;
++		cmap.red = red;
++		cmap.green = green;
++		cmap.blue = blue;
++
++		for (i = 0; i < cmap.len; i++) {
++			red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++		}
++
++		h = fb_set_cmap(&cmap, info);
++		fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++		kfree(red);
++
++		return h;
++
++	} else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		   info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++		fb_set_cmap(&info->bgdecor.cmap, info);
++
++out:	return fb_set_cmap(&palette_cmap, info);
+ }
+ 
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+ {
+ 	unsigned long p;
+ 	int line;
+-	
++
+ 	if (vc->vc_num != fg_console || !softback_lines)
+ 		return (u16 *) (vc->vc_origin + offset);
+ 	line = offset / vc->vc_size_row;
+@@ -2909,7 +3027,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		} else {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++
+ 		updatescrollmode(p, info, vc);
+ 		scrollback_max = 0;
+ 		scrollback_current = 0;
+@@ -2954,7 +3079,9 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		}
+ 	}
+ 
+ 	if (fg != -1)
+@@ -3596,6 +3723,7 @@ static void fbcon_exit(void)
+ 		}
+ 	}
+ 
++	fbcon_decor_exit();
+ 	fbcon_has_exited = 1;
+ }
+ 
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 0000000..babc8c5
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,555 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ *  Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootsplash" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++
++#include <asm/uaccess.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++static int initialized = 0;
++
++int fbcon_decor_call_helper(char* cmd, unsigned short vc)
++{
++	char *envp[] = {
++		"HOME=/",
++		"PATH=/sbin:/bin",
++		NULL
++	};
++
++	char tfb[5];
++	char tcons[5];
++	unsigned char fb = (int) con2fb_map[vc];
++
++	char *argv[] = {
++		fbcon_decor_path,
++		"2",
++		cmd,
++		tcons,
++		tfb,
++		vc_cons[vc].d->vc_decor.theme,
++		NULL
++	};
++
++	snprintf(tfb,5,"%d",fb);
++	snprintf(tcons,5,"%d",vc);
++
++	return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++	struct fb_info* info;
++
++	if (!vc->vc_decor.state)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	vc->vc_decor.state = 0;
++	vc_resize(vc, info->var.xres / vc->vc_font.width,
++		  info->var.yres / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num && redraw) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++	struct fb_info* info;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++	    info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++	    vc->vc_num == fg_console))
++		return -EINVAL;
++
++	vc->vc_decor.state = 1;
++	vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++		  vc->vc_decor.theight / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++	int ret;
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_lock();
++	if (!state)
++		ret = fbcon_decor_disable(vc, 1);
++	else
++		ret = fbcon_decor_enable(vc);
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++	*state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	char *tmp;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL || !cfg->twidth || !cfg->theight ||
++	    cfg->tx + cfg->twidth  > info->var.xres ||
++	    cfg->ty + cfg->theight > info->var.yres)
++		return -EINVAL;
++
++	len = strlen_user(cfg->theme);
++	if (!len || len > FBCON_DECOR_THEME_LEN)
++		return -EINVAL;
++	tmp = kmalloc(len, GFP_KERNEL);
++	if (!tmp)
++		return -ENOMEM;
++	if (copy_from_user(tmp, (void __user *)cfg->theme, len)) {
++		kfree(tmp);
++		return -EFAULT;
++	}
++	cfg->theme = tmp;
++	cfg->state = 0;
++
++	/* If this ioctl is a response to a request from kernel, the console sem
++	 * is already held; we also don't need to disable decor because either the
++	 * new config and background picture will be successfully loaded, and the
++	 * decor will stay on, or in case of a failure it'll be turned off in fbcon. */
++//	if (origin == FBCON_DECOR_IO_ORIG_USER) {
++		console_lock();
++		if (vc->vc_decor.state)
++			fbcon_decor_disable(vc, 1);
++//	}
++
++	if (vc->vc_decor.theme)
++		kfree(vc->vc_decor.theme);
++
++	vc->vc_decor = *cfg;
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++			 vc->vc_num, vc->vc_decor.theme);
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc, struct vc_decor *decor)
++{
++	char __user *tmp;
++
++	tmp = decor->theme;
++	*decor = vc->vc_decor;
++	decor->theme = tmp;
++
++	if (vc->vc_decor.theme) {
++		if (copy_to_user(tmp, vc->vc_decor.theme, strlen(vc->vc_decor.theme) + 1))
++			return -EFAULT;
++	} else
++		if (put_user(0, tmp))
++			return -EFAULT;
++
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	u8 *tmp;
++
++	if (vc->vc_num != fg_console)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	if (img->width != info->var.xres || img->height != info->var.yres) {
++		printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++		printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height, info->var.xres, info->var.yres);
++		return -EINVAL;
++	}
++
++	if (img->depth != info->var.bits_per_pixel) {
++		printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++		return -EINVAL;
++	}
++
++	if (img->depth == 8) {
++		if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++		    !img->cmap.blue)
++			return -EINVAL;
++
++		tmp = vmalloc(img->cmap.len * 3 * 2);
++		if (!tmp)
++			return -ENOMEM;
++
++		if (copy_from_user(tmp,
++			    	   (void __user*)img->cmap.red, (img->cmap.len << 1)) ||
++		    copy_from_user(tmp + (img->cmap.len << 1),
++			    	   (void __user*)img->cmap.green, (img->cmap.len << 1)) ||
++		    copy_from_user(tmp + (img->cmap.len << 2),
++			    	   (void __user*)img->cmap.blue, (img->cmap.len << 1))) {
++			vfree(tmp);
++			return -EFAULT;
++		}
++
++		img->cmap.transp = NULL;
++		img->cmap.red = (u16*)tmp;
++		img->cmap.green = img->cmap.red + img->cmap.len;
++		img->cmap.blue = img->cmap.green + img->cmap.len;
++	} else {
++		img->cmap.red = NULL;
++	}
++
++	len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++	/*
++	 * Allocate an additional byte so that we never go outside of the
++	 * buffer boundaries in the rendering functions in a 24 bpp mode.
++	 */
++	tmp = vmalloc(len + 1);
++
++	if (!tmp)
++		goto out;
++
++	if (copy_from_user(tmp, (void __user*)img->data, len))
++		goto out;
++
++	img->data = tmp;
++
++	/* If this ioctl is a response to a request from kernel, the console sem
++	 * is already held. */
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_lock();
++
++	if (info->bgdecor.data)
++		vfree((u8*)info->bgdecor.data);
++	if (info->bgdecor.cmap.red)
++		vfree(info->bgdecor.cmap.red);
++
++	info->bgdecor = *img;
++
++	if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++//	if (origin == FBCON_DECOR_IO_ORIG_USER)
++		console_unlock();
++
++	return 0;
++
++out:	if (img->cmap.red)
++		vfree(img->cmap.red);
++
++	if (tmp)
++		vfree(tmp);
++	return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++	struct fbcon_decor_iowrapper __user *wrapper = (void __user*) arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data, &wrapper->data);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC:
++	{
++		struct fb_image img;
++		if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++	case FBIOCONDECOR_SETCFG:
++	{
++		struct vc_decor cfg;
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++	case FBIOCONDECOR_GETCFG:
++	{
++		int rval;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++			return -EFAULT;
++		return rval;
++	}
++	case FBIOCONDECOR_SETSTATE:
++	{
++		unsigned int state = 0;
++		if (get_user(state, (unsigned int __user *)data))
++			return -EFAULT;
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++	case FBIOCONDECOR_GETSTATE:
++	{
++		unsigned int state = 0;
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		return put_user(state, (unsigned int __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
++
++	struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	compat_uptr_t data_compat = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++                       sizeof(struct fbcon_decor_iowrapper32)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data_compat, &wrapper->data);
++	data = compat_ptr(data_compat);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC32:
++	{
++		struct fb_image32 img_compat;
++		struct fb_image img;
++
++		if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++			return -EFAULT;
++
++		fb_image_from_compat(img, img_compat);
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++
++	case FBIOCONDECOR_SETCFG32:
++	{
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++
++		vc_decor_from_compat(cfg, cfg_compat);
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++
++	case FBIOCONDECOR_GETCFG32:
++	{
++		int rval;
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		cfg.theme = compat_ptr(cfg_compat.theme);
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		vc_decor_to_compat(cfg_compat, cfg);
++
++		if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		return rval;
++	}
++
++	case FBIOCONDECOR_SETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		if (get_user(state_compat, (compat_uint_t __user *)data))
++			return -EFAULT;
++
++		state = (unsigned int)state_compat;
++
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++
++	case FBIOCONDECOR_GETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		state_compat = (compat_uint_t)state;
++
++		return put_user(state_compat, (compat_uint_t __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++#else
++  #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = fbcon_decor_ioctl,
++	.compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++	.minor = MISC_DYNAMIC_MINOR,
++	.name = "fbcondecor",
++	.fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++	int i;
++
++	for (i = 0; i < num_registered_fb; i++) {
++		registered_fb[i]->bgdecor.data = NULL;
++		registered_fb[i]->bgdecor.cmap.red = NULL;
++	}
++
++	for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++		vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++						vc_cons[i].d->vc_decor.theight = 0;
++		vc_cons[i].d->vc_decor.theme = NULL;
++	}
++
++	return;
++}
++
++int fbcon_decor_init(void)
++{
++	int i;
++
++	fbcon_decor_reset();
++
++	if (initialized)
++		return 0;
++
++	i = misc_register(&fbcon_decor_dev);
++	if (i) {
++		printk(KERN_ERR "fbcondecor: failed to register device\n");
++		return i;
++	}
++
++	fbcon_decor_call_helper("init", 0);
++	initialized = 1;
++	return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++	fbcon_decor_reset();
++	return 0;
++}
++
++EXPORT_SYMBOL(fbcon_decor_path);
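
For the ioctl entry points above, a userspace caller wraps its request in
struct fbcon_decor_iowrapper and issues it against the misc device registered
by the driver. A minimal sketch of enabling the decor on console 1, assuming a
kernel with this patch applied (so linux/fb.h provides the wrapper struct and
the FBIOCONDECOR_* constants) and assuming udev exposes the misc device as
/dev/fbcondecor:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>   /* patched header: iowrapper, FBIOCONDECOR_* */

int main(void)
{
        unsigned int state = 1;         /* 1 = enabled, 0 = disabled */
        struct fbcon_decor_iowrapper wrapper = {
                .vc     = 1,                         /* virtual console */
                .origin = FBCON_DECOR_IO_ORIG_USER,  /* no console sem held */
                .data   = &state,
        };
        int fd = open("/dev/fbcondecor", O_RDWR);

        if (fd < 0 || ioctl(fd, FBIOCONDECOR_SETSTATE, &wrapper) < 0) {
                perror("fbcondecor");
                return 1;
        }
        close(fd);
        return 0;
}
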
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 0000000..3b3724b
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,78 @@
++/* 
++ *  linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char* cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme) 
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x,y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x,y) (fbcon_decor_active_nores(x,y) &&		\
++			      x->bgdecor.width == x->var.xres && 	\
++			      x->bgdecor.height == x->var.yres &&	\
++			      x->bgdecor.depth == x->var.bits_per_pixel)
++
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char* cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x,y) (0)
++#define fbcon_decor_active(x,y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
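
The three activity tests above form a strict chain: the console must have decor
state and a theme, the framebuffer must have picture data, and the picture must
match the current video mode exactly. Restated as a plain function over
illustrative stand-in structs:

#include <stdbool.h>

struct decor_state { bool state; const char *theme; };
struct bgpic { const void *data; unsigned width, height, depth; };
struct mode { unsigned xres, yres, bpp; };

static bool decor_active(const struct decor_state *vc,
                         const struct bgpic *bg, const struct mode *m)
{
        return vc->state && vc->theme &&        /* fbcon_decor_active_vc  */
               bg->data &&                      /* ..._active_nores       */
               bg->width == m->xres &&          /* ..._active: exact mode */
               bg->height == m->yres &&
               bg->depth == m->bpp;
}

int main(void)
{
        static unsigned char px;
        struct decor_state vc = { true, "default" };
        struct bgpic bg = { &px, 1024, 768, 32 };
        struct mode m = { 1024, 768, 32 };

        return decor_active(&vc, &bg, &m) ? 0 : 1;
}
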
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index e1f4727..2952e33 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1204,7 +1204,6 @@ config FB_MATROX
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+-	select FB_TILEBLITTING
+ 	select FB_MACMODES if PPC_PMAC
+ 	---help---
+ 	  Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index f89245b..05e036c 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+     0x0000, 0xaaaa
+ };
+@@ -249,14 +251,17 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ 			if (transp)
+ 				htransp = *transp++;
+ 			if (info->fbops->fb_setcolreg(start++,
+-						      hred, hgreen, hblue,
++						      hred, hgreen, hblue, 
+ 						      htransp, info))
+ 				break;
+ 		}
+ 	}
+-	if (rc == 0)
++	if (rc == 0) {
+ 		fb_copy_cmap(cmap, &info->cmap);
+-
++		if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		    info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++			fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++	}
+ 	return rc;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index b6d5008..d6703f2 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1250,15 +1250,6 @@ struct fb_fix_screeninfo32 {
+ 	u16			reserved[3];
+ };
+ 
+-struct fb_cmap32 {
+-	u32			start;
+-	u32			len;
+-	compat_caddr_t	red;
+-	compat_caddr_t	green;
+-	compat_caddr_t	blue;
+-	compat_caddr_t	transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ 			  unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 0000000..04b8d80
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	char* theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 7f0c329..98f5d60 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -19,6 +19,7 @@
+ struct vt_struct;
+ 
+ #define NPAR 16
++#include <linux/console_decor.h>
+ 
+ struct vc_data {
+ 	struct tty_port port;			/* Upper level data */
+@@ -107,6 +108,8 @@ struct vc_data {
+ 	unsigned long	vc_uni_pagedir;
+ 	unsigned long	*vc_uni_pagedir_loc;  /* [!] Location of uni_pagedir variable for this console */
+ 	bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++	struct vc_decor vc_decor;
+ 	/* additional information is in vt_kern.h */
+ };
+ 
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index fe6ac95..1e36b03 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -219,6 +219,34 @@ struct fb_deferred_io {
+ };
+ #endif
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++	__u32 dx;			/* Where to place image */
++	__u32 dy;
++	__u32 width;			/* Size of image */
++	__u32 height;
++	__u32 fg_color;			/* Only used when a mono bitmap */
++	__u32 bg_color;
++	__u8  depth;			/* Depth of the image */
++	const compat_uptr_t data;	/* Pointer to image data */
++	struct fb_cmap32 cmap;		/* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++	(to).dx       = (from).dx; \
++	(to).dy       = (from).dy; \
++	(to).width    = (from).width; \
++	(to).height   = (from).height; \
++	(to).fg_color = (from).fg_color; \
++	(to).bg_color = (from).bg_color; \
++	(to).depth    = (from).depth; \
++	(to).data     = compat_ptr((from).data); \
++	fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+  * Frame buffer operations
+  *
+@@ -489,6 +517,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED	1
+ 	u32 state;			/* Hardware state i.e suspend */
+ 	void *fbcon_par;                /* fbcon use-only private area */
++
++	struct fb_image bgdecor;
++
+ 	/* From here on everything is device dependent */
+ 	void *par;
+ 	/* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index fb795c3..dc77a03 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -8,6 +8,25 @@
+ 
+ #define FB_MAX			32	/* sufficient for now */
+ 
++struct fbcon_decor_iowrapper
++{
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32
++{
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+    0x46 is 'F'								*/
+ #define FBIOGET_VSCREENINFO	0x4600
+@@ -35,6 +54,25 @@
+ #define FBIOGET_DISPINFO        0x4618
+ #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
+ 
++#define FBIOCONDECOR_SETCFG	_IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG	_IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE	_IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC 	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32	_IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32	_IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32	_IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN		128	/* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL	0	/* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER	1	/* User ioctl origin */
++ 
+ #define FB_TYPE_PACKED_PIXELS		0	/* Packed Pixels	*/
+ #define FB_TYPE_PLANES			1	/* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES	2	/* Interleaved planes	*/
+@@ -277,6 +315,29 @@ struct fb_var_screeninfo {
+ 	__u32 reserved[4];		/* Reserved for future compatibility */
+ };
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++	__u32 start;
++	__u32 len;			/* Number of entries */
++	compat_uptr_t red;		/* Red values	*/
++	compat_uptr_t green;
++	compat_uptr_t blue;
++	compat_uptr_t transp;		/* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++	(to).start  = (from).start; \
++	(to).len    = (from).len; \
++	(to).red    = compat_ptr((from).red); \
++	(to).green  = compat_ptr((from).green); \
++	(to).blue   = compat_ptr((from).blue); \
++	(to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ 	__u32 start;			/* First entry	*/
+ 	__u32 len;			/* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 74f5b58..6386ab0 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -146,6 +146,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+ 
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -255,6 +259,15 @@ static struct ctl_table sysctl_base_table[] = {
+ 		.mode		= 0555,
+ 		.child		= dev_table,
+ 	},
++#ifdef CONFIG_FB_CON_DECOR
++	{
++		.procname	= "fbcondecor",
++		.data		= &fbcon_decor_path,
++		.maxlen		= KMOD_PATH_LEN,
++		.mode		= 0644,
++		.proc_handler	= &proc_dostring,
++	},
++#endif
+ 	{ }
+ };
+ 

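For reference, the sysctl hunk above registers fbcon_decor_path as a
top-level /proc/sys entry. The following is a minimal sketch of how that
interface might be exercised at runtime, assuming a kernel built with
CONFIG_FB_CON_DECOR=y; the helper path shown is illustrative and not part
of this patch:

  # Read the helper path currently registered with the kernel
  cat /proc/sys/fbcondecor
  # Point the kernel at a different userspace helper (hypothetical path)
  echo /sbin/fbcondecor_helper > /proc/sys/fbcondecor

The FBIOCONDECOR_* ioctls added to include/uapi/linux/fb.h are then issued
against a framebuffer device by such a helper to set the decor config,
state and background picture.
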
diff --git a/5000_enable-additional-cpu-optimizations-for-gcc.patch b/5000_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..f7ab6f0
--- /dev/null
+++ b/5000_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,327 @@
+This patch has been tested on and known to work with kernel versions from 3.2
+up to the latest git version (pulled on 12/14/2013).
+
+This patch will expand the number of microarchitectures to include new
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 2nd Gen Core
+i3/i5/i7 (Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), and Intel 4th
+Gen Core i3/i5/i7 (Haswell). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective
+microarchitectures.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version <4.9
+
+---
+diff -uprN a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
+--- a/arch/x86/include/asm/module.h	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/include/asm/module.h	2013-12-15 06:21:24.351122516 -0500
+@@ -15,6 +15,16 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MCOREI7
++#define MODULE_PROC_FAMILY "COREI7 "
++#elif defined CONFIG_MCOREI7AVX
++#define MODULE_PROC_FAMILY "COREI7AVX "
++#elif defined CONFIG_MCOREAVXI
++#define MODULE_PROC_FAMILY "COREAVXI "
++#elif defined CONFIG_MCOREAVX2
++#define MODULE_PROC_FAMILY "COREAVX2 "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +43,18 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+diff -uprN a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+--- a/arch/x86/Kconfig.cpu	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Kconfig.cpu	2013-12-15 06:21:24.351122516 -0500
+@@ -139,7 +139,7 @@ config MPENTIUM4
+ 
+ 
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -147,7 +147,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -155,12 +155,55 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Barcelona and newer processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Bobcat processors.
++
++	  Enables -march=btver1
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Jaguar processors.
++
++	  Enables -march=btver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -251,8 +294,17 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +312,40 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MCOREI7
++	bool "Intel Core i7"
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for the Intel Nehalem platform. Intel Nehalem processors
++	  include Core i3, i5, i7, Xeon: 34xx, 35xx, 55xx, 56xx, 75xx processors.
++
++	  Enables -march=corei7
++
++config MCOREI7AVX
++	bool "Intel Core 2nd Gen AVX"
++	---help---
++
++	  Select this for 2nd Gen Core processors including Sandy Bridge.
++
++	  Enables -march=corei7-avx
++
++config MCOREAVXI
++	bool "Intel Core 3rd Gen AVX"
++	---help---
++
++	  Select this for 3rd Gen Core processors including Ivy Bridge.
++
++	  Enables -march=core-avx-i
++
++config MCOREAVX2
++	bool "Intel Core AVX2"
++	---help---
++
++	  Select this for AVX2 enabled processors including Haswell.
++
++	  Enables -march=core-avx2
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -276,6 +354,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++	bool "Native optimizations autodetected by GCC"
++	---help---
++
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. -march=native
++	  also detects and applies additional settings beyond -march specific
++	  to your CPU (e.g. -msse4). Unless you have a specific reason not to
++	  (e.g. distcc cross-compiling), you should probably be using
++	  -march=native rather than anything listed below.
++
++	  Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -300,7 +391,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MVIAC7 || X86_GENERIC || MNATIVE || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -331,11 +422,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || MNATIVE || X86_GENERIC || MK8 || MK7 || MK10 || MBARCELONA || MEFFICEON || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -363,17 +454,17 @@ config X86_P6_NOP
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MATOM) || X86_64 || MNATIVE
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++	depends on X86_PAE || X86_64 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+diff -uprN a/arch/x86/Makefile b/arch/x86/Makefile
+--- a/arch/x86/Makefile	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile	2013-12-15 06:21:24.354455723 -0500
+@@ -61,11 +61,26 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MCOREI7) += \
++                $(call cc-option,-march=corei7,$(call cc-option,-mtune=corei7))
++        cflags-$(CONFIG_MCOREI7AVX) += \
++                $(call cc-option,-march=corei7-avx,$(call cc-option,-mtune=corei7-avx))
++        cflags-$(CONFIG_MCOREAVXI) += \
++                $(call cc-option,-march=core-avx-i,$(call cc-option,-mtune=core-avx-i))
++        cflags-$(CONFIG_MCOREAVX2) += \
++                $(call cc-option,-march=core-avx2,$(call cc-option,-mtune=core-avx2))
+ 	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+ 		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+diff -uprN a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu
+--- a/arch/x86/Makefile_32.cpu	2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu	2013-12-15 06:21:24.354455723 -0500
+@@ -23,7 +23,14 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+# Please note that patches that add -march=athlon-xp and friends are pointless.
+# They make zero difference whatsoever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,6 +39,10 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
++cflags-$(CONFIG_MCOREI7)	+= -march=i686 $(call tune,corei7)
++cflags-$(CONFIG_MCOREI7AVX)	+= -march=i686 $(call tune,corei7-avx)
++cflags-$(CONFIG_MCOREAVXI)	+= -march=i686 $(call tune,core-avx-i)
++cflags-$(CONFIG_MCOREAVX2)	+= -march=i686 $(call tune,core-avx2)
+ cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+ 	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))

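As a quick sanity check for the MNATIVE option added above, GCC can report
what -march=native resolves to on the build host. This is a plain GCC
invocation, independent of the patch:

  # Show the -march/-mtune values GCC infers for the local CPU
  gcc -Q --help=target -march=native | grep -E 'march|mtune'

If the reported -march matches one of the microarchitectures listed in
Kconfig.cpu, selecting that option explicitly should produce comparable
code for that machine.
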
diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
new file mode 100644
index 0000000..468d157
--- /dev/null
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
@@ -0,0 +1,104 @@
+From 63e26848e2df36a3c29d2d38ce8b008539d64a5d Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Tue, 7 Apr 2015 13:39:12 +0200
+Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v7r7-4.0
+
+Update Kconfig.iosched and do the related Makefile changes to include
+kernel configuration options for BFQ. Also add the bfqio controller
+to the cgroups subsystem.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+---
+ block/Kconfig.iosched         | 32 ++++++++++++++++++++++++++++++++
+ block/Makefile                |  1 +
+ include/linux/cgroup_subsys.h |  4 ++++
+ 3 files changed, 37 insertions(+)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 421bef9..0ee5f0f 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
+ 	---help---
+ 	  Enable group IO scheduling in CFQ.
+ 
++config IOSCHED_BFQ
++	tristate "BFQ I/O scheduler"
++	default n
++	---help---
++	  The BFQ I/O scheduler tries to distribute bandwidth among
++	  all processes according to their weights.
++	  It aims at distributing the bandwidth as desired, independently of
++	  the disk parameters and with any workload. It also tries to
++	  guarantee low latency to interactive and soft real-time
++	  applications. If compiled built-in (saying Y here), BFQ can
++	  be configured to support hierarchical scheduling.
++
++config CGROUP_BFQIO
++	bool "BFQ hierarchical scheduling support"
++	depends on CGROUPS && IOSCHED_BFQ=y
++	default n
++	---help---
++	  Enable hierarchical scheduling in BFQ, using the cgroups
++	  filesystem interface.  The name of the subsystem will be
++	  bfqio.
++
+ choice
+ 	prompt "Default I/O scheduler"
+ 	default DEFAULT_CFQ
+@@ -52,6 +73,16 @@ choice
+ 	config DEFAULT_CFQ
+ 		bool "CFQ" if IOSCHED_CFQ=y
+ 
++	config DEFAULT_BFQ
++		bool "BFQ" if IOSCHED_BFQ=y
++		help
++		  Selects BFQ as the default I/O scheduler which will be
++		  used by default for all block devices.
++		  The BFQ I/O scheduler aims at distributing the bandwidth
++		  as desired, independently of the disk parameters and with
++		  any workload. It also tries to guarantee low latency to
++		  interactive and soft real-time applications.
++
+ 	config DEFAULT_NOOP
+ 		bool "No-op"
+ 
+@@ -61,6 +92,7 @@ config DEFAULT_IOSCHED
+ 	string
+ 	default "deadline" if DEFAULT_DEADLINE
+ 	default "cfq" if DEFAULT_CFQ
++	default "bfq" if DEFAULT_BFQ
+ 	default "noop" if DEFAULT_NOOP
+ 
+ endmenu
+diff --git a/block/Makefile b/block/Makefile
+index 00ecc97..1ed86d5 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
+ obj-$(CONFIG_IOSCHED_NOOP)	+= noop-iosched.o
+ obj-$(CONFIG_IOSCHED_DEADLINE)	+= deadline-iosched.o
+ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
++obj-$(CONFIG_IOSCHED_BFQ)	+= bfq-iosched.o
+ 
+ obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
+ obj-$(CONFIG_BLK_CMDLINE_PARSER)	+= cmdline-parser.o
+diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
+index e4a96fb..267d681 100644
+--- a/include/linux/cgroup_subsys.h
++++ b/include/linux/cgroup_subsys.h
+@@ -35,6 +35,10 @@ SUBSYS(freezer)
+ SUBSYS(net_cls)
+ #endif
+ 
++#if IS_ENABLED(CONFIG_CGROUP_BFQIO)
++SUBSYS(bfqio)
++#endif
++
+ #if IS_ENABLED(CONFIG_CGROUP_PERF)
+ SUBSYS(perf_event)
+ #endif
+-- 
+2.1.0
+

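Once both BFQ patches are applied and the kernel is built with
CONFIG_IOSCHED_BFQ=y (and optionally CONFIG_CGROUP_BFQIO=y), the scheduler
can be selected per device through the usual elevator sysfs knob. A minimal
sketch; the device name and cgroup mount point are illustrative:

  # List the available schedulers; the active one is bracketed
  cat /sys/block/sda/queue/scheduler
  # Switch this device to BFQ at runtime
  echo bfq > /sys/block/sda/queue/scheduler
  # With CGROUP_BFQIO, per-group weights go through the bfqio controller
  echo 500 > /sys/fs/cgroup/bfqio/mygroup/bfqio.weight

Selecting DEFAULT_BFQ in the Kconfig hunk above instead makes bfq the
default I/O scheduler for all block devices at boot.
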
diff --git a/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1 b/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
new file mode 100644
index 0000000..a6cfc58
--- /dev/null
+++ b/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
@@ -0,0 +1,6966 @@
+From 8cdf2dae6ee87049c7bb086d34e2ce981b545813 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Thu, 9 May 2013 19:10:02 +0200
+Subject: [PATCH 2/3] block: introduce the BFQ-v7r7 I/O sched for 4.0
+
+Add the BFQ-v7r7 I/O scheduler to 4.0.
+The general structure is borrowed from CFQ, as is much of the code for
+handling I/O contexts. Over time, several useful features have been
+ported from CFQ as well (details in the changelog in README.BFQ). A
+(bfq_)queue is associated to each task doing I/O on a device, and each
+time a scheduling decision has to be made a queue is selected and served
+until it expires.
+
+    - Slices are given in the service domain: tasks are assigned
+      budgets, measured in number of sectors. Once granted the disk, a task
+      must however consume its assigned budget within a configurable
+      maximum time (by default, the maximum possible value of the
+      budgets is automatically computed to comply with this timeout).
+      This allows the desired latency vs "throughput boosting" tradeoff
+      to be set.
+
+    - Budgets are scheduled according to a variant of WF2Q+, implemented
+      using an augmented rb-tree to take eligibility into account while
+      preserving an O(log N) overall complexity.
+
+    - A low-latency tunable is provided; if enabled, both interactive
+      and soft real-time applications are guaranteed a very low latency.
+
+    - Latency guarantees are also preserved in the presence of NCQ.
+
+    - High throughput is achieved with flash-based devices as well,
+      while still preserving latency guarantees.
+
+    - BFQ features Early Queue Merge (EQM), a sort of fusion of the
+      cooperating-queue-merging and the preemption mechanisms present
+      in CFQ. EQM is in fact a unified mechanism that tries to get a
+      sequential read pattern, and hence a high throughput, with any
+      set of processes performing interleaved I/O over a contiguous
+      sequence of sectors.
+
+    - BFQ supports full hierarchical scheduling, exporting a cgroups
+      interface.  Since each node has a full scheduler, each group can
+      be assigned its own weight.
+
+    - If the cgroups interface is not used, only I/O priorities can be
+      assigned to processes, with ioprio values mapped to weights
+      with the relation weight = IOPRIO_BE_NR - ioprio.
+
+    - ioprio classes are served in strict priority order, i.e., lower
+      priority queues are not served as long as there are higher
+      priority queues.  Among queues in the same class the bandwidth is
+      distributed in proportion to the weight of each queue. A very
+      thin extra bandwidth is however guaranteed to the Idle class, to
+      prevent it from starving.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+---
+ block/bfq-cgroup.c  |  936 ++++++++++++
+ block/bfq-ioc.c     |   36 +
+ block/bfq-iosched.c | 3902 +++++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c   | 1214 ++++++++++++++++
+ block/bfq.h         |  775 ++++++++++
+ 5 files changed, 6863 insertions(+)
+ create mode 100644 block/bfq-cgroup.c
+ create mode 100644 block/bfq-ioc.c
+ create mode 100644 block/bfq-iosched.c
+ create mode 100644 block/bfq-sched.c
+ create mode 100644 block/bfq.h
+
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+new file mode 100644
+index 0000000..11e2f1d
+--- /dev/null
++++ b/block/bfq-cgroup.c
+@@ -0,0 +1,936 @@
++/*
++ * BFQ: CGROUPS support.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ */
++
++#ifdef CONFIG_CGROUP_BFQIO
++
++static DEFINE_MUTEX(bfqio_mutex);
++
++static bool bfqio_is_removed(struct bfqio_cgroup *bgrp)
++{
++	return bgrp ? !bgrp->online : false;
++}
++
++static struct bfqio_cgroup bfqio_root_cgroup = {
++	.weight = BFQ_DEFAULT_GRP_WEIGHT,
++	.ioprio = BFQ_DEFAULT_GRP_IOPRIO,
++	.ioprio_class = BFQ_DEFAULT_GRP_CLASS,
++};
++
++static inline void bfq_init_entity(struct bfq_entity *entity,
++				   struct bfq_group *bfqg)
++{
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class;
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static struct bfqio_cgroup *css_to_bfqio(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct bfqio_cgroup, css) : NULL;
++}
++
++/*
++ * Search the hash table (for now only a list) of bgrp for the bfq_group
++ * associated with bfqd.  Must be called under rcu_read_lock().
++ */
++static struct bfq_group *bfqio_lookup_group(struct bfqio_cgroup *bgrp,
++					    struct bfq_data *bfqd)
++{
++	struct bfq_group *bfqg;
++	void *key;
++
++	hlist_for_each_entry_rcu(bfqg, &bgrp->group_data, group_node) {
++		key = rcu_dereference(bfqg->bfqd);
++		if (key == bfqd)
++			return bfqg;
++	}
++
++	return NULL;
++}
++
++static inline void bfq_group_init_entity(struct bfqio_cgroup *bgrp,
++					 struct bfq_group *bfqg)
++{
++	struct bfq_entity *entity = &bfqg->entity;
++
++	/*
++	 * If the weight of the entity has never been set via the sysfs
++	 * interface, then bgrp->weight == 0. In this case we initialize
++	 * the weight from the current ioprio value. Otherwise, the group
++	 * weight, if set, has priority over the ioprio value.
++	 */
++	if (bgrp->weight == 0) {
++		entity->new_weight = bfq_ioprio_to_weight(bgrp->ioprio);
++		entity->new_ioprio = bgrp->ioprio;
++	} else {
++		if (bgrp->weight < BFQ_MIN_WEIGHT ||
++		    bgrp->weight > BFQ_MAX_WEIGHT) {
++			printk(KERN_CRIT "bfq_group_init_entity: "
++					 "bgrp->weight %d\n", bgrp->weight);
++			BUG();
++		}
++		entity->new_weight = bgrp->weight;
++		entity->new_ioprio = bfq_weight_to_ioprio(bgrp->weight);
++	}
++	entity->orig_weight = entity->weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class = bgrp->ioprio_class;
++	entity->my_sched_data = &bfqg->sched_data;
++	bfqg->active_entities = 0;
++}
++
++static inline void bfq_group_set_parent(struct bfq_group *bfqg,
++					struct bfq_group *parent)
++{
++	struct bfq_entity *entity;
++
++	BUG_ON(parent == NULL);
++	BUG_ON(bfqg == NULL);
++
++	entity = &bfqg->entity;
++	entity->parent = parent->my_entity;
++	entity->sched_data = &parent->sched_data;
++}
++
++/**
++ * bfq_group_chain_alloc - allocate a chain of groups.
++ * @bfqd: queue descriptor.
++ * @css: the leaf cgroup_subsys_state this chain starts from.
++ *
++ * Allocate a chain of groups starting from the one belonging to
++ * @cgroup up to the root cgroup.  Stop if a cgroup on the chain
++ * to the root has already an allocated group on @bfqd.
++ */
++static struct bfq_group *bfq_group_chain_alloc(struct bfq_data *bfqd,
++					       struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp;
++	struct bfq_group *bfqg, *prev = NULL, *leaf = NULL;
++
++	for (; css != NULL; css = css->parent) {
++		bgrp = css_to_bfqio(css);
++
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		if (bfqg != NULL) {
++			/*
++			 * All the cgroups in the path from there to the
++			 * root must have a bfq_group for bfqd, so we don't
++			 * need any more allocations.
++			 */
++			break;
++		}
++
++		bfqg = kzalloc(sizeof(*bfqg), GFP_ATOMIC);
++		if (bfqg == NULL)
++			goto cleanup;
++
++		bfq_group_init_entity(bgrp, bfqg);
++		bfqg->my_entity = &bfqg->entity;
++
++		if (leaf == NULL) {
++			leaf = bfqg;
++			prev = leaf;
++		} else {
++			bfq_group_set_parent(prev, bfqg);
++			/*
++			 * Build a list of allocated nodes using the bfqd
++			 * field, which is still unused and will be
++			 * initialized only after the node is
++			 * connected.
++			 */
++			prev->bfqd = bfqg;
++			prev = bfqg;
++		}
++	}
++
++	return leaf;
++
++cleanup:
++	while (leaf != NULL) {
++		prev = leaf;
++		leaf = leaf->bfqd;
++		kfree(prev);
++	}
++
++	return NULL;
++}
++
++/**
++ * bfq_group_chain_link - link an allocated group chain to a cgroup
++ *                        hierarchy.
++ * @bfqd: the queue descriptor.
++ * @css: the leaf cgroup_subsys_state to start from.
++ * @leaf: the leaf group (to be associated to @cgroup).
++ *
++ * Try to link a chain of groups to a cgroup hierarchy, connecting the
++ * nodes bottom-up, so we can be sure that when we find a cgroup in the
++ * hierarchy that already has a group associated to @bfqd all the nodes
++ * in the path to the root cgroup have one too.
++ *
++ * On locking: the queue lock protects the hierarchy (there is a hierarchy
++ * per device) while the bfqio_cgroup lock protects the list of groups
++ * belonging to the same cgroup.
++ */
++static void bfq_group_chain_link(struct bfq_data *bfqd,
++				 struct cgroup_subsys_state *css,
++				 struct bfq_group *leaf)
++{
++	struct bfqio_cgroup *bgrp;
++	struct bfq_group *bfqg, *next, *prev = NULL;
++	unsigned long flags;
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++
++	for (; css != NULL && leaf != NULL; css = css->parent) {
++		bgrp = css_to_bfqio(css);
++		next = leaf->bfqd;
++
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		BUG_ON(bfqg != NULL);
++
++		spin_lock_irqsave(&bgrp->lock, flags);
++
++		rcu_assign_pointer(leaf->bfqd, bfqd);
++		hlist_add_head_rcu(&leaf->group_node, &bgrp->group_data);
++		hlist_add_head(&leaf->bfqd_node, &bfqd->group_list);
++
++		spin_unlock_irqrestore(&bgrp->lock, flags);
++
++		prev = leaf;
++		leaf = next;
++	}
++
++	BUG_ON(css == NULL && leaf != NULL);
++	if (css != NULL && prev != NULL) {
++		bgrp = css_to_bfqio(css);
++		bfqg = bfqio_lookup_group(bgrp, bfqd);
++		bfq_group_set_parent(prev, bfqg);
++	}
++}
++
++/**
++ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
++ * @bfqd: queue descriptor.
++ * @cgroup: cgroup being searched for.
++ *
++ * Return a group associated to @bfqd in @cgroup, allocating one if
++ * necessary.  When a group is returned all the cgroups in the path
++ * to the root have a group associated to @bfqd.
++ *
++ * If the allocation fails, return the root group: this breaks guarantees
++ * but is a safe fallback.  If this loss becomes a problem it can be
++ * mitigated using the equivalent weight (given by the product of the
++ * weights of the groups in the path from @group to the root) in the
++ * root scheduler.
++ *
++ * We allocate all the missing nodes in the path from the leaf cgroup
++ * to the root and we connect the nodes only after all the allocations
++ * have been successful.
++ */
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++					      struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++	struct bfq_group *bfqg;
++
++	bfqg = bfqio_lookup_group(bgrp, bfqd);
++	if (bfqg != NULL)
++		return bfqg;
++
++	bfqg = bfq_group_chain_alloc(bfqd, css);
++	if (bfqg != NULL)
++		bfq_group_chain_link(bfqd, css, bfqg);
++	else
++		bfqg = bfqd->root_group;
++
++	return bfqg;
++}
++
++/**
++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
++ * @bfqd: queue descriptor.
++ * @bfqq: the queue to move.
++ * @entity: @bfqq's entity.
++ * @bfqg: the group to move to.
++ *
++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
++ * it on the new one.  Avoid putting the entity on the old group idle tree.
++ *
++ * Must be called under the queue lock; the cgroup owning @bfqg must
++ * not disappear (by now this just means that we are called under
++ * rcu_read_lock()).
++ */
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  struct bfq_entity *entity, struct bfq_group *bfqg)
++{
++	int busy, resume;
++
++	busy = bfq_bfqq_busy(bfqq);
++	resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++
++	BUG_ON(resume && !entity->on_st);
++	BUG_ON(busy && !resume && entity->on_st &&
++	       bfqq != bfqd->in_service_queue);
++
++	if (busy) {
++		BUG_ON(atomic_read(&bfqq->ref) < 2);
++
++		if (!resume)
++			bfq_del_bfqq_busy(bfqd, bfqq, 0);
++		else
++			bfq_deactivate_bfqq(bfqd, bfqq, 0);
++	} else if (entity->on_st)
++		bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++
++	/*
++	 * Here we use a reference to bfqg.  We don't need a refcounter
++	 * as the cgroup reference will not be dropped, so that its
++	 * destroy() callback will not be invoked.
++	 */
++	entity->parent = bfqg->my_entity;
++	entity->sched_data = &bfqg->sched_data;
++
++	if (busy && resume)
++		bfq_activate_bfqq(bfqd, bfqq);
++
++	if (bfqd->in_service_queue == NULL && !bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++}
++
++/**
++ * __bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bfqd: the queue descriptor.
++ * @bic: the bic to move.
++ * @cgroup: the cgroup to move to.
++ *
++ * Move bic to cgroup, assuming that bfqd->queue is locked; the caller
++ * has to make sure that the reference to cgroup is valid across the call.
++ *
++ * NOTE: an alternative approach might have been to store the current
++ * cgroup in bfqq and getting a reference to it, reducing the lookup
++ * time here, at the price of slightly more complex code.
++ */
++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++						struct bfq_io_cq *bic,
++						struct cgroup_subsys_state *css)
++{
++	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
++	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++	struct bfq_entity *entity;
++	struct bfq_group *bfqg;
++	struct bfqio_cgroup *bgrp;
++
++	bgrp = css_to_bfqio(css);
++
++	bfqg = bfq_find_alloc_group(bfqd, css);
++	if (async_bfqq != NULL) {
++		entity = &async_bfqq->entity;
++
++		if (entity->sched_data != &bfqg->sched_data) {
++			bic_set_bfqq(bic, NULL, 0);
++			bfq_log_bfqq(bfqd, async_bfqq,
++				     "bic_change_group: %p %d",
++				     async_bfqq, atomic_read(&async_bfqq->ref));
++			bfq_put_queue(async_bfqq);
++		}
++	}
++
++	if (sync_bfqq != NULL) {
++		entity = &sync_bfqq->entity;
++		if (entity->sched_data != &bfqg->sched_data)
++			bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++	}
++
++	return bfqg;
++}
++
++/**
++ * bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bic: the bic being migrated.
++ * @cgroup: the destination cgroup.
++ *
++ * When the task owning @bic is moved to @cgroup, @bic is immediately
++ * moved into its new parent group.
++ */
++static void bfq_bic_change_cgroup(struct bfq_io_cq *bic,
++				  struct cgroup_subsys_state *css)
++{
++	struct bfq_data *bfqd;
++	unsigned long uninitialized_var(flags);
++
++	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++				   &flags);
++	if (bfqd != NULL) {
++		__bfq_bic_change_cgroup(bfqd, bic, css);
++		bfq_put_bfqd_unlock(bfqd, &flags);
++	}
++}
++
++/**
++ * bfq_bic_update_cgroup - update the cgroup of @bic.
++ * @bic: the @bic to update.
++ *
++ * Make sure that @bic is enqueued in the cgroup of the current task.
++ * We need this in addition to moving bics during the cgroup attach
++ * phase because the task owning @bic could be at its first disk
++ * access or we may end up in the root cgroup as the result of a
++ * memory allocation failure and here we try to move to the right
++ * group.
++ *
++ * Must be called under the queue lock.  It is safe to use the returned
++ * value even after the rcu_read_unlock() as the migration/destruction
++ * paths act under the queue lock too.  IOW it is impossible to race with
++ * group migration/destruction and end up with an invalid group as:
++ *   a) here cgroup has not yet been destroyed, nor its destroy callback
++ *      has started execution, as current holds a reference to it,
++ *   b) if it is destroyed after rcu_read_unlock() [after current is
++ *      migrated to a different cgroup] its attach() callback will have
++ *      taken care of removing all the references to the old cgroup data.
++ */
++static struct bfq_group *bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	struct bfq_group *bfqg;
++	struct cgroup_subsys_state *css;
++
++	BUG_ON(bfqd == NULL);
++
++	rcu_read_lock();
++	css = task_css(current, bfqio_cgrp_id);
++	bfqg = __bfq_bic_change_cgroup(bfqd, bic, css);
++	rcu_read_unlock();
++
++	return bfqg;
++}
++
++/**
++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
++ * @st: the service tree being flushed.
++ */
++static inline void bfq_flush_idle_tree(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entity = st->first_idle;
++
++	for (; entity != NULL; entity = st->first_idle)
++		__bfq_deactivate_entity(entity, 0);
++}
++
++/**
++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
++ * @bfqd: the device data structure with the root group.
++ * @entity: the entity to move.
++ */
++static inline void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
++					    struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(bfqq == NULL);
++	bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++	return;
++}
++
++/**
++ * bfq_reparent_active_entities - move to the root group all active
++ *                                entities.
++ * @bfqd: the device data structure with the root group.
++ * @bfqg: the group to move from.
++ * @st: the service tree with the entities.
++ *
++ * Needs queue_lock to be taken and reference to be valid over the call.
++ */
++static inline void bfq_reparent_active_entities(struct bfq_data *bfqd,
++						struct bfq_group *bfqg,
++						struct bfq_service_tree *st)
++{
++	struct rb_root *active = &st->active;
++	struct bfq_entity *entity = NULL;
++
++	if (!RB_EMPTY_ROOT(&st->active))
++		entity = bfq_entity_of(rb_first(active));
++
++	for (; entity != NULL; entity = bfq_entity_of(rb_first(active)))
++		bfq_reparent_leaf_entity(bfqd, entity);
++
++	if (bfqg->sched_data.in_service_entity != NULL)
++		bfq_reparent_leaf_entity(bfqd,
++			bfqg->sched_data.in_service_entity);
++
++	return;
++}
++
++/**
++ * bfq_destroy_group - destroy @bfqg.
++ * @bgrp: the bfqio_cgroup containing @bfqg.
++ * @bfqg: the group being destroyed.
++ *
++ * Destroy @bfqg, making sure that it is not referenced from its parent.
++ */
++static void bfq_destroy_group(struct bfqio_cgroup *bgrp, struct bfq_group *bfqg)
++{
++	struct bfq_data *bfqd;
++	struct bfq_service_tree *st;
++	struct bfq_entity *entity = bfqg->my_entity;
++	unsigned long uninitialized_var(flags);
++	int i;
++
++	hlist_del(&bfqg->group_node);
++
++	/*
++	 * Empty all service_trees belonging to this group before
++	 * deactivating the group itself.
++	 */
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
++		st = bfqg->sched_data.service_tree + i;
++
++		/*
++		 * The idle tree may still contain bfq_queues belonging
++		 * to exited tasks because they never migrated to a different
++		 * cgroup from the one being destroyed now.  No one else
++		 * can access them so it's safe to act without any lock.
++		 */
++		bfq_flush_idle_tree(st);
++
++		/*
++		 * It may happen that some queues are still active
++		 * (busy) upon group destruction (if the corresponding
++		 * processes have been forced to terminate). We move
++		 * all the leaf entities corresponding to these queues
++		 * to the root_group.
++		 * Also, it may happen that the group has an entity
++		 * in service, which is disconnected from the active
++		 * tree: it must be moved, too.
++		 * There is no need to put the sync queues, as the
++		 * scheduler has taken no reference.
++		 */
++		bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++		if (bfqd != NULL) {
++			bfq_reparent_active_entities(bfqd, bfqg, st);
++			bfq_put_bfqd_unlock(bfqd, &flags);
++		}
++		BUG_ON(!RB_EMPTY_ROOT(&st->active));
++		BUG_ON(!RB_EMPTY_ROOT(&st->idle));
++	}
++	BUG_ON(bfqg->sched_data.next_in_service != NULL);
++	BUG_ON(bfqg->sched_data.in_service_entity != NULL);
++
++	/*
++	 * We may race with device destruction, take extra care when
++	 * dereferencing bfqg->bfqd.
++	 */
++	bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++	if (bfqd != NULL) {
++		hlist_del(&bfqg->bfqd_node);
++		__bfq_deactivate_entity(entity, 0);
++		bfq_put_async_queues(bfqd, bfqg);
++		bfq_put_bfqd_unlock(bfqd, &flags);
++	}
++	BUG_ON(entity->tree != NULL);
++
++	/*
++	 * No need to defer the kfree() to the end of the RCU grace
++	 * period: we are called from the destroy() callback of our
++	 * cgroup, so we can be sure that no one is a) still using
++	 * this cgroup or b) doing lookups in it.
++	 */
++	kfree(bfqg);
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node)
++		bfq_end_wr_async_queues(bfqd, bfqg);
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++/**
++ * bfq_disconnect_groups - disconnect @bfqd from all its groups.
++ * @bfqd: the device descriptor being exited.
++ *
++ * When the device exits we just make sure that no lookup can return
++ * the now unused group structures.  They will be deallocated on cgroup
++ * destruction.
++ */
++static void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	bfq_log(bfqd, "disconnect_groups beginning");
++	hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node) {
++		hlist_del(&bfqg->bfqd_node);
++
++		__bfq_deactivate_entity(bfqg->my_entity, 0);
++
++		/*
++		 * Don't remove from the group hash, just set an
++		 * invalid key.  No lookups can race with the
++		 * assignment as bfqd is being destroyed; this
++		 * implies also that new elements cannot be added
++		 * to the list.
++		 */
++		rcu_assign_pointer(bfqg->bfqd, NULL);
++
++		bfq_log(bfqd, "disconnect_groups: put async for group %p",
++			bfqg);
++		bfq_put_async_queues(bfqd, bfqg);
++	}
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++	struct bfqio_cgroup *bgrp = &bfqio_root_cgroup;
++	struct bfq_group *bfqg = bfqd->root_group;
++
++	bfq_put_async_queues(bfqd, bfqg);
++
++	spin_lock_irq(&bgrp->lock);
++	hlist_del_rcu(&bfqg->group_node);
++	spin_unlock_irq(&bgrp->lock);
++
++	/*
++	 * No need to synchronize_rcu() here: since the device is gone
++	 * there cannot be any read-side access to its root_group.
++	 */
++	kfree(bfqg);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++	struct bfq_group *bfqg;
++	struct bfqio_cgroup *bgrp;
++	int i;
++
++	bfqg = kzalloc_node(sizeof(*bfqg), GFP_KERNEL, node);
++	if (bfqg == NULL)
++		return NULL;
++
++	bfqg->entity.parent = NULL;
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++	bgrp = &bfqio_root_cgroup;
++	spin_lock_irq(&bgrp->lock);
++	rcu_assign_pointer(bfqg->bfqd, bfqd);
++	hlist_add_head_rcu(&bfqg->group_node, &bgrp->group_data);
++	spin_unlock_irq(&bgrp->lock);
++
++	return bfqg;
++}
++
++#define SHOW_FUNCTION(__VAR)						\
++static u64 bfqio_cgroup_##__VAR##_read(struct cgroup_subsys_state *css, \
++				       struct cftype *cftype)		\
++{									\
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);			\
++	u64 ret = -ENODEV;						\
++									\
++	mutex_lock(&bfqio_mutex);					\
++	if (bfqio_is_removed(bgrp))					\
++		goto out_unlock;					\
++									\
++	spin_lock_irq(&bgrp->lock);					\
++	ret = bgrp->__VAR;						\
++	spin_unlock_irq(&bgrp->lock);					\
++									\
++out_unlock:								\
++	mutex_unlock(&bfqio_mutex);					\
++	return ret;							\
++}
++
++SHOW_FUNCTION(weight);
++SHOW_FUNCTION(ioprio);
++SHOW_FUNCTION(ioprio_class);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__VAR, __MIN, __MAX)				\
++static int bfqio_cgroup_##__VAR##_write(struct cgroup_subsys_state *css,\
++					struct cftype *cftype,		\
++					u64 val)			\
++{									\
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);			\
++	struct bfq_group *bfqg;						\
++	int ret = -EINVAL;						\
++									\
++	if (val < (__MIN) || val > (__MAX))				\
++		return ret;						\
++									\
++	ret = -ENODEV;							\
++	mutex_lock(&bfqio_mutex);					\
++	if (bfqio_is_removed(bgrp))					\
++		goto out_unlock;					\
++	ret = 0;							\
++									\
++	spin_lock_irq(&bgrp->lock);					\
++	bgrp->__VAR = (unsigned short)val;				\
++	hlist_for_each_entry(bfqg, &bgrp->group_data, group_node) {	\
++		/*							\
++		 * Setting the ioprio_changed flag of the entity        \
++		 * to 1 with new_##__VAR == ##__VAR would re-set        \
++		 * the value of the weight to its ioprio mapping.       \
++		 * Set the flag only if necessary.			\
++		 */							\
++		if ((unsigned short)val != bfqg->entity.new_##__VAR) {  \
++			bfqg->entity.new_##__VAR = (unsigned short)val; \
++			/*						\
++			 * Make sure that the above new value has been	\
++			 * stored in bfqg->entity.new_##__VAR before	\
++			 * setting the ioprio_changed flag. In fact,	\
++			 * this flag may be read asynchronously (in	\
++			 * critical sections protected by a different	\
++			 * lock than that held here), and finding this	\
++			 * flag set may cause the execution of the code	\
++			 * for updating parameters whose value may	\
++			 * depend also on bfqg->entity.new_##__VAR (in	\
++			 * __bfq_entity_update_weight_prio).		\
++			 * This barrier makes sure that the new value	\
++			 * of bfqg->entity.new_##__VAR is correctly	\
++			 * seen in that code.				\
++			 */						\
++			smp_wmb();                                      \
++			bfqg->entity.ioprio_changed = 1;                \
++		}							\
++	}								\
++	spin_unlock_irq(&bgrp->lock);					\
++									\
++out_unlock:								\
++	mutex_unlock(&bfqio_mutex);					\
++	return ret;							\
++}
++
++STORE_FUNCTION(weight, BFQ_MIN_WEIGHT, BFQ_MAX_WEIGHT);
++STORE_FUNCTION(ioprio, 0, IOPRIO_BE_NR - 1);
++STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
++#undef STORE_FUNCTION
++
++static struct cftype bfqio_files[] = {
++	{
++		.name = "weight",
++		.read_u64 = bfqio_cgroup_weight_read,
++		.write_u64 = bfqio_cgroup_weight_write,
++	},
++	{
++		.name = "ioprio",
++		.read_u64 = bfqio_cgroup_ioprio_read,
++		.write_u64 = bfqio_cgroup_ioprio_write,
++	},
++	{
++		.name = "ioprio_class",
++		.read_u64 = bfqio_cgroup_ioprio_class_read,
++		.write_u64 = bfqio_cgroup_ioprio_class_write,
++	},
++	{ },	/* terminate */
++};
++
++static struct cgroup_subsys_state *bfqio_create(struct cgroup_subsys_state
++						*parent_css)
++{
++	struct bfqio_cgroup *bgrp;
++
++	if (parent_css != NULL) {
++		bgrp = kzalloc(sizeof(*bgrp), GFP_KERNEL);
++		if (bgrp == NULL)
++			return ERR_PTR(-ENOMEM);
++	} else
++		bgrp = &bfqio_root_cgroup;
++
++	spin_lock_init(&bgrp->lock);
++	INIT_HLIST_HEAD(&bgrp->group_data);
++	bgrp->ioprio = BFQ_DEFAULT_GRP_IOPRIO;
++	bgrp->ioprio_class = BFQ_DEFAULT_GRP_CLASS;
++
++	return &bgrp->css;
++}
++
++/*
++ * We cannot support shared io contexts, as we have no means to support
++ * two tasks with the same ioc in two different groups without major rework
++ * of the main bic/bfqq data structures.  For now we allow a task to change
++ * its cgroup only if it's the only owner of its ioc; the drawback of this
++ * behavior is that a group containing a task that forked using CLONE_IO
++ * will not be destroyed until the tasks sharing the ioc die.
++ */
++static int bfqio_can_attach(struct cgroup_subsys_state *css,
++			    struct cgroup_taskset *tset)
++{
++	struct task_struct *task;
++	struct io_context *ioc;
++	int ret = 0;
++
++	cgroup_taskset_for_each(task, tset) {
++		/*
++		 * task_lock() is needed to avoid races with
++		 * exit_io_context()
++		 */
++		task_lock(task);
++		ioc = task->io_context;
++		if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
++			/*
++			 * ioc == NULL means that the task is either too
++			 * young or exiting: if it still has no ioc, the
++			 * ioc can't be shared, if the task is exiting the
++			 * attach will fail anyway, no matter what we
++			 * return here.
++			 */
++			ret = -EINVAL;
++		task_unlock(task);
++		if (ret)
++			break;
++	}
++
++	return ret;
++}
++
++static void bfqio_attach(struct cgroup_subsys_state *css,
++			 struct cgroup_taskset *tset)
++{
++	struct task_struct *task;
++	struct io_context *ioc;
++	struct io_cq *icq;
++
++	/*
++	 * IMPORTANT NOTE: The move of more than one process at a time to a
++	 * new group has not yet been tested.
++	 */
++	cgroup_taskset_for_each(task, tset) {
++		ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
++		if (ioc) {
++			/*
++			 * Handle cgroup change here.
++			 */
++			rcu_read_lock();
++			hlist_for_each_entry_rcu(icq, &ioc->icq_list, ioc_node)
++				if (!strncmp(
++					icq->q->elevator->type->elevator_name,
++					"bfq", ELV_NAME_MAX))
++					bfq_bic_change_cgroup(icq_to_bic(icq),
++							      css);
++			rcu_read_unlock();
++			put_io_context(ioc);
++		}
++	}
++}
++
++static void bfqio_destroy(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++	struct hlist_node *tmp;
++	struct bfq_group *bfqg;
++
++	/*
++	 * Since we are destroying the cgroup, there are no more tasks
++	 * referencing it, and all the RCU grace periods that may have
++	 * referenced it are ended (as the destruction of the parent
++	 * cgroup is RCU-safe); bgrp->group_data will not be accessed by
++	 * anything else and we don't need any synchronization.
++	 */
++	hlist_for_each_entry_safe(bfqg, tmp, &bgrp->group_data, group_node)
++		bfq_destroy_group(bgrp, bfqg);
++
++	BUG_ON(!hlist_empty(&bgrp->group_data));
++
++	kfree(bgrp);
++}
++
++static int bfqio_css_online(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++	mutex_lock(&bfqio_mutex);
++	bgrp->online = true;
++	mutex_unlock(&bfqio_mutex);
++
++	return 0;
++}
++
++static void bfqio_css_offline(struct cgroup_subsys_state *css)
++{
++	struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++	mutex_lock(&bfqio_mutex);
++	bgrp->online = false;
++	mutex_unlock(&bfqio_mutex);
++}
++
++struct cgroup_subsys bfqio_cgrp_subsys = {
++	.css_alloc = bfqio_create,
++	.css_online = bfqio_css_online,
++	.css_offline = bfqio_css_offline,
++	.can_attach = bfqio_can_attach,
++	.attach = bfqio_attach,
++	.css_free = bfqio_destroy,
++	.legacy_cftypes = bfqio_files,
++};
++#else
++static inline void bfq_init_entity(struct bfq_entity *entity,
++				   struct bfq_group *bfqg)
++{
++	entity->weight = entity->new_weight;
++	entity->orig_weight = entity->new_weight;
++	entity->ioprio = entity->new_ioprio;
++	entity->ioprio_class = entity->new_ioprio_class;
++	entity->sched_data = &bfqg->sched_data;
++}
++
++static inline struct bfq_group *
++bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++	return bfqd->root_group;
++}
++
++static inline void bfq_bfqq_move(struct bfq_data *bfqd,
++				 struct bfq_queue *bfqq,
++				 struct bfq_entity *entity,
++				 struct bfq_group *bfqg)
++{
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++	bfq_put_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++	kfree(bfqd->root_group);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++	struct bfq_group *bfqg;
++	int i;
++
++	bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
++	if (bfqg == NULL)
++		return NULL;
++
++	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++		bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++	return bfqg;
++}
++#endif
+diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
+new file mode 100644
+index 0000000..7f6b000
+--- /dev/null
++++ b/block/bfq-ioc.c
+@@ -0,0 +1,36 @@
++/*
++ * BFQ: I/O context handling.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++/**
++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
++ * @icq: the iocontext queue.
++ */
++static inline struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
++{
++	/* bic->icq is the first member, %NULL will convert to %NULL */
++	return container_of(icq, struct bfq_io_cq, icq);
++}
++
++/**
++ * bfq_bic_lookup - search into @ioc a bic associated to @bfqd.
++ * @bfqd: the lookup key.
++ * @ioc: the io_context of the process doing I/O.
++ *
++ * Queue lock must be held.
++ */
++static inline struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
++					       struct io_context *ioc)
++{
++	if (ioc)
++		return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
++	return NULL;
++}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+new file mode 100644
+index 0000000..97ee934
+--- /dev/null
++++ b/block/bfq-iosched.c
+@@ -0,0 +1,3902 @@
++/*
++ * Budget Fair Queueing (BFQ) disk scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based on
++ * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
++ * measured in number of sectors, to processes instead of time slices. The
++ * device is granted to the in-service process not for a given time slice,
++ * but until it has exhausted its assigned budget. This change from the time
++ * to the service domain allows BFQ to distribute the device throughput
++ * among processes as desired, without any distortion due to ZBR, workload
++ * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
++ * called B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated with processes. Thanks to the
++ * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
++ * I/O-bound processes issuing sequential requests (to boost the
++ * throughput), and yet guarantee a low latency to interactive and soft
++ * real-time applications.
++ *
++ * BFQ is described in [1], where a reference to the initial, more
++ * theoretical paper on BFQ can also be found. The interested reader can find
++ * in the latter paper full details on the main algorithm, as well as
++ * formulas of the guarantees and formal proofs of all the properties.
++ * With respect to the version of BFQ presented in these papers, this
++ * implementation adds a few more heuristics, such as the one that
++ * guarantees a low latency to soft real-time applications, and a
++ * hierarchical extension based on H-WF2Q+.
++ *
++ * B-WF2Q+ is based on WF2Q+, which is described in [2], together with
++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
++ * complexity derives from the one introduced with EEVDF in [3].
++ *
++ * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
++ *     with the BFQ Disk I/O Scheduler'',
++ *     Proceedings of the 5th Annual International Systems and Storage
++ *     Conference (SYSTOR '12), June 2012.
++ *
++ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
++ *
++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
++ *     Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
++ *     Oct 1997.
++ *
++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
++ *
++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
++ *     First: A Flexible and Accurate Mechanism for Proportional Share
++ *     Resource Allocation,'' technical report.
++ *
++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
++ */
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/cgroup.h>
++#include <linux/elevator.h>
++#include <linux/jiffies.h>
++#include <linux/rbtree.h>
++#include <linux/ioprio.h>
++#include "bfq.h"
++#include "blk.h"
++
++/* Max number of dispatches in one round of service. */
++static const int bfq_quantum = 4;
++
++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++
++/* Maximum backwards seek, in KiB. */
++static const int bfq_back_max = 16 * 1024;
++
++/* Penalty of a backwards seek, in number of sectors. */
++static const int bfq_back_penalty = 2;
++
++/* Idling period duration, in jiffies. */
++static int bfq_slice_idle = HZ / 125;
++
++/* Default maximum budget values, in sectors and number of requests. */
++static const int bfq_default_max_budget = 16 * 1024;
++static const int bfq_max_budget_async_rq = 4;
++
++/*
++ * Async to sync throughput distribution is controlled as follows:
++ * when an async request is served, the entity is charged the number
++ * of sectors of the request, multiplied by the factor below.
++ */
++static const int bfq_async_charge_factor = 10;
++
++/* Default timeout values, in jiffies, approximating CFQ defaults. */
++static const int bfq_timeout_sync = HZ / 8;
++static int bfq_timeout_async = HZ / 25;
++
++struct kmem_cache *bfq_pool;
++
++/* Below this threshold (in ms), we consider thinktime immediate. */
++#define BFQ_MIN_TT		2
++
++/* hw_tag detection: parallel requests threshold and min samples needed. */
++#define BFQ_HW_QUEUE_THRESHOLD	4
++#define BFQ_HW_QUEUE_SAMPLES	32
++
++#define BFQQ_SEEK_THR	 (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++
++/* Min samples used for peak rate estimation (for autotuning). */
++#define BFQ_PEAK_RATE_SAMPLES	32
++
++/* Shift used for peak rate fixed precision calculations. */
++#define BFQ_RATE_SHIFT		16
++
++/*
++ * By default, BFQ computes the duration of the weight raising for
++ * interactive applications automatically, using the following formula:
++ * duration = (R / r) * T, where r is the peak rate of the device, and
++ * R and T are two reference parameters.
++ * In particular, R is the peak rate of the reference device (see below),
++ * and T is a reference time: given the systems that are likely to be
++ * installed on the reference device according to its speed class, T is
++ * about the maximum time needed, under BFQ and while reading two files in
++ * parallel, to load typical large applications on these systems.
++ * In practice, the slower/faster the device at hand is, the more/less it
++ * takes to load applications with respect to the reference device.
++ * Accordingly, the longer/shorter BFQ grants weight raising to interactive
++ * applications.
++ *
++ * BFQ uses four different reference pairs (R, T), depending on:
++ * . whether the device is rotational or non-rotational;
++ * . whether the device is slow, such as old or portable HDDs, as well as
++ *   SD cards, or fast, such as newer HDDs and SSDs.
++ *
++ * The device's speed class is dynamically (re)detected in
++ * bfq_update_peak_rate() every time the estimated peak rate is updated.
++ *
++ * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
++ * are the reference values for a slow/fast rotational device, whereas
++ * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
++ * a slow/fast non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes.
++ * Both the reference peak rates and the thresholds are measured in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
++ */
++static int R_slow[2] = {1536, 10752};
++static int R_fast[2] = {17415, 34791};
++/*
++ * To improve readability, a conversion function is used to initialize the
++ * following arrays, which entails that they can be initialized only in a
++ * function.
++ */
++static int T_slow[2];
++static int T_fast[2];
++static int device_speed_thresh[2];
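++
++/*
++ * Illustrative sketch (editor's note, not part of BFQ itself): these
++ * pairs feed the automatic weight-raising duration computed by
++ * bfq_wr_duration() below as duration = RT_prod / peak_rate. Since R
++ * and the estimated peak rate carry the same BFQ_RATE_SHIFT scaling,
++ * a device whose peak rate equals the reference R gets exactly the
++ * reference duration T, while a device measured twice as fast gets
++ * T / 2.
++ */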
++
++#define BFQ_SERVICE_TREE_INIT	((struct bfq_service_tree)		\
++				{ RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
++
++#define RQ_BIC(rq)		((struct bfq_io_cq *) (rq)->elv.priv[0])
++#define RQ_BFQQ(rq)		((rq)->elv.priv[1])
++
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd);
++
++#include "bfq-ioc.c"
++#include "bfq-sched.c"
++#include "bfq-cgroup.c"
++
++#define bfq_class_idle(bfqq)	((bfqq)->entity.ioprio_class ==\
++				 IOPRIO_CLASS_IDLE)
++#define bfq_class_rt(bfqq)	((bfqq)->entity.ioprio_class ==\
++				 IOPRIO_CLASS_RT)
++
++#define bfq_sample_valid(samples)	((samples) > 80)
++
++/*
++ * We regard a request as SYNC if it either is a read or has the SYNC bit
++ * set (in which case it could also be a direct WRITE).
++ */
++static inline int bfq_bio_sync(struct bio *bio)
++{
++	if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
++		return 1;
++
++	return 0;
++}
++
++/*
++ * Schedule a run of the queue if there are requests pending and no request
++ * in the driver that will restart queueing.
++ */
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd)
++{
++	if (bfqd->queued != 0) {
++		bfq_log(bfqd, "schedule dispatch");
++		kblockd_schedule_work(&bfqd->unplug_work);
++	}
++}
++
++/*
++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
++ * We choose the request that is closest to the head right now.  Distance
++ * behind the head is penalized and only allowed to a certain extent.
++ */
++static struct request *bfq_choose_req(struct bfq_data *bfqd,
++				      struct request *rq1,
++				      struct request *rq2,
++				      sector_t last)
++{
++	sector_t s1, s2, d1 = 0, d2 = 0;
++	unsigned long back_max;
++#define BFQ_RQ1_WRAP	0x01 /* request 1 wraps */
++#define BFQ_RQ2_WRAP	0x02 /* request 2 wraps */
++	unsigned wrap = 0; /* bit mask: requests behind the disk head? */
++
++	if (rq1 == NULL || rq1 == rq2)
++		return rq2;
++	if (rq2 == NULL)
++		return rq1;
++
++	if (rq_is_sync(rq1) && !rq_is_sync(rq2))
++		return rq1;
++	else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
++		return rq2;
++	if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
++		return rq1;
++	else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
++		return rq2;
++
++	s1 = blk_rq_pos(rq1);
++	s2 = blk_rq_pos(rq2);
++
++	/*
++	 * By definition, 1KiB is 2 sectors.
++	 */
++	back_max = bfqd->bfq_back_max * 2;
++
++	/*
++	 * Strict one way elevator _except_ in the case where we allow
++	 * short backward seeks which are biased as twice the cost of a
++	 * similar forward seek.
++	 */
++	if (s1 >= last)
++		d1 = s1 - last;
++	else if (s1 + back_max >= last)
++		d1 = (last - s1) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ1_WRAP;
++
++	if (s2 >= last)
++		d2 = s2 - last;
++	else if (s2 + back_max >= last)
++		d2 = (last - s2) * bfqd->bfq_back_penalty;
++	else
++		wrap |= BFQ_RQ2_WRAP;
++
++	/* Found required data */
++
++	/*
++	 * By doing switch() on the bit mask "wrap" we avoid having to
++	 * check two variables for all permutations: --> faster!
++	 */
++	switch (wrap) {
++	case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
++		if (d1 < d2)
++			return rq1;
++		else if (d2 < d1)
++			return rq2;
++		else {
++			if (s1 >= s2)
++				return rq1;
++			else
++				return rq2;
++		}
++
++	case BFQ_RQ2_WRAP:
++		return rq1;
++	case BFQ_RQ1_WRAP:
++		return rq2;
++	case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
++	default:
++		/*
++		 * Since both rqs are wrapped,
++		 * start with the one that's further behind head
++		 * (--> only *one* back seek required),
++		 * since back seek takes more time than forward.
++		 */
++		if (s1 <= s2)
++			return rq1;
++		else
++			return rq2;
++	}
++}
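++
++/*
++ * Worked example for bfq_choose_req() (editor's sketch, hypothetical
++ * values): with last = 1000, rq1 at sector 1100 and rq2 at sector 990,
++ * and bfq_back_penalty = 2, we get d1 = 100 and
++ * d2 = (1000 - 990) * 2 = 20. Neither request wraps, so the switch
++ * takes the case-0 branch and rq2 wins: its penalized backward
++ * distance is still shorter than the forward distance to rq1.
++ */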
++
++static struct bfq_queue *
++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
++		     sector_t sector, struct rb_node **ret_parent,
++		     struct rb_node ***rb_link)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *bfqq = NULL;
++
++	parent = NULL;
++	p = &root->rb_node;
++	while (*p) {
++		struct rb_node **n;
++
++		parent = *p;
++		bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++
++		/*
++		 * Sort strictly based on sector. Smallest to the left,
++		 * largest to the right.
++		 */
++		if (sector > blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_right;
++		else if (sector < blk_rq_pos(bfqq->next_rq))
++			n = &(*p)->rb_left;
++		else
++			break;
++		p = n;
++		bfqq = NULL;
++	}
++
++	*ret_parent = parent;
++	if (rb_link)
++		*rb_link = p;
++
++	bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
++		(long long unsigned)sector,
++		bfqq != NULL ? bfqq->pid : 0);
++
++	return bfqq;
++}
++
++static void bfq_rq_pos_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct rb_node **p, *parent;
++	struct bfq_queue *__bfqq;
++
++	if (bfqq->pos_root != NULL) {
++		rb_erase(&bfqq->pos_node, bfqq->pos_root);
++		bfqq->pos_root = NULL;
++	}
++
++	if (bfq_class_idle(bfqq))
++		return;
++	if (!bfqq->next_rq)
++		return;
++
++	bfqq->pos_root = &bfqd->rq_pos_tree;
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
++			blk_rq_pos(bfqq->next_rq), &parent, &p);
++	if (__bfqq == NULL) {
++		rb_link_node(&bfqq->pos_node, parent, p);
++		rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
++	} else
++		bfqq->pos_root = NULL;
++}
++
++/*
++ * Tell whether there are active queues or groups with differentiated weights.
++ */
++static inline bool bfq_differentiated_weights(struct bfq_data *bfqd)
++{
++	BUG_ON(!bfqd->hw_tag);
++	/*
++	 * For weights to differ, at least one of the trees must contain
++	 * at least two nodes.
++	 */
++	return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
++		(bfqd->queue_weights_tree.rb_node->rb_left ||
++		 bfqd->queue_weights_tree.rb_node->rb_right)
++#ifdef CONFIG_CGROUP_BFQIO
++	       ) ||
++	       (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) &&
++		(bfqd->group_weights_tree.rb_node->rb_left ||
++		 bfqd->group_weights_tree.rb_node->rb_right)
++#endif
++	       );
++}
++
++/*
++ * If the weight-counter tree passed as input contains no counter for
++ * the weight of the input entity, then add that counter; otherwise just
++ * increment the existing counter.
++ *
++ * Note that weight-counter trees contain few nodes in mostly symmetric
++ * scenarios. For example, if all queues have the same weight, then the
++ * weight-counter tree for the queues may contain at most one node.
++ * This holds even if low_latency is on, because weight-raised queues
++ * are not inserted in the tree.
++ * In most scenarios, the rate at which nodes are created/destroyed
++ * should be low too.
++ */
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root)
++{
++	struct rb_node **new = &(root->rb_node), *parent = NULL;
++
++	/*
++	 * Do not insert if:
++	 * - the device does not support queueing;
++	 * - the entity is already associated with a counter, which happens if:
++	 *   1) the entity is associated with a queue, 2) a request arrival
++	 *   has caused the queue to become both non-weight-raised (and hence
++	 *   to change its weight) and backlogged (each of these two events
++	 *   causes an invocation of this function), and 3) this is the
++	 *   invocation caused by the second event. This second invocation
++	 *   is actually useless, and we handle this fact by exiting
++	 *   immediately. More efficient or clearer solutions might possibly
++	 *   be adopted.
++	 */
++	if (!bfqd->hw_tag || entity->weight_counter)
++		return;
++
++	while (*new) {
++		struct bfq_weight_counter *__counter = container_of(*new,
++						struct bfq_weight_counter,
++						weights_node);
++		parent = *new;
++
++		if (entity->weight == __counter->weight) {
++			entity->weight_counter = __counter;
++			goto inc_counter;
++		}
++		if (entity->weight < __counter->weight)
++			new = &((*new)->rb_left);
++		else
++			new = &((*new)->rb_right);
++	}
++
++	entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
++					 GFP_ATOMIC);
++	/* If the allocation fails, just skip tracking this weight. */
++	if (!entity->weight_counter)
++		return;
++	entity->weight_counter->weight = entity->weight;
++	rb_link_node(&entity->weight_counter->weights_node, parent, new);
++	rb_insert_color(&entity->weight_counter->weights_node, root);
++
++inc_counter:
++	entity->weight_counter->num_active++;
++}
++
++/*
++ * Decrement the weight counter associated with the entity, and, if the
++ * counter reaches 0, remove the counter from the tree.
++ * See the comments to the function bfq_weights_tree_add() for considerations
++ * about overhead.
++ */
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root)
++{
++	/*
++	 * Check whether the entity is actually associated with a counter.
++	 * In fact, the device may not be considered NCQ-capable for a while,
++	 * which implies that no insertion in the weight trees is performed,
++	 * after which the device may start to be deemed NCQ-capable, and hence
++	 * this function may start to be invoked. This may cause the function
++	 * to be invoked for entities that are not associated with any counter.
++	 */
++	if (!entity->weight_counter)
++		return;
++
++	BUG_ON(RB_EMPTY_ROOT(root));
++	BUG_ON(entity->weight_counter->weight != entity->weight);
++
++	BUG_ON(!entity->weight_counter->num_active);
++	entity->weight_counter->num_active--;
++	if (entity->weight_counter->num_active > 0)
++		goto reset_entity_pointer;
++
++	rb_erase(&entity->weight_counter->weights_node, root);
++	kfree(entity->weight_counter);
++
++reset_entity_pointer:
++	entity->weight_counter = NULL;
++}
++
++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq,
++					struct request *last)
++{
++	struct rb_node *rbnext = rb_next(&last->rb_node);
++	struct rb_node *rbprev = rb_prev(&last->rb_node);
++	struct request *next = NULL, *prev = NULL;
++
++	BUG_ON(RB_EMPTY_NODE(&last->rb_node));
++
++	if (rbprev != NULL)
++		prev = rb_entry_rq(rbprev);
++
++	if (rbnext != NULL)
++		next = rb_entry_rq(rbnext);
++	else {
++		rbnext = rb_first(&bfqq->sort_list);
++		if (rbnext && rbnext != &last->rb_node)
++			next = rb_entry_rq(rbnext);
++	}
++
++	return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
++}
++
++/* see the definition of bfq_async_charge_factor for details */
++static inline unsigned long bfq_serv_to_charge(struct request *rq,
++					       struct bfq_queue *bfqq)
++{
++	return blk_rq_sectors(rq) *
++		(1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
++		bfq_async_charge_factor));
++}
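++
++/*
++ * Editor's sketch of the charge above (hypothetical numbers): a sync
++ * request of 8 sectors is charged exactly 8, while an async request of
++ * 8 sectors from a non-weight-raised queue (wr_coeff == 1) is charged
++ * 8 * (1 + bfq_async_charge_factor) = 88. This is how async throughput
++ * is traded for sync latency.
++ */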
++
++/**
++ * bfq_updated_next_req - update the queue after a new next_rq selection.
++ * @bfqd: the device data the queue belongs to.
++ * @bfqq: the queue to update.
++ *
++ * If the first request of a queue changes we make sure that the queue
++ * has enough budget to serve at least its first request (if the
++ * request has grown).  We do this because if the queue does not have enough
++ * budget for its first request, it has to go through two dispatch
++ * rounds to actually get it dispatched.
++ */
++static void bfq_updated_next_req(struct bfq_data *bfqd,
++				 struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	struct request *next_rq = bfqq->next_rq;
++	unsigned long new_budget;
++
++	if (next_rq == NULL)
++		return;
++
++	if (bfqq == bfqd->in_service_queue)
++		/*
++		 * In order not to break guarantees, budgets cannot be
++		 * changed after an entity has been selected.
++		 */
++		return;
++
++	BUG_ON(entity->tree != &st->active);
++	BUG_ON(entity == entity->sched_data->in_service_entity);
++
++	new_budget = max_t(unsigned long, bfqq->max_budget,
++			   bfq_serv_to_charge(next_rq, bfqq));
++	if (entity->budget != new_budget) {
++		entity->budget = new_budget;
++		bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
++					 new_budget);
++		bfq_activate_bfqq(bfqd, bfqq);
++	}
++}
++
++static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
++{
++	u64 dur;
++
++	if (bfqd->bfq_wr_max_time > 0)
++		return bfqd->bfq_wr_max_time;
++
++	dur = bfqd->RT_prod;
++	do_div(dur, bfqd->peak_rate);
++
++	return dur;
++}
++
++/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
++static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
++					struct bfq_queue *bfqq)
++{
++	struct bfq_queue *item;
++	struct hlist_node *n;
++
++	hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
++		hlist_del_init(&item->burst_list_node);
++	hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++	bfqd->burst_size = 1;
++}
++
++/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
++static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	/* Increment burst size to take into account also bfqq */
++	bfqd->burst_size++;
++
++	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
++		struct bfq_queue *pos, *bfqq_item;
++		struct hlist_node *n;
++
++		/*
++		 * Enough queues have been activated shortly after each
++		 * other to consider this burst as large.
++		 */
++		bfqd->large_burst = true;
++
++		/*
++		 * We can now mark all queues in the burst list as
++		 * belonging to a large burst.
++		 */
++		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
++				     burst_list_node)
++			bfq_mark_bfqq_in_large_burst(bfqq_item);
++		bfq_mark_bfqq_in_large_burst(bfqq);
++
++		/*
++		 * From now on, and until the current burst finishes, any
++		 * new queue being activated shortly after the last queue
++		 * was inserted in the burst can be immediately marked as
++		 * belonging to a large burst. So the burst list is not
++		 * needed any more. Remove it.
++		 */
++		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
++					  burst_list_node)
++			hlist_del_init(&pos->burst_list_node);
++	} else /* burst not yet large: add bfqq to the burst list */
++		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++}
++
++/*
++ * If many queues happen to become active shortly after each other, then,
++ * to help the processes associated with these queues get their job done as
++ * soon as possible, it is usually better to not grant either weight-raising
++ * or device idling to these queues. In this comment we describe, firstly,
++ * the reasons why this fact holds, and, secondly, the next function, which
++ * implements the main steps needed to properly mark these queues so that
++ * they can then be treated in a different way.
++ *
++ * As for the terminology, we say that a queue becomes active, i.e.,
++ * switches from idle to backlogged, either when it is created (as a
++ * consequence of the arrival of an I/O request), or, if already existing,
++ * when a new request for the queue arrives while the queue is idle.
++ * Bursts of activations, i.e., activations of different queues occurring
++ * shortly after each other, are typically caused by services or applications
++ * that spawn or reactivate many parallel threads/processes. Examples are
++ * systemd during boot or git grep.
++ *
++ * These services or applications benefit mostly from a high throughput:
++ * the quicker the requests of the activated queues are cumulatively served,
++ * the sooner the target job of these queues gets completed. As a consequence,
++ * weight-raising any of these queues, which also implies idling the device
++ * for it, is almost always counterproductive: in most cases it just lowers
++ * throughput.
++ *
++ * On the other hand, a burst of activations may also be caused by the start
++ * of an application that does not consist of a lot of parallel I/O-bound
++ * threads. In fact, with a complex application, the burst may be just a
++ * consequence of the fact that several processes need to be executed to
++ * start up the application. To start an application as quickly as possible,
++ * the best thing to do is to privilege the I/O related to the application
++ * with respect to all other I/O. Therefore, the best strategy for starting
++ * an application that causes a burst of activations as quickly as possible
++ * is to weight-raise all the queues activated during the burst. This is the
++ * exact opposite of the best strategy for the other type of bursts.
++ *
++ * In the end, to take the best action for each of the two cases, the two
++ * types of bursts need to be distinguished. Fortunately, this seems
++ * relatively easy to do, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that bursts larger than that
++ * threshold are apparently caused only by services or commands such as
++ * systemd or git grep. For brevity, hereafter we simply call these
++ * bursts 'large'. BFQ *does not* weight-raise queues whose activations occur
++ * in a large burst. In addition, for each of these queues BFQ performs or
++ * does not perform idling depending on which choice boosts the throughput
++ * most. The exact choice depends on the device and request pattern at
++ * hand.
++ *
++ * Turning back to the next function, it implements all the steps needed
++ * to detect the occurrence of a large burst and to properly mark all the
++ * queues belonging to it (so that they can then be treated in a different
++ * way). This goal is achieved by maintaining a special "burst list" that
++ * holds, temporarily, the queues that belong to the burst in progress. The
++ * list is then used to mark these queues as belonging to a large burst if
++ * the burst does become large. The main steps are the following.
++ *
++ * . when the very first queue is activated, the queue is inserted into the
++ *   list (as it could be the first queue in a possible burst)
++ *
++ * . if the current burst has not yet become large, and a queue Q that does
++ *   not yet belong to the burst is activated shortly after the last time
++ *   at which a new queue entered the burst list, then the function appends
++ *   Q to the burst list
++ *
++ * . if, as a consequence of the previous step, the burst size reaches
++ *   the large-burst threshold, then
++ *
++ *     . all the queues in the burst list are marked as belonging to a
++ *       large burst
++ *
++ *     . the burst list is deleted; in fact, the burst list already served
++ *       its purpose (temporarily keeping track of the queues in a burst,
++ *       so as to be able to mark them as belonging to a large burst in the
++ *       previous sub-step), and now is not needed any more
++ *
++ *     . the device enters a large-burst mode
++ *
++ * . if a queue Q that does not belong to the burst is activated while
++ *   the device is in large-burst mode and shortly after the last time
++ *   at which a queue either entered the burst list or was marked as
++ *   belonging to the current large burst, then Q is immediately marked
++ *   as belonging to a large burst.
++ *
++ * . if a queue Q that does not belong to the burst is activated a while
++ *   later than (i.e., not shortly after) the last time at which a queue
++ *   either entered the burst list or was marked as belonging to the
++ *   current large burst, then the current burst is deemed as finished and:
++ *
++ *        . the large-burst mode is reset if set
++ *
++ *        . the burst list is emptied
++ *
++ *        . Q is inserted in the burst list, as Q may be the first queue
++ *          in a possible new burst (then the burst list contains just Q
++ *          after this step).
++ */
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			     bool idle_for_long_time)
++{
++	/*
++	 * If bfqq happened to be activated in a burst, but has been idle
++	 * for at least as long as an interactive queue, then we assume
++	 * that, in the overall I/O initiated in the burst, the I/O
++	 * associated with bfqq is finished. So bfqq does not need to be
++	 * treated as a queue belonging to a burst anymore. Accordingly,
++	 * we reset bfqq's in_large_burst flag if set, and remove bfqq
++	 * from the burst list if it's there. We do not, however, decrement
++	 * burst_size, because the fact that bfqq does not need to belong
++	 * to the burst list any more does not invalidate the fact that
++	 * bfqq may have been activated during the current burst.
++	 */
++	if (idle_for_long_time) {
++		hlist_del_init(&bfqq->burst_list_node);
++		bfq_clear_bfqq_in_large_burst(bfqq);
++	}
++
++	/*
++	 * If bfqq is already in the burst list or is part of a large
++	 * burst, then there is nothing else to do.
++	 */
++	if (!hlist_unhashed(&bfqq->burst_list_node) ||
++	    bfq_bfqq_in_large_burst(bfqq))
++		return;
++
++	/*
++	 * If bfqq's activation happens late enough, then the current
++	 * burst is finished, and related data structures must be reset.
++	 *
++	 * In this respect, consider the special case where bfqq is the very
++	 * first queue being activated. In this case, last_ins_in_burst is
++	 * not yet significant when we get here. But it is easy to verify
++	 * that, whether or not the following condition is true, bfqq will
++	 * end up being inserted into the burst list. In particular the
++	 * list will happen to contain only bfqq. And this is exactly what
++	 * has to happen, as bfqq may be the first queue in a possible
++	 * burst.
++	 */
++	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
++	    bfqd->bfq_burst_interval)) {
++		bfqd->large_burst = false;
++		bfq_reset_burst_list(bfqd, bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then bfqq is being activated shortly after the
++	 * last queue. So, if the current burst is also large, we can mark
++	 * bfqq as belonging to this large burst immediately.
++	 */
++	if (bfqd->large_burst) {
++		bfq_mark_bfqq_in_large_burst(bfqq);
++		return;
++	}
++
++	/*
++	 * If we get here, then a large-burst state has not yet been
++	 * reached, but bfqq is being activated shortly after the last
++	 * queue. Then we add bfqq to the burst.
++	 */
++	bfq_add_to_burst(bfqd, bfqq);
++}
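++
++/*
++ * Editor's sketch of a possible bfq_handle_burst() timeline
++ * (hypothetical threshold): with bfq_large_burst_thresh = 11, ten
++ * queues activated within bfq_burst_interval of one another are simply
++ * appended to the burst list. When an eleventh arrives shortly after,
++ * bfq_add_to_burst() reaches the threshold: all eleven queues are
++ * marked in_large_burst, the list is emptied, and every further queue
++ * activated shortly after is marked immediately, until an activation
++ * gap longer than bfq_burst_interval resets the state through
++ * bfq_reset_burst_list().
++ */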
++
++static void bfq_add_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_data *bfqd = bfqq->bfqd;
++	struct request *next_rq, *prev;
++	unsigned long old_wr_coeff = bfqq->wr_coeff;
++	bool interactive = false;
++
++	bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++	bfqq->queued[rq_is_sync(rq)]++;
++	bfqd->queued++;
++
++	elv_rb_add(&bfqq->sort_list, rq);
++
++	/*
++	 * Check if this request is a better next-serve candidate.
++	 */
++	prev = bfqq->next_rq;
++	next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
++	BUG_ON(next_rq == NULL);
++	bfqq->next_rq = next_rq;
++
++	/*
++	 * Adjust priority tree position, if next_rq changes.
++	 */
++	if (prev != bfqq->next_rq)
++		bfq_rq_pos_tree_add(bfqd, bfqq);
++
++	if (!bfq_bfqq_busy(bfqq)) {
++		bool soft_rt,
++		     idle_for_long_time = time_is_before_jiffies(
++						bfqq->budget_timeout +
++						bfqd->bfq_wr_min_idle_time);
++
++		if (bfq_bfqq_sync(bfqq)) {
++			bool already_in_burst =
++			   !hlist_unhashed(&bfqq->burst_list_node) ||
++			   bfq_bfqq_in_large_burst(bfqq);
++			bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
++			/*
++			 * If bfqq was not already in the current burst,
++			 * then, at this point, bfqq either has been
++			 * added to the current burst or has caused the
++			 * current burst to terminate. In particular, in
++			 * the second case, bfqq has become the first
++			 * queue in a possible new burst.
++			 * In both cases last_ins_in_burst needs to be
++			 * moved forward.
++			 */
++			if (!already_in_burst)
++				bfqd->last_ins_in_burst = jiffies;
++		}
++
++		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++			!bfq_bfqq_in_large_burst(bfqq) &&
++			time_is_before_jiffies(bfqq->soft_rt_next_start);
++		interactive = !bfq_bfqq_in_large_burst(bfqq) &&
++			      idle_for_long_time;
++		entity->budget = max_t(unsigned long, bfqq->max_budget,
++				       bfq_serv_to_charge(next_rq, bfqq));
++
++		if (!bfq_bfqq_IO_bound(bfqq)) {
++			if (time_before(jiffies,
++					RQ_BIC(rq)->ttime.last_end_request +
++					bfqd->bfq_slice_idle)) {
++				bfqq->requests_within_timer++;
++				if (bfqq->requests_within_timer >=
++				    bfqd->bfq_requests_within_timer)
++					bfq_mark_bfqq_IO_bound(bfqq);
++			} else
++				bfqq->requests_within_timer = 0;
++		}
++
++		if (!bfqd->low_latency)
++			goto add_bfqq_busy;
++
++		/*
++		 * If the queue is not being boosted and has been idle
++		 * for enough time, start a weight-raising period
++		 */
++		if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais starting at %lu, rais_max_time %u",
++				     jiffies,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++		} else if (old_wr_coeff > 1) {
++			if (interactive)
++				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++			else if (bfq_bfqq_in_large_burst(bfqq) ||
++				 (bfqq->wr_cur_max_time ==
++				  bfqd->bfq_wr_rt_max_time &&
++				  !soft_rt)) {
++				bfqq->wr_coeff = 1;
++				bfq_log_bfqq(bfqd, bfqq,
++					"wrais ending at %lu, rais_max_time %u",
++					jiffies,
++					jiffies_to_msecs(bfqq->
++						wr_cur_max_time));
++			} else if (time_before(
++					bfqq->last_wr_start_finish +
++					bfqq->wr_cur_max_time,
++					jiffies +
++					bfqd->bfq_wr_rt_max_time) &&
++				   soft_rt) {
++				/*
++				 * The remaining weight-raising time is lower
++				 * than bfqd->bfq_wr_rt_max_time, which
++				 * means that the application is enjoying
++				 * weight raising either because deemed soft-
++				 * rt in the near past, or because deemed
++				 * interactive long ago. In both cases,
++				 * resetting now the current remaining weight-
++				 * raising time for the application to the
++				 * weight-raising duration for soft rt
++				 * applications would not cause any latency
++				 * increase for the application (as the new
++				 * duration would be higher than the remaining
++				 * time).
++				 *
++				 * In addition, the application is now meeting
++				 * the requirements for being deemed soft rt.
++				 * In the end we can correctly and safely
++				 * (re)charge the weight-raising duration for
++				 * the application with the weight-raising
++				 * duration for soft rt applications.
++				 *
++				 * In particular, doing this recharge now, i.e.,
++				 * before the weight-raising period for the
++				 * application finishes, reduces the probability
++				 * of the following negative scenario:
++				 * 1) the weight of a soft rt application is
++				 *    raised at startup (as for any newly
++				 *    created application),
++				 * 2) since the application is not interactive,
++				 *    at a certain time weight-raising is
++				 *    stopped for the application,
++				 * 3) at that time the application happens to
++				 *    still have pending requests, and hence
++				 *    is destined to not have a chance to be
++				 *    deemed soft rt before these requests are
++				 *    completed (see the comments to the
++				 *    function bfq_bfqq_softrt_next_start()
++				 *    for details on soft rt detection),
++				 * 4) these pending requests experience a high
++				 *    latency because the application is not
++				 *    weight-raised while they are pending.
++				 */
++				bfqq->last_wr_start_finish = jiffies;
++				bfqq->wr_cur_max_time =
++					bfqd->bfq_wr_rt_max_time;
++			}
++		}
++		if (old_wr_coeff != bfqq->wr_coeff)
++			entity->ioprio_changed = 1;
++add_bfqq_busy:
++		bfqq->last_idle_bklogged = jiffies;
++		bfqq->service_from_backlogged = 0;
++		bfq_clear_bfqq_softrt_update(bfqq);
++		bfq_add_bfqq_busy(bfqd, bfqq);
++	} else {
++		if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
++		    time_is_before_jiffies(
++				bfqq->last_wr_start_finish +
++				bfqd->bfq_wr_min_inter_arr_async)) {
++			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++
++			bfqd->wr_busy_queues++;
++			entity->ioprio_changed = 1;
++			bfq_log_bfqq(bfqd, bfqq,
++			    "non-idle wrais starting at %lu, rais_max_time %u",
++			    jiffies,
++			    jiffies_to_msecs(bfqq->wr_cur_max_time));
++		}
++		if (prev != bfqq->next_rq)
++			bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	if (bfqd->low_latency &&
++		(old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
++		bfqq->last_wr_start_finish = jiffies;
++}
++
++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
++					  struct bio *bio)
++{
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (bic == NULL)
++		return NULL;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	if (bfqq != NULL)
++		return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
++
++	return NULL;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	bfqd->rq_in_driver++;
++	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
++		(long long unsigned)bfqd->last_position);
++}
++
++static inline void bfq_deactivate_request(struct request_queue *q,
++					  struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++
++	BUG_ON(bfqd->rq_in_driver == 0);
++	bfqd->rq_in_driver--;
++}
++
++static void bfq_remove_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	const int sync = rq_is_sync(rq);
++
++	if (bfqq->next_rq == rq) {
++		bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
++		bfq_updated_next_req(bfqd, bfqq);
++	}
++
++	list_del_init(&rq->queuelist);
++	BUG_ON(bfqq->queued[sync] == 0);
++	bfqq->queued[sync]--;
++	bfqd->queued--;
++	elv_rb_del(&bfqq->sort_list, rq);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++			bfq_del_bfqq_busy(bfqd, bfqq, 1);
++		/*
++		 * Remove queue from request-position tree as it is empty.
++		 */
++		if (bfqq->pos_root != NULL) {
++			rb_erase(&bfqq->pos_node, bfqq->pos_root);
++			bfqq->pos_root = NULL;
++		}
++	}
++
++	if (rq->cmd_flags & REQ_META) {
++		BUG_ON(bfqq->meta_pending == 0);
++		bfqq->meta_pending--;
++	}
++}
++
++static int bfq_merge(struct request_queue *q, struct request **req,
++		     struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct request *__rq;
++
++	__rq = bfq_find_rq_fmerge(bfqd, bio);
++	if (__rq != NULL && elv_rq_merge_ok(__rq, bio)) {
++		*req = __rq;
++		return ELEVATOR_FRONT_MERGE;
++	}
++
++	return ELEVATOR_NO_MERGE;
++}
++
++static void bfq_merged_request(struct request_queue *q, struct request *req,
++			       int type)
++{
++	if (type == ELEVATOR_FRONT_MERGE &&
++	    rb_prev(&req->rb_node) &&
++	    blk_rq_pos(req) <
++	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
++				    struct request, rb_node))) {
++		struct bfq_queue *bfqq = RQ_BFQQ(req);
++		struct bfq_data *bfqd = bfqq->bfqd;
++		struct request *prev, *next_rq;
++
++		/* Reposition request in its sort_list */
++		elv_rb_del(&bfqq->sort_list, req);
++		elv_rb_add(&bfqq->sort_list, req);
++		/* Choose next request to be served for bfqq */
++		prev = bfqq->next_rq;
++		next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req,
++					 bfqd->last_position);
++		BUG_ON(next_rq == NULL);
++		bfqq->next_rq = next_rq;
++		/*
++		 * If next_rq changes, update both the queue's budget to
++		 * fit the new request and the queue's position in its
++		 * rq_pos_tree.
++		 */
++		if (prev != bfqq->next_rq) {
++			bfq_updated_next_req(bfqd, bfqq);
++			bfq_rq_pos_tree_add(bfqd, bfqq);
++		}
++	}
++}
++
++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
++				struct request *next)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	/*
++	 * Reposition in fifo if next is older than rq.
++	 */
++	if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++	    time_before(next->fifo_time, rq->fifo_time)) {
++		list_move(&rq->queuelist, &next->queuelist);
++		rq->fifo_time = next->fifo_time;
++	}
++
++	if (bfqq->next_rq == next)
++		bfqq->next_rq = rq;
++
++	bfq_remove_request(next);
++}
++
++/* Must be called with bfqq != NULL */
++static inline void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
++{
++	BUG_ON(bfqq == NULL);
++	if (bfq_bfqq_busy(bfqq))
++		bfqq->bfqd->wr_busy_queues--;
++	bfqq->wr_coeff = 1;
++	bfqq->wr_cur_max_time = 0;
++	/* Trigger a weight change on the next activation of the queue */
++	bfqq->entity.ioprio_changed = 1;
++}
++
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			if (bfqg->async_bfqq[i][j] != NULL)
++				bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
++	if (bfqg->async_idle_bfqq != NULL)
++		bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
++}
++
++static void bfq_end_wr(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq;
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
++		bfq_bfqq_end_wr(bfqq);
++	bfq_end_wr_async(bfqd);
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++			   struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	/*
++	 * Disallow merge of a sync bio into an async request.
++	 */
++	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++		return 0;
++
++	/*
++	 * Lookup the bfqq that this bio will be queued with. Allow
++	 * merge only if rq is queued there.
++	 * Queue lock is held here.
++	 */
++	bic = bfq_bic_lookup(bfqd, current->io_context);
++	if (bic == NULL)
++		return 0;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++				       struct bfq_queue *bfqq)
++{
++	if (bfqq != NULL) {
++		bfq_mark_bfqq_must_alloc(bfqq);
++		bfq_mark_bfqq_budget_new(bfqq);
++		bfq_clear_bfqq_fifo_expire(bfqq);
++
++		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "set_in_service_queue, cur-budget = %lu",
++			     bfqq->entity.budget);
++	}
++
++	bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
++						  struct bfq_queue *bfqq)
++{
++	if (!bfqq)
++		bfqq = bfq_get_next_queue(bfqd);
++	else
++		bfq_get_next_queue_forced(bfqd, bfqq);
++
++	__bfq_set_in_service_queue(bfqd, bfqq);
++	return bfqq;
++}
++
++static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
++					  struct request *rq)
++{
++	if (blk_rq_pos(rq) >= bfqd->last_position)
++		return blk_rq_pos(rq) - bfqd->last_position;
++	else
++		return bfqd->last_position - blk_rq_pos(rq);
++}
++
++/*
++ * Return true if bfqq has no request pending and rq is close enough to
++ * bfqd->last_position, or if rq is closer to bfqd->last_position than
++ * bfqq->next_rq
++ */
++static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++{
++	return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++}
++
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++{
++	struct rb_root *root = &bfqd->rq_pos_tree;
++	struct rb_node *parent, *node;
++	struct bfq_queue *__bfqq;
++	sector_t sector = bfqd->last_position;
++
++	if (RB_EMPTY_ROOT(root))
++		return NULL;
++
++	/*
++	 * First, if we find a request starting at the end of the last
++	 * request, choose it.
++	 */
++	__bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
++	if (__bfqq != NULL)
++		return __bfqq;
++
++	/*
++	 * If the exact sector wasn't found, the parent of the NULL leaf
++	 * will contain the closest sector (rq_pos_tree sorted by
++	 * next_request position).
++	 */
++	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++		return __bfqq;
++
++	if (blk_rq_pos(__bfqq->next_rq) < sector)
++		node = rb_next(&__bfqq->pos_node);
++	else
++		node = rb_prev(&__bfqq->pos_node);
++	if (node == NULL)
++		return NULL;
++
++	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
++	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++		return __bfqq;
++
++	return NULL;
++}
++
++/*
++ * bfqd - obvious
++ * cur_bfqq - passed in so that we don't decide that the current queue
++ *            is closely cooperating with itself.
++ *
++ * We are assuming that cur_bfqq has dispatched at least one request,
++ * and that bfqd->last_position reflects a position on the disk associated
++ * with the I/O issued by cur_bfqq.
++ */
++static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
++					      struct bfq_queue *cur_bfqq)
++{
++	struct bfq_queue *bfqq;
++
++	if (bfq_class_idle(cur_bfqq))
++		return NULL;
++	if (!bfq_bfqq_sync(cur_bfqq))
++		return NULL;
++	if (BFQQ_SEEKY(cur_bfqq))
++		return NULL;
++
++	/* If device has only one backlogged bfq_queue, don't search. */
++	if (bfqd->busy_queues == 1)
++		return NULL;
++
++	/*
++	 * We should notice if some of the queues are cooperating, e.g.
++	 * working closely on the same area of the disk. In that case,
++	 * we can group them together and not waste time idling.
++	 */
++	bfqq = bfqq_close(bfqd);
++	if (bfqq == NULL || bfqq == cur_bfqq)
++		return NULL;
++
++	/*
++	 * Do not merge queues from different bfq_groups.
++	 */
++	if (bfqq->entity.parent != cur_bfqq->entity.parent)
++		return NULL;
++
++	/*
++	 * It only makes sense to merge sync queues.
++	 */
++	if (!bfq_bfqq_sync(bfqq))
++		return NULL;
++	if (BFQQ_SEEKY(bfqq))
++		return NULL;
++
++	/*
++	 * Do not merge queues of different priority classes.
++	 */
++	if (bfq_class_rt(bfqq) != bfq_class_rt(cur_bfqq))
++		return NULL;
++
++	return bfqq;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget.
++ */
++static inline unsigned long bfq_max_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < 194)
++		return bfq_default_max_budget;
++	else
++		return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32).
++ */
++static inline unsigned long bfq_min_budget(struct bfq_data *bfqd)
++{
++	if (bfqd->budgets_assigned < 194)
++		return bfq_default_max_budget / 32;
++	else
++		return bfqd->bfq_max_budget / 32;
++}
++
++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	struct bfq_io_cq *bic;
++	unsigned long sl;
++
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Processes have exited, don't wait. */
++	bic = bfqd->in_service_bic;
++	if (bic == NULL || atomic_read(&bic->icq.ioc->active_ref) == 0)
++		return;
++
++	bfq_mark_bfqq_wait_request(bfqq);
++
++	/*
++	 * We don't want to idle for seeks, but we do want to allow
++	 * fair distribution of slice time for a process doing back-to-back
++	 * seeks. So allow a little bit of time for it to submit a new rq.
++	 *
++	 * To prevent processes with (partly) seeky workloads from
++	 * being too ill-treated, grant them a small fraction of the
++	 * assigned budget before reducing the waiting time to
++	 * BFQ_MIN_TT. This happened to help reduce latency.
++	 */
++	sl = bfqd->bfq_slice_idle;
++	/*
++	 * Unless the queue is being weight-raised, grant only minimum idle
++	 * time if the queue either has been seeky for long enough or has
++	 * already proved to be constantly seeky.
++	 */
++	if (bfq_sample_valid(bfqq->seek_samples) &&
++	    ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
++				  bfq_max_budget(bfqq->bfqd) / 8) ||
++	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1)
++		sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
++	else if (bfqq->wr_coeff > 1)
++		sl = sl * 3;
++	bfqd->last_idling_start = ktime_get();
++	mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
++	bfq_log(bfqd, "arm idle: %u/%u ms",
++		jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the disk
++ * throughput (always guaranteed with a time slice scheme as in CFQ).
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfqd->in_service_queue;
++	unsigned int timeout_coeff;
++	if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++		timeout_coeff = 1;
++	else
++		timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++
++	bfqd->last_budget_start = ktime_get();
++
++	bfq_clear_bfqq_budget_new(bfqq);
++	bfqq->budget_timeout = jiffies +
++		bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++
++	bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++		jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
++		timeout_coeff));
++}
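++
++/*
++ * Editor's sketch (hypothetical values): a queue whose weight has been
++ * raised to twice its original weight gets timeout_coeff = 2, i.e. two
++ * full bfq_timeout periods before its budget timeout fires, whereas a
++ * soft real-time weight-raised queue keeps timeout_coeff = 1, so its
++ * latency guarantees are not diluted.
++ */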
++
++/*
++ * Move request from internal lists to the request queue dispatch list.
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	/*
++	 * For consistency, the next instruction should have been executed
++	 * after removing the request from the queue and dispatching it.
++	 * We instead execute this instruction before bfq_remove_request()
++	 * (and hence introduce a temporary inconsistency), for efficiency.
++	 * In fact, in a forced_dispatch, this prevents two counters related
++	 * to bfqq->dispatched from being uselessly decremented if bfqq
++	 * is not in service, and then incremented again after
++	 * incrementing bfqq->dispatched.
++	 */
++	bfqq->dispatched++;
++	bfq_remove_request(rq);
++	elv_dispatch_sort(q, rq);
++
++	if (bfq_bfqq_sync(bfqq))
++		bfqd->sync_flight++;
++}
++
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++{
++	struct request *rq = NULL;
++
++	if (bfq_bfqq_fifo_expire(bfqq))
++		return NULL;
++
++	bfq_mark_bfqq_fifo_expire(bfqq);
++
++	if (list_empty(&bfqq->fifo))
++		return NULL;
++
++	rq = rq_entry_fifo(bfqq->fifo.next);
++
++	if (time_before(jiffies, rq->fifo_time))
++		return NULL;
++
++	return rq;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++	int process_refs, io_refs;
++
++	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	BUG_ON(process_refs < 0);
++	return process_refs;
++}
++
++static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	int process_refs, new_process_refs;
++	struct bfq_queue *__bfqq;
++
++	/*
++	 * If there are no process references on the new_bfqq, then it is
++	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++	 * may have dropped their last reference (not just their last process
++	 * reference).
++	 */
++	if (!bfqq_process_refs(new_bfqq))
++		return;
++
++	/* Avoid a circular list and skip interim queue merges. */
++	while ((__bfqq = new_bfqq->new_bfqq)) {
++		if (__bfqq == bfqq)
++			return;
++		new_bfqq = __bfqq;
++	}
++
++	process_refs = bfqq_process_refs(bfqq);
++	new_process_refs = bfqq_process_refs(new_bfqq);
++	/*
++	 * If the process for the bfqq has gone away, there is no
++	 * sense in merging the queues.
++	 */
++	if (process_refs == 0 || new_process_refs == 0)
++		return;
++
++	/*
++	 * Merge in the direction of the lesser amount of work.
++	 */
++	if (new_process_refs >= process_refs) {
++		bfqq->new_bfqq = new_bfqq;
++		atomic_add(process_refs, &new_bfqq->ref);
++	} else {
++		new_bfqq->new_bfqq = bfqq;
++		atomic_add(new_process_refs, &bfqq->ref);
++	}
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++		new_bfqq->pid);
++}
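++
++/*
++ * Editor's sketch of the merge direction above (hypothetical numbers):
++ * with bfqq_process_refs(bfqq) = 1 and bfqq_process_refs(new_bfqq) = 3,
++ * the first branch is taken: bfqq->new_bfqq is set to new_bfqq and one
++ * reference is added to new_bfqq, so the queue with fewer process
++ * references is redirected to the busier one and less state has to
++ * migrate.
++ */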
++
++static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	return entity->budget - entity->service;
++}
++
++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	__bfq_bfqd_reset_in_service(bfqd);
++
++	/*
++	 * If this bfqq is shared between multiple processes, check
++	 * to make sure that those processes are still issuing I/Os
++	 * within the mean seek distance. If not, it may be time to
++	 * break the queues apart again.
++	 */
++	if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
++		bfq_mark_bfqq_split_coop(bfqq);
++
++	if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * Overloading budget_timeout field to store the time
++		 * at which the queue was left with no backlog; used by
++		 * the weight-raising mechanism.
++		 */
++		bfqq->budget_timeout = jiffies;
++		bfq_del_bfqq_busy(bfqd, bfqq, 1);
++	} else {
++		bfq_activate_bfqq(bfqd, bfqq);
++		/*
++		 * Resort priority tree of potential close cooperators.
++		 */
++		bfq_rq_pos_tree_add(bfqd, bfqq);
++	}
++}
++
++/**
++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
++ * @bfqd: device data.
++ * @bfqq: queue to update.
++ * @reason: reason for expiration.
++ *
++ * Handle the feedback on @bfqq budget.  See the body for detailed
++ * comments.
++ */
++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
++				     struct bfq_queue *bfqq,
++				     enum bfqq_expiration reason)
++{
++	struct request *next_rq;
++	unsigned long budget, min_budget;
++
++	budget = bfqq->max_budget;
++	min_budget = bfq_min_budget(bfqd);
++
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %lu, budg left %lu",
++		bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %lu, min budg %lu",
++		budget, bfq_min_budget(bfqd));
++	bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
++		bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
++
++	if (bfq_bfqq_sync(bfqq)) {
++		switch (reason) {
++		/*
++		 * Caveat: in all the following cases we trade latency
++		 * for throughput.
++		 */
++		case BFQ_BFQQ_TOO_IDLE:
++			/*
++			 * This is the only case where we may reduce
++			 * the budget: if there is no request of the
++			 * process still waiting for completion, then
++			 * we assume (tentatively) that the timer has
++			 * expired because the batch of requests of
++			 * the process could have been served with a
++			 * smaller budget.  Hence, betting that the
++			 * process will behave in the same way when it
++			 * becomes backlogged again, we reduce its
++			 * next budget.  As long as we guess right,
++			 * this budget cut reduces the latency
++			 * experienced by the process.
++			 *
++			 * However, if there are still outstanding
++			 * requests, then the process may have not yet
++			 * issued its next request just because it is
++			 * still waiting for the completion of some of
++			 * the still outstanding ones.  So in this
++			 * subcase we do not reduce its budget, on the
++			 * contrary we increase it to possibly boost
++			 * the throughput, as discussed in the
++			 * comments to the BUDGET_TIMEOUT case.
++			 */
++			if (bfqq->dispatched > 0) /* still outstanding reqs */
++				budget = min(budget * 2, bfqd->bfq_max_budget);
++			else {
++				if (budget > 5 * min_budget)
++					budget -= 4 * min_budget;
++				else
++					budget = min_budget;
++			}
++			break;
++		case BFQ_BFQQ_BUDGET_TIMEOUT:
++			/*
++			 * We double the budget here because: 1) it
++			 * gives the chance to boost the throughput if
++			 * this is not a seeky process (which may have
++			 * bumped into this timeout because of, e.g.,
++			 * ZBR), 2) together with charge_full_budget
++			 * it helps give seeky processes higher
++			 * timestamps, and hence be served less
++			 * frequently.
++			 */
++			budget = min(budget * 2, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_BUDGET_EXHAUSTED:
++			/*
++			 * The process still has backlog, and did not
++			 * let either the budget timeout or the disk
++			 * idling timeout expire. Hence it is not
++			 * seeky, has a short thinktime and may be
++			 * happy with a higher budget too. So
++			 * definitely increase the budget of this good
++			 * candidate to boost the disk throughput.
++			 */
++			budget = min(budget * 4, bfqd->bfq_max_budget);
++			break;
++		case BFQ_BFQQ_NO_MORE_REQUESTS:
++		       /*
++			* Leave the budget unchanged.
++			*/
++		default:
++			return;
++		}
++	} else /* async queue */
++	    /* async queues always get the maximum possible budget
++	     * (their ability to dispatch is limited by
++	     * @bfqd->bfq_max_budget_async_rq).
++	     */
++		budget = bfqd->bfq_max_budget;
++
++	bfqq->max_budget = budget;
++
++	if (bfqd->budgets_assigned >= 194 && bfqd->bfq_user_max_budget == 0 &&
++	    bfqq->max_budget > bfqd->bfq_max_budget)
++		bfqq->max_budget = bfqd->bfq_max_budget;
++
++	/*
++	 * Make sure that we have enough budget for the next request.
++	 * Since the finish time of the bfqq must be kept in sync with
++	 * the budget, be sure to call __bfq_bfqq_expire() after the
++	 * update.
++	 */
++	next_rq = bfqq->next_rq;
++	if (next_rq != NULL)
++		bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
++					    bfq_serv_to_charge(next_rq, bfqq));
++	else
++		bfqq->entity.budget = bfqq->max_budget;
++
++	bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %lu",
++			next_rq != NULL ? blk_rq_sectors(next_rq) : 0,
++			bfqq->entity.budget);
++}
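++
++/*
++ * Worked example of the feedback rules above (hypothetical numbers,
++ * for illustration only): assume min_budget = 16, bfq_max_budget =
++ * 1024 and a current max_budget of 256, all in sectors.
++ * - TOO_IDLE with no request in flight: 256 > 5 * 16, so the new
++ *   budget is 256 - 4 * 16 = 192 (latency is favored).
++ * - TOO_IDLE with requests in flight, or BUDGET_TIMEOUT:
++ *   min(2 * 256, 1024) = 512 (throughput is favored).
++ * - BUDGET_EXHAUSTED: min(4 * 256, 1024) = 1024.
++ * - NO_MORE_REQUESTS: the budget is left unchanged.
++ */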
++
++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
++{
++	unsigned long max_budget;
++
++	/*
++	 * The max_budget calculated when autotuning is equal to the
++	 * number of sectors transferred in timeout_sync at the
++	 * estimated peak rate.
++	 */
++	max_budget = (unsigned long)(peak_rate * 1000 *
++				     timeout >> BFQ_RATE_SHIFT);
++
++	return max_budget;
++}
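++
++/*
++ * Dimensional check of the formula above: peak_rate is stored in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT fixed-point bits,
++ * while timeout is in milliseconds. The factor of 1000 converts the
++ * timeout to microseconds, so
++ *
++ *   (sectors/usec << BFQ_RATE_SHIFT) * usec >> BFQ_RATE_SHIFT
++ *
++ * yields plain sectors, the unit in which budgets are expressed.
++ */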
++
++/*
++ * In addition to updating the peak rate, checks whether the process
++ * is "slow", and returns 1 if so. This slow flag is used, in addition
++ * to the budget timeout, to reduce the amount of service provided to
++ * seeky processes, and hence limit their ability to lower the
++ * throughput. See the code for more details.
++ */
++static int bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				int compensate, enum bfqq_expiration reason)
++{
++	u64 bw, usecs, expected, timeout;
++	ktime_t delta;
++	int update = 0;
++
++	if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++		return 0;
++
++	if (compensate)
++		delta = bfqd->last_idling_start;
++	else
++		delta = ktime_get();
++	delta = ktime_sub(delta, bfqd->last_budget_start);
++	usecs = ktime_to_us(delta);
++
++	/* Don't trust short/unrealistic values. */
++	if (usecs < 100 || usecs >= LONG_MAX)
++		return 0;
++
++	/*
++	 * Calculate the bandwidth for the last slice.  We use a 64 bit
++	 * value to store the peak rate, in sectors per usec in fixed
++	 * point math.  We do so to have enough precision in the estimate
++	 * and to avoid overflows.
++	 */
++	bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
++	do_div(bw, (unsigned long)usecs);
++
++	timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	/*
++	 * Use only long (> 20ms) intervals to filter out spikes for
++	 * the peak rate estimation.
++	 */
++	if (usecs > 20000) {
++		if (bw > bfqd->peak_rate ||
++		   (!BFQQ_SEEKY(bfqq) &&
++		    reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
++			bfq_log(bfqd, "measured bw = %llu", bw);
++			/*
++			 * To smooth oscillations use a low-pass filter with
++			 * alpha=7/8, i.e.,
++			 * new_rate = (7/8) * old_rate + (1/8) * bw
++			 */
++			do_div(bw, 8);
++			if (bw == 0)
++				return 0;
++			bfqd->peak_rate *= 7;
++			do_div(bfqd->peak_rate, 8);
++			bfqd->peak_rate += bw;
++			update = 1;
++			bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
++		}
++
++		update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
++
++		if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
++			bfqd->peak_rate_samples++;
++
++		if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
++		    update) {
++			int dev_type = blk_queue_nonrot(bfqd->queue);
++			if (bfqd->bfq_user_max_budget == 0) {
++				bfqd->bfq_max_budget =
++					bfq_calc_max_budget(bfqd->peak_rate,
++							    timeout);
++				bfq_log(bfqd, "new max_budget=%lu",
++					bfqd->bfq_max_budget);
++			}
++			if (bfqd->device_speed == BFQ_BFQD_FAST &&
++			    bfqd->peak_rate < device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_SLOW;
++				bfqd->RT_prod = R_slow[dev_type] *
++						T_slow[dev_type];
++			} else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++			    bfqd->peak_rate > device_speed_thresh[dev_type]) {
++				bfqd->device_speed = BFQ_BFQD_FAST;
++				bfqd->RT_prod = R_fast[dev_type] *
++						T_fast[dev_type];
++			}
++		}
++	}
++
++	/*
++	 * If the process has been served for too short a time
++	 * interval to let its possible sequential accesses prevail over
++	 * the initial seek time needed to move the disk head on the
++	 * first sector it requested, then give the process a chance
++	 * and for the moment return false.
++	 */
++	if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
++		return 0;
++
++	/*
++	 * A process is considered ``slow'' (i.e., seeky, so that we
++	 * cannot treat it fairly in the service domain, as it would
++	 * slow down the other processes too much) if, when a slice
++	 * ends for whatever reason, it has received service at a
++	 * rate that would not be high enough to complete the budget
++	 * before the budget timeout expiration.
++	 */
++	expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++
++	/*
++	 * Caveat: processes doing IO in the slower disk zones will
++	 * tend to be slow(er) even if not seeky. And the estimated
++	 * peak rate will actually be an average over the disk
++	 * surface. Hence, to avoid being too harsh with unlucky processes,
++	 * we keep a budget/3 margin of safety before declaring a
++	 * process slow.
++	 */
++	return expected > (4 * bfqq->entity.budget) / 3;
++}
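++
++/*
++ * Numeric illustration of the low-pass filter used above
++ * (hypothetical values): with alpha = 7/8 the update is
++ *
++ *   new_rate = (7 * old_rate) / 8 + sample / 8
++ *
++ * so with old_rate = 800 and a measured sample of 1600, in the same
++ * fixed-point units, the new estimate is 700 + 200 = 900: each sample
++ * moves the estimate only 1/8 of the way toward the measurement,
++ * which smooths out transient bandwidth spikes.
++ */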
++
++/*
++ * To be deemed as soft real-time, an application must meet two
++ * requirements. First, the application must not require an average
++ * bandwidth higher than the approximate bandwidth required to play back or
++ * record a compressed high-definition video.
++ * The next function is invoked on the completion of the last request of a
++ * batch, to compute the next-start time instant, soft_rt_next_start, such
++ * that, if the next request of the application does not arrive before
++ * soft_rt_next_start, then the above requirement on the bandwidth is met.
++ *
++ * The second requirement is that the request pattern of the application is
++ * isochronous, i.e., that, after issuing a request or a batch of requests,
++ * the application stops issuing new requests until all its pending requests
++ * have been completed. After that, the application may issue a new batch,
++ * and so on.
++ * For this reason the next function is invoked to compute
++ * soft_rt_next_start only for applications that meet this requirement,
++ * whereas soft_rt_next_start is set to infinity for applications that do
++ * not.
++ *
++ * Unfortunately, even a greedy application may happen to behave in an
++ * isochronous way if the CPU load is high. In fact, the application may
++ * stop issuing requests while the CPUs are busy serving other processes,
++ * then restart, then stop again for a while, and so on. In addition, if
++ * the disk achieves a low enough throughput with the request pattern
++ * issued by the application (e.g., because the request pattern is random
++ * and/or the device is slow), then the application may meet the above
++ * bandwidth requirement too. To prevent such a greedy application from
++ * being deemed soft real-time, a further rule is used in the computation of
++ * soft_rt_next_start: soft_rt_next_start must be higher than the current
++ * time plus the maximum time for which the arrival of a request is waited
++ * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle.
++ * This filters out greedy applications, as the latter issue instead their
++ * next request as soon as possible after the last one has been completed
++ * (in contrast, when a batch of requests is completed, a soft real-time
++ * application spends some time processing data).
++ *
++ * Unfortunately, the last filter may easily generate false positives if
++ * only bfqd->bfq_slice_idle is used as a reference time interval and one
++ * or both of the following cases occur:
++ * 1) HZ is so low that the duration of a jiffy is comparable to or higher
++ *    than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with
++ *    HZ=100.
++ * 2) jiffies, instead of increasing at a constant rate, may stop increasing
++ *    for a while, then suddenly 'jump' by several units to recover the lost
++ *    increments. This seems to happen, e.g., inside virtual machines.
++ * To address this issue, we do not use as a reference time interval just
++ * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In
++ * particular we add the minimum number of jiffies for which the filter
++ * seems to be quite precise even on embedded systems and in KVM/QEMU virtual
++ * machines.
++ */
++static inline unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
++						       struct bfq_queue *bfqq)
++{
++	return max(bfqq->last_idle_bklogged +
++		   HZ * bfqq->service_from_backlogged /
++		   bfqd->bfq_wr_max_softrt_rate,
++		   jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++}
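++
++/*
++ * Worked example for the formula above (hypothetical numbers): with
++ * HZ = 250, bfq_wr_max_softrt_rate = 7000 sectors/sec and
++ * service_from_backlogged = 1400 sectors, the bandwidth term allows
++ * the next batch no earlier than last_idle_bklogged + 50 jiffies
++ * (200 ms, the minimum spacing that keeps the rate within the
++ * threshold), while the second term enforces at least one idle slice,
++ * plus the 4-jiffy guard discussed above, from now.
++ */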
++
++/*
++ * Return the largest-possible time instant such that, for as long as possible,
++ * the current time will be lower than this time instant according to the macro
++ * time_is_before_jiffies().
++ */
++static inline unsigned long bfq_infinity_from_now(unsigned long now)
++{
++	return now + ULONG_MAX / 2;
++}
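++
++/*
++ * Rationale for ULONG_MAX/2: jiffies comparisons use wrap-around
++ * arithmetic, so an instant more than ULONG_MAX/2 ahead of now would
++ * already compare as being in the past; now + ULONG_MAX/2 is thus the
++ * farthest representable future instant.
++ */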
++
++/**
++ * bfq_bfqq_expire - expire a queue.
++ * @bfqd: device owning the queue.
++ * @bfqq: the queue to expire.
++ * @compensate: if true, compensate for the time spent idling.
++ * @reason: the reason causing the expiration.
++ *
++ * If the process associated to the queue is slow (i.e., seeky), or in
++ * case of budget timeout, or, finally, if it is async, we
++ * artificially charge it an entire budget (independently of the
++ * actual service it received). As a consequence, the queue will get
++ * higher timestamps than the correct ones upon reactivation, and
++ * hence it will be rescheduled as if it had received more service
++ * than what it actually received. In the end, this class of processes
++ * will receive less service in proportion to how slowly they consume
++ * their budgets (and hence how seriously they tend to lower the
++ * throughput).
++ *
++ * In contrast, when a queue expires because it has been idling for
++ * too long or because it exhausted its budget, we do not touch the
++ * amount of service it has received. Hence when the queue is
++ * reactivated and its timestamps are updated, the latter will be in sync
++ * with the actual service received by the queue until expiration.
++ *
++ * Charging a full budget to the first type of queues and the exact
++ * service to the others has the effect of using the WF2Q+ policy to
++ * schedule the former on a timeslice basis, without violating the
++ * service domain guarantees of the latter.
++ */
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++			    struct bfq_queue *bfqq,
++			    int compensate,
++			    enum bfqq_expiration reason)
++{
++	int slow;
++	BUG_ON(bfqq != bfqd->in_service_queue);
++
++	/* Update disk peak rate for autotuning and check whether the
++	 * process is slow (see bfq_update_peak_rate).
++	 */
++	slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++
++	/*
++	 * As explained above, 'punish' slow (i.e., seeky), timed-out
++	 * and async queues, to favor sequential sync workloads.
++	 *
++	 * Processes doing I/O in the slower disk zones will tend to be
++	 * slow(er) even if not seeky. Hence, since the estimated peak
++	 * rate is actually an average over the disk surface, these
++	 * processes may time out just out of bad luck. To avoid punishing
++	 * them we do not charge a full budget to a process that
++	 * succeeded in consuming at least 2/3 of its budget.
++	 */
++	if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++		     bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3))
++		bfq_bfqq_charge_full_budget(bfqq);
++
++	bfqq->service_from_backlogged += bfqq->entity.service;
++
++	if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++	    !bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_mark_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue))
++			bfqd->const_seeky_busy_in_flight_queues++;
++	}
++
++	if (reason == BFQ_BFQQ_TOO_IDLE &&
++	    bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++		bfq_clear_bfqq_IO_bound(bfqq);
++
++	if (bfqd->low_latency && bfqq->wr_coeff == 1)
++		bfqq->last_wr_start_finish = jiffies;
++
++	if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list)) {
++		/*
++		 * If we get here, and there are no outstanding requests,
++		 * then the request pattern is isochronous (see the comments
++		 * to the function bfq_bfqq_softrt_next_start()). Hence we
++		 * can compute soft_rt_next_start. If, instead, the queue
++		 * still has outstanding requests, then we have to wait
++		 * for the completion of all the outstanding requests to
++		 * discover whether the request pattern is actually
++		 * isochronous.
++		 */
++		if (bfqq->dispatched == 0)
++			bfqq->soft_rt_next_start =
++				bfq_bfqq_softrt_next_start(bfqd, bfqq);
++		else {
++			/*
++			 * The application is still waiting for the
++			 * completion of one or more requests:
++			 * prevent it from possibly being incorrectly
++			 * deemed as soft real-time by setting its
++			 * soft_rt_next_start to infinity. In fact,
++			 * without this assignment, the application
++			 * would be incorrectly deemed as soft
++			 * real-time if:
++			 * 1) it issued a new request before the
++			 *    completion of all its in-flight
++			 *    requests, and
++			 * 2) at that time, its soft_rt_next_start
++			 *    happened to be in the past.
++			 */
++			bfqq->soft_rt_next_start =
++				bfq_infinity_from_now(jiffies);
++			/*
++			 * Schedule an update of soft_rt_next_start to when
++			 * the task may be discovered to be isochronous.
++			 */
++			bfq_mark_bfqq_softrt_update(bfqq);
++		}
++	}
++
++	bfq_log_bfqq(bfqd, bfqq,
++		"expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
++		slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++
++	/*
++	 * Increase, decrease or leave budget unchanged according to
++	 * reason.
++	 */
++	__bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++	__bfq_bfqq_expire(bfqd, bfqq);
++}
++
++/*
++ * Budget timeout is not implemented through a dedicated timer, but
++ * just checked on request arrivals and completions, as well as on
++ * idle timer expirations.
++ */
++static int bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_budget_new(bfqq) ||
++	    time_before(jiffies, bfqq->budget_timeout))
++		return 0;
++	return 1;
++}
++
++/*
++ * If we expire a queue that is waiting for the arrival of a new
++ * request, we may prevent the fictitious timestamp back-shifting that
++ * allows the guarantees of the queue to be preserved (see [1] for
++ * this tricky aspect). Hence we return true only if this condition
++ * does not hold, or if the queue is slow enough to deserve only to be
++ * kicked off for preserving a high throughput.
++ */
++static inline int bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq,
++		"may_budget_timeout: wait_request %d left %d timeout %d",
++		bfq_bfqq_wait_request(bfqq),
++		bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3,
++		bfq_bfqq_budget_timeout(bfqq));
++
++	return (!bfq_bfqq_wait_request(bfqq) ||
++		bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3)
++		&&
++		bfq_bfqq_budget_timeout(bfqq);
++}
++
++/*
++ * Device idling is allowed only for the queues for which this function
++ * returns true. For this reason, the return value of this function plays a
++ * critical role for both throughput boosting and service guarantees. The
++ * return value is computed through a logical expression. In this rather
++ * long comment, we try to briefly describe all the details and motivations
++ * behind the components of this logical expression.
++ *
++ * First, the expression is false if bfqq is not sync, or if bfqq happened
++ * to become active during a large burst of queue activations, and the
++ * pattern of requests bfqq contains boosts the throughput if bfqq is
++ * expired. In fact, queues that became active during a large burst benefit
++ * only from throughput, as discussed in the comments to bfq_handle_burst.
++ * In this respect, expiring bfqq certainly boosts the throughput on NCQ-
++ * capable flash-based devices, whereas, on rotational devices, it boosts
++ * the throughput only if bfqq contains random requests.
++ *
++ * At the opposite end, if (a) bfqq is sync, (b) the above burst-related
++ * condition does not hold, and (c) bfqq is being weight-raised, then the
++ * expression always evaluates to true, as device idling is instrumental
++ * in preserving low-latency guarantees (see [1]). If, instead, conditions
++ * (a) and (b) do hold, but (c) does not, then the expression evaluates to
++ * true only if: (1) bfqq is I/O-bound and has a non-null idle window, and
++ * (2) at least one of the following two conditions holds.
++ * The first condition is that the device is not performing NCQ, because
++ * idling the device most certainly boosts the throughput if this condition
++ * holds and bfqq is I/O-bound and has been granted a non-null idle window.
++ * The second compound condition is made of the logical AND of two components.
++ *
++ * The first component is true only if there is no weight-raised busy
++ * queue. This guarantees that the device is not idled for a sync non-
++ * weight-raised queue when there are busy weight-raised queues. The former
++ * is then expired immediately if empty. Combined with the timestamping
++ * rules of BFQ (see [1] for details), this causes sync non-weight-raised
++ * queues to get a lower number of requests served, and hence to ask for a
++ * lower number of requests from the request pool, before the busy weight-
++ * raised queues get served again.
++ *
++ * This is beneficial for the processes associated with weight-raised
++ * queues, when the request pool is saturated (e.g., in the presence of
++ * write hogs). In fact, if the processes associated with the other queues
++ * ask for requests at a lower rate, then weight-raised processes have a
++ * higher probability to get a request from the pool immediately (or at
++ * least soon) when they need one. Hence they have a higher probability to
++ * actually get a fraction of the disk throughput proportional to their
++ * high weight. This is especially true with NCQ-capable drives, which
++ * enqueue several requests in advance and further reorder internally-
++ * queued requests.
++ *
++ * In the end, mistreating non-weight-raised queues when there are busy
++ * weight-raised queues seems to mitigate starvation problems in the
++ * presence of heavy write workloads and NCQ, and hence to guarantee a
++ * higher application and system responsiveness in these hostile scenarios.
++ *
++ * If the first component of the compound condition is instead true, i.e.,
++ * there is no weight-raised busy queue, then the second component of the
++ * compound condition takes into account service-guarantee and throughput
++ * issues related to NCQ (recall that the compound condition is evaluated
++ * only if the device is detected as supporting NCQ).
++ *
++ * As for service guarantees, allowing the drive to enqueue more than one
++ * request at a time, and hence delegating de facto final scheduling
++ * decisions to the drive's internal scheduler, causes loss of control on
++ * the actual request service order. In this respect, when the drive is
++ * allowed to enqueue more than one request at a time, the service
++ * distribution enforced by the drive's internal scheduler is likely to
++ * coincide with the desired device-throughput distribution only in the
++ * following, perfectly symmetric, scenario:
++ * 1) all active queues have the same weight,
++ * 2) all active groups at the same level in the groups tree have the same
++ *    weight,
++ * 3) all active groups at the same level in the groups tree have the same
++ *    number of children.
++ *
++ * Even in such a scenario, sequential I/O may still receive a preferential
++ * treatment, but this is not likely to be a big issue with flash-based
++ * devices, because of their non-dramatic loss of throughput with random
++ * I/O. Things do differ with HDDs, for which additional care is taken, as
++ * explained after completing the discussion for flash-based devices.
++ *
++ * Unfortunately, keeping the necessary state for evaluating exactly the
++ * above symmetry conditions would be quite complex and time-consuming.
++ * Therefore BFQ evaluates instead the following stronger sub-conditions,
++ * for which it is much easier to maintain the needed state:
++ * 1) all active queues have the same weight,
++ * 2) all active groups have the same weight,
++ * 3) all active groups have at most one active child each.
++ * In particular, the last two conditions are always true if hierarchical
++ * support and the cgroups interface are not enabled, hence no state needs
++ * to be maintained in this case.
++ *
++ * According to the above considerations, the second component of the
++ * compound condition evaluates to true if any of the above symmetry
++ * sub-conditions does not hold, or the device is not flash-based. Therefore,
++ * if also the first component is true, then idling is allowed for a sync
++ * queue. These are the only sub-conditions considered if the device is
++ * flash-based, as, for such a device, it is sensible to force idling only
++ * for service-guarantee issues. In fact, as for throughput, idling
++ * NCQ-capable flash-based devices would not boost the throughput even
++ * with sequential I/O; rather it would lower the throughput in proportion
++ * to how fast the device is. In the end, (only) if all three
++ * sub-conditions hold and the device is flash-based, the compound
++ * condition evaluates to false and therefore no idling is performed.
++ *
++ * As already said, things change with a rotational device, where idling
++ * boosts the throughput with sequential I/O (even with NCQ). Hence, for
++ * such a device the second component of the compound condition evaluates
++ * to true also if the following additional sub-condition does not hold:
++ * the queue is constantly seeky. Unfortunately, this different behavior
++ * with respect to flash-based devices causes an additional asymmetry: if
++ * some sync queues enjoy idling and some other sync queues do not, then
++ * the latter get a low share of the device throughput, simply because the
++ * former get many requests served after being set as in service, whereas
++ * the latter do not. As a consequence, to guarantee the desired throughput
++ * distribution, on HDDs the compound expression evaluates to true (and
++ * hence device idling is performed) also if the following last symmetry
++ * condition does not hold: no other queue is benefiting from idling. Also
++ * this last condition is actually replaced with a simpler-to-maintain and
++ * stronger condition: there is no busy queue which is not constantly seeky
++ * (and hence may also benefit from idling).
++ *
++ * To sum up, when all the required symmetry and throughput-boosting
++ * sub-conditions hold, the second component of the compound condition
++ * evaluates to false, and hence no idling is performed. This helps to
++ * keep the drives' internal queues full on NCQ-capable devices, and hence
++ * to boost the throughput, without causing 'almost' any loss of service
++ * guarantees. The 'almost' follows from the fact that, if the internal
++ * queue of one such device is filled while all the sub-conditions hold,
++ * but at some point in time some sub-condition ceases to hold, then it may
++ * become impossible to let requests be served in the new desired order
++ * until all the requests already queued in the device have been served.
++ */
++static inline bool bfq_bfqq_must_not_expire(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++#ifdef CONFIG_CGROUP_BFQIO
++#define symmetric_scenario	  (!bfqd->active_numerous_groups && \
++				   !bfq_differentiated_weights(bfqd))
++#else
++#define symmetric_scenario	  (!bfq_differentiated_weights(bfqd))
++#endif
++#define cond_for_seeky_on_ncq_hdd (bfq_bfqq_constantly_seeky(bfqq) && \
++				   bfqd->busy_in_flight_queues == \
++				   bfqd->const_seeky_busy_in_flight_queues)
++
++#define cond_for_expiring_in_burst	(bfq_bfqq_in_large_burst(bfqq) && \
++					 bfqd->hw_tag && \
++					 (blk_queue_nonrot(bfqd->queue) || \
++					  bfq_bfqq_constantly_seeky(bfqq)))
++
++/*
++ * Condition for expiring a non-weight-raised queue (and hence not idling
++ * the device).
++ */
++#define cond_for_expiring_non_wr  (bfqd->hw_tag && \
++				   (bfqd->wr_busy_queues > 0 || \
++				    (symmetric_scenario && \
++				     (blk_queue_nonrot(bfqd->queue) || \
++				      cond_for_seeky_on_ncq_hdd))))
++
++	return bfq_bfqq_sync(bfqq) &&
++		!cond_for_expiring_in_burst &&
++		(bfqq->wr_coeff > 1 ||
++		 (bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_idle_window(bfqq) &&
++		  !cond_for_expiring_non_wr)
++	);
++}
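++
++/*
++ * Compact restatement of the condition computed above (illustration
++ * only; see the long comment for the full rationale):
++ *
++ *   no_expire = sync &&
++ *     !(in_large_burst && hw_tag && (nonrot || constantly_seeky)) &&
++ *     (weight_raised ||
++ *      (IO_bound && idle_window &&
++ *       !(hw_tag && (wr_busy_queues > 0 ||
++ *                    (symmetric && (nonrot || seeky_on_ncq_hdd))))))
++ *
++ * For example, a weight-raised sync queue outside a large burst is
++ * always allowed to idle, whereas a non-weight-raised sync queue on
++ * an NCQ-capable SSD in a perfectly symmetric scenario is not.
++ */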
++
++/*
++ * If the in-service queue is empty but sync, and the function
++ * bfq_bfqq_must_not_expire returns true, then:
++ * 1) the queue must remain in service and cannot be expired, and
++ * 2) the disk must be idled to wait for the possible arrival of a new
++ *    request for the queue.
++ * See the comments to the function bfq_bfqq_must_not_expire for the reasons
++ * why performing device idling is the best choice to boost the throughput
++ * and preserve service guarantees when bfq_bfqq_must_not_expire itself
++ * returns true.
++ */
++static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	return RB_EMPTY_ROOT(&bfqq->sort_list) && bfqd->bfq_slice_idle != 0 &&
++	       bfq_bfqq_must_not_expire(bfqq);
++}
++
++/*
++ * Select a queue for service.  If we have a current queue in service,
++ * check whether to continue servicing it, or retrieve and set a new one.
++ */
++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq, *new_bfqq = NULL;
++	struct request *next_rq;
++	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++
++	bfqq = bfqd->in_service_queue;
++	if (bfqq == NULL)
++		goto new_queue;
++
++	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
++
++	/*
++	 * If another queue has a request waiting within our mean seek
++	 * distance, let it run. The expire code will check for close
++	 * cooperators and put the close queue at the front of the
++	 * service tree. If possible, merge the expiring queue with the
++	 * new bfqq.
++	 */
++	new_bfqq = bfq_close_cooperator(bfqd, bfqq);
++	if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
++		bfq_setup_merge(bfqq, new_bfqq);
++
++	if (bfq_may_expire_for_budg_timeout(bfqq) &&
++	    !timer_pending(&bfqd->idle_slice_timer) &&
++	    !bfq_bfqq_must_idle(bfqq))
++		goto expire;
++
++	next_rq = bfqq->next_rq;
++	/*
++	 * If bfqq has requests queued and it has enough budget left to
++	 * serve them, keep the queue, otherwise expire it.
++	 */
++	if (next_rq != NULL) {
++		if (bfq_serv_to_charge(next_rq, bfqq) >
++			bfq_bfqq_budget_left(bfqq)) {
++			reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
++			goto expire;
++		} else {
++			/*
++			 * The idle timer may be pending because we may
++			 * not disable disk idling even when a new request
++			 * arrives.
++			 */
++			if (timer_pending(&bfqd->idle_slice_timer)) {
++				/*
++				 * If we get here: 1) at least one new request
++				 * has arrived but we have not disabled the
++				 * timer because the request was too small,
++				 * 2) then the block layer has unplugged
++				 * the device, causing the dispatch to be
++				 * invoked.
++				 *
++				 * Since the device is unplugged, now the
++				 * requests are probably large enough to
++				 * provide a reasonable throughput.
++				 * So we disable idling.
++				 */
++				bfq_clear_bfqq_wait_request(bfqq);
++				del_timer(&bfqd->idle_slice_timer);
++			}
++			if (new_bfqq == NULL)
++				goto keep_queue;
++			else
++				goto expire;
++		}
++	}
++
++	/*
++	 * No requests pending.  If the in-service queue still has requests
++	 * in flight (possibly waiting for a completion) or is idling for a
++	 * new request, then keep it.
++	 */
++	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
++	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++		bfqq = NULL;
++		goto keep_queue;
++	} else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
++		/*
++		 * We are expiring the queue because there is a close
++		 * cooperator; cancel the idle timer.
++		 */
++		bfq_clear_bfqq_wait_request(bfqq);
++		del_timer(&bfqd->idle_slice_timer);
++	}
++
++	reason = BFQ_BFQQ_NO_MORE_REQUESTS;
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, 0, reason);
++new_queue:
++	bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++	bfq_log(bfqd, "select_queue: new queue %d returned",
++		bfqq != NULL ? bfqq->pid : 0);
++keep_queue:
++	return bfqq;
++}
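++
++/*
++ * Decision summary for the function above: the in-service queue is
++ * kept if its next request fits within the remaining budget, or if it
++ * has no queued requests but is still idling or waiting for in-flight
++ * requests to complete; it is expired on budget timeout, budget
++ * exhaustion, lack of further requests, or to switch to a close
++ * cooperator scheduled for merging.
++ */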
++
++static void bfq_update_wr_data(struct bfq_data *bfqd,
++			       struct bfq_queue *bfqq)
++{
++	if (bfqq->wr_coeff > 1) { /* queue is being boosted */
++		struct bfq_entity *entity = &bfqq->entity;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++			jiffies_to_msecs(jiffies -
++				bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time),
++			bfqq->wr_coeff,
++			bfqq->entity.weight, bfqq->entity.orig_weight);
++
++		BUG_ON(bfqq != bfqd->in_service_queue && entity->weight !=
++		       entity->orig_weight * bfqq->wr_coeff);
++		if (entity->ioprio_changed)
++			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++		/*
++		 * If the queue was activated in a burst, or
++		 * too much time has elapsed from the beginning
++		 * of this weight-raising, then end weight raising.
++		 */
++		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    time_is_before_jiffies(bfqq->last_wr_start_finish +
++					   bfqq->wr_cur_max_time)) {
++			bfqq->last_wr_start_finish = jiffies;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "wrais ending at %lu, rais_max_time %u",
++				     bfqq->last_wr_start_finish,
++				     jiffies_to_msecs(bfqq->wr_cur_max_time));
++			bfq_bfqq_end_wr(bfqq);
++			__bfq_entity_update_weight_prio(
++				bfq_entity_service_tree(entity),
++				entity);
++		}
++	}
++}
++
++/*
++ * Dispatch one request from bfqq, moving it to the request queue
++ * dispatch list.
++ */
++static int bfq_dispatch_request(struct bfq_data *bfqd,
++				struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++	struct request *rq;
++	unsigned long service_to_charge;
++
++	BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	/* Follow expired path, else get first next available. */
++	rq = bfq_check_fifo(bfqq);
++	if (rq == NULL)
++		rq = bfqq->next_rq;
++	service_to_charge = bfq_serv_to_charge(rq, bfqq);
++
++	if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
++		/*
++		 * This may happen if the next rq is chosen in fifo order
++		 * instead of sector order. The budget is properly
++		 * dimensioned to be always sufficient to serve the next
++		 * request only if it is chosen in sector order. The reason
++		 * is that it would be quite inefficient and of little use
++		 * to always make sure that the budget is large enough to
++		 * serve even the possible next rq in fifo order.
++		 * In fact, requests are seldom served in fifo order.
++		 *
++		 * Expire the queue for budget exhaustion, and make sure
++		 * that the next act_budget is enough to serve the next
++		 * request, even if it comes from the fifo expired path.
++		 */
++		bfqq->next_rq = rq;
++		/*
++		 * Since this dispatch has failed, make sure that
++		 * a new one will be performed.
++		 */
++		if (!bfqd->rq_in_driver)
++			bfq_schedule_dispatch(bfqd);
++		goto expire;
++	}
++
++	/* Finally, insert request into driver dispatch list. */
++	bfq_bfqq_served(bfqq, service_to_charge);
++	bfq_dispatch_insert(bfqd->queue, rq);
++
++	bfq_update_wr_data(bfqd, bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq,
++			"dispatched %u sec req (%llu), budg left %lu",
++			blk_rq_sectors(rq),
++			(long long unsigned)blk_rq_pos(rq),
++			bfq_bfqq_budget_left(bfqq));
++
++	dispatched++;
++
++	if (bfqd->in_service_bic == NULL) {
++		atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
++		bfqd->in_service_bic = RQ_BIC(rq);
++	}
++
++	if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
++	    dispatched >= bfqd->bfq_max_budget_async_rq) ||
++	    bfq_class_idle(bfqq)))
++		goto expire;
++
++	return dispatched;
++
++expire:
++	bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_EXHAUSTED);
++	return dispatched;
++}
++
++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
++{
++	int dispatched = 0;
++
++	while (bfqq->next_rq != NULL) {
++		bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
++		dispatched++;
++	}
++
++	BUG_ON(!list_empty(&bfqq->fifo));
++	return dispatched;
++}
++
++/*
++ * Drain our current requests.
++ * Used for barriers and when switching io schedulers on-the-fly.
++ */
++static int bfq_forced_dispatch(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq, *n;
++	struct bfq_service_tree *st;
++	int dispatched = 0;
++
++	bfqq = bfqd->in_service_queue;
++	if (bfqq != NULL)
++		__bfq_bfqq_expire(bfqd, bfqq);
++
++	/*
++	 * Loop through classes, and be careful to leave the scheduler
++	 * in a consistent state, as feedback mechanisms and vtime
++	 * updates cannot be disabled during the process.
++	 */
++	list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
++		st = bfq_entity_service_tree(&bfqq->entity);
++
++		dispatched += __bfq_forced_dispatch_bfqq(bfqq);
++		bfqq->max_budget = bfq_max_budget(bfqd);
++
++		bfq_forget_idle(st);
++	}
++
++	BUG_ON(bfqd->busy_queues != 0);
++
++	return dispatched;
++}
++
++static int bfq_dispatch_requests(struct request_queue *q, int force)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq;
++	int max_dispatch;
++
++	bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++	if (bfqd->busy_queues == 0)
++		return 0;
++
++	if (unlikely(force))
++		return bfq_forced_dispatch(bfqd);
++
++	bfqq = bfq_select_queue(bfqd);
++	if (bfqq == NULL)
++		return 0;
++
++	max_dispatch = bfqd->bfq_quantum;
++	if (bfq_class_idle(bfqq))
++		max_dispatch = 1;
++
++	if (!bfq_bfqq_sync(bfqq))
++		max_dispatch = bfqd->bfq_max_budget_async_rq;
++
++	if (bfqq->dispatched >= max_dispatch) {
++		if (bfqd->busy_queues > 1)
++			return 0;
++		if (bfqq->dispatched >= 4 * max_dispatch)
++			return 0;
++	}
++
++	if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
++		return 0;
++
++	bfq_clear_bfqq_wait_request(bfqq);
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++	if (!bfq_dispatch_request(bfqd, bfqq))
++		return 0;
++
++	bfq_log_bfqq(bfqd, bfqq, "dispatched one request of %d (max_disp %d)",
++			bfqq->pid, max_dispatch);
++
++	return 1;
++}
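++
++/*
++ * Dispatch caps applied above: bfq_quantum requests for sync queues,
++ * one for idle-class queues and bfq_max_budget_async_rq for async
++ * queues. A queue already at its cap is throttled unless it is the
++ * only busy one, and even then it is cut off at four times the cap;
++ * moreover, async queues are not dispatched while sync requests are
++ * in flight.
++ */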
++
++/*
++ * Task holds one reference to the queue, dropped when task exits.  Each rq
++ * in-flight on this queue also holds a reference, dropped when rq is freed.
++ *
++ * Queue lock must be held here.
++ */
++static void bfq_put_queue(struct bfq_queue *bfqq)
++{
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	BUG_ON(atomic_read(&bfqq->ref) <= 0);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	if (!atomic_dec_and_test(&bfqq->ref))
++		return;
++
++	BUG_ON(rb_first(&bfqq->sort_list) != NULL);
++	BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
++	BUG_ON(bfqq->entity.tree != NULL);
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqd->in_service_queue == bfqq);
++
++	if (bfq_bfqq_sync(bfqq))
++		/*
++		 * The fact that this queue is being destroyed does not
++		 * invalidate the fact that this queue may have been
++		 * activated during the current burst. As a consequence,
++		 * although the queue does not exist anymore, and hence
++		 * needs to be removed from the burst list if it is there,
++		 * the burst size must not be decremented.
++		 */
++		hlist_del_init(&bfqq->burst_list_node);
++
++	bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++
++	kmem_cache_free(bfq_pool, bfqq);
++}
++
++static void bfq_put_cooperator(struct bfq_queue *bfqq)
++{
++	struct bfq_queue *__bfqq, *next;
++
++	/*
++	 * If this queue was scheduled to merge with another queue, be
++	 * sure to drop the reference taken on that queue (and others in
++	 * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
++	 */
++	__bfqq = bfqq->new_bfqq;
++	while (__bfqq) {
++		if (__bfqq == bfqq)
++			break;
++		next = __bfqq->new_bfqq;
++		bfq_put_queue(__bfqq);
++		__bfqq = next;
++	}
++}
++
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	if (bfqq == bfqd->in_service_queue) {
++		__bfq_bfqq_expire(bfqd, bfqq);
++		bfq_schedule_dispatch(bfqd);
++	}
++
++	bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	bfq_put_cooperator(bfqq);
++
++	bfq_put_queue(bfqq);
++}
++
++static inline void bfq_init_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++
++	bic->ttime.last_end_request = jiffies;
++}
++
++static void bfq_exit_icq(struct io_cq *icq)
++{
++	struct bfq_io_cq *bic = icq_to_bic(icq);
++	struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++	if (bic->bfqq[BLK_RW_ASYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
++		bic->bfqq[BLK_RW_ASYNC] = NULL;
++	}
++
++	if (bic->bfqq[BLK_RW_SYNC]) {
++		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
++		bic->bfqq[BLK_RW_SYNC] = NULL;
++	}
++}
++
++/*
++ * Update the entity prio values; note that the new values will not
++ * be used until the next (re)activation.
++ */
++static void bfq_init_prio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	struct task_struct *tsk = current;
++	int ioprio_class;
++
++	if (!bfq_bfqq_prio_changed(bfqq))
++		return;
++
++	ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	switch (ioprio_class) {
++	default:
++		dev_err(bfqq->bfqd->queue->backing_dev_info.dev,
++			"bfq: bad prio class %d\n", ioprio_class);
++	case IOPRIO_CLASS_NONE:
++		/*
++		 * No prio set, inherit CPU scheduling settings.
++		 */
++		bfqq->entity.new_ioprio = task_nice_ioprio(tsk);
++		bfqq->entity.new_ioprio_class = task_nice_ioclass(tsk);
++		break;
++	case IOPRIO_CLASS_RT:
++		bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_RT;
++		break;
++	case IOPRIO_CLASS_BE:
++		bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_BE;
++		break;
++	case IOPRIO_CLASS_IDLE:
++		bfqq->entity.new_ioprio_class = IOPRIO_CLASS_IDLE;
++		bfqq->entity.new_ioprio = 7;
++		bfq_clear_bfqq_idle_window(bfqq);
++		break;
++	}
++
++	if (bfqq->entity.new_ioprio < 0 ||
++	    bfqq->entity.new_ioprio >= IOPRIO_BE_NR) {
++		printk(KERN_CRIT "bfq_init_prio_data: new_ioprio %d\n",
++				 bfqq->entity.new_ioprio);
++		BUG();
++	}
++
++	bfqq->entity.ioprio_changed = 1;
++
++	bfq_clear_bfqq_prio_changed(bfqq);
++}
++
++static void bfq_changed_ioprio(struct bfq_io_cq *bic)
++{
++	struct bfq_data *bfqd;
++	struct bfq_queue *bfqq, *new_bfqq;
++	struct bfq_group *bfqg;
++	unsigned long uninitialized_var(flags);
++	int ioprio = bic->icq.ioc->ioprio;
++
++	bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++				   &flags);
++	/*
++	 * This condition may trigger on a newly created bic; be sure to
++	 * drop the lock before returning.
++	 */
++	if (unlikely(bfqd == NULL) || likely(bic->ioprio == ioprio))
++		goto out;
++
++	bfqq = bic->bfqq[BLK_RW_ASYNC];
++	if (bfqq != NULL) {
++		bfqg = container_of(bfqq->entity.sched_data, struct bfq_group,
++				    sched_data);
++		new_bfqq = bfq_get_queue(bfqd, bfqg, BLK_RW_ASYNC, bic,
++					 GFP_ATOMIC);
++		if (new_bfqq != NULL) {
++			bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
++			bfq_log_bfqq(bfqd, bfqq,
++				     "changed_ioprio: bfqq %p %d",
++				     bfqq, atomic_read(&bfqq->ref));
++			bfq_put_queue(bfqq);
++		}
++	}
++
++	bfqq = bic->bfqq[BLK_RW_SYNC];
++	if (bfqq != NULL)
++		bfq_mark_bfqq_prio_changed(bfqq);
++
++	bic->ioprio = ioprio;
++
++out:
++	bfq_put_bfqd_unlock(bfqd, &flags);
++}
++
++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			  pid_t pid, int is_sync)
++{
++	RB_CLEAR_NODE(&bfqq->entity.rb_node);
++	INIT_LIST_HEAD(&bfqq->fifo);
++	INIT_HLIST_NODE(&bfqq->burst_list_node);
++
++	atomic_set(&bfqq->ref, 0);
++	bfqq->bfqd = bfqd;
++
++	bfq_mark_bfqq_prio_changed(bfqq);
++
++	if (is_sync) {
++		if (!bfq_class_idle(bfqq))
++			bfq_mark_bfqq_idle_window(bfqq);
++		bfq_mark_bfqq_sync(bfqq);
++	}
++	bfq_mark_bfqq_IO_bound(bfqq);
++
++	/* Tentative initial value to trade off between thr and lat */
++	bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
++	bfqq->pid = pid;
++
++	bfqq->wr_coeff = 1;
++	bfqq->last_wr_start_finish = 0;
++	/*
++	 * Set to the value for which bfqq will not be deemed as
++	 * soft rt when it becomes backlogged.
++	 */
++	bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
++}
++
++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
++					      struct bfq_group *bfqg,
++					      int is_sync,
++					      struct bfq_io_cq *bic,
++					      gfp_t gfp_mask)
++{
++	struct bfq_queue *bfqq, *new_bfqq = NULL;
++
++retry:
++	/* bic always exists here */
++	bfqq = bic_to_bfqq(bic, is_sync);
++
++	/*
++	 * Always try a new allocation if we originally fell back to the
++	 * OOM bfqq, since that should just be a temporary situation.
++	 */
++	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++		bfqq = NULL;
++		if (new_bfqq != NULL) {
++			bfqq = new_bfqq;
++			new_bfqq = NULL;
++		} else if (gfp_mask & __GFP_WAIT) {
++			spin_unlock_irq(bfqd->queue->queue_lock);
++			new_bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++			spin_lock_irq(bfqd->queue->queue_lock);
++			if (new_bfqq != NULL)
++				goto retry;
++		} else {
++			bfqq = kmem_cache_alloc_node(bfq_pool,
++					gfp_mask | __GFP_ZERO,
++					bfqd->queue->node);
++		}
++
++		if (bfqq != NULL) {
++			bfq_init_bfqq(bfqd, bfqq, current->pid, is_sync);
++			bfq_init_prio_data(bfqq, bic);
++			bfq_init_entity(&bfqq->entity, bfqg);
++			bfq_log_bfqq(bfqd, bfqq, "allocated");
++		} else {
++			bfqq = &bfqd->oom_bfqq;
++			bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++		}
++	}
++
++	if (new_bfqq != NULL)
++		kmem_cache_free(bfq_pool, new_bfqq);
++
++	return bfqq;
++}
++
++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
++					       struct bfq_group *bfqg,
++					       int ioprio_class, int ioprio)
++{
++	switch (ioprio_class) {
++	case IOPRIO_CLASS_RT:
++		return &bfqg->async_bfqq[0][ioprio];
++	case IOPRIO_CLASS_NONE:
++		ioprio = IOPRIO_NORM;
++		/* fall through */
++	case IOPRIO_CLASS_BE:
++		return &bfqg->async_bfqq[1][ioprio];
++	case IOPRIO_CLASS_IDLE:
++		return &bfqg->async_idle_bfqq;
++	default:
++		BUG();
++	}
++}
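++
++/*
++ * Example of the mapping above: a request with class IOPRIO_CLASS_BE
++ * and priority level 4 selects &bfqg->async_bfqq[1][4], while
++ * IOPRIO_CLASS_NONE is first normalized to IOPRIO_NORM and then falls
++ * through to the same best-effort row.
++ */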
++
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bfq_group *bfqg, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask)
++{
++	const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++	const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++	struct bfq_queue **async_bfqq = NULL;
++	struct bfq_queue *bfqq = NULL;
++
++	if (!is_sync) {
++		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
++						  ioprio);
++		bfqq = *async_bfqq;
++	}
++
++	if (bfqq == NULL)
++		bfqq = bfq_find_alloc_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++
++	/*
++	 * Pin the queue now that it's allocated, scheduler exit will
++	 * prune it.
++	 */
++	if (!is_sync && *async_bfqq == NULL) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		*async_bfqq = bfqq;
++	}
++
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++	return bfqq;
++}
++
++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
++				    struct bfq_io_cq *bic)
++{
++	unsigned long elapsed = jiffies - bic->ttime.last_end_request;
++	unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++
++	bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++	bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
++	bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
++				bic->ttime.ttime_samples;
++}
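++
++/*
++ * The three updates above implement an EWMA with alpha = 1/8 in fixed
++ * point, with 256 representing one full sample: ttime_samples
++ * converges toward 256, ttime_total accumulates 256 * ttime with the
++ * same 7/8 decay, and ttime_mean is their quotient, the +128 (half
++ * the scale factor) giving round-to-nearest rather than truncation.
++ */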
++
++static void bfq_update_io_seektime(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct request *rq)
++{
++	sector_t sdist;
++	u64 total;
++
++	if (bfqq->last_request_pos < blk_rq_pos(rq))
++		sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
++	else
++		sdist = bfqq->last_request_pos - blk_rq_pos(rq);
++
++	/*
++	 * Don't allow the seek distance to get too large from the
++	 * odd fragment, pagein, etc.
++	 */
++	if (bfqq->seek_samples == 0) /* first request, not really a seek */
++		sdist = 0;
++	else if (bfqq->seek_samples <= 60) /* second & third seek */
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
++	else
++		sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
++
++	bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
++	bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
++	total = bfqq->seek_total + (bfqq->seek_samples/2);
++	do_div(total, bfqq->seek_samples);
++	bfqq->seek_mean = (sector_t)total;
++
++	bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
++			(u64)bfqq->seek_mean);
++}
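++
++/*
++ * The clamping above bounds the influence of a single huge jump: an
++ * isolated seek is capped at 4 * seek_mean plus a constant
++ * (2*1024*1024 sectors for the first samples, 2*1024*64 sectors once
++ * enough samples have accumulated) before entering the same
++ * 7/8-decay average used for the thinktime, so one odd pagein cannot
++ * instantly reclassify a sequential queue as seeky.
++ */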
++
++/*
++ * Disable idle window if the process thinks too long or seeks so much that
++ * it doesn't matter.
++ */
++static void bfq_update_idle_window(struct bfq_data *bfqd,
++				   struct bfq_queue *bfqq,
++				   struct bfq_io_cq *bic)
++{
++	int enable_idle;
++
++	/* Don't idle for async or idle io prio class. */
++	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
++		return;
++
++	enable_idle = bfq_bfqq_idle_window(bfqq);
++
++	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
++	    bfqd->bfq_slice_idle == 0 ||
++		(bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
++			bfqq->wr_coeff == 1))
++		enable_idle = 0;
++	else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
++		if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
++			bfqq->wr_coeff == 1)
++			enable_idle = 0;
++		else
++			enable_idle = 1;
++	}
++	bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
++		enable_idle);
++
++	if (enable_idle)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++}
++
++/*
++ * Called when a new fs request (rq) is added to bfqq.  Check if there's
++ * something we should do about it.
++ */
++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			    struct request *rq)
++{
++	struct bfq_io_cq *bic = RQ_BIC(rq);
++
++	if (rq->cmd_flags & REQ_META)
++		bfqq->meta_pending++;
++
++	bfq_update_io_thinktime(bfqd, bic);
++	bfq_update_io_seektime(bfqd, bfqq, rq);
++	if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
++		bfq_clear_bfqq_constantly_seeky(bfqq);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
++			bfqd->const_seeky_busy_in_flight_queues--;
++		}
++	}
++	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
++	    !BFQQ_SEEKY(bfqq))
++		bfq_update_idle_window(bfqd, bfqq, bic);
++
++	bfq_log_bfqq(bfqd, bfqq,
++		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
++		     bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
++		     (long long unsigned)bfqq->seek_mean);
++
++	bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
++
++	if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) {
++		int small_req = bfqq->queued[rq_is_sync(rq)] == 1 &&
++				blk_rq_sectors(rq) < 32;
++		int budget_timeout = bfq_bfqq_budget_timeout(bfqq);
++
++		/*
++		 * There is just this request queued: if the request
++		 * is small and the queue is not to be expired, then
++		 * just exit.
++		 *
++		 * In this way, if the disk is being idled to wait for
++		 * a new request from the in-service queue, we avoid
++		 * unplugging the device and committing the disk to serve
++		 * just a small request. Instead, we wait for
++		 * the block layer to decide when to unplug the device:
++		 * hopefully, new requests will be merged to this one
++		 * quickly, then the device will be unplugged and
++		 * larger requests will be dispatched.
++		 */
++		if (small_req && !budget_timeout)
++			return;
++
++		/*
++		 * A large enough request arrived, or the queue is to
++		 * be expired: in both cases disk idling is to be
++		 * stopped, so clear wait_request flag and reset
++		 * timer.
++		 */
++		bfq_clear_bfqq_wait_request(bfqq);
++		del_timer(&bfqd->idle_slice_timer);
++
++		/*
++		 * The queue is not empty, because a new request just
++		 * arrived. Hence we can safely expire the queue, in
++		 * case of budget timeout, without risking that the
++		 * timestamps of the queue are not updated correctly.
++		 * See [1] for more details.
++		 */
++		if (budget_timeout)
++			bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++
++		/*
++		 * Let the request rip immediately, or let a new queue be
++		 * selected if bfqq has just been expired.
++		 */
++		__blk_run_queue(bfqd->queue);
++	}
++}
++
++static void bfq_insert_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	assert_spin_locked(bfqd->queue->queue_lock);
++	bfq_init_prio_data(bfqq, RQ_BIC(rq));
++
++	bfq_add_request(rq);
++
++	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++	list_add_tail(&rq->queuelist, &bfqq->fifo);
++
++	bfq_rq_enqueued(bfqd, bfqq, rq);
++}
++
++static void bfq_update_hw_tag(struct bfq_data *bfqd)
++{
++	bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
++				     bfqd->rq_in_driver);
++
++	if (bfqd->hw_tag == 1)
++		return;
++
++	/*
++	 * This sample is valid if the number of outstanding requests
++	 * is large enough to allow queueing behavior.  Note that the
++	 * sum is not exact, as it's not taking into account deactivated
++	 * requests.
++	 */
++	if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
++		return;
++
++	if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
++		return;
++
++	bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
++	bfqd->max_rq_in_driver = 0;
++	bfqd->hw_tag_samples = 0;
++}
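++
++/*
++ * In other words, the device is flagged as NCQ-capable (hw_tag = 1)
++ * if, over a window of BFQ_HW_QUEUE_SAMPLES sufficiently loaded
++ * samples, the driver has ever been observed holding more than
++ * BFQ_HW_QUEUE_THRESHOLD requests at once. The decision sticks once
++ * positive, while a negative outcome resets the counters and sampling
++ * starts over.
++ */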
++
++static void bfq_completed_request(struct request_queue *q, struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_data *bfqd = bfqq->bfqd;
++	bool sync = bfq_bfqq_sync(bfqq);
++
++	bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
++		     blk_rq_sectors(rq), sync);
++
++	bfq_update_hw_tag(bfqd);
++
++	BUG_ON(!bfqd->rq_in_driver);
++	BUG_ON(!bfqq->dispatched);
++	bfqd->rq_in_driver--;
++	bfqq->dispatched--;
++
++	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++
++	if (sync) {
++		bfqd->sync_flight--;
++		RQ_BIC(rq)->ttime.last_end_request = jiffies;
++	}
++
++	/*
++	 * If we are waiting to discover whether the request pattern of the
++	 * task associated with the queue is actually isochronous, and
++	 * both prerequisites for this condition to hold are satisfied, then
++	 * compute soft_rt_next_start (see the comments to the function
++	 * bfq_bfqq_softrt_next_start()).
++	 */
++	if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
++	    RB_EMPTY_ROOT(&bfqq->sort_list))
++		bfqq->soft_rt_next_start =
++			bfq_bfqq_softrt_next_start(bfqd, bfqq);
++
++	/*
++	 * If this is the in-service queue, check if it needs to be expired,
++	 * or if we want to idle in case it has no pending requests.
++	 */
++	if (bfqd->in_service_queue == bfqq) {
++		if (bfq_bfqq_budget_new(bfqq))
++			bfq_set_budget_timeout(bfqd);
++
++		if (bfq_bfqq_must_idle(bfqq)) {
++			bfq_arm_slice_timer(bfqd);
++			goto out;
++		} else if (bfq_may_expire_for_budg_timeout(bfqq))
++			bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++		else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
++			 (bfqq->dispatched == 0 ||
++			  !bfq_bfqq_must_not_expire(bfqq)))
++			bfq_bfqq_expire(bfqd, bfqq, 0,
++					BFQ_BFQQ_NO_MORE_REQUESTS);
++	}
++
++	if (!bfqd->rq_in_driver)
++		bfq_schedule_dispatch(bfqd);
++
++out:
++	return;
++}
++
++static inline int __bfq_may_queue(struct bfq_queue *bfqq)
++{
++	if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
++		bfq_clear_bfqq_must_alloc(bfqq);
++		return ELV_MQUEUE_MUST;
++	}
++
++	return ELV_MQUEUE_MAY;
++}
++
++static int bfq_may_queue(struct request_queue *q, int rw)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct task_struct *tsk = current;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq;
++
++	/*
++	 * Don't force setup of a queue from here, as a call to may_queue
++	 * does not necessarily imply that a request actually will be
++	 * queued. So just look up a possibly existing queue, or return
++	 * 'may queue' if that fails.
++	 */
++	bic = bfq_bic_lookup(bfqd, tsk->io_context);
++	if (bic == NULL)
++		return ELV_MQUEUE_MAY;
++
++	bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++	if (bfqq != NULL) {
++		bfq_init_prio_data(bfqq, bic);
++
++		return __bfq_may_queue(bfqq);
++	}
++
++	return ELV_MQUEUE_MAY;
++}
++
++/*
++ * Queue lock held here.
++ */
++static void bfq_put_request(struct request *rq)
++{
++	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++	if (bfqq != NULL) {
++		const int rw = rq_data_dir(rq);
++
++		BUG_ON(!bfqq->allocated[rw]);
++		bfqq->allocated[rw]--;
++
++		rq->elv.priv[0] = NULL;
++		rq->elv.priv[1] = NULL;
++
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++static struct bfq_queue *
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++		struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++		(long unsigned)bfqq->new_bfqq->pid);
++	bic_set_bfqq(bic, bfqq->new_bfqq, 1);
++	bfq_mark_bfqq_coop(bfqq->new_bfqq);
++	bfq_put_queue(bfqq);
++	return bic_to_bfqq(bic, 1);
++}
++
++/*
++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
++ * was the last process referring to said bfqq.
++ */
++static struct bfq_queue *
++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
++{
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++	if (bfqq_process_refs(bfqq) == 1) {
++		bfqq->pid = current->pid;
++		bfq_clear_bfqq_coop(bfqq);
++		bfq_clear_bfqq_split_coop(bfqq);
++		return bfqq;
++	}
++
++	bic_set_bfqq(bic, NULL, 1);
++
++	bfq_put_cooperator(bfqq);
++
++	bfq_put_queue(bfqq);
++	return NULL;
++}
++
++/*
++ * Allocate bfq data structures associated with this request.
++ */
++static int bfq_set_request(struct request_queue *q, struct request *rq,
++			   struct bio *bio, gfp_t gfp_mask)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
++	const int rw = rq_data_dir(rq);
++	const int is_sync = rq_is_sync(rq);
++	struct bfq_queue *bfqq;
++	struct bfq_group *bfqg;
++	unsigned long flags;
++
++	might_sleep_if(gfp_mask & __GFP_WAIT);
++
++	bfq_changed_ioprio(bic);
++
++	spin_lock_irqsave(q->queue_lock, flags);
++
++	if (bic == NULL)
++		goto queue_fail;
++
++	bfqg = bfq_bic_update_cgroup(bic);
++
++new_queue:
++	bfqq = bic_to_bfqq(bic, is_sync);
++	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++		bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++		bic_set_bfqq(bic, bfqq, is_sync);
++	} else {
++		/*
++		 * If the queue was seeky for too long, break it apart.
++		 */
++		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
++			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++			bfqq = bfq_split_bfqq(bic, bfqq);
++			if (!bfqq)
++				goto new_queue;
++		}
++
++		/*
++		 * Check to see if this queue is scheduled to merge with
++		 * another closely cooperating queue. The merging of queues
++		 * happens here as it must be done in process context.
++		 * The reference on new_bfqq was taken in merge_bfqqs.
++		 */
++		if (bfqq->new_bfqq != NULL)
++			bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
++	}
++
++	bfqq->allocated[rw]++;
++	atomic_inc(&bfqq->ref);
++	bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
++		     atomic_read(&bfqq->ref));
++
++	rq->elv.priv[0] = bic;
++	rq->elv.priv[1] = bfqq;
++
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 0;
++
++queue_fail:
++	bfq_schedule_dispatch(bfqd);
++	spin_unlock_irqrestore(q->queue_lock, flags);
++
++	return 1;
++}
++
++static void bfq_kick_queue(struct work_struct *work)
++{
++	struct bfq_data *bfqd =
++		container_of(work, struct bfq_data, unplug_work);
++	struct request_queue *q = bfqd->queue;
++
++	spin_lock_irq(q->queue_lock);
++	__blk_run_queue(q);
++	spin_unlock_irq(q->queue_lock);
++}
++
++/*
++ * Handler of the expiration of the timer running if the in-service queue
++ * is idling inside its time slice.
++ */
++static void bfq_idle_slice_timer(unsigned long data)
++{
++	struct bfq_data *bfqd = (struct bfq_data *)data;
++	struct bfq_queue *bfqq;
++	unsigned long flags;
++	enum bfqq_expiration reason;
++
++	spin_lock_irqsave(bfqd->queue->queue_lock, flags);
++
++	bfqq = bfqd->in_service_queue;
++	/*
++	 * Theoretical race here: the in-service queue can be NULL or
++	 * different from the queue that was idling if the timer handler
++	 * spins on the queue_lock and a new request arrives for the
++	 * current queue and there is a full dispatch cycle that changes
++	 * the in-service queue.  This can hardly happen, but in the worst
++	 * case we just expire a queue too early.
++	 */
++	if (bfqq != NULL) {
++		bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++		if (bfq_bfqq_budget_timeout(bfqq))
++			/*
++			 * Also here the queue can be safely expired
++			 * for budget timeout without wasting
++			 * guarantees
++			 */
++			reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++		else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
++			/*
++			 * The queue may not be empty upon timer expiration,
++			 * because we may not disable the timer when the
++			 * first request of the in-service queue arrives
++			 * during disk idling.
++			 */
++			reason = BFQ_BFQQ_TOO_IDLE;
++		else
++			goto schedule_dispatch;
++
++		bfq_bfqq_expire(bfqd, bfqq, 1, reason);
++	}
++
++schedule_dispatch:
++	bfq_schedule_dispatch(bfqd);
++
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++}
++
++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
++{
++	del_timer_sync(&bfqd->idle_slice_timer);
++	cancel_work_sync(&bfqd->unplug_work);
++}
++
++static inline void __bfq_put_async_bfqq(struct bfq_data *bfqd,
++					struct bfq_queue **bfqq_ptr)
++{
++	struct bfq_group *root_group = bfqd->root_group;
++	struct bfq_queue *bfqq = *bfqq_ptr;
++
++	bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
++	if (bfqq != NULL) {
++		bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++		bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++		*bfqq_ptr = NULL;
++	}
++}
++
++/*
++ * Release all the bfqg references to its async queues.  If we are
++ * deallocating the group these queues may still contain requests, so
++ * we reparent them to the root cgroup (i.e., the only one that will
++ * exist for sure until all the requests on a device are gone).
++ */
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
++{
++	int i, j;
++
++	for (i = 0; i < 2; i++)
++		for (j = 0; j < IOPRIO_BE_NR; j++)
++			__bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
++
++	__bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
++}
++
++static void bfq_exit_queue(struct elevator_queue *e)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	struct request_queue *q = bfqd->queue;
++	struct bfq_queue *bfqq, *n;
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	spin_lock_irq(q->queue_lock);
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++	list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
++		bfq_deactivate_bfqq(bfqd, bfqq, 0);
++
++	bfq_disconnect_groups(bfqd);
++	spin_unlock_irq(q->queue_lock);
++
++	bfq_shutdown_timer_wq(bfqd);
++
++	synchronize_rcu();
++
++	BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++	bfq_free_root_group(bfqd);
++	kfree(bfqd);
++}
++
++static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++{
++	struct bfq_group *bfqg;
++	struct bfq_data *bfqd;
++	struct elevator_queue *eq;
++
++	eq = elevator_alloc(q, e);
++	if (eq == NULL)
++		return -ENOMEM;
++
++	bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
++	if (bfqd == NULL) {
++		kobject_put(&eq->kobj);
++		return -ENOMEM;
++	}
++	eq->elevator_data = bfqd;
++
++	/*
++	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
++	 * Grab a permanent reference to it, so that the normal code flow
++	 * will not attempt to free it.
++	 */
++	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, 1, 0);
++	atomic_inc(&bfqd->oom_bfqq.ref);
++	bfqd->oom_bfqq.entity.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
++	bfqd->oom_bfqq.entity.new_ioprio_class = IOPRIO_CLASS_BE;
++	/*
++	 * Trigger weight initialization, according to ioprio, at the
++	 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
++	 * class won't be changed any more.
++	 */
++	bfqd->oom_bfqq.entity.ioprio_changed = 1;
++
++	bfqd->queue = q;
++
++	spin_lock_irq(q->queue_lock);
++	q->elevator = eq;
++	spin_unlock_irq(q->queue_lock);
++
++	bfqg = bfq_alloc_root_group(bfqd, q->node);
++	if (bfqg == NULL) {
++		kfree(bfqd);
++		kobject_put(&eq->kobj);
++		return -ENOMEM;
++	}
++
++	bfqd->root_group = bfqg;
++	bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
++#ifdef CONFIG_CGROUP_BFQIO
++	bfqd->active_numerous_groups = 0;
++#endif
++
++	init_timer(&bfqd->idle_slice_timer);
++	bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
++	bfqd->idle_slice_timer.data = (unsigned long)bfqd;
++
++	bfqd->rq_pos_tree = RB_ROOT;
++	bfqd->queue_weights_tree = RB_ROOT;
++	bfqd->group_weights_tree = RB_ROOT;
++
++	INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
++
++	INIT_LIST_HEAD(&bfqd->active_list);
++	INIT_LIST_HEAD(&bfqd->idle_list);
++	INIT_HLIST_HEAD(&bfqd->burst_list);
++
++	bfqd->hw_tag = -1;
++
++	bfqd->bfq_max_budget = bfq_default_max_budget;
++
++	bfqd->bfq_quantum = bfq_quantum;
++	bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
++	bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
++	bfqd->bfq_back_max = bfq_back_max;
++	bfqd->bfq_back_penalty = bfq_back_penalty;
++	bfqd->bfq_slice_idle = bfq_slice_idle;
++	bfqd->bfq_class_idle_last_service = 0;
++	bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
++	bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
++	bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++
++	bfqd->bfq_coop_thresh = 2;
++	bfqd->bfq_failed_cooperations = 7000;
++	bfqd->bfq_requests_within_timer = 120;
++
++	bfqd->bfq_large_burst_thresh = 11;
++	bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++
++	bfqd->low_latency = true;
++
++	bfqd->bfq_wr_coeff = 20;
++	bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
++	bfqd->bfq_wr_max_time = 0;
++	bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
++	bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500);
++	bfqd->bfq_wr_max_softrt_rate = 7000; /*
++					      * Approximate rate required
++					      * to playback or record a
++					      * high-definition compressed
++					      * video.
++					      */
++	bfqd->wr_busy_queues = 0;
++	bfqd->busy_in_flight_queues = 0;
++	bfqd->const_seeky_busy_in_flight_queues = 0;
++
++	/*
++	 * Begin by assuming, optimistically, that the device peak rate is
++	 * equal to the highest reference rate.
++	 */
++	bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
++			T_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++	bfqd->device_speed = BFQ_BFQD_FAST;
++
++	return 0;
++}
++
++static void bfq_slab_kill(void)
++{
++	if (bfq_pool != NULL)
++		kmem_cache_destroy(bfq_pool);
++}
++
++static int __init bfq_slab_setup(void)
++{
++	bfq_pool = KMEM_CACHE(bfq_queue, 0);
++	if (bfq_pool == NULL)
++		return -ENOMEM;
++	return 0;
++}
++
++static ssize_t bfq_var_show(unsigned int var, char *page)
++{
++	return sprintf(page, "%d\n", var);
++}
++
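++/*
++ * Note: if kstrtoul() fails, *var is simply left unchanged; count is
++ * returned either way, so the write still appears to succeed from
++ * userspace's point of view.
++ */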
++static ssize_t bfq_var_store(unsigned long *var, const char *page,
++			     size_t count)
++{
++	unsigned long new_val;
++	int ret = kstrtoul(page, 10, &new_val);
++
++	if (ret == 0)
++		*var = new_val;
++
++	return count;
++}
++
++static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ?
++		       jiffies_to_msecs(bfqd->bfq_wr_max_time) :
++		       jiffies_to_msecs(bfq_wr_duration(bfqd)));
++}
++
++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
++{
++	struct bfq_queue *bfqq;
++	struct bfq_data *bfqd = e->elevator_data;
++	ssize_t num_char = 0;
++
++	num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
++			    bfqd->queued);
++
++	spin_lock_irq(bfqd->queue->queue_lock);
++
++	num_char += sprintf(page + num_char, "Active:\n");
++	list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++			"pid%d: weight %hu, nr_queued %d %d, dur %d/%u\n",
++			bfqq->pid,
++			bfqq->entity.weight,
++			bfqq->queued[0],
++			bfqq->queued[1],
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	num_char += sprintf(page + num_char, "Idle:\n");
++	list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
++		num_char += sprintf(page + num_char,
++			"pid%d: weight %hu, dur %d/%u\n",
++			bfqq->pid,
++			bfqq->entity.weight,
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++			jiffies_to_msecs(bfqq->wr_cur_max_time));
++	}
++
++	spin_unlock_irq(bfqd->queue->queue_lock);
++
++	return num_char;
++}
++
++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV)				\
++static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned int __data = __VAR;					\
++	if (__CONV)							\
++		__data = jiffies_to_msecs(__data);			\
++	return bfq_var_show(__data, (page));				\
++}
++SHOW_FUNCTION(bfq_quantum_show, bfqd->bfq_quantum, 0);
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
++SHOW_FUNCTION(bfq_max_budget_async_rq_show,
++	      bfqd->bfq_max_budget_async_rq, 0);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
++SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
++SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
++SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1);
++SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
++	1);
++SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
++#undef SHOW_FUNCTION
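++
++/*
++ * As a concrete instance, SHOW_FUNCTION(bfq_slice_idle_show,
++ * bfqd->bfq_slice_idle, 1) expands to a reader that converts the
++ * internal jiffies value to milliseconds (__CONV == 1) before
++ * printing, so userspace always sees time-based tunables in ms,
++ * independently of HZ.
++ */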
++
++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
++static ssize_t								\
++__FUNC(struct elevator_queue *e, const char *page, size_t count)	\
++{									\
++	struct bfq_data *bfqd = e->elevator_data;			\
++	unsigned long uninitialized_var(__data);			\
++	int ret = bfq_var_store(&__data, (page), count);		\
++	if (__data < (MIN))						\
++		__data = (MIN);						\
++	else if (__data > (MAX))					\
++		__data = (MAX);						\
++	if (__CONV)							\
++		*(__PTR) = msecs_to_jiffies(__data);			\
++	else								\
++		*(__PTR) = __data;					\
++	return ret;							\
++}
++STORE_FUNCTION(bfq_quantum_store, &bfqd->bfq_quantum, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
++		INT_MAX, 0);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
++		1, INT_MAX, 0);
++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
++		1);
++STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0,
++		INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_min_inter_arr_async_store,
++		&bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
++		INT_MAX, 0);
++#undef STORE_FUNCTION
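++
++/*
++ * The MIN/MAX clamping above means, for example, that writing 0 to
++ * back_seek_penalty (declared with MIN == 1) silently stores 1
++ * instead of failing: out-of-range input never reaches the
++ * scheduler's state.
++ */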
++
++/* do nothing for the moment */
++static ssize_t bfq_weights_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	return count;
++}
++
++static inline unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
++{
++	u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++	if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
++		return bfq_calc_max_budget(bfqd->peak_rate, timeout);
++	else
++		return bfq_default_max_budget;
++}
++
++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
++				    const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++	else {
++		if (__data > INT_MAX)
++			__data = INT_MAX;
++		bfqd->bfq_max_budget = __data;
++	}
++
++	bfqd->bfq_user_max_budget = __data;
++
++	return ret;
++}
++
++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
++				      const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data < 1)
++		__data = 1;
++	else if (__data > INT_MAX)
++		__data = INT_MAX;
++
++	bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++	if (bfqd->bfq_user_max_budget == 0)
++		bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++
++	return ret;
++}
++
++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
++				     const char *page, size_t count)
++{
++	struct bfq_data *bfqd = e->elevator_data;
++	unsigned long uninitialized_var(__data);
++	int ret = bfq_var_store(&__data, (page), count);
++
++	if (__data > 1)
++		__data = 1;
++	if (__data == 0 && bfqd->low_latency != 0)
++		bfq_end_wr(bfqd);
++	bfqd->low_latency = __data;
++
++	return ret;
++}
++
++#define BFQ_ATTR(name) \
++	__ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
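++
++/*
++ * For instance, BFQ_ATTR(quantum) expands to
++ *   __ATTR(quantum, S_IRUGO|S_IWUSR, bfq_quantum_show, bfq_quantum_store),
++ * wiring /sys/block/<dev>/queue/iosched/quantum to the show/store
++ * pair generated by the SHOW_FUNCTION/STORE_FUNCTION macros above.
++ */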
++
++static struct elv_fs_entry bfq_attrs[] = {
++	BFQ_ATTR(quantum),
++	BFQ_ATTR(fifo_expire_sync),
++	BFQ_ATTR(fifo_expire_async),
++	BFQ_ATTR(back_seek_max),
++	BFQ_ATTR(back_seek_penalty),
++	BFQ_ATTR(slice_idle),
++	BFQ_ATTR(max_budget),
++	BFQ_ATTR(max_budget_async_rq),
++	BFQ_ATTR(timeout_sync),
++	BFQ_ATTR(timeout_async),
++	BFQ_ATTR(low_latency),
++	BFQ_ATTR(wr_coeff),
++	BFQ_ATTR(wr_max_time),
++	BFQ_ATTR(wr_rt_max_time),
++	BFQ_ATTR(wr_min_idle_time),
++	BFQ_ATTR(wr_min_inter_arr_async),
++	BFQ_ATTR(wr_max_softrt_rate),
++	BFQ_ATTR(weights),
++	__ATTR_NULL
++};
++
++static struct elevator_type iosched_bfq = {
++	.ops = {
++		.elevator_merge_fn =		bfq_merge,
++		.elevator_merged_fn =		bfq_merged_request,
++		.elevator_merge_req_fn =	bfq_merged_requests,
++		.elevator_allow_merge_fn =	bfq_allow_merge,
++		.elevator_dispatch_fn =		bfq_dispatch_requests,
++		.elevator_add_req_fn =		bfq_insert_request,
++		.elevator_activate_req_fn =	bfq_activate_request,
++		.elevator_deactivate_req_fn =	bfq_deactivate_request,
++		.elevator_completed_req_fn =	bfq_completed_request,
++		.elevator_former_req_fn =	elv_rb_former_request,
++		.elevator_latter_req_fn =	elv_rb_latter_request,
++		.elevator_init_icq_fn =		bfq_init_icq,
++		.elevator_exit_icq_fn =		bfq_exit_icq,
++		.elevator_set_req_fn =		bfq_set_request,
++		.elevator_put_req_fn =		bfq_put_request,
++		.elevator_may_queue_fn =	bfq_may_queue,
++		.elevator_init_fn =		bfq_init_queue,
++		.elevator_exit_fn =		bfq_exit_queue,
++	},
++	.icq_size =		sizeof(struct bfq_io_cq),
++	.icq_align =		__alignof__(struct bfq_io_cq),
++	.elevator_attrs =	bfq_attrs,
++	.elevator_name =	"bfq",
++	.elevator_owner =	THIS_MODULE,
++};
++
++static int __init bfq_init(void)
++{
++	/*
++	 * Can be 0 on HZ < 1000 setups.
++	 */
++	if (bfq_slice_idle == 0)
++		bfq_slice_idle = 1;
++
++	if (bfq_timeout_async == 0)
++		bfq_timeout_async = 1;
++
++	if (bfq_slab_setup())
++		return -ENOMEM;
++
++	/*
++	 * Times to load large popular applications for the typical systems
++	 * installed on the reference devices (see the comments before the
++	 * definitions of the two arrays).
++	 */
++	T_slow[0] = msecs_to_jiffies(2600);
++	T_slow[1] = msecs_to_jiffies(1000);
++	T_fast[0] = msecs_to_jiffies(5500);
++	T_fast[1] = msecs_to_jiffies(2000);
++
++	/*
++	 * Thresholds that determine the switch between speed classes (see
++	 * the comments before the definition of the array).
++	 */
++	device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
++	device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++
++	elv_register(&iosched_bfq);
++	pr_info("BFQ I/O-scheduler version: v7r7");
++
++	return 0;
++}
++
++static void __exit bfq_exit(void)
++{
++	elv_unregister(&iosched_bfq);
++	bfq_slab_kill();
++}
++
++module_init(bfq_init);
++module_exit(bfq_exit);
++
++MODULE_AUTHOR("Fabio Checconi, Paolo Valente");
++MODULE_LICENSE("GPL");
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+new file mode 100644
+index 0000000..2931563
+--- /dev/null
++++ b/block/bfq-sched.c
+@@ -0,0 +1,1214 @@
++/*
++ * BFQ: Hierarchical B-WF2Q+ scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifdef CONFIG_CGROUP_BFQIO
++#define for_each_entity(entity)	\
++	for (; entity != NULL; entity = entity->parent)
++
++#define for_each_entity_safe(entity, parent) \
++	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd);
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++	struct bfq_entity *bfqg_entity;
++	struct bfq_group *bfqg;
++	struct bfq_sched_data *group_sd;
++
++	BUG_ON(next_in_service == NULL);
++
++	group_sd = next_in_service->sched_data;
++
++	bfqg = container_of(group_sd, struct bfq_group, sched_data);
++	/*
++	 * bfq_group's my_entity field is not NULL only if the group
++	 * is not the root group. We must not touch the root entity
++	 * as it must never become an in-service entity.
++	 */
++	bfqg_entity = bfqg->my_entity;
++	if (bfqg_entity != NULL)
++		bfqg_entity->budget = next_in_service->budget;
++}
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	struct bfq_entity *next_in_service;
++
++	if (sd->in_service_entity != NULL)
++		/* will update/requeue at the end of service */
++		return 0;
++
++	/*
++	 * NOTE: this can be improved in many ways, such as returning
++	 * 1 (and thus propagating upwards the update) only when the
++	 * budget changes, or caching the bfqq that will be scheduled
++	 * next from this subtree.  By now we worry more about
++	 * correctness than about performance...
++	 */
++	next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
++	sd->next_in_service = next_in_service;
++
++	if (next_in_service != NULL)
++		bfq_update_budget(next_in_service);
++
++	return 1;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++					     struct bfq_entity *entity)
++{
++	BUG_ON(sd->next_in_service != entity);
++}
++#else
++#define for_each_entity(entity)	\
++	for (; entity != NULL; entity = NULL)
++
++#define for_each_entity_safe(entity, parent) \
++	for (parent = NULL; entity != NULL; entity = parent)
++
++static inline int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++	return 0;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++					     struct bfq_entity *entity)
++{
++}
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++}
++#endif
++
++/*
++ * Shift for timestamp calculations.  This actually limits the maximum
++ * service allowed in one timestamp delta (small shift values increase it),
++ * the maximum total weight that can be used for the queues in the system
++ * (big shift values increase it), and the period of virtual time
++ * wraparounds.
++ */
++#define WFQ_SERVICE_SHIFT	22
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
++ */
++static inline int bfq_gt(u64 a, u64 b)
++{
++	return (s64)(a - b) > 0;
++}
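++
++/*
++ * The signed-difference trick above handles wraparound: with
++ * hypothetical 8-bit timestamps, a = 5 just after a wrap and b = 250
++ * give (s8)(5 - 250) = 11 > 0, so bfq_gt() still reports a > b, as
++ * long as the two values are less than half the range apart.
++ */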
++
++static inline struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = NULL;
++
++	BUG_ON(entity == NULL);
++
++	if (entity->my_sched_data == NULL)
++		bfqq = container_of(entity, struct bfq_queue, entity);
++
++	return bfqq;
++}
++
++/**
++ * bfq_delta - map service into the virtual time domain.
++ * @service: amount of service.
++ * @weight: scale factor (weight of an entity or weight sum).
++ */
++static inline u64 bfq_delta(unsigned long service,
++					unsigned long weight)
++{
++	u64 d = (u64)service << WFQ_SERVICE_SHIFT;
++
++	do_div(d, weight);
++	return d;
++}
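++
++/*
++ * A worked instance (illustrative numbers only): with service = 8 and
++ * weight = 2,
++ *
++ *   d = (8 << WFQ_SERVICE_SHIFT) / 2 = 4 << WFQ_SERVICE_SHIFT,
++ *
++ * twice the delta a weight-4 entity is charged for the same service:
++ * halving the weight doubles how fast timestamps grow, and thus
++ * halves the entity's share of the device.
++ */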
++
++/**
++ * bfq_calc_finish - assign the finish time to an entity.
++ * @entity: the entity to act upon.
++ * @service: the service to be charged to the entity.
++ */
++static inline void bfq_calc_finish(struct bfq_entity *entity,
++				   unsigned long service)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	BUG_ON(entity->weight == 0);
++
++	entity->finish = entity->start +
++		bfq_delta(service, entity->weight);
++
++	if (bfqq != NULL) {
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: serv %lu, w %d",
++			service, entity->weight);
++		bfq_log_bfqq(bfqq->bfqd, bfqq,
++			"calc_finish: start %llu, finish %llu, delta %llu",
++			entity->start, entity->finish,
++			bfq_delta(service, entity->weight));
++	}
++}
++
++/**
++ * bfq_entity_of - get an entity from a node.
++ * @node: the node field of the entity.
++ *
++ * Convert a node pointer to the relative entity.  This is used only
++ * to simplify the logic of some functions and not as the generic
++ * conversion mechanism because, e.g., in the tree walking functions,
++ * the check for a %NULL value would be redundant.
++ */
++static inline struct bfq_entity *bfq_entity_of(struct rb_node *node)
++{
++	struct bfq_entity *entity = NULL;
++
++	if (node != NULL)
++		entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	return entity;
++}
++
++/**
++ * bfq_extract - remove an entity from a tree.
++ * @root: the tree root.
++ * @entity: the entity to remove.
++ */
++static inline void bfq_extract(struct rb_root *root,
++			       struct bfq_entity *entity)
++{
++	BUG_ON(entity->tree != root);
++
++	entity->tree = NULL;
++	rb_erase(&entity->rb_node, root);
++}
++
++/**
++ * bfq_idle_extract - extract an entity from the idle tree.
++ * @st: the service tree of the owning @entity.
++ * @entity: the entity being removed.
++ */
++static void bfq_idle_extract(struct bfq_service_tree *st,
++			     struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *next;
++
++	BUG_ON(entity->tree != &st->idle);
++
++	if (entity == st->first_idle) {
++		next = rb_next(&entity->rb_node);
++		st->first_idle = bfq_entity_of(next);
++	}
++
++	if (entity == st->last_idle) {
++		next = rb_prev(&entity->rb_node);
++		st->last_idle = bfq_entity_of(next);
++	}
++
++	bfq_extract(&st->idle, entity);
++
++	if (bfqq != NULL)
++		list_del(&bfqq->bfqq_list);
++}
++
++/**
++ * bfq_insert - generic tree insertion.
++ * @root: tree root.
++ * @entity: entity to insert.
++ *
++ * This is used for the idle and the active tree, since they are both
++ * ordered by finish time.
++ */
++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
++{
++	struct bfq_entity *entry;
++	struct rb_node **node = &root->rb_node;
++	struct rb_node *parent = NULL;
++
++	BUG_ON(entity->tree != NULL);
++
++	while (*node != NULL) {
++		parent = *node;
++		entry = rb_entry(parent, struct bfq_entity, rb_node);
++
++		if (bfq_gt(entry->finish, entity->finish))
++			node = &parent->rb_left;
++		else
++			node = &parent->rb_right;
++	}
++
++	rb_link_node(&entity->rb_node, parent, node);
++	rb_insert_color(&entity->rb_node, root);
++
++	entity->tree = root;
++}
++
++/**
++ * bfq_update_min - update the min_start field of an entity.
++ * @entity: the entity to update.
++ * @node: one of its children.
++ *
++ * This function is called when @entity may store an invalid value for
++ * min_start due to updates to the active tree.  The function assumes
++ * that the subtree rooted at @node (which may be its left or its right
++ * child) has a valid min_start value.
++ */
++static inline void bfq_update_min(struct bfq_entity *entity,
++				  struct rb_node *node)
++{
++	struct bfq_entity *child;
++
++	if (node != NULL) {
++		child = rb_entry(node, struct bfq_entity, rb_node);
++		if (bfq_gt(entity->min_start, child->min_start))
++			entity->min_start = child->min_start;
++	}
++}
++
++/**
++ * bfq_update_active_node - recalculate min_start.
++ * @node: the node to update.
++ *
++ * @node may have changed position or one of its children may have moved,
++ * this function updates its min_start value.  The left and right subtrees
++ * are assumed to hold a correct min_start value.
++ */
++static inline void bfq_update_active_node(struct rb_node *node)
++{
++	struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++
++	entity->min_start = entity->start;
++	bfq_update_min(entity, node->rb_right);
++	bfq_update_min(entity, node->rb_left);
++}
++
++/**
++ * bfq_update_active_tree - update min_start for the whole active tree.
++ * @node: the starting node.
++ *
++ * @node must be the deepest modified node after an update.  This function
++ * updates its min_start using the values held by its children, assuming
++ * that they did not change, and then updates all the nodes that may have
++ * changed in the path to the root.  The only nodes that may have changed
++ * are the ones in the path or their siblings.
++ */
++static void bfq_update_active_tree(struct rb_node *node)
++{
++	struct rb_node *parent;
++
++up:
++	bfq_update_active_node(node);
++
++	parent = rb_parent(node);
++	if (parent == NULL)
++		return;
++
++	if (node == parent->rb_left && parent->rb_right != NULL)
++		bfq_update_active_node(parent->rb_right);
++	else if (parent->rb_left != NULL)
++		bfq_update_active_node(parent->rb_left);
++
++	node = parent;
++	goto up;
++}
++
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++				 struct bfq_entity *entity,
++				 struct rb_root *root);
++
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++				    struct bfq_entity *entity,
++				    struct rb_root *root);
++
++/**
++ * bfq_active_insert - insert an entity in the active tree of its
++ *                     group/device.
++ * @st: the service tree of the entity.
++ * @entity: the entity being inserted.
++ *
++ * The active tree is ordered by finish time, but an extra key is kept
++ * in each node, containing the minimum value for the start times of
++ * its children (and the node itself), so it's possible to search for
++ * the eligible node with the lowest finish time in logarithmic time.
++ */
++static void bfq_active_insert(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node = &entity->rb_node;
++#ifdef CONFIG_CGROUP_BFQIO
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	bfq_insert(&st->active, entity);
++
++	if (node->rb_left != NULL)
++		node = node->rb_left;
++	else if (node->rb_right != NULL)
++		node = node->rb_right;
++
++	bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq != NULL)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
++#ifdef CONFIG_CGROUP_BFQIO
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		bfqg->active_entities++;
++		if (bfqg->active_entities == 2)
++			bfqd->active_numerous_groups++;
++	}
++#endif
++}
++
++/**
++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
++ * @ioprio: the ioprio value to convert.
++ */
++static inline unsigned short bfq_ioprio_to_weight(int ioprio)
++{
++	BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
++	return IOPRIO_BE_NR - ioprio;
++}
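++
++/*
++ * With IOPRIO_BE_NR == 8 (the kernel's number of best-effort levels)
++ * the mapping is linear: ioprio 0 (highest) -> weight 8, ..., ioprio 7
++ * (lowest) -> weight 1, so an ioprio-0 queue is entitled to roughly
++ * eight times the service of an ioprio-7 queue over a common
++ * backlogged period.
++ */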
++
++/**
++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
++ * @weight: the weight value to convert.
++ *
++ * To preserve as much as possible the old ioprio-only user interface,
++ * 0 is used as an escape ioprio value for weights (numerically) equal
++ * to or larger than IOPRIO_BE_NR.
++ */
++static inline unsigned short bfq_weight_to_ioprio(int weight)
++{
++	BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
++	return IOPRIO_BE_NR - weight < 0 ? 0 : IOPRIO_BE_NR - weight;
++}
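++
++/*
++ * Examples: weight 3 -> ioprio 5 and weight 8 -> ioprio 0, while any
++ * weight above 8 also collapses to the escape value 0; weights set
++ * through interfaces richer than ioprio thus round-trip only
++ * approximately.
++ */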
++
++static inline void bfq_get_entity(struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++	if (bfqq != NULL) {
++		atomic_inc(&bfqq->ref);
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++	}
++}
++
++/**
++ * bfq_find_deepest - find the deepest node that an extraction can modify.
++ * @node: the node being removed.
++ *
++ * Do the first step of an extraction in an rb tree, looking for the
++ * node that will replace @node, and returning the deepest node that
++ * the following modifications to the tree can touch.  If @node is the
++ * last node in the tree, return %NULL.
++ */
++static struct rb_node *bfq_find_deepest(struct rb_node *node)
++{
++	struct rb_node *deepest;
++
++	if (node->rb_right == NULL && node->rb_left == NULL)
++		deepest = rb_parent(node);
++	else if (node->rb_right == NULL)
++		deepest = node->rb_left;
++	else if (node->rb_left == NULL)
++		deepest = node->rb_right;
++	else {
++		deepest = rb_next(node);
++		if (deepest->rb_right != NULL)
++			deepest = deepest->rb_right;
++		else if (rb_parent(deepest) != node)
++			deepest = rb_parent(deepest);
++	}
++
++	return deepest;
++}
++
++/**
++ * bfq_active_extract - remove an entity from the active tree.
++ * @st: the service_tree containing the tree.
++ * @entity: the entity being removed.
++ */
++static void bfq_active_extract(struct bfq_service_tree *st,
++			       struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct rb_node *node;
++#ifdef CONFIG_CGROUP_BFQIO
++	struct bfq_sched_data *sd = NULL;
++	struct bfq_group *bfqg = NULL;
++	struct bfq_data *bfqd = NULL;
++#endif
++
++	node = bfq_find_deepest(&entity->rb_node);
++	bfq_extract(&st->active, entity);
++
++	if (node != NULL)
++		bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++	sd = entity->sched_data;
++	bfqg = container_of(sd, struct bfq_group, sched_data);
++	BUG_ON(!bfqg);
++	bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++	if (bfqq != NULL)
++		list_del(&bfqq->bfqq_list);
++#ifdef CONFIG_CGROUP_BFQIO
++	else { /* bfq_group */
++		BUG_ON(!bfqd);
++		bfq_weights_tree_remove(bfqd, entity,
++					&bfqd->group_weights_tree);
++	}
++	if (bfqg != bfqd->root_group) {
++		BUG_ON(!bfqg);
++		BUG_ON(!bfqd);
++		BUG_ON(!bfqg->active_entities);
++		bfqg->active_entities--;
++		if (bfqg->active_entities == 1) {
++			BUG_ON(!bfqd->active_numerous_groups);
++			bfqd->active_numerous_groups--;
++		}
++	}
++#endif
++}
++
++/**
++ * bfq_idle_insert - insert an entity into the idle tree.
++ * @st: the service tree containing the tree.
++ * @entity: the entity to insert.
++ */
++static void bfq_idle_insert(struct bfq_service_tree *st,
++			    struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
++		st->first_idle = entity;
++	if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
++		st->last_idle = entity;
++
++	bfq_insert(&st->idle, entity);
++
++	if (bfqq != NULL)
++		list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
++}
++
++/**
++ * bfq_forget_entity - remove an entity from the wfq trees.
++ * @st: the service tree.
++ * @entity: the entity being removed.
++ *
++ * Update the device status and forget everything about @entity, dropping
++ * the device's reference to it, if it is a queue.  Entities belonging to
++ * groups are not refcounted.
++ */
++static void bfq_forget_entity(struct bfq_service_tree *st,
++			      struct bfq_entity *entity)
++{
++	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_sched_data *sd;
++
++	BUG_ON(!entity->on_st);
++
++	entity->on_st = 0;
++	st->wsum -= entity->weight;
++	if (bfqq != NULL) {
++		sd = entity->sched_data;
++		bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
++			     bfqq, atomic_read(&bfqq->ref));
++		bfq_put_queue(bfqq);
++	}
++}
++
++/**
++ * bfq_put_idle_entity - release the idle tree ref of an entity.
++ * @st: service tree for the entity.
++ * @entity: the entity being released.
++ */
++static void bfq_put_idle_entity(struct bfq_service_tree *st,
++				struct bfq_entity *entity)
++{
++	bfq_idle_extract(st, entity);
++	bfq_forget_entity(st, entity);
++}
++
++/**
++ * bfq_forget_idle - update the idle tree if necessary.
++ * @st: the service tree to act upon.
++ *
++ * To preserve the global O(log N) complexity we only remove one entry here;
++ * as the idle tree will not grow indefinitely this can be done safely.
++ */
++static void bfq_forget_idle(struct bfq_service_tree *st)
++{
++	struct bfq_entity *first_idle = st->first_idle;
++	struct bfq_entity *last_idle = st->last_idle;
++
++	if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
++	    !bfq_gt(last_idle->finish, st->vtime)) {
++		/*
++		 * Forget the whole idle tree, increasing the vtime past
++		 * the last finish time of idle entities.
++		 */
++		st->vtime = last_idle->finish;
++	}
++
++	if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
++		bfq_put_idle_entity(st, first_idle);
++}
++
++static struct bfq_service_tree *
++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
++			 struct bfq_entity *entity)
++{
++	struct bfq_service_tree *new_st = old_st;
++
++	if (entity->ioprio_changed) {
++		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++		unsigned short prev_weight, new_weight;
++		struct bfq_data *bfqd = NULL;
++		struct rb_root *root;
++#ifdef CONFIG_CGROUP_BFQIO
++		struct bfq_sched_data *sd;
++		struct bfq_group *bfqg;
++#endif
++
++		if (bfqq != NULL)
++			bfqd = bfqq->bfqd;
++#ifdef CONFIG_CGROUP_BFQIO
++		else {
++			sd = entity->my_sched_data;
++			bfqg = container_of(sd, struct bfq_group, sched_data);
++			BUG_ON(!bfqg);
++			bfqd = (struct bfq_data *)bfqg->bfqd;
++			BUG_ON(!bfqd);
++		}
++#endif
++
++		BUG_ON(old_st->wsum < entity->weight);
++		old_st->wsum -= entity->weight;
++
++		if (entity->new_weight != entity->orig_weight) {
++			if (entity->new_weight < BFQ_MIN_WEIGHT ||
++			    entity->new_weight > BFQ_MAX_WEIGHT) {
++				printk(KERN_CRIT "update_weight_prio: "
++						 "new_weight %d\n",
++					entity->new_weight);
++				BUG();
++			}
++			entity->orig_weight = entity->new_weight;
++			entity->ioprio =
++				bfq_weight_to_ioprio(entity->orig_weight);
++		} else if (entity->new_ioprio != entity->ioprio) {
++			entity->ioprio = entity->new_ioprio;
++			entity->orig_weight =
++					bfq_ioprio_to_weight(entity->ioprio);
++		} else
++			entity->new_weight = entity->orig_weight =
++				bfq_ioprio_to_weight(entity->ioprio);
++
++		entity->ioprio_class = entity->new_ioprio_class;
++		entity->ioprio_changed = 0;
++
++		/*
++		 * NOTE: here we may be changing the weight too early,
++		 * this will cause unfairness.  The correct approach
++		 * would have required additional complexity to defer
++		 * weight changes to the proper time instants (i.e.,
++		 * when entity->finish <= old_st->vtime).
++		 */
++		new_st = bfq_entity_service_tree(entity);
++
++		prev_weight = entity->weight;
++		new_weight = entity->orig_weight *
++			     (bfqq != NULL ? bfqq->wr_coeff : 1);
++		/*
++		 * If the weight of the entity changes, remove the entity
++		 * from its old weight counter (if there is a counter
++		 * associated with the entity), and add it to the counter
++		 * associated with its new weight.
++		 */
++		if (prev_weight != new_weight) {
++			root = bfqq ? &bfqd->queue_weights_tree :
++				      &bfqd->group_weights_tree;
++			bfq_weights_tree_remove(bfqd, entity, root);
++		}
++		entity->weight = new_weight;
++		/*
++		 * Add the entity to its weights tree only if it is
++		 * not associated with a weight-raised queue.
++		 */
++		if (prev_weight != new_weight &&
++		    (bfqq ? bfqq->wr_coeff == 1 : 1))
++			/* If we get here, root has been initialized. */
++			bfq_weights_tree_add(bfqd, entity, root);
++
++		new_st->wsum += entity->weight;
++
++		if (new_st != old_st)
++			entity->start = new_st->vtime;
++	}
++
++	return new_st;
++}
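++
++/*
++ * Weight raising acts through the multiplication above: with the
++ * default bfq_wr_coeff of 20 (set in bfq_init_queue()), a raised
++ * queue of original weight 4 (ioprio 4) competes with an effective
++ * weight of 80 for the duration of the raising period.
++ */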
++
++/**
++ * bfq_bfqq_served - update the scheduler status after selection for
++ *                   service.
++ * @bfqq: the queue being served.
++ * @served: bytes to transfer.
++ *
++ * NOTE: this can be optimized, as the timestamps of upper level entities
++ * are synchronized every time a new bfqq is selected for service.  For now,
++ * we keep it to better check consistency.
++ */
++static void bfq_bfqq_served(struct bfq_queue *bfqq, unsigned long served)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++	struct bfq_service_tree *st;
++
++	for_each_entity(entity) {
++		st = bfq_entity_service_tree(entity);
++
++		entity->service += served;
++		BUG_ON(entity->service > entity->budget);
++		BUG_ON(st->wsum == 0);
++
++		st->vtime += bfq_delta(served, st->wsum);
++		bfq_forget_idle(st);
++	}
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %lu secs", served);
++}
++
++/**
++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * @bfqq: the queue that needs a service update.
++ *
++ * When it's not possible to be fair in the service domain, because
++ * a queue is not consuming its budget fast enough (the meaning of
++ * fast depends on the timeout parameter), we charge it a full
++ * budget.  In this way we should obtain a sort of time-domain
++ * fairness among all the seeky/slow queues.
++ */
++static inline void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++
++	bfq_bfqq_served(bfqq, entity->budget - entity->service);
++}
++
++/**
++ * __bfq_activate_entity - activate an entity.
++ * @entity: the entity being activated.
++ *
++ * Called whenever an entity is activated, i.e., it is not active and one
++ * of its children receives a new request, or has to be reactivated due to
++ * budget exhaustion.  It uses the entity's current budget (and, if the
++ * entity is already active, the service it has received) to calculate
++ * its timestamps.
++ */
++static void __bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++	if (entity == sd->in_service_entity) {
++		BUG_ON(entity->tree != NULL);
++		/*
++		 * If we are requeueing the current entity we have
++		 * to take care of not charging to it service it has
++		 * not received.
++		 */
++		bfq_calc_finish(entity, entity->service);
++		entity->start = entity->finish;
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active) {
++		/*
++		 * Requeueing an entity due to a change of some
++		 * next_in_service entity below it.  We reuse the
++		 * old start time.
++		 */
++		bfq_active_extract(st, entity);
++	} else if (entity->tree == &st->idle) {
++		/*
++		 * Must be on the idle tree, bfq_idle_extract() will
++		 * check for that.
++		 */
++		bfq_idle_extract(st, entity);
++		entity->start = bfq_gt(st->vtime, entity->finish) ?
++				       st->vtime : entity->finish;
++	} else {
++		/*
++		 * The finish time of the entity may be invalid, and
++		 * it is in the past for sure, otherwise the queue
++		 * would have been on the idle tree.
++		 */
++		entity->start = st->vtime;
++		st->wsum += entity->weight;
++		bfq_get_entity(entity);
++
++		BUG_ON(entity->on_st);
++		entity->on_st = 1;
++	}
++
++	st = __bfq_entity_update_weight_prio(st, entity);
++	bfq_calc_finish(entity, entity->budget);
++	bfq_active_insert(st, entity);
++}
++
++/**
++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
++ * @entity: the entity to activate.
++ *
++ * Activate @entity and all the entities on the path from it to the root.
++ */
++static void bfq_activate_entity(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sd;
++
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			/*
++			 * No need to propagate the activation to the
++			 * upper entities, as they will be updated when
++			 * the in-service entity is rescheduled.
++			 */
++			break;
++	}
++}
++
++/**
++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
++ * @entity: the entity to deactivate.
++ * @requeue: if false, the entity will not be put into the idle tree.
++ *
++ * Deactivate an entity, independently of its previous state.  If the
++ * entity is not on a service tree, just return; otherwise extract it
++ * from whichever scheduler tree it is on, and, if the caller specified
++ * @requeue and the entity's finish time is still in the future, put it
++ * on the idle tree.
++ *
++ * Return %1 if the caller should update the entity hierarchy, i.e.,
++ * if the entity was in service or if it was the next_in_service for
++ * its sched_data; return %0 otherwise.
++ */
++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd = entity->sched_data;
++	struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++	int was_in_service = entity == sd->in_service_entity;
++	int ret = 0;
++
++	if (!entity->on_st)
++		return 0;
++
++	BUG_ON(was_in_service && entity->tree != NULL);
++
++	if (was_in_service) {
++		bfq_calc_finish(entity, entity->service);
++		sd->in_service_entity = NULL;
++	} else if (entity->tree == &st->active)
++		bfq_active_extract(st, entity);
++	else if (entity->tree == &st->idle)
++		bfq_idle_extract(st, entity);
++	else if (entity->tree != NULL)
++		BUG();
++
++	if (was_in_service || sd->next_in_service == entity)
++		ret = bfq_update_next_in_service(sd);
++
++	if (!requeue || !bfq_gt(entity->finish, st->vtime))
++		bfq_forget_entity(st, entity);
++	else
++		bfq_idle_insert(st, entity);
++
++	BUG_ON(sd->in_service_entity == entity);
++	BUG_ON(sd->next_in_service == entity);
++
++	return ret;
++}
++
++/**
++ * bfq_deactivate_entity - deactivate an entity.
++ * @entity: the entity to deactivate.
++ * @requeue: true if the entity can be put on the idle tree
++ */
++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++	struct bfq_sched_data *sd;
++	struct bfq_entity *parent;
++
++	for_each_entity_safe(entity, parent) {
++		sd = entity->sched_data;
++
++		if (!__bfq_deactivate_entity(entity, requeue))
++			/*
++			 * The parent entity is still backlogged, and
++			 * we don't need to update it as it is still
++			 * in service.
++			 */
++			break;
++
++		if (sd->next_in_service != NULL)
++			/*
++			 * The parent entity is still backlogged and
++			 * the budgets on the path towards the root
++			 * need to be updated.
++			 */
++			goto update;
++
++		/*
++		 * If we get here, the parent is no longer backlogged and
++		 * we want to propagate the dequeue upwards.
++		 */
++		requeue = 1;
++	}
++
++	return;
++
++update:
++	entity = parent;
++	for_each_entity(entity) {
++		__bfq_activate_entity(entity);
++
++		sd = entity->sched_data;
++		if (!bfq_update_next_in_service(sd))
++			break;
++	}
++}
++
++/**
++ * bfq_update_vtime - update vtime if necessary.
++ * @st: the service tree to act upon.
++ *
++ * If necessary update the service tree vtime to have at least one
++ * eligible entity, skipping to its start time.  Assumes that the
++ * active tree of the device is not empty.
++ *
++ * NOTE: this hierarchical implementation updates vtimes quite often,
++ * we may end up with reactivated processes getting timestamps after a
++ * vtime skip done because we needed a ->first_active entity on some
++ * intermediate node.
++ */
++static void bfq_update_vtime(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry;
++	struct rb_node *node = st->active.rb_node;
++
++	entry = rb_entry(node, struct bfq_entity, rb_node);
++	if (bfq_gt(entry->min_start, st->vtime)) {
++		st->vtime = entry->min_start;
++		bfq_forget_idle(st);
++	}
++}
++
++/**
++ * bfq_first_active_entity - find the eligible entity with
++ *                           the smallest finish time
++ * @st: the service tree to select from.
++ *
++ * This function searches the first schedulable entity, starting from the
++ * root of the tree and going on the left every time on this side there is
++ * a subtree with at least one eligible (start <= vtime) entity. The path on
++ * the right is followed only if a) the left subtree contains no eligible
++ * entities and b) no eligible entity has been found yet.
++ */
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++{
++	struct bfq_entity *entry, *first = NULL;
++	struct rb_node *node = st->active.rb_node;
++
++	while (node != NULL) {
++		entry = rb_entry(node, struct bfq_entity, rb_node);
++left:
++		if (!bfq_gt(entry->start, st->vtime))
++			first = entry;
++
++		BUG_ON(bfq_gt(entry->min_start, st->vtime));
++
++		if (node->rb_left != NULL) {
++			entry = rb_entry(node->rb_left,
++					 struct bfq_entity, rb_node);
++			if (!bfq_gt(entry->min_start, st->vtime)) {
++				node = node->rb_left;
++				goto left;
++			}
++		}
++		if (first != NULL)
++			break;
++		node = node->rb_right;
++	}
++
++	BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
++	return first;
++}
++
++/**
++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
++ * @st: the service tree.
++ *
++ * Update the virtual time in @st and return the first eligible entity
++ * it contains.
++ */
++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
++						   bool force)
++{
++	struct bfq_entity *entity, *new_next_in_service = NULL;
++
++	if (RB_EMPTY_ROOT(&st->active))
++		return NULL;
++
++	bfq_update_vtime(st);
++	entity = bfq_first_active_entity(st);
++	BUG_ON(bfq_gt(entity->start, st->vtime));
++
++	/*
++	 * If the chosen entity does not match the sched_data's
++	 * next_in_service and we are forcibly serving the IDLE priority
++	 * class tree, bubble up budget update.
++	 */
++	if (unlikely(force && entity != entity->sched_data->next_in_service)) {
++		new_next_in_service = entity;
++		for_each_entity(new_next_in_service)
++			bfq_update_budget(new_next_in_service);
++	}
++
++	return entity;
++}
++
++/**
++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
++ * @sd: the sched_data.
++ * @extract: if true, the returned entity will also be extracted from @sd.
++ *
++ * NOTE: since we cache the next_in_service entity at each level of the
++ * hierarchy, the complexity of the lookup can be decreased with
++ * absolutely no effort by just returning the cached next_in_service value;
++ * we prefer to do full lookups to test the consistency of the data
++ * structures.
++ */
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++						 int extract,
++						 struct bfq_data *bfqd)
++{
++	struct bfq_service_tree *st = sd->service_tree;
++	struct bfq_entity *entity;
++	int i = 0;
++
++	BUG_ON(sd->in_service_entity != NULL);
++
++	if (bfqd != NULL &&
++	    jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++		entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
++						  true);
++		if (entity != NULL) {
++			i = BFQ_IOPRIO_CLASSES - 1;
++			bfqd->bfq_class_idle_last_service = jiffies;
++			sd->next_in_service = entity;
++		}
++	}
++	for (; i < BFQ_IOPRIO_CLASSES; i++) {
++		entity = __bfq_lookup_next_entity(st + i, false);
++		if (entity != NULL) {
++			if (extract) {
++				bfq_check_next_in_service(sd, entity);
++				bfq_active_extract(st + i, entity);
++				sd->in_service_entity = entity;
++				sd->next_in_service = NULL;
++			}
++			break;
++		}
++	}
++
++	return entity;
++}
++
++/*
++ * Get next queue for service.
++ */
++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
++{
++	struct bfq_entity *entity = NULL;
++	struct bfq_sched_data *sd;
++	struct bfq_queue *bfqq;
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++
++	if (bfqd->busy_queues == 0)
++		return NULL;
++
++	sd = &bfqd->root_group->sched_data;
++	for (; sd != NULL; sd = entity->my_sched_data) {
++		entity = bfq_lookup_next_entity(sd, 1, bfqd);
++		BUG_ON(entity == NULL);
++		entity->service = 0;
++	}
++
++	bfqq = bfq_entity_to_bfqq(entity);
++	BUG_ON(bfqq == NULL);
++
++	return bfqq;
++}
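++
++/*
++ * The descent above runs once per hierarchy level: with
++ * CONFIG_CGROUP_BFQIO, a root-group -> cgroup -> queue setup does
++ * two lookups, each picking the eligible entity with the smallest
++ * finish time at its level; without cgroups, my_sched_data is NULL
++ * for every entity (all are queues) and the body runs exactly once.
++ */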
++
++/*
++ * Forced extraction of the given queue.
++ */
++static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
++				      struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity;
++	struct bfq_sched_data *sd;
++
++	BUG_ON(bfqd->in_service_queue != NULL);
++
++	entity = &bfqq->entity;
++	/*
++	 * Bubble up extraction/update from the leaf to the root.
++	 */
++	for_each_entity(entity) {
++		sd = entity->sched_data;
++		bfq_update_budget(entity);
++		bfq_update_vtime(bfq_entity_service_tree(entity));
++		bfq_active_extract(bfq_entity_service_tree(entity), entity);
++		sd->in_service_entity = entity;
++		sd->next_in_service = NULL;
++		entity->service = 0;
++	}
++}
++
++static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++{
++	if (bfqd->in_service_bic != NULL) {
++		put_io_context(bfqd->in_service_bic->icq.ioc);
++		bfqd->in_service_bic = NULL;
++	}
++
++	bfqd->in_service_queue = NULL;
++	del_timer(&bfqd->idle_slice_timer);
++}
++
++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++				int requeue)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	if (bfqq == bfqd->in_service_queue)
++		__bfq_bfqd_reset_in_service(bfqd);
++
++	bfq_deactivate_entity(entity, requeue);
++}
++
++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	struct bfq_entity *entity = &bfqq->entity;
++
++	bfq_activate_entity(entity);
++}
++
++/*
++ * Called when the bfqq no longer has requests pending, remove it from
++ * the service tree.
++ */
++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++			      int requeue)
++{
++	BUG_ON(!bfq_bfqq_busy(bfqq));
++	BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++	bfq_log_bfqq(bfqd, bfqq, "del from busy");
++
++	bfq_clear_bfqq_busy(bfqq);
++
++	BUG_ON(bfqd->busy_queues == 0);
++	bfqd->busy_queues--;
++
++	if (!bfqq->dispatched) {
++		bfq_weights_tree_remove(bfqd, &bfqq->entity,
++					&bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			BUG_ON(!bfqd->busy_in_flight_queues);
++			bfqd->busy_in_flight_queues--;
++			if (bfq_bfqq_constantly_seeky(bfqq)) {
++				BUG_ON(!bfqd->
++					const_seeky_busy_in_flight_queues);
++				bfqd->const_seeky_busy_in_flight_queues--;
++			}
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues--;
++
++	bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++}
++
++/*
++ * Called when an inactive queue receives a new request.
++ */
++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++	BUG_ON(bfq_bfqq_busy(bfqq));
++	BUG_ON(bfqq == bfqd->in_service_queue);
++
++	bfq_log_bfqq(bfqd, bfqq, "add to busy");
++
++	bfq_activate_bfqq(bfqd, bfqq);
++
++	bfq_mark_bfqq_busy(bfqq);
++	bfqd->busy_queues++;
++
++	if (!bfqq->dispatched) {
++		if (bfqq->wr_coeff == 1)
++			bfq_weights_tree_add(bfqd, &bfqq->entity,
++					     &bfqd->queue_weights_tree);
++		if (!blk_queue_nonrot(bfqd->queue)) {
++			bfqd->busy_in_flight_queues++;
++			if (bfq_bfqq_constantly_seeky(bfqq))
++				bfqd->const_seeky_busy_in_flight_queues++;
++		}
++	}
++	if (bfqq->wr_coeff > 1)
++		bfqd->wr_busy_queues++;
++}
+diff --git a/block/bfq.h b/block/bfq.h
+new file mode 100644
+index 0000000..518f2ac
+--- /dev/null
++++ b/block/bfq.h
+@@ -0,0 +1,775 @@
++/*
++ * BFQ-v7r7 for 4.0.0: data structures and common functions prototypes.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ *		      Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifndef _BFQ_H
++#define _BFQ_H
++
++#include <linux/blktrace_api.h>
++#include <linux/hrtimer.h>
++#include <linux/ioprio.h>
++#include <linux/rbtree.h>
++
++#define BFQ_IOPRIO_CLASSES	3
++#define BFQ_CL_IDLE_TIMEOUT	(HZ/5)
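++/* e.g. 200 ms with HZ == 1000: roughly the longest the idle class is
++ * left unserved before bfq_lookup_next_entity() checks it first. */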
++
++#define BFQ_MIN_WEIGHT	1
++#define BFQ_MAX_WEIGHT	1000
++
++#define BFQ_DEFAULT_QUEUE_IOPRIO	4
++
++#define BFQ_DEFAULT_GRP_WEIGHT	10
++#define BFQ_DEFAULT_GRP_IOPRIO	0
++#define BFQ_DEFAULT_GRP_CLASS	IOPRIO_CLASS_BE
++
++struct bfq_entity;
++
++/**
++ * struct bfq_service_tree - per ioprio_class service tree.
++ * @active: tree for active entities (i.e., those backlogged).
++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
++ * @first_idle: idle entity with minimum F_i.
++ * @last_idle: idle entity with maximum F_i.
++ * @vtime: scheduler virtual time.
++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
++ *
++ * Each service tree represents a B-WF2Q+ scheduler on its own.  Each
++ * ioprio_class has its own independent scheduler, and so its own
++ * bfq_service_tree.  All the fields are protected by the queue lock
++ * of the containing bfqd.
++ */
++struct bfq_service_tree {
++	struct rb_root active;
++	struct rb_root idle;
++
++	struct bfq_entity *first_idle;
++	struct bfq_entity *last_idle;
++
++	u64 vtime;
++	unsigned long wsum;
++};
++
++/**
++ * struct bfq_sched_data - multi-class scheduler.
++ * @in_service_entity: entity in service.
++ * @next_in_service: head-of-the-line entity in the scheduler.
++ * @service_tree: array of service trees, one per ioprio_class.
++ *
++ * bfq_sched_data is the basic scheduler queue.  It supports three
++ * ioprio_classes, and can be used either as a toplevel queue or as
++ * an intermediate queue in a hierarchical setup.
++ * @next_in_service points to the active entity of the sched_data
++ * service trees that will be scheduled next.
++ *
++ * The supported ioprio_classes are the same as in CFQ, in descending
++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
++ * Requests from higher priority queues are served before all the
++ * requests from lower priority queues; among queues of the same
++ * class, requests are served according to B-WF2Q+.
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_sched_data {
++	struct bfq_entity *in_service_entity;
++	struct bfq_entity *next_in_service;
++	struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++};
++
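As an illustration of the class ordering just described, here is a minimal sketch (not part of the patch; the helper name is hypothetical) of scanning the per-class service trees in descending priority:

/*
 * Sketch only: return the highest-priority service tree with active
 * entities, scanning in descending class priority (RT, BE, IDLE).
 * The real next_in_service bookkeeping in the patch is more involved.
 */
static struct bfq_service_tree *
example_first_nonempty_tree(struct bfq_sched_data *sd)
{
	int i;

	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
		if (!RB_EMPTY_ROOT(&sd->service_tree[i].active))
			return &sd->service_tree[i];
	return NULL;
}
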
++/**
++ * struct bfq_weight_counter - counter of the number of all active entities
++ *                             with a given weight.
++ * @weight: weight of the entities that this counter refers to.
++ * @num_active: number of active entities with this weight.
++ * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
++ *                and @group_weights_tree).
++ */
++struct bfq_weight_counter {
++	short int weight;
++	unsigned int num_active;
++	struct rb_node weights_node;
++};
++
++/**
++ * struct bfq_entity - schedulable entity.
++ * @rb_node: service_tree member.
++ * @weight_counter: pointer to the weight counter associated with this entity.
++ * @on_st: flag, true if the entity is on a tree (either the active or
++ *         the idle one of its service_tree).
++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
++ * @start: B-WF2Q+ start timestamp (aka S_i).
++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
++ * @min_start: minimum start time of the (active) subtree rooted at
++ *             this entity; used for O(log N) lookups into active trees.
++ * @service: service received during the last round of service.
++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
++ * @weight: weight of the queue
++ * @parent: parent entity, for hierarchical scheduling.
++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
++ *                 associated scheduler queue, %NULL on leaf nodes.
++ * @sched_data: the scheduler queue this entity belongs to.
++ * @ioprio: the ioprio in use.
++ * @new_weight: when a weight change is requested, the new weight value.
++ * @orig_weight: original weight, used to implement weight boosting
++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
++ * @ioprio_class: the ioprio_class in use.
++ * @new_ioprio_class: when an ioprio_class change is requested, the new
++ *                    ioprio_class value.
++ * @ioprio_changed: flag, true when the user requested a weight, ioprio or
++ *                  ioprio_class change.
++ *
++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
++ * cgroup hierarchy) or a bfq_group into the upper level scheduler.  Each
++ * entity belongs to the sched_data of the parent group in the cgroup
++ * hierarchy.  Non-leaf entities have also their own sched_data, stored
++ * in @my_sched_data.
++ *
++ * Each entity stores independently its priority values; this would
++ * allow different weights on different devices, but this
++ * functionality is not exported to userspace by now.  Priorities and
++ * weights are updated lazily, first storing the new values into the
++ * new_* fields, then setting the @ioprio_changed flag.  As soon as
++ * there is a transition in the entity state that allows the priority
++ * update to take place the effective and the requested priority
++ * values are synchronized.
++ *
++ * Unless cgroups are used, the weight value is calculated from the
++ * ioprio to export the same interface as CFQ.  When dealing with
++ * ``well-behaved'' queues (i.e., queues that do not spend too much
++ * time consuming their budget and have true sequential behavior, and
++ * when there are no external factors breaking anticipation) the
++ * relative weights at each level of the cgroups hierarchy should be
++ * guaranteed.  All the fields are protected by the queue lock of the
++ * containing bfqd.
++ */
++struct bfq_entity {
++	struct rb_node rb_node;
++	struct bfq_weight_counter *weight_counter;
++
++	int on_st;
++
++	u64 finish;
++	u64 start;
++
++	struct rb_root *tree;
++
++	u64 min_start;
++
++	unsigned long service, budget;
++	unsigned short weight, new_weight;
++	unsigned short orig_weight;
++
++	struct bfq_entity *parent;
++
++	struct bfq_sched_data *my_sched_data;
++	struct bfq_sched_data *sched_data;
++
++	unsigned short ioprio, new_ioprio;
++	unsigned short ioprio_class, new_ioprio_class;
++
++	int ioprio_changed;
++};
++
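To make the timestamp relation documented above concrete (F_i = S_i + budget / weight), together with the lazy-update protocol, here is a hedged sketch; the example_* helpers are hypothetical and not in the patch:

/* Sketch: B-WF2Q+ finish time. Weights are >= BFQ_MIN_WEIGHT (1),
 * so the division is safe. */
static inline u64 example_bfq_finish_time(const struct bfq_entity *entity)
{
	return entity->start + (u64)entity->budget / entity->weight;
}

/* Sketch: request a weight change; it is applied lazily, at the next
 * entity-state transition that inspects ioprio_changed. */
static inline void example_bfq_set_weight(struct bfq_entity *entity,
					  unsigned short weight)
{
	entity->new_weight = weight;
	entity->ioprio_changed = 1;
}
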
++struct bfq_group;
++
++/**
++ * struct bfq_queue - leaf schedulable entity.
++ * @ref: reference counter.
++ * @bfqd: parent bfq_data.
++ * @new_bfqq: shared bfq_queue if queue is cooperating with
++ *           one or more other queues.
++ * @pos_node: request-position tree member (see bfq_data's @rq_pos_tree).
++ * @pos_root: request-position tree root (see bfq_data's @rq_pos_tree).
++ * @sort_list: sorted list of pending requests.
++ * @next_rq: if fifo isn't expired, next request to serve.
++ * @queued: nr of requests queued in @sort_list.
++ * @allocated: currently allocated requests.
++ * @meta_pending: pending metadata requests.
++ * @fifo: fifo list of requests in sort_list.
++ * @entity: entity representing this queue in the scheduler.
++ * @max_budget: maximum budget allowed from the feedback mechanism.
++ * @budget_timeout: budget expiration (in jiffies).
++ * @dispatched: number of requests on the dispatch list or inside driver.
++ * @flags: status flags.
++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
++ * @burst_list_node: node for the device's burst list.
++ * @seek_samples: number of seeks sampled
++ * @seek_total: sum of the distances of the seeks sampled
++ * @seek_mean: mean seek distance
++ * @last_request_pos: position of the last request enqueued
++ * @requests_within_timer: number of consecutive pairs of request completion
++ *                         and arrival, such that the queue becomes idle
++ *                         after the completion, but the next request arrives
++ *                         within an idle time slice; used only if the queue's
++ *                         IO_bound flag has been cleared.
++ * @pid: pid of the process owning the queue, used for logging purposes.
++ * @last_wr_start_finish: start time of the current weight-raising period if
++ *                        the @bfq_queue is being weight-raised, otherwise
++ *                        finish time of the last weight-raising period.
++ * @wr_cur_max_time: current max raising time for this queue.
++ * @soft_rt_next_start: minimum time instant such that, only if a new
++ *                      request is enqueued after this time instant in an
++ *                      idle @bfq_queue with no outstanding requests, then
++ *                      the task associated with the queue is deemed
++ *                      soft real-time (see the comments to the function
++ *                      bfq_bfqq_softrt_next_start()).
++ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
++ *                      idle to backlogged
++ * @service_from_backlogged: cumulative service received from the @bfq_queue
++ *                           since the last transition from idle to
++ *                           backlogged
++ *
++ * A bfq_queue is a leaf request queue; it can be associated with one
++ * io_context or more, if it is async or shared between cooperating
++ * processes. @cgroup
++ * holds a reference to the cgroup, to be sure that it does not disappear while
++ * a bfqq still references it (mostly to avoid races between request issuing and
++ * task migration followed by cgroup destruction).
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_queue {
++	atomic_t ref;
++	struct bfq_data *bfqd;
++
++	/* fields for cooperating queues handling */
++	struct bfq_queue *new_bfqq;
++	struct rb_node pos_node;
++	struct rb_root *pos_root;
++
++	struct rb_root sort_list;
++	struct request *next_rq;
++	int queued[2];
++	int allocated[2];
++	int meta_pending;
++	struct list_head fifo;
++
++	struct bfq_entity entity;
++
++	unsigned long max_budget;
++	unsigned long budget_timeout;
++
++	int dispatched;
++
++	unsigned int flags;
++
++	struct list_head bfqq_list;
++
++	struct hlist_node burst_list_node;
++
++	unsigned int seek_samples;
++	u64 seek_total;
++	sector_t seek_mean;
++	sector_t last_request_pos;
++
++	unsigned int requests_within_timer;
++
++	pid_t pid;
++
++	/* weight-raising fields */
++	unsigned long wr_cur_max_time;
++	unsigned long soft_rt_next_start;
++	unsigned long last_wr_start_finish;
++	unsigned int wr_coeff;
++	unsigned long last_idle_bklogged;
++	unsigned long service_from_backlogged;
++};
++
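The seek-sampling fields above feed BFQ's seekiness heuristics; roughly as in the following sketch (a hypothetical helper, not in the patch; assumes div_u64() from <linux/math64.h>):

/* Sketch: mean seek distance from the sampling fields above. */
static inline sector_t example_seek_mean(const struct bfq_queue *bfqq)
{
	if (!bfqq->seek_samples)
		return 0;
	return (sector_t)div_u64(bfqq->seek_total, bfqq->seek_samples);
}
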
++/**
++ * struct bfq_ttime - per process thinktime stats.
++ * @ttime_total: total process thinktime
++ * @ttime_samples: number of thinktime samples
++ * @ttime_mean: average process thinktime
++ */
++struct bfq_ttime {
++	unsigned long last_end_request;
++
++	unsigned long ttime_total;
++	unsigned long ttime_samples;
++	unsigned long ttime_mean;
++};
++
++/**
++ * struct bfq_io_cq - per (request_queue, io_context) structure.
++ * @icq: associated io_cq structure
++ * @bfqq: array of two process queues, the sync and the async
++ * @ttime: associated @bfq_ttime struct
++ */
++struct bfq_io_cq {
++	struct io_cq icq; /* must be the first member */
++	struct bfq_queue *bfqq[2];
++	struct bfq_ttime ttime;
++	int ioprio;
++};
++
++enum bfq_device_speed {
++	BFQ_BFQD_FAST,
++	BFQ_BFQD_SLOW,
++};
++
++/**
++ * struct bfq_data - per device data structure.
++ * @queue: request queue for the managed device.
++ * @root_group: root bfq_group for the device.
++ * @rq_pos_tree: rbtree sorted by next_request position, used when
++ *               determining if two or more queues have interleaving
++ *               requests (see bfq_close_cooperator()).
++ * @active_numerous_groups: number of bfq_groups containing more than one
++ *                          active @bfq_entity.
++ * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
++ *                      weight. Used to keep track of whether all @bfq_queues
++ *                      have the same weight. The tree contains one counter
++ *                      for each distinct weight associated to some active
++ *                      and not weight-raised @bfq_queue (see the comments to
++ *                      the functions bfq_weights_tree_[add|remove] for
++ *                      further details).
++ * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
++ *                      by weight. Used to keep track of whether all
++ *                      @bfq_groups have the same weight. The tree contains
++ *                      one counter for each distinct weight associated to
++ *                      some active @bfq_group (see the comments to the
++ *                      functions bfq_weights_tree_[add|remove] for further
++ *                      details).
++ * @busy_queues: number of bfq_queues containing requests (including the
++ *		 queue in service, even if it is idling).
++ * @busy_in_flight_queues: number of @bfq_queues containing pending or
++ *                         in-flight requests, plus the @bfq_queue in
++ *                         service, even if idle but waiting for the
++ *                         possible arrival of its next sync request. This
++ *                         field is updated only if the device is rotational,
++ *                         but used only if the device is also NCQ-capable.
++ *                         The field is updated even for non-NCQ-capable
++ *                         rotational devices because @hw_tag may be set
++ *                         later than busy_in_flight_queues first needs to
++ *                         be incremented. Accounting for that possibility,
++ *                         to avoid unbalanced increments and decrements,
++ *                         would imply more overhead than simply updating
++ *                         busy_in_flight_queues regardless of the value
++ *                         of @hw_tag.
++ * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
++ *                                     (that is, seeky queues that expired
++ *                                     for budget timeout at least once)
++ *                                     containing pending or in-flight
++ *                                     requests, including the in-service
++ *                                     @bfq_queue if constantly seeky. This
++ *                                     field is updated only if the device
++ *                                     is rotational, but used only if the
++ *                                     device is also NCQ-capable (see the
++ *                                     comments to @busy_in_flight_queues).
++ * @wr_busy_queues: number of weight-raised busy @bfq_queues.
++ * @queued: number of queued requests.
++ * @rq_in_driver: number of requests dispatched and waiting for completion.
++ * @sync_flight: number of sync requests in the driver.
++ * @max_rq_in_driver: max number of reqs in driver in the last
++ *                    @hw_tag_samples completed requests.
++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
++ * @budgets_assigned: number of budgets assigned.
++ * @idle_slice_timer: timer set when idling for the next sequential request
++ *                    from the queue in service.
++ * @unplug_work: delayed work to restart dispatching on the request queue.
++ * @in_service_queue: bfq_queue in service.
++ * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
++ * @last_position: on-disk position of the last served request.
++ * @last_budget_start: beginning of the last budget.
++ * @last_idling_start: beginning of the last idle slice.
++ * @peak_rate: peak transfer rate observed for a budget.
++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before
++ *                  rescheduling.
++ * @group_list: list of all the bfq_groups active on the device.
++ * @active_list: list of all the bfq_queues active on the device.
++ * @idle_list: list of all the bfq_queues idle on the device.
++ * @bfq_quantum: max number of requests dispatched per dispatch round.
++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
++ *                   requests are served in fifo order.
++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
++ * @bfq_back_max: maximum allowed backward seek.
++ * @bfq_slice_idle: maximum idling time.
++ * @bfq_user_max_budget: user-configured max budget value
++ *                       (0 for auto-tuning).
++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
++ *                           async queues.
++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used
++ *               to prevent seeky queues from imposing long latencies on
++ *               well-behaved ones (this also implies that seeky queues cannot
++ *               receive guarantees in the service domain; after a timeout
++ *               they are charged for the whole allocated budget, to try
++ *               to preserve a behavior reasonably fair among them, but
++ *               without service-domain guarantees).
++ * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
++ *                   no longer granted any weight-raising.
++ * @bfq_failed_cooperations: number of consecutive failed cooperation
++ *                           chances after which weight-raising is restored
++ *                           to a queue subject to more than bfq_coop_thresh
++ *                           queue merges.
++ * @bfq_requests_within_timer: number of consecutive requests that must be
++ *                             issued within the idle time slice to
++ *                             re-enable idling for a queue that was marked
++ *                             non-I/O-bound (see the definition of the
++ *                             IO_bound flag for further details).
++ * @last_ins_in_burst: last time at which a queue entered the current
++ *                     burst of queues being activated shortly after
++ *                     each other; for more details about this and the
++ *                     following parameters related to a burst of
++ *                     activations, see the comments to the function
++ *                     @bfq_handle_burst.
++ * @bfq_burst_interval: reference time interval used to decide whether a
++ *                      queue has been activated shortly after
++ *                      @last_ins_in_burst.
++ * @burst_size: number of queues in the current burst of queue activations.
++ * @bfq_large_burst_thresh: maximum burst size above which the current
++ * 			    queue-activation burst is deemed as 'large'.
++ * @large_burst: true if a large queue-activation burst is in progress.
++ * @burst_list: head of the burst list (as for the above fields, more details
++ * 		in the comments to the function bfq_handle_burst).
++ * @low_latency: if set to true, low-latency heuristics are enabled.
++ * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
++ *                queue is multiplied.
++ * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
++ * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
++ * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
++ *			  may be reactivated for a queue (in jiffies).
++ * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
++ *				after which weight-raising may be
++ *				reactivated for an already busy queue
++ *				(in jiffies).
++ * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
++ *			    sectors per second.
++ * @RT_prod: cached value of the product R*T used for computing the maximum
++ *	     duration of the weight raising automatically.
++ * @device_speed: device-speed class for the low-latency heuristic.
++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ *
++ * All the fields are protected by the @queue lock.
++ */
++struct bfq_data {
++	struct request_queue *queue;
++
++	struct bfq_group *root_group;
++	struct rb_root rq_pos_tree;
++
++#ifdef CONFIG_CGROUP_BFQIO
++	int active_numerous_groups;
++#endif
++
++	struct rb_root queue_weights_tree;
++	struct rb_root group_weights_tree;
++
++	int busy_queues;
++	int busy_in_flight_queues;
++	int const_seeky_busy_in_flight_queues;
++	int wr_busy_queues;
++	int queued;
++	int rq_in_driver;
++	int sync_flight;
++
++	int max_rq_in_driver;
++	int hw_tag_samples;
++	int hw_tag;
++
++	int budgets_assigned;
++
++	struct timer_list idle_slice_timer;
++	struct work_struct unplug_work;
++
++	struct bfq_queue *in_service_queue;
++	struct bfq_io_cq *in_service_bic;
++
++	sector_t last_position;
++
++	ktime_t last_budget_start;
++	ktime_t last_idling_start;
++	int peak_rate_samples;
++	u64 peak_rate;
++	unsigned long bfq_max_budget;
++
++	struct hlist_head group_list;
++	struct list_head active_list;
++	struct list_head idle_list;
++
++	unsigned int bfq_quantum;
++	unsigned int bfq_fifo_expire[2];
++	unsigned int bfq_back_penalty;
++	unsigned int bfq_back_max;
++	unsigned int bfq_slice_idle;
++	u64 bfq_class_idle_last_service;
++
++	unsigned int bfq_user_max_budget;
++	unsigned int bfq_max_budget_async_rq;
++	unsigned int bfq_timeout[2];
++
++	unsigned int bfq_coop_thresh;
++	unsigned int bfq_failed_cooperations;
++	unsigned int bfq_requests_within_timer;
++
++	unsigned long last_ins_in_burst;
++	unsigned long bfq_burst_interval;
++	int burst_size;
++	unsigned long bfq_large_burst_thresh;
++	bool large_burst;
++	struct hlist_head burst_list;
++
++	bool low_latency;
++
++	/* parameters of the low_latency heuristics */
++	unsigned int bfq_wr_coeff;
++	unsigned int bfq_wr_max_time;
++	unsigned int bfq_wr_rt_max_time;
++	unsigned int bfq_wr_min_idle_time;
++	unsigned long bfq_wr_min_inter_arr_async;
++	unsigned int bfq_wr_max_softrt_rate;
++	u64 RT_prod;
++	enum bfq_device_speed device_speed;
++
++	struct bfq_queue oom_bfqq;
++};
++
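A short sketch of the test that @queue_weights_tree enables, namely whether all active, non-weight-raised queues share a single weight (in which case the tree holds at most one counter node). Illustration only, not the patch's code:

/* Sketch: true iff the weights tree is empty or has a single counter. */
static inline bool example_queue_weights_symmetric(struct bfq_data *bfqd)
{
	struct rb_node *node = bfqd->queue_weights_tree.rb_node;

	return !node || (!node->rb_left && !node->rb_right);
}
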
++enum bfqq_state_flags {
++	BFQ_BFQQ_FLAG_busy = 0,		/* has requests or is in service */
++	BFQ_BFQQ_FLAG_wait_request,	/* waiting for a request */
++	BFQ_BFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
++	BFQ_BFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
++	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
++	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
++	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
++	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
++	BFQ_BFQQ_FLAG_IO_bound,         /*
++					 * bfqq has timed-out at least once
++					 * having consumed at most 2/10 of
++					 * its budget
++					 */
++	BFQ_BFQQ_FLAG_in_large_burst,	/*
++					 * bfqq activated in a large burst,
++					 * see comments to bfq_handle_burst.
++					 */
++	BFQ_BFQQ_FLAG_constantly_seeky,	/*
++					 * bfqq has proved to be slow and
++					 * seeky until budget timeout
++					 */
++	BFQ_BFQQ_FLAG_softrt_update,    /*
++					 * may need softrt-next-start
++					 * update
++					 */
++	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
++};
++
++#define BFQ_BFQQ_FNS(name)						\
++static inline void bfq_mark_bfqq_##name(struct bfq_queue *bfqq)		\
++{									\
++	(bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static inline void bfq_clear_bfqq_##name(struct bfq_queue *bfqq)	\
++{									\
++	(bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name);			\
++}									\
++static inline int bfq_bfqq_##name(const struct bfq_queue *bfqq)		\
++{									\
++	return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0;	\
++}
++
++BFQ_BFQQ_FNS(busy);
++BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(must_alloc);
++BFQ_BFQQ_FNS(fifo_expire);
++BFQ_BFQQ_FNS(idle_window);
++BFQ_BFQQ_FNS(prio_changed);
++BFQ_BFQQ_FNS(sync);
++BFQ_BFQQ_FNS(budget_new);
++BFQ_BFQQ_FNS(IO_bound);
++BFQ_BFQQ_FNS(in_large_burst);
++BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(coop);
++BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(softrt_update);
++#undef BFQ_BFQQ_FNS
++
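For reference, this is what the macro generates when invoked as BFQ_BFQQ_FNS(busy), with the expansion written out by hand:

static inline void bfq_mark_bfqq_busy(struct bfq_queue *bfqq)
{
	(bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_busy);
}
static inline void bfq_clear_bfqq_busy(struct bfq_queue *bfqq)
{
	(bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_busy);
}
static inline int bfq_bfqq_busy(const struct bfq_queue *bfqq)
{
	return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_busy)) != 0;
}
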
++/* Logging facilities. */
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++
++#define bfq_log(bfqd, fmt, args...) \
++	blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
++
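Both macros are used like printk-style helpers; for instance (illustrative calls, message text arbitrary):

bfq_log_bfqq(bfqd, bfqq, "dispatched rq, %d sync queued", bfqq->queued[1]);
bfq_log(bfqd, "busy queues: %d", bfqd->busy_queues);

The first emits a "bfq<pid> ..." message into the blktrace stream of the device's request queue, the second a per-device "bfq ..." message.
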
++/* Expiration reasons. */
++enum bfqq_expiration {
++	BFQ_BFQQ_TOO_IDLE = 0,		/*
++					 * queue has been idling for
++					 * too long
++					 */
++	BFQ_BFQQ_BUDGET_TIMEOUT,	/* budget took too long to be used */
++	BFQ_BFQQ_BUDGET_EXHAUSTED,	/* budget consumed */
++	BFQ_BFQQ_NO_MORE_REQUESTS,	/* the queue has no more requests */
++};
++
++#ifdef CONFIG_CGROUP_BFQIO
++/**
++ * struct bfq_group - per (device, cgroup) data structure.
++ * @entity: schedulable entity to insert into the parent group sched_data.
++ * @sched_data: own sched_data, to contain child entities (they may be
++ *              both bfq_queues and bfq_groups).
++ * @group_node: node to be inserted into the bfqio_cgroup->group_data
++ *              list of the containing cgroup's bfqio_cgroup.
++ * @bfqd_node: node to be inserted into the @bfqd->group_list list
++ *             of the groups active on the same device; used for cleanup.
++ * @bfqd: the bfq_data for the device this group acts upon.
++ * @async_bfqq: array of async queues for all the tasks belonging to
++ *              the group, one queue per ioprio value per ioprio_class,
++ *              except for the idle class that has only one queue.
++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
++ *             to avoid too many special cases during group creation/
++ *             migration.
++ * @active_entities: number of active entities belonging to the group;
++ *                   unused for the root group. Used to know whether there
++ *                   are groups with more than one active @bfq_entity
++ *                   (see the comments to the function
++ *                   bfq_bfqq_must_not_expire()).
++ *
++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
++ * there is a set of bfq_groups, each one collecting the lower-level
++ * entities belonging to the group that are acting on the same device.
++ *
++ * Locking works as follows:
++ *    o @group_node is protected by the bfqio_cgroup lock, and is accessed
++ *      via RCU from its readers.
++ *    o @bfqd is protected by the queue lock, RCU is used to access it
++ *      from the readers.
++ *    o All the other fields are protected by the @bfqd queue lock.
++ */
++struct bfq_group {
++	struct bfq_entity entity;
++	struct bfq_sched_data sched_data;
++
++	struct hlist_node group_node;
++	struct hlist_node bfqd_node;
++
++	void *bfqd;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++
++	struct bfq_entity *my_entity;
++
++	int active_entities;
++};
++
++/**
++ * struct bfqio_cgroup - bfq cgroup data structure.
++ * @css: subsystem state for bfq in the containing cgroup.
++ * @online: flag marked when the subsystem is inserted.
++ * @weight: cgroup weight.
++ * @ioprio: cgroup ioprio.
++ * @ioprio_class: cgroup ioprio_class.
++ * @lock: spinlock that protects @ioprio, @ioprio_class and @group_data.
++ * @group_data: list containing the bfq_group belonging to this cgroup.
++ *
++ * @group_data is accessed using RCU, with @lock protecting the updates;
++ * @ioprio and @ioprio_class are protected by @lock.
++ */
++struct bfqio_cgroup {
++	struct cgroup_subsys_state css;
++	bool online;
++
++	unsigned short weight, ioprio, ioprio_class;
++
++	spinlock_t lock;
++	struct hlist_head group_data;
++};
++#else
++struct bfq_group {
++	struct bfq_sched_data sched_data;
++
++	struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++	struct bfq_queue *async_idle_bfqq;
++};
++#endif
++
++static inline struct bfq_service_tree *
++bfq_entity_service_tree(struct bfq_entity *entity)
++{
++	struct bfq_sched_data *sched_data = entity->sched_data;
++	unsigned int idx = entity->ioprio_class - 1;
++
++	BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
++	BUG_ON(sched_data == NULL);
++
++	return sched_data->service_tree + idx;
++}
++
++static inline struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
++					    bool is_sync)
++{
++	return bic->bfqq[is_sync];
++}
++
++static inline void bic_set_bfqq(struct bfq_io_cq *bic,
++				struct bfq_queue *bfqq, bool is_sync)
++{
++	bic->bfqq[is_sync] = bfqq;
++}
++
++static inline struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
++{
++	return bic->icq.q->elevator->elevator_data;
++}
++
++/**
++ * bfq_get_bfqd_locked - get a lock on a bfqd using an RCU-protected pointer.
++ * @ptr: a pointer to a bfqd.
++ * @flags: storage for the flags to be saved.
++ *
++ * This function allows bfqg->bfqd to be protected by the
++ * queue lock of the bfqd it references; the pointer is dereferenced
++ * under RCU, so the storage for bfqd is guaranteed to stay valid as
++ * long as the RCU read side critical section does not end.  After the
++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
++ * sure that no other writer accessed it.  If we raced with a writer,
++ * the function returns NULL, with the queue unlocked, otherwise it
++ * returns the dereferenced pointer, with the queue locked.
++ */
++static inline struct bfq_data *bfq_get_bfqd_locked(void **ptr,
++						   unsigned long *flags)
++{
++	struct bfq_data *bfqd;
++
++	rcu_read_lock();
++	bfqd = rcu_dereference(*(struct bfq_data **)ptr);
++
++	if (bfqd != NULL) {
++		spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
++		if (*ptr == bfqd)
++			goto out;
++		spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++	}
++
++	bfqd = NULL;
++out:
++	rcu_read_unlock();
++	return bfqd;
++}
++
++static inline void bfq_put_bfqd_unlock(struct bfq_data *bfqd,
++				       unsigned long *flags)
++{
++	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++}
++
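The two helpers above pair as in the following usage sketch (bfqg is assumed to be a struct bfq_group whose ->bfqd field may be cleared concurrently by a writer):

unsigned long flags;
struct bfq_data *bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);

if (bfqd != NULL) {
	/* queue_lock held: bfqg->bfqd cannot change under us here */
	/* ... operate on bfqd ... */
	bfq_put_bfqd_unlock(bfqd, &flags);
}
/* on NULL, a writer won the race and no lock is held */
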
++static void bfq_changed_ioprio(struct bfq_io_cq *bic);
++static void bfq_put_queue(struct bfq_queue *bfqq);
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++				       struct bfq_group *bfqg, int is_sync,
++				       struct bfq_io_cq *bic, gfp_t gfp_mask);
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++				    struct bfq_group *bfqg);
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
++
++#endif /* _BFQ_H */
+-- 
+2.1.0
+

diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
new file mode 100644
index 0000000..53267cd
--- /dev/null
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
@@ -0,0 +1,1222 @@
+From d49cf2e7913ec1c4b86a9de657140d9ec5fa8c19 Mon Sep 17 00:00:00 2001
+From: Mauro Andreolini <mauro.andreolini@unimore.it>
+Date: Thu, 18 Dec 2014 21:32:08 +0100
+Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r7 for
+ 4.0.0
+
+A set of processes may happen to perform interleaved reads, i.e., requests
+whose union would give rise to a sequential read pattern. There are two
+typical cases: in the first case, processes read fixed-size chunks of
+data at a fixed distance from each other, while in the second case
+processes may read variable-size chunks at variable distances. The latter
+case occurs for example with QEMU, which splits the I/O generated by the
+guest into multiple chunks, and lets these chunks be served by a pool of
+cooperating processes, iteratively assigning the next chunk of I/O to the
+first available process. CFQ uses actual queue merging for the first type
+of processes, whereas it uses preemption to get a sequential read pattern
+out of the read requests performed by the second type of processes. In the
+end it uses two different mechanisms to achieve the same goal: boosting
+the throughput with interleaved I/O.
+
+This patch introduces Early Queue Merge (EQM), a unified mechanism to get
+a sequential read pattern with both types of processes. The main idea is
+to check newly arrived requests against the next request of the active
+queue, both in case of actual request insert and in case of request merge.
+By doing so, both types of processes can be handled by just merging their
+queues. EQM is then simpler and more compact than the pair of mechanisms
+used in CFQ.
+
+Finally, EQM also preserves the typical low-latency properties of BFQ, by
+properly restoring the weight-raising state of a queue when it gets back
+to a non-merged state.
+
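In code terms, the unified check boils down to something like the sketch below (a hypothetical wrapper over the helpers this patch introduces; bfq_setup_cooperator further down is the real entry point):

/* Sketch: on a bio/request arriving at position pos, try to pair bfqq
 * with a scheduled queue whose next request is close to pos. */
static struct bfq_queue *
example_eqm_try_merge(struct bfq_data *bfqd, struct bfq_queue *bfqq,
		      sector_t pos)
{
	struct bfq_queue *close = bfq_close_cooperator(bfqd, bfqq, pos);

	if (close == NULL || close == &bfqd->oom_bfqq)
		return NULL;
	return bfq_setup_merge(bfqq, close); /* redirect bfqq to close */
}
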
+Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+---
+ block/bfq-iosched.c | 751 +++++++++++++++++++++++++++++++++++++---------------
+ block/bfq-sched.c   |  28 --
+ block/bfq.h         |  54 +++-
+ 3 files changed, 581 insertions(+), 252 deletions(-)
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 97ee934..328f33c 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -571,6 +571,57 @@ static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ 	return dur;
+ }
+ 
++static inline unsigned
++bfq_bfqq_cooperations(struct bfq_queue *bfqq)
++{
++	return bfqq->bic ? bfqq->bic->cooperations : 0;
++}
++
++static inline void
++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++	if (bic->saved_idle_window)
++		bfq_mark_bfqq_idle_window(bfqq);
++	else
++		bfq_clear_bfqq_idle_window(bfqq);
++	if (bic->saved_IO_bound)
++		bfq_mark_bfqq_IO_bound(bfqq);
++	else
++		bfq_clear_bfqq_IO_bound(bfqq);
++	/* Assuming that the flag in_large_burst is already correctly set */
++	if (bic->wr_time_left && bfqq->bfqd->low_latency &&
++	    !bfq_bfqq_in_large_burst(bfqq) &&
++	    bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
++		/*
++		 * Start a weight raising period with the duration given by
++		 * the raising_time_left snapshot.
++		 */
++		if (bfq_bfqq_busy(bfqq))
++			bfqq->bfqd->wr_busy_queues++;
++		bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
++		bfqq->wr_cur_max_time = bic->wr_time_left;
++		bfqq->last_wr_start_finish = jiffies;
++		bfqq->entity.ioprio_changed = 1;
++	}
++	/*
++	 * Clear wr_time_left to prevent bfq_bfqq_save_state() from
++	 * getting confused about the queue's need of a weight-raising
++	 * period.
++	 */
++	bic->wr_time_left = 0;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++	int process_refs, io_refs;
++
++	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++	BUG_ON(process_refs < 0);
++	return process_refs;
++}
++
+ /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
+ static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
+ 					struct bfq_queue *bfqq)
+@@ -815,7 +866,7 @@ static void bfq_add_request(struct request *rq)
+ 		bfq_rq_pos_tree_add(bfqd, bfqq);
+ 
+ 	if (!bfq_bfqq_busy(bfqq)) {
+-		bool soft_rt,
++		bool soft_rt, coop_or_in_burst,
+ 		     idle_for_long_time = time_is_before_jiffies(
+ 						bfqq->budget_timeout +
+ 						bfqd->bfq_wr_min_idle_time);
+@@ -839,11 +890,12 @@ static void bfq_add_request(struct request *rq)
+ 				bfqd->last_ins_in_burst = jiffies;
+ 		}
+ 
++		coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
++			bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+ 		soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+-			!bfq_bfqq_in_large_burst(bfqq) &&
++			!coop_or_in_burst &&
+ 			time_is_before_jiffies(bfqq->soft_rt_next_start);
+-		interactive = !bfq_bfqq_in_large_burst(bfqq) &&
+-			      idle_for_long_time;
++		interactive = !coop_or_in_burst && idle_for_long_time;
+ 		entity->budget = max_t(unsigned long, bfqq->max_budget,
+ 				       bfq_serv_to_charge(next_rq, bfqq));
+ 
+@@ -862,11 +914,20 @@ static void bfq_add_request(struct request *rq)
+ 		if (!bfqd->low_latency)
+ 			goto add_bfqq_busy;
+ 
++		if (bfq_bfqq_just_split(bfqq))
++			goto set_ioprio_changed;
++
+ 		/*
+-		 * If the queue is not being boosted and has been idle
+-		 * for enough time, start a weight-raising period
++		 * If the queue:
++		 * - is not being boosted,
++		 * - has been idle for enough time,
++		 * - is not a sync queue or is linked to a bfq_io_cq (it is
++		 *   shared by nature, or it is not shared and its
++		 *   requests have not been redirected to a shared queue)
++		 * start a weight-raising period.
+ 		 */
+-		if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++		if (old_wr_coeff == 1 && (interactive || soft_rt) &&
++		    (!bfq_bfqq_sync(bfqq) || bfqq->bic != NULL)) {
+ 			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
+ 			if (interactive)
+ 				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+@@ -880,7 +941,7 @@ static void bfq_add_request(struct request *rq)
+ 		} else if (old_wr_coeff > 1) {
+ 			if (interactive)
+ 				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+-			else if (bfq_bfqq_in_large_burst(bfqq) ||
++			else if (coop_or_in_burst ||
+ 				 (bfqq->wr_cur_max_time ==
+ 				  bfqd->bfq_wr_rt_max_time &&
+ 				  !soft_rt)) {
+@@ -899,18 +960,18 @@ static void bfq_add_request(struct request *rq)
+ 				/*
+ 				 *
+ 				 * The remaining weight-raising time is lower
+-				 * than bfqd->bfq_wr_rt_max_time, which
+-				 * means that the application is enjoying
+-				 * weight raising either because deemed soft-
+-				 * rt in the near past, or because deemed
+-				 * interactive a long ago. In both cases,
+-				 * resetting now the current remaining weight-
+-				 * raising time for the application to the
+-				 * weight-raising duration for soft rt
+-				 * applications would not cause any latency
+-				 * increase for the application (as the new
+-				 * duration would be higher than the remaining
+-				 * time).
++				 * than bfqd->bfq_wr_rt_max_time, which means
++				 * that the application is enjoying weight
++				 * raising either because deemed soft-rt in
++				 * the near past, or because deemed interactive
++				 * long ago.
++				 * In both cases, resetting now the current
++				 * remaining weight-raising time for the
++				 * application to the weight-raising duration
++				 * for soft rt applications would not cause any
++				 * latency increase for the application (as the
++				 * new duration would be higher than the
++				 * remaining time).
+ 				 *
+ 				 * In addition, the application is now meeting
+ 				 * the requirements for being deemed soft rt.
+@@ -945,6 +1006,7 @@ static void bfq_add_request(struct request *rq)
+ 					bfqd->bfq_wr_rt_max_time;
+ 			}
+ 		}
++set_ioprio_changed:
+ 		if (old_wr_coeff != bfqq->wr_coeff)
+ 			entity->ioprio_changed = 1;
+ add_bfqq_busy:
+@@ -1156,90 +1218,35 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+ 	spin_unlock_irq(bfqd->queue->queue_lock);
+ }
+ 
+-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+-			   struct bio *bio)
++static inline sector_t bfq_io_struct_pos(void *io_struct, bool request)
+ {
+-	struct bfq_data *bfqd = q->elevator->elevator_data;
+-	struct bfq_io_cq *bic;
+-	struct bfq_queue *bfqq;
+-
+-	/*
+-	 * Disallow merge of a sync bio into an async request.
+-	 */
+-	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
+-		return 0;
+-
+-	/*
+-	 * Lookup the bfqq that this bio will be queued with. Allow
+-	 * merge only if rq is queued there.
+-	 * Queue lock is held here.
+-	 */
+-	bic = bfq_bic_lookup(bfqd, current->io_context);
+-	if (bic == NULL)
+-		return 0;
+-
+-	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
+-	return bfqq == RQ_BFQQ(rq);
+-}
+-
+-static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+-				       struct bfq_queue *bfqq)
+-{
+-	if (bfqq != NULL) {
+-		bfq_mark_bfqq_must_alloc(bfqq);
+-		bfq_mark_bfqq_budget_new(bfqq);
+-		bfq_clear_bfqq_fifo_expire(bfqq);
+-
+-		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
+-
+-		bfq_log_bfqq(bfqd, bfqq,
+-			     "set_in_service_queue, cur-budget = %lu",
+-			     bfqq->entity.budget);
+-	}
+-
+-	bfqd->in_service_queue = bfqq;
+-}
+-
+-/*
+- * Get and set a new queue for service.
+- */
+-static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
+-						  struct bfq_queue *bfqq)
+-{
+-	if (!bfqq)
+-		bfqq = bfq_get_next_queue(bfqd);
++	if (request)
++		return blk_rq_pos(io_struct);
+ 	else
+-		bfq_get_next_queue_forced(bfqd, bfqq);
+-
+-	__bfq_set_in_service_queue(bfqd, bfqq);
+-	return bfqq;
++		return ((struct bio *)io_struct)->bi_iter.bi_sector;
+ }
+ 
+-static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
+-					  struct request *rq)
++static inline sector_t bfq_dist_from(sector_t pos1,
++				     sector_t pos2)
+ {
+-	if (blk_rq_pos(rq) >= bfqd->last_position)
+-		return blk_rq_pos(rq) - bfqd->last_position;
++	if (pos1 >= pos2)
++		return pos1 - pos2;
+ 	else
+-		return bfqd->last_position - blk_rq_pos(rq);
++		return pos2 - pos1;
+ }
+ 
+-/*
+- * Return true if bfqq has no request pending and rq is close enough to
+- * bfqd->last_position, or if rq is closer to bfqd->last_position than
+- * bfqq->next_rq
+- */
+-static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++static inline int bfq_rq_close_to_sector(void *io_struct, bool request,
++					 sector_t sector)
+ {
+-	return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++	return bfq_dist_from(bfq_io_struct_pos(io_struct, request), sector) <=
++	       BFQQ_SEEK_THR;
+ }
+ 
+-static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd, sector_t sector)
+ {
+ 	struct rb_root *root = &bfqd->rq_pos_tree;
+ 	struct rb_node *parent, *node;
+ 	struct bfq_queue *__bfqq;
+-	sector_t sector = bfqd->last_position;
+ 
+ 	if (RB_EMPTY_ROOT(root))
+ 		return NULL;
+@@ -1258,7 +1265,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ 	 * next_request position).
+ 	 */
+ 	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
+-	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ 		return __bfqq;
+ 
+ 	if (blk_rq_pos(__bfqq->next_rq) < sector)
+@@ -1269,7 +1276,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ 		return NULL;
+ 
+ 	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
+-	if (bfq_rq_close(bfqd, __bfqq->next_rq))
++	if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ 		return __bfqq;
+ 
+ 	return NULL;
+@@ -1278,14 +1285,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ /*
+  * bfqd - obvious
+  * cur_bfqq - passed in so that we don't decide that the current queue
+- *            is closely cooperating with itself.
+- *
+- * We are assuming that cur_bfqq has dispatched at least one request,
+- * and that bfqd->last_position reflects a position on the disk associated
+- * with the I/O issued by cur_bfqq.
++ *            is closely cooperating with itself
++ * sector - used as a reference point to search for a close queue
+  */
+ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+-					      struct bfq_queue *cur_bfqq)
++					      struct bfq_queue *cur_bfqq,
++					      sector_t sector)
+ {
+ 	struct bfq_queue *bfqq;
+ 
+@@ -1305,7 +1310,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ 	 * working closely on the same area of the disk. In that case,
+ 	 * we can group them together and don't waste time idling.
+ 	 */
+-	bfqq = bfqq_close(bfqd);
++	bfqq = bfqq_close(bfqd, sector);
+ 	if (bfqq == NULL || bfqq == cur_bfqq)
+ 		return NULL;
+ 
+@@ -1332,6 +1337,315 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ 	return bfqq;
+ }
+ 
++static struct bfq_queue *
++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	int process_refs, new_process_refs;
++	struct bfq_queue *__bfqq;
++
++	/*
++	 * If there are no process references on the new_bfqq, then it is
++	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++	 * may have dropped their last reference (not just their last process
++	 * reference).
++	 */
++	if (!bfqq_process_refs(new_bfqq))
++		return NULL;
++
++	/* Avoid a circular list and skip interim queue merges. */
++	while ((__bfqq = new_bfqq->new_bfqq)) {
++		if (__bfqq == bfqq)
++			return NULL;
++		new_bfqq = __bfqq;
++	}
++
++	process_refs = bfqq_process_refs(bfqq);
++	new_process_refs = bfqq_process_refs(new_bfqq);
++	/*
++	 * If the process for the bfqq has gone away, there is no
++	 * sense in merging the queues.
++	 */
++	if (process_refs == 0 || new_process_refs == 0)
++		return NULL;
++
++	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++		new_bfqq->pid);
++
++	/*
++	 * Merging is just a redirection: the requests of the process
++	 * owning one of the two queues are redirected to the other queue.
++	 * The latter queue, in its turn, is set as shared if this is the
++	 * first time that the requests of some process are redirected to
++	 * it.
++	 *
++	 * We redirect bfqq to new_bfqq and not the opposite, because we
++	 * are in the context of the process owning bfqq, hence we have
++	 * the io_cq of this process. So we can immediately configure this
++	 * io_cq to redirect the requests of the process to new_bfqq.
++	 *
++	 * NOTE, even if new_bfqq coincides with the in-service queue, the
++	 * io_cq of new_bfqq is not available, because, if the in-service
++	 * queue is shared, bfqd->in_service_bic may not point to the
++	 * io_cq of the in-service queue.
++	 * Redirecting the requests of the process owning bfqq to the
++	 * currently in-service queue is in any case the best option, as
++	 * we feed the in-service queue with new requests close to the
++	 * last request served and, by doing so, hopefully increase the
++	 * throughput.
++	 */
++	bfqq->new_bfqq = new_bfqq;
++	atomic_add(process_refs, &new_bfqq->ref);
++	return new_bfqq;
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service queue
++ * or with a close queue among the scheduled queues.
++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * structure otherwise.
++ *
++ * The OOM queue is not allowed to participate in cooperation: in fact, since
++ * the requests temporarily redirected to the OOM queue could be redirected
++ * again to dedicated queues at any time, the state needed to correctly
++ * handle merging with the OOM queue would be quite complex and expensive
++ * to maintain. Besides, in a condition as critical as running out of
++ * memory, the benefits of queue merging may be scarcely relevant, or
++ * even negligible.
++ */
++static struct bfq_queue *
++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++		     void *io_struct, bool request)
++{
++	struct bfq_queue *in_service_bfqq, *new_bfqq;
++
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
++	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++		return NULL;
++
++	in_service_bfqq = bfqd->in_service_queue;
++
++	if (in_service_bfqq == NULL || in_service_bfqq == bfqq ||
++	    !bfqd->in_service_bic ||
++	    unlikely(in_service_bfqq == &bfqd->oom_bfqq))
++		goto check_scheduled;
++
++	if (bfq_class_idle(in_service_bfqq) || bfq_class_idle(bfqq))
++		goto check_scheduled;
++
++	if (bfq_class_rt(in_service_bfqq) != bfq_class_rt(bfqq))
++		goto check_scheduled;
++
++	if (in_service_bfqq->entity.parent != bfqq->entity.parent)
++		goto check_scheduled;
++
++	if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++	    bfq_bfqq_sync(in_service_bfqq) && bfq_bfqq_sync(bfqq)) {
++		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
++		if (new_bfqq != NULL)
++			return new_bfqq; /* Merge with in-service queue */
++	}
++
++	/*
++	 * Check whether there is a cooperator among currently scheduled
++	 * queues. The only thing we need is that the bio/request is not
++	 * NULL, as we need it to establish whether a cooperator exists.
++	 */
++check_scheduled:
++	new_bfqq = bfq_close_cooperator(bfqd, bfqq,
++					bfq_io_struct_pos(io_struct, request));
++	if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq))
++		return bfq_setup_merge(bfqq, new_bfqq);
++
++	return NULL;
++}
++
++static inline void
++bfq_bfqq_save_state(struct bfq_queue *bfqq)
++{
++	/*
++	 * If bfqq->bic == NULL, the queue is already shared or its requests
++	 * have already been redirected to a shared queue; both idle window
++	 * and weight raising state have already been saved. Do nothing.
++	 */
++	if (bfqq->bic == NULL)
++		return;
++	if (bfqq->bic->wr_time_left)
++		/*
++		 * This is the queue of a just-started process, and would
++		 * deserve weight raising: we set wr_time_left to the full
++		 * weight-raising duration to trigger weight-raising when
++		 * and if the queue is split and the first request of the
++		 * queue is enqueued.
++		 */
++		bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
++	else if (bfqq->wr_coeff > 1) {
++		unsigned long wr_duration =
++			jiffies - bfqq->last_wr_start_finish;
++		/*
++		 * It may happen that a queue's weight raising period lasts
++		 * longer than its wr_cur_max_time, as weight raising is
++		 * handled only when a request is enqueued or dispatched (it
++		 * does not use any timer). If the weight raising period is
++		 * about to end, don't save it.
++		 */
++		if (bfqq->wr_cur_max_time <= wr_duration)
++			bfqq->bic->wr_time_left = 0;
++		else
++			bfqq->bic->wr_time_left =
++				bfqq->wr_cur_max_time - wr_duration;
++		/*
++		 * The bfq_queue is becoming shared or the requests of the
++		 * process owning the queue are being redirected to a shared
++		 * queue. Stop the weight raising period of the queue, as in
++		 * both cases it should not be owned by an interactive or
++		 * soft real-time application.
++		 */
++		bfq_bfqq_end_wr(bfqq);
++	} else
++		bfqq->bic->wr_time_left = 0;
++	bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++	bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++	bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++	bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++	bfqq->bic->cooperations++;
++	bfqq->bic->failed_cooperations = 0;
++}
++
++static inline void
++bfq_get_bic_reference(struct bfq_queue *bfqq)
++{
++	/*
++	 * If bfqq->bic has a non-NULL value, the bic to which it belongs
++	 * is about to begin using a shared bfq_queue.
++	 */
++	if (bfqq->bic)
++		atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
++}
++
++static void
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++		struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++		(long unsigned)new_bfqq->pid);
++	/* Save weight raising and idle window of the merged queues */
++	bfq_bfqq_save_state(bfqq);
++	bfq_bfqq_save_state(new_bfqq);
++	if (bfq_bfqq_IO_bound(bfqq))
++		bfq_mark_bfqq_IO_bound(new_bfqq);
++	bfq_clear_bfqq_IO_bound(bfqq);
++	/*
++	 * Grab a reference to the bic, to prevent it from being destroyed
++	 * before being possibly touched by a bfq_split_bfqq().
++	 */
++	bfq_get_bic_reference(bfqq);
++	bfq_get_bic_reference(new_bfqq);
++	/*
++	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
++	 */
++	bic_set_bfqq(bic, new_bfqq, 1);
++	bfq_mark_bfqq_coop(new_bfqq);
++	/*
++	 * new_bfqq now belongs to at least two bics (it is a shared queue):
++	 * set new_bfqq->bic to NULL. bfqq either:
++	 * - does not belong to any bic any more, and hence bfqq->bic must
++	 *   be set to NULL, or
++	 * - is a queue whose owning bics have already been redirected to a
++	 *   different queue, hence the queue is destined to not belong to
++	 *   any bic soon and bfqq->bic is already NULL (therefore the next
++	 *   assignment causes no harm).
++	 */
++	new_bfqq->bic = NULL;
++	bfqq->bic = NULL;
++	bfq_put_queue(bfqq);
++}
++
++static inline void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
++{
++	struct bfq_io_cq *bic = bfqq->bic;
++	struct bfq_data *bfqd = bfqq->bfqd;
++
++	if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
++		bic->failed_cooperations++;
++		if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
++			bic->cooperations = 0;
++	}
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++			   struct bio *bio)
++{
++	struct bfq_data *bfqd = q->elevator->elevator_data;
++	struct bfq_io_cq *bic;
++	struct bfq_queue *bfqq, *new_bfqq;
++
++	/*
++	 * Disallow merge of a sync bio into an async request.
++	 */
++	if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++		return 0;
++
++	/*
++	 * Lookup the bfqq that this bio will be queued with. Allow
++	 * merge only if rq is queued there.
++	 * Queue lock is held here.
++	 */
++	bic = bfq_bic_lookup(bfqd, current->io_context);
++	if (bic == NULL)
++		return 0;
++
++	bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++	/*
++	 * We take advantage of this function to perform an early merge
++	 * of the queues of possible cooperating processes.
++	 */
++	if (bfqq != NULL) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false);
++		if (new_bfqq != NULL) {
++			bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
++			/*
++			 * If we get here, the bio will be queued in the
++			 * shared queue, i.e., new_bfqq, so use new_bfqq
++			 * to decide whether bio and rq can be merged.
++			 */
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
++	return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++				       struct bfq_queue *bfqq)
++{
++	if (bfqq != NULL) {
++		bfq_mark_bfqq_must_alloc(bfqq);
++		bfq_mark_bfqq_budget_new(bfqq);
++		bfq_clear_bfqq_fifo_expire(bfqq);
++
++		bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++		bfq_log_bfqq(bfqd, bfqq,
++			     "set_in_service_queue, cur-budget = %lu",
++			     bfqq->entity.budget);
++	}
++
++	bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
++{
++	struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
++
++	__bfq_set_in_service_queue(bfqd, bfqq);
++	return bfqq;
++}
++
+ /*
+  * If enough samples have been computed, return the current max budget
+  * stored in bfqd, which is dynamically updated according to the
+@@ -1475,61 +1789,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
+ 	return rq;
+ }
+ 
+-/* Must be called with the queue_lock held. */
+-static int bfqq_process_refs(struct bfq_queue *bfqq)
+-{
+-	int process_refs, io_refs;
+-
+-	io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
+-	process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
+-	BUG_ON(process_refs < 0);
+-	return process_refs;
+-}
+-
+-static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+-{
+-	int process_refs, new_process_refs;
+-	struct bfq_queue *__bfqq;
+-
+-	/*
+-	 * If there are no process references on the new_bfqq, then it is
+-	 * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
+-	 * may have dropped their last reference (not just their last process
+-	 * reference).
+-	 */
+-	if (!bfqq_process_refs(new_bfqq))
+-		return;
+-
+-	/* Avoid a circular list and skip interim queue merges. */
+-	while ((__bfqq = new_bfqq->new_bfqq)) {
+-		if (__bfqq == bfqq)
+-			return;
+-		new_bfqq = __bfqq;
+-	}
+-
+-	process_refs = bfqq_process_refs(bfqq);
+-	new_process_refs = bfqq_process_refs(new_bfqq);
+-	/*
+-	 * If the process for the bfqq has gone away, there is no
+-	 * sense in merging the queues.
+-	 */
+-	if (process_refs == 0 || new_process_refs == 0)
+-		return;
+-
+-	/*
+-	 * Merge in the direction of the lesser amount of work.
+-	 */
+-	if (new_process_refs >= process_refs) {
+-		bfqq->new_bfqq = new_bfqq;
+-		atomic_add(process_refs, &new_bfqq->ref);
+-	} else {
+-		new_bfqq->new_bfqq = bfqq;
+-		atomic_add(new_process_refs, &bfqq->ref);
+-	}
+-	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+-		new_bfqq->pid);
+-}
+-
+ static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+@@ -2263,7 +2522,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
+  */
+ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ {
+-	struct bfq_queue *bfqq, *new_bfqq = NULL;
++	struct bfq_queue *bfqq;
+ 	struct request *next_rq;
+ 	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
+ 
+@@ -2273,17 +2532,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 
+ 	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
+ 
+-	/*
+-         * If another queue has a request waiting within our mean seek
+-         * distance, let it run. The expire code will check for close
+-         * cooperators and put the close queue at the front of the
+-         * service tree. If possible, merge the expiring queue with the
+-         * new bfqq.
+-         */
+-        new_bfqq = bfq_close_cooperator(bfqd, bfqq);
+-        if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
+-                bfq_setup_merge(bfqq, new_bfqq);
+-
+ 	if (bfq_may_expire_for_budg_timeout(bfqq) &&
+ 	    !timer_pending(&bfqd->idle_slice_timer) &&
+ 	    !bfq_bfqq_must_idle(bfqq))
+@@ -2322,10 +2570,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 				bfq_clear_bfqq_wait_request(bfqq);
+ 				del_timer(&bfqd->idle_slice_timer);
+ 			}
+-			if (new_bfqq == NULL)
+-				goto keep_queue;
+-			else
+-				goto expire;
++			goto keep_queue;
+ 		}
+ 	}
+ 
+@@ -2334,40 +2579,30 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 	 * in flight (possibly waiting for a completion) or is idling for a
+ 	 * new request, then keep it.
+ 	 */
+-	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
+-	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++	if (timer_pending(&bfqd->idle_slice_timer) ||
++	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq))) {
+ 		bfqq = NULL;
+ 		goto keep_queue;
+-	} else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
+-		/*
+-		 * Expiring the queue because there is a close cooperator,
+-		 * cancel timer.
+-		 */
+-		bfq_clear_bfqq_wait_request(bfqq);
+-		del_timer(&bfqd->idle_slice_timer);
+ 	}
+ 
+ 	reason = BFQ_BFQQ_NO_MORE_REQUESTS;
+ expire:
+ 	bfq_bfqq_expire(bfqd, bfqq, 0, reason);
+ new_queue:
+-	bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++	bfqq = bfq_set_in_service_queue(bfqd);
+ 	bfq_log(bfqd, "select_queue: new queue %d returned",
+ 		bfqq != NULL ? bfqq->pid : 0);
+ keep_queue:
+ 	return bfqq;
+ }
+ 
+-static void bfq_update_wr_data(struct bfq_data *bfqd,
+-			       struct bfq_queue *bfqq)
++static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+-	if (bfqq->wr_coeff > 1) { /* queue is being boosted */
+-		struct bfq_entity *entity = &bfqq->entity;
+-
++	struct bfq_entity *entity = &bfqq->entity;
++	if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
+ 		bfq_log_bfqq(bfqd, bfqq,
+ 			"raising period dur %u/%u msec, old coeff %u, w %d(%d)",
+-			jiffies_to_msecs(jiffies -
+-				bfqq->last_wr_start_finish),
++			jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
+ 			jiffies_to_msecs(bfqq->wr_cur_max_time),
+ 			bfqq->wr_coeff,
+ 			bfqq->entity.weight, bfqq->entity.orig_weight);
+@@ -2376,12 +2611,16 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ 		       entity->orig_weight * bfqq->wr_coeff);
+ 		if (entity->ioprio_changed)
+ 			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++
+ 		/*
+ 		 * If the queue was activated in a burst, or
+ 		 * too much time has elapsed from the beginning
+-		 * of this weight-raising, then end weight raising.
++		 * of this weight-raising period, or the queue has
++		 * exceeded the acceptable number of cooperations,
++		 * then end weight raising.
+ 		 */
+ 		if (bfq_bfqq_in_large_burst(bfqq) ||
++		    bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+ 		    time_is_before_jiffies(bfqq->last_wr_start_finish +
+ 					   bfqq->wr_cur_max_time)) {
+ 			bfqq->last_wr_start_finish = jiffies;
+@@ -2390,11 +2629,13 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ 				     bfqq->last_wr_start_finish,
+ 				     jiffies_to_msecs(bfqq->wr_cur_max_time));
+ 			bfq_bfqq_end_wr(bfqq);
+-			__bfq_entity_update_weight_prio(
+-				bfq_entity_service_tree(entity),
+-				entity);
+ 		}
+ 	}
++	/* Update weight both if it must be raised and if it must be lowered */
++	if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1))
++		__bfq_entity_update_weight_prio(
++			bfq_entity_service_tree(entity),
++			entity);
+ }
+ 
+ /*
+@@ -2642,6 +2883,25 @@ static inline void bfq_init_icq(struct io_cq *icq)
+ 	struct bfq_io_cq *bic = icq_to_bic(icq);
+ 
+ 	bic->ttime.last_end_request = jiffies;
++	/*
++	 * A newly created bic indicates that the process has just
++	 * started doing I/O, and is probably mapping into memory its
++	 * executable and libraries: it definitely needs weight raising.
++	 * There is however the possibility that the process performs,
++	 * for a while, I/O close to some other process. EQM intercepts
++	 * this behavior and may merge the queue corresponding to the
++	 * process  with some other queue, BEFORE the weight of the queue
++	 * is raised. Merged queues are not weight-raised (they are assumed
++	 * to belong to processes that benefit only from high throughput).
++	 * If the merge is basically the consequence of an accident, then
++	 * the queue will be split soon and will get back its old weight.
++	 * It is then important to write down somewhere that this queue
++	 * does need weight raising, even if it did not make it to get its
++	 * weight raised before being merged. To this purpose, we overload
++	 * the field raising_time_left and assign 1 to it, to mark the queue
++	 * as needing weight raising.
++	 */
++	bic->wr_time_left = 1;
+ }
+ 
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2655,6 +2915,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+ 	}
+ 
+ 	if (bic->bfqq[BLK_RW_SYNC]) {
++		/*
++		 * If the bic is using a shared queue, put the reference
++		 * taken on the io_context when the bic started using a
++		 * shared bfq_queue.
++		 */
++		if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++			put_io_context(icq->ioc);
+ 		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+ 		bic->bfqq[BLK_RW_SYNC] = NULL;
+ 	}
+@@ -2950,6 +3217,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ 	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
+ 		return;
+ 
++	/* Idle window just restored, statistics are meaningless. */
++	if (bfq_bfqq_just_split(bfqq))
++		return;
++
+ 	enable_idle = bfq_bfqq_idle_window(bfqq);
+ 
+ 	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
+@@ -2997,6 +3268,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ 	    !BFQQ_SEEKY(bfqq))
+ 		bfq_update_idle_window(bfqd, bfqq, bic);
++	bfq_clear_bfqq_just_split(bfqq);
+ 
+ 	bfq_log_bfqq(bfqd, bfqq,
+ 		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+@@ -3057,13 +3329,49 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ {
+ 	struct bfq_data *bfqd = q->elevator->elevator_data;
+-	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
+ 
+ 	assert_spin_locked(bfqd->queue->queue_lock);
++
++	/*
++	 * An unplug may trigger a requeue of a request from the device
++	 * driver: make sure we are in process context while trying to
++	 * merge two bfq_queues.
++	 */
++	if (!in_interrupt()) {
++		new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
++		if (new_bfqq != NULL) {
++			if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
++				new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
++			/*
++			 * Release the request's reference to the old bfqq
++			 * and make sure one is taken to the shared queue.
++			 */
++			new_bfqq->allocated[rq_data_dir(rq)]++;
++			bfqq->allocated[rq_data_dir(rq)]--;
++			atomic_inc(&new_bfqq->ref);
++			bfq_put_queue(bfqq);
++			if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
++				bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
++						bfqq, new_bfqq);
++			rq->elv.priv[1] = new_bfqq;
++			bfqq = new_bfqq;
++		} else
++			bfq_bfqq_increase_failed_cooperations(bfqq);
++	}
++
+ 	bfq_init_prio_data(bfqq, RQ_BIC(rq));
+ 
+ 	bfq_add_request(rq);
+ 
++	/*
++	 * Here a newly-created bfq_queue has already started a weight-raising
++	 * period: clear raising_time_left to prevent bfq_bfqq_save_state()
++	 * from assigning it a full weight-raising period. See the detailed
++	 * comments about this field in bfq_init_icq().
++	 */
++	if (bfqq->bic != NULL)
++		bfqq->bic->wr_time_left = 0;
+ 	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ 	list_add_tail(&rq->queuelist, &bfqq->fifo);
+ 
+@@ -3228,18 +3536,6 @@ static void bfq_put_request(struct request *rq)
+ 	}
+ }
+ 
+-static struct bfq_queue *
+-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+-		struct bfq_queue *bfqq)
+-{
+-	bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
+-		(long unsigned)bfqq->new_bfqq->pid);
+-	bic_set_bfqq(bic, bfqq->new_bfqq, 1);
+-	bfq_mark_bfqq_coop(bfqq->new_bfqq);
+-	bfq_put_queue(bfqq);
+-	return bic_to_bfqq(bic, 1);
+-}
+-
+ /*
+  * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
+  * was the last process referring to said bfqq.
+@@ -3248,6 +3544,9 @@ static struct bfq_queue *
+ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ {
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++
++	put_io_context(bic->icq.ioc);
++
+ 	if (bfqq_process_refs(bfqq) == 1) {
+ 		bfqq->pid = current->pid;
+ 		bfq_clear_bfqq_coop(bfqq);
+@@ -3276,6 +3575,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ 	struct bfq_queue *bfqq;
+ 	struct bfq_group *bfqg;
+ 	unsigned long flags;
++	bool split = false;
+ 
+ 	might_sleep_if(gfp_mask & __GFP_WAIT);
+ 
+@@ -3293,25 +3593,26 @@ new_queue:
+ 	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
+ 		bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
+ 		bic_set_bfqq(bic, bfqq, is_sync);
++		if (split && is_sync) {
++			if ((bic->was_in_burst_list && bfqd->large_burst) ||
++			    bic->saved_in_large_burst)
++				bfq_mark_bfqq_in_large_burst(bfqq);
++			else {
++			    bfq_clear_bfqq_in_large_burst(bfqq);
++			    if (bic->was_in_burst_list)
++			       hlist_add_head(&bfqq->burst_list_node,
++				              &bfqd->burst_list);
++			}
++		}
+ 	} else {
+-		/*
+-		 * If the queue was seeky for too long, break it apart.
+-		 */
++		/* If the queue was seeky for too long, break it apart. */
+ 		if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
+ 			bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
+ 			bfqq = bfq_split_bfqq(bic, bfqq);
++			split = true;
+ 			if (!bfqq)
+ 				goto new_queue;
+ 		}
+-
+-		/*
+-		 * Check to see if this queue is scheduled to merge with
+-		 * another closely cooperating queue. The merging of queues
+-		 * happens here as it must be done in process context.
+-		 * The reference on new_bfqq was taken in merge_bfqqs.
+-		 */
+-		if (bfqq->new_bfqq != NULL)
+-			bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
+ 	}
+ 
+ 	bfqq->allocated[rw]++;
+@@ -3322,6 +3623,26 @@ new_queue:
+ 	rq->elv.priv[0] = bic;
+ 	rq->elv.priv[1] = bfqq;
+ 
++	/*
++	 * If a bfq_queue has only one process reference, it is owned
++	 * by only one bfq_io_cq: we can set the bic field of the
++	 * bfq_queue to the address of that structure. Also, if the
++	 * queue has just been split, mark a flag so that the
++	 * information is available to the other scheduler hooks.
++	 */
++	if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++		bfqq->bic = bic;
++		if (split) {
++			bfq_mark_bfqq_just_split(bfqq);
++			/*
++			 * If the queue has just been split from a shared
++			 * queue, restore the idle window and the possible
++			 * weight raising period.
++			 */
++			bfq_bfqq_resume_state(bfqq, bic);
++		}
++	}
++
+ 	spin_unlock_irqrestore(q->queue_lock, flags);
+ 
+ 	return 0;
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+index 2931563..6764a7e 100644
+--- a/block/bfq-sched.c
++++ b/block/bfq-sched.c
+@@ -1091,34 +1091,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ 	return bfqq;
+ }
+ 
+-/*
+- * Forced extraction of the given queue.
+- */
+-static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
+-				      struct bfq_queue *bfqq)
+-{
+-	struct bfq_entity *entity;
+-	struct bfq_sched_data *sd;
+-
+-	BUG_ON(bfqd->in_service_queue != NULL);
+-
+-	entity = &bfqq->entity;
+-	/*
+-	 * Bubble up extraction/update from the leaf to the root.
+-	*/
+-	for_each_entity(entity) {
+-		sd = entity->sched_data;
+-		bfq_update_budget(entity);
+-		bfq_update_vtime(bfq_entity_service_tree(entity));
+-		bfq_active_extract(bfq_entity_service_tree(entity), entity);
+-		sd->in_service_entity = entity;
+-		sd->next_in_service = NULL;
+-		entity->service = 0;
+-	}
+-
+-	return;
+-}
+-
+ static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ {
+ 	if (bfqd->in_service_bic != NULL) {
+diff --git a/block/bfq.h b/block/bfq.h
+index 518f2ac..4f519ea 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -218,18 +218,21 @@ struct bfq_group;
+  *                      idle @bfq_queue with no outstanding requests, then
+  *                      the task associated with the queue it is deemed as
+  *                      soft real-time (see the comments to the function
+- *                      bfq_bfqq_softrt_next_start()).
++ *                      bfq_bfqq_softrt_next_start())
+  * @last_idle_bklogged: time of the last transition of the @bfq_queue from
+  *                      idle to backlogged
+  * @service_from_backlogged: cumulative service received from the @bfq_queue
+  *                           since the last transition from idle to
+  *                           backlogged
++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
++ *	 queue is shared
+  *
+- * A bfq_queue is a leaf request queue; it can be associated with an io_context
+- * or more, if it is async or shared between cooperating processes. @cgroup
+- * holds a reference to the cgroup, to be sure that it does not disappear while
+- * a bfqq still references it (mostly to avoid races between request issuing and
+- * task migration followed by cgroup destruction).
++ * A bfq_queue is a leaf request queue; it can be associated with an
++ * io_context or more, if it  is  async or shared  between  cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while a bfqq still references it (mostly to avoid
++ * races between request issuing and task migration followed by cgroup
++ * destruction).
+  * All the fields are protected by the queue lock of the containing bfqd.
+  */
+ struct bfq_queue {
+@@ -269,6 +272,7 @@ struct bfq_queue {
+ 	unsigned int requests_within_timer;
+ 
+ 	pid_t pid;
++	struct bfq_io_cq *bic;
+ 
+ 	/* weight-raising fields */
+ 	unsigned long wr_cur_max_time;
+@@ -298,12 +302,42 @@ struct bfq_ttime {
+  * @icq: associated io_cq structure
+  * @bfqq: array of two process queues, the sync and the async
+  * @ttime: associated @bfq_ttime struct
++ * @wr_time_left: snapshot of the time left before weight raising ends
++ *                for the sync queue associated to this process; this
++ *		  snapshot is taken to remember this value while the weight
++ *		  raising is suspended because the queue is merged with a
++ *		  shared queue, and is used to set @raising_cur_max_time
++ *		  when the queue is split from the shared queue and its
++ *		  weight is raised again
++ * @saved_idle_window: same purpose as the previous field for the idle
++ *                     window
++ * @saved_IO_bound: same purpose as the previous two fields for the I/O
++ *                  bound classification of a queue
++ * @saved_in_large_burst: same purpose as the previous fields for the
++ *                        value of the field keeping the queue's belonging
++ *                        to a large burst
++ * @was_in_burst_list: true if the queue belonged to a burst list
++ *                     before its merge with another cooperating queue
++ * @cooperations: counter of consecutive successful queue merges underwent
++ *                by any of the process' @bfq_queues
++ * @failed_cooperations: counter of consecutive failed queue merges of any
++ *                       of the process' @bfq_queues
+  */
+ struct bfq_io_cq {
+ 	struct io_cq icq; /* must be the first member */
+ 	struct bfq_queue *bfqq[2];
+ 	struct bfq_ttime ttime;
+ 	int ioprio;
++
++	unsigned int wr_time_left;
++	bool saved_idle_window;
++	bool saved_IO_bound;
++
++	bool saved_in_large_burst;
++	bool was_in_burst_list;
++
++	unsigned int cooperations;
++	unsigned int failed_cooperations;
+ };
+ 
+ enum bfq_device_speed {
+@@ -539,7 +573,7 @@ enum bfqq_state_flags {
+ 	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
+ 	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
+ 	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
+-	BFQ_BFQQ_FLAG_IO_bound,         /*
++	BFQ_BFQQ_FLAG_IO_bound,		/*
+ 					 * bfqq has timed-out at least once
+ 					 * having consumed at most 2/10 of
+ 					 * its budget
+@@ -552,12 +586,13 @@ enum bfqq_state_flags {
+ 					 * bfqq has proved to be slow and
+ 					 * seeky until budget timeout
+ 					 */
+-	BFQ_BFQQ_FLAG_softrt_update,    /*
++	BFQ_BFQQ_FLAG_softrt_update,	/*
+ 					 * may need softrt-next-start
+ 					 * update
+ 					 */
+ 	BFQ_BFQQ_FLAG_coop,		/* bfqq is shared */
+-	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be splitted */
++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
++	BFQ_BFQQ_FLAG_just_split,	/* queue has just been split */
+ };
+ 
+ #define BFQ_BFQQ_FNS(name)						\
+@@ -587,6 +622,7 @@ BFQ_BFQQ_FNS(in_large_burst);
+ BFQ_BFQQ_FNS(constantly_seeky);
+ BFQ_BFQQ_FNS(coop);
+ BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+ 
+-- 
+2.1.0
+

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
new file mode 100644
index 0000000..c4efd06
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -0,0 +1,402 @@
+WARNING - this version of the patch works with gcc version 4.9+ and with
+kernel version 3.15.x+, and should NOT be applied when compiling with older
+versions, because several -march flag names changed in the gcc 4.9 release.
+Use the older version of this patch, hosted in the same GitHub repository,
+for older versions of gcc. For example:
+
+corei7 --> nehalem
+corei7-avx --> sandybridge
+core-avx-i --> ivybridge
+core-avx2 --> haswell
+
+For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
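+
+A quick way to see which of these names your gcc resolves for the local CPU
+(handy as a sanity check before choosing an option below; the exact output
+format varies by gcc version):
+
+    gcc -march=native -Q --help=target | grep -- '-march='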
+
+It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
+Note that upstream is using the deprecated 'march=atom' flag, whereas I
+believe it should use the newer 'march=bonnell' flag for Atom processors.
+
+I have made that change to this patch set as well.  See the following kernel
+bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+
+This patch will expand the number of microarchitectures to include newer
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 1.5 Gen Core
+i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7 (Sandybridge), Intel 3rd Gen
+Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core i3/i5/i7 (Haswell), Intel 5th
+Gen Core i3/i5/i7 (Broadwell), and the low power Silvermont series of Atom
+processors (Silvermont). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable using a make-based benchmark,
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
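+
+If you want to try the patch outside of the gentoo-sources ebuild, it can be
+applied by hand from the top of the kernel source tree (file name as used in
+this tree; adjust the path if you saved it elsewhere):
+
+    cd /usr/src/linux
+    patch -p1 < /path/to/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch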
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+--- a/arch/x86/include/asm/module.h	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/include/asm/module.h	2015-03-07 03:27:32.556672424 -0500
+@@ -15,6 +15,22 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +49,20 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Kconfig.cpu	2015-03-07 03:32:14.337713226 -0500
+@@ -137,9 +137,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -147,7 +146,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -155,12 +154,62 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Barcelona and newer processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Bobcat processors.
++
++	  Enables -march=btver1
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Jaguar processors.
++
++	  Enables -march=btver2
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -251,8 +300,17 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +318,63 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -276,6 +383,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++	bool "Native optimizations autodetected by GCC"
++	---help---
++
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. -march=native
++	  also detects and applies additional settings beyond -march specific
++	  to your CPU (e.g. -msse4). Unless you have a specific reason not to
++	  (e.g. distcc cross-compiling), you should probably be using
++	  -march=native rather than anything listed below.
++
++	  Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -300,7 +420,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -331,11 +451,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -359,17 +479,17 @@ config X86_P6_NOP
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++	depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile	2015-03-07 03:33:27.650843211 -0500
+@@ -92,13 +92,35 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu	2015-03-07 03:34:15.203586024 -0500
+@@ -23,7 +23,15 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +40,15 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
+



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-23 16:37 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-06-23 16:37 UTC (permalink / raw
  To: gentoo-commits

commit:     458b5d172d76e3876d6947ae949e06e000856922
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 23 16:32:28 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 23 16:37:47 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=458b5d17

Linux patch 4.0.6

 0000_README            |    4 +
 1005_linux-4.0.6.patch | 3730 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3734 insertions(+)

diff --git a/0000_README b/0000_README
index 0f63559..8761846 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-4.0.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.5
 
+Patch:  1005_linux-4.0.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-4.0.6.patch b/1005_linux-4.0.6.patch
new file mode 100644
index 0000000..15519e7
--- /dev/null
+++ b/1005_linux-4.0.6.patch
@@ -0,0 +1,3730 @@
+diff --git a/Makefile b/Makefile
+index 1880cf77059b..af6da040b952 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/am335x-bone-common.dtsi b/arch/arm/boot/dts/am335x-bone-common.dtsi
+index c3255e0c90aa..dbb3f4d2bf84 100644
+--- a/arch/arm/boot/dts/am335x-bone-common.dtsi
++++ b/arch/arm/boot/dts/am335x-bone-common.dtsi
+@@ -223,6 +223,25 @@
+ /include/ "tps65217.dtsi"
+ 
+ &tps {
++	/*
++	 * Configure pmic to enter OFF-state instead of SLEEP-state ("RTC-only
++	 * mode") at poweroff.  Most BeagleBone versions do not support RTC-only
++	 * mode and risk hardware damage if this mode is entered.
++	 *
++	 * For details, see linux-omap mailing list May 2015 thread
++	 *	[PATCH] ARM: dts: am335x-bone* enable pmic-shutdown-controller
++	 * In particular, messages:
++	 *	http://www.spinics.net/lists/linux-omap/msg118585.html
++	 *	http://www.spinics.net/lists/linux-omap/msg118615.html
++	 *
++	 * You can override this later with
++	 *	&tps {  /delete-property/ ti,pmic-shutdown-controller;  }
++	 * if you want to use RTC-only mode and made sure you are not affected
++	 * by the hardware problems. (Tip: double-check by performing a current
++	 * measurement after shutdown: it should be less than 1 mA.)
++	 */
++	ti,pmic-shutdown-controller;
++
+ 	regulators {
+ 		dcdc1_reg: regulator@0 {
+ 			regulator-name = "vdds_dpr";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index 43d54017b779..d0ab012fa379 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -16,7 +16,8 @@
+ #include "mt8173.dtsi"
+ 
+ / {
+-	model = "mediatek,mt8173-evb";
++	model = "MediaTek MT8173 evaluation board";
++	compatible = "mediatek,mt8173-evb", "mediatek,mt8173";
+ 
+ 	aliases {
+ 		serial0 = &uart0;
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index d2bfbc2e8995..be15e52a47a0 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -109,7 +109,7 @@ void __init init_IRQ(void)
+ #endif
+ }
+ 
+-#ifdef DEBUG_STACKOVERFLOW
++#ifdef CONFIG_DEBUG_STACKOVERFLOW
+ static inline void check_stack_overflow(void)
+ {
+ 	unsigned long sp;
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index 838d3a6a5b7d..cea02968a908 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -2101,7 +2101,7 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
+ 		if (vcpu->mmio_needed == 2)
+ 			*gpr = *(int16_t *) run->mmio.data;
+ 		else
+-			*gpr = *(int16_t *) run->mmio.data;
++			*gpr = *(uint16_t *)run->mmio.data;
+ 
+ 		break;
+ 	case 1:
+diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
+index e20b02e3ae28..e10d10b9e82a 100644
+--- a/arch/mips/ralink/ill_acc.c
++++ b/arch/mips/ralink/ill_acc.c
+@@ -41,7 +41,7 @@ static irqreturn_t ill_acc_irq_handler(int irq, void *_priv)
+ 		addr, (type >> ILL_ACC_OFF_S) & ILL_ACC_OFF_M,
+ 		type & ILL_ACC_LEN_M);
+ 
+-	rt_memc_w32(REG_ILL_ACC_TYPE, REG_ILL_ACC_TYPE);
++	rt_memc_w32(ILL_INT_STATUS, REG_ILL_ACC_TYPE);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
+index db257a58571f..e657b7ba3292 100644
+--- a/arch/x86/include/asm/segment.h
++++ b/arch/x86/include/asm/segment.h
+@@ -200,10 +200,21 @@
+ #define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES * 8)
+ 
+ #ifdef __KERNEL__
++
++/*
++ * early_idt_handler_array is an array of entry points referenced in the
++ * early IDT.  For simplicity, it's a real array with one entry point
++ * every nine bytes.  That leaves room for an optional 'push $0' if the
++ * vector has no error code (two bytes), a 'push $vector_number' (two
++ * bytes), and a jump to the common entry code (up to five bytes).
++ */
++#define EARLY_IDT_HANDLER_SIZE 9
++
+ #ifndef __ASSEMBLY__
+-extern const char early_idt_handlers[NUM_EXCEPTION_VECTORS][2+2+5];
++
++extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDLER_SIZE];
+ #ifdef CONFIG_TRACING
+-#define trace_early_idt_handlers early_idt_handlers
++# define trace_early_idt_handler_array early_idt_handler_array
+ #endif
+ 
+ /*
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index c4f8d4659070..b111ab5c4509 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -167,7 +167,7 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
+ 	clear_bss();
+ 
+ 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
+-		set_intr_gate(i, early_idt_handlers[i]);
++		set_intr_gate(i, early_idt_handler_array[i]);
+ 	load_idt((const struct desc_ptr *)&idt_descr);
+ 
+ 	copy_bootdata(__va(real_mode_data));
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index f36bd42d6f0c..30a2aa3782fa 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -477,21 +477,22 @@ is486:
+ __INIT
+ setup_once:
+ 	/*
+-	 * Set up a idt with 256 entries pointing to ignore_int,
+-	 * interrupt gates. It doesn't actually load idt - that needs
+-	 * to be done on each CPU. Interrupts are enabled elsewhere,
+-	 * when we can be relatively sure everything is ok.
++	 * Set up a idt with 256 interrupt gates that push zero if there
++	 * is no error code and then jump to early_idt_handler_common.
++	 * It doesn't actually load the idt - that needs to be done on
++	 * each CPU. Interrupts are enabled elsewhere, when we can be
++	 * relatively sure everything is ok.
+ 	 */
+ 
+ 	movl $idt_table,%edi
+-	movl $early_idt_handlers,%eax
++	movl $early_idt_handler_array,%eax
+ 	movl $NUM_EXCEPTION_VECTORS,%ecx
+ 1:
+ 	movl %eax,(%edi)
+ 	movl %eax,4(%edi)
+ 	/* interrupt gate, dpl=0, present */
+ 	movl $(0x8E000000 + __KERNEL_CS),2(%edi)
+-	addl $9,%eax
++	addl $EARLY_IDT_HANDLER_SIZE,%eax
+ 	addl $8,%edi
+ 	loop 1b
+ 
+@@ -523,26 +524,28 @@ setup_once:
+ 	andl $0,setup_once_ref	/* Once is enough, thanks */
+ 	ret
+ 
+-ENTRY(early_idt_handlers)
++ENTRY(early_idt_handler_array)
+ 	# 36(%esp) %eflags
+ 	# 32(%esp) %cs
+ 	# 28(%esp) %eip
+ 	# 24(%rsp) error code
+ 	i = 0
+ 	.rept NUM_EXCEPTION_VECTORS
+-	.if (EXCEPTION_ERRCODE_MASK >> i) & 1
+-	ASM_NOP2
+-	.else
++	.ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
+ 	pushl $0		# Dummy error code, to make stack frame uniform
+ 	.endif
+ 	pushl $i		# 20(%esp) Vector number
+-	jmp early_idt_handler
++	jmp early_idt_handler_common
+ 	i = i + 1
++	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
+ 	.endr
+-ENDPROC(early_idt_handlers)
++ENDPROC(early_idt_handler_array)
+ 	
+-	/* This is global to keep gas from relaxing the jumps */
+-ENTRY(early_idt_handler)
++early_idt_handler_common:
++	/*
++	 * The stack is the hardware frame, an error code or zero, and the
++	 * vector number.
++	 */
+ 	cld
+ 
+ 	cmpl $2,(%esp)		# X86_TRAP_NMI
+@@ -602,7 +605,7 @@ ex_entry:
+ is_nmi:
+ 	addl $8,%esp		/* drop vector number and error code */
+ 	iret
+-ENDPROC(early_idt_handler)
++ENDPROC(early_idt_handler_common)
+ 
+ /* This is the default interrupt "handler" :-) */
+ 	ALIGN
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 6fd514d9f69a..f8a8406033c3 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -321,26 +321,28 @@ bad_address:
+ 	jmp bad_address
+ 
+ 	__INIT
+-	.globl early_idt_handlers
+-early_idt_handlers:
++ENTRY(early_idt_handler_array)
+ 	# 104(%rsp) %rflags
+ 	#  96(%rsp) %cs
+ 	#  88(%rsp) %rip
+ 	#  80(%rsp) error code
+ 	i = 0
+ 	.rept NUM_EXCEPTION_VECTORS
+-	.if (EXCEPTION_ERRCODE_MASK >> i) & 1
+-	ASM_NOP2
+-	.else
++	.ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
+ 	pushq $0		# Dummy error code, to make stack frame uniform
+ 	.endif
+ 	pushq $i		# 72(%rsp) Vector number
+-	jmp early_idt_handler
++	jmp early_idt_handler_common
+ 	i = i + 1
++	.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
+ 	.endr
++ENDPROC(early_idt_handler_array)
+ 
+-/* This is global to keep gas from relaxing the jumps */
+-ENTRY(early_idt_handler)
++early_idt_handler_common:
++	/*
++	 * The stack is the hardware frame, an error code or zero, and the
++	 * vector number.
++	 */
+ 	cld
+ 
+ 	cmpl $2,(%rsp)		# X86_TRAP_NMI
+@@ -412,7 +414,7 @@ ENTRY(early_idt_handler)
+ is_nmi:
+ 	addq $16,%rsp		# drop vector number and error code
+ 	INTERRUPT_RETURN
+-ENDPROC(early_idt_handler)
++ENDPROC(early_idt_handler_common)
+ 
+ 	__INITDATA
+ 
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 987514396c1e..ddeff4844a10 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -559,6 +559,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 				if (is_ereg(dst_reg))
+ 					EMIT1(0x41);
+ 				EMIT3(0xC1, add_1reg(0xC8, dst_reg), 8);
++
++				/* emit 'movzwl eax, ax' */
++				if (is_ereg(dst_reg))
++					EMIT3(0x45, 0x0F, 0xB7);
++				else
++					EMIT2(0x0F, 0xB7);
++				EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
+ 				break;
+ 			case 32:
+ 				/* emit 'bswap eax' to swap lower 4 bytes */
+@@ -577,6 +584,27 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			break;
+ 
+ 		case BPF_ALU | BPF_END | BPF_FROM_LE:
++			switch (imm32) {
++			case 16:
++				/* emit 'movzwl eax, ax' to zero extend 16-bit
++				 * into 64 bit
++				 */
++				if (is_ereg(dst_reg))
++					EMIT3(0x45, 0x0F, 0xB7);
++				else
++					EMIT2(0x0F, 0xB7);
++				EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
++				break;
++			case 32:
++				/* emit 'mov eax, eax' to clear upper 32-bits */
++				if (is_ereg(dst_reg))
++					EMIT1(0x45);
++				EMIT2(0x89, add_2reg(0xC0, dst_reg, dst_reg));
++				break;
++			case 64:
++				/* nop */
++				break;
++			}
+ 			break;
+ 
+ 			/* ST: *(u8*)(dst_reg + off) = imm */
+@@ -938,7 +966,12 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
+ 	}
+ 	ctx.cleanup_addr = proglen;
+ 
+-	for (pass = 0; pass < 10; pass++) {
++	/* JITed image shrinks with every pass and the loop iterates
++	 * until the image stops shrinking. Very large bpf programs
++	 * may converge on the last pass. In such case do one more
++	 * pass to emit the final image
++	 */
++	for (pass = 0; pass < 10 || image; pass++) {
+ 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+ 		if (proglen <= 0) {
+ 			image = NULL;
+diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
+index 7b9be9822724..8533c96bab13 100644
+--- a/arch/x86/vdso/Makefile
++++ b/arch/x86/vdso/Makefile
+@@ -51,7 +51,7 @@ VDSO_LDFLAGS_vdso.lds = -m64 -Wl,-soname=linux-vdso.so.1 \
+ $(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
+ 	$(call if_changed,vdso)
+ 
+-HOST_EXTRACFLAGS += -I$(srctree)/tools/include
++HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/x86/include/uapi
+ hostprogs-y			+= vdso2c
+ 
+ quiet_cmd_vdso2c = VDSO2C  $@
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 5c39703e644f..b2e73e1ef8a4 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1589,6 +1589,7 @@ static int blk_mq_hctx_notify(void *data, unsigned long action,
+ 	return NOTIFY_OK;
+ }
+ 
++/* hctx->ctxs will be freed in queue's release handler */
+ static void blk_mq_exit_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+@@ -1607,7 +1608,6 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ 
+ 	blk_mq_unregister_cpu_notifier(&hctx->cpu_notifier);
+ 	blk_free_flush_queue(hctx->fq);
+-	kfree(hctx->ctxs);
+ 	blk_mq_free_bitmap(&hctx->ctx_map);
+ }
+ 
+@@ -1873,8 +1873,12 @@ void blk_mq_release(struct request_queue *q)
+ 	unsigned int i;
+ 
+ 	/* hctx kobj stays in hctx */
+-	queue_for_each_hw_ctx(q, hctx, i)
++	queue_for_each_hw_ctx(q, hctx, i) {
++		if (!hctx)
++			continue;
++		kfree(hctx->ctxs);
+ 		kfree(hctx);
++	}
+ 
+ 	kfree(q->queue_hw_ctx);
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index 0a536dc05f3b..ea982eadaf63 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -422,9 +422,9 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+ 	/* allocate ext devt */
+ 	idr_preload(GFP_KERNEL);
+ 
+-	spin_lock(&ext_devt_lock);
++	spin_lock_bh(&ext_devt_lock);
+ 	idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
+-	spin_unlock(&ext_devt_lock);
++	spin_unlock_bh(&ext_devt_lock);
+ 
+ 	idr_preload_end();
+ 	if (idx < 0)
+@@ -449,9 +449,9 @@ void blk_free_devt(dev_t devt)
+ 		return;
+ 
+ 	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
+-		spin_lock(&ext_devt_lock);
++		spin_lock_bh(&ext_devt_lock);
+ 		idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
+-		spin_unlock(&ext_devt_lock);
++		spin_unlock_bh(&ext_devt_lock);
+ 	}
+ }
+ 
+@@ -653,7 +653,6 @@ void del_gendisk(struct gendisk *disk)
+ 	disk->flags &= ~GENHD_FL_UP;
+ 
+ 	sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
+-	bdi_unregister(&disk->queue->backing_dev_info);
+ 	blk_unregister_queue(disk);
+ 	blk_unregister_region(disk_devt(disk), disk->minors);
+ 
+@@ -691,13 +690,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
+ 	} else {
+ 		struct hd_struct *part;
+ 
+-		spin_lock(&ext_devt_lock);
++		spin_lock_bh(&ext_devt_lock);
+ 		part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
+ 		if (part && get_disk(part_to_disk(part))) {
+ 			*partno = part->partno;
+ 			disk = part_to_disk(part);
+ 		}
+-		spin_unlock(&ext_devt_lock);
++		spin_unlock_bh(&ext_devt_lock);
+ 	}
+ 
+ 	return disk;
+diff --git a/drivers/ata/ahci_mvebu.c b/drivers/ata/ahci_mvebu.c
+index 23716dd8a7ec..5928d0746a27 100644
+--- a/drivers/ata/ahci_mvebu.c
++++ b/drivers/ata/ahci_mvebu.c
+@@ -45,7 +45,7 @@ static void ahci_mvebu_mbus_config(struct ahci_host_priv *hpriv,
+ 		writel((cs->mbus_attr << 8) |
+ 		       (dram->mbus_dram_target_id << 4) | 1,
+ 		       hpriv->mmio + AHCI_WINDOW_CTRL(i));
+-		writel(cs->base, hpriv->mmio + AHCI_WINDOW_BASE(i));
++		writel(cs->base >> 16, hpriv->mmio + AHCI_WINDOW_BASE(i));
+ 		writel(((cs->size - 1) & 0xffff0000),
+ 		       hpriv->mmio + AHCI_WINDOW_SIZE(i));
+ 	}
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index 80a80548ad0a..27245957eee3 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -1053,7 +1053,7 @@ static struct of_device_id octeon_cf_match[] = {
+ 	},
+ 	{},
+ };
+-MODULE_DEVICE_TABLE(of, octeon_i2c_match);
++MODULE_DEVICE_TABLE(of, octeon_cf_match);
+ 
+ static struct platform_driver octeon_cf_driver = {
+ 	.probe		= octeon_cf_probe,
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 9c2ba1c97c42..df0c66cb7ad3 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -179,7 +179,7 @@ static int detect_cache_attributes(unsigned int cpu)
+ {
+ 	int ret;
+ 
+-	if (init_cache_level(cpu))
++	if (init_cache_level(cpu) || !cache_leaves(cpu))
+ 		return -ENOENT;
+ 
+ 	per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
+diff --git a/drivers/bus/mvebu-mbus.c b/drivers/bus/mvebu-mbus.c
+index fb9ec6221730..6f047dcb94c2 100644
+--- a/drivers/bus/mvebu-mbus.c
++++ b/drivers/bus/mvebu-mbus.c
+@@ -58,7 +58,6 @@
+ #include <linux/debugfs.h>
+ #include <linux/log2.h>
+ #include <linux/syscore_ops.h>
+-#include <linux/memblock.h>
+ 
+ /*
+  * DDR target is the same on all platforms.
+@@ -70,6 +69,7 @@
+  */
+ #define WIN_CTRL_OFF		0x0000
+ #define   WIN_CTRL_ENABLE       BIT(0)
++/* Only on HW I/O coherency capable platforms */
+ #define   WIN_CTRL_SYNCBARRIER  BIT(1)
+ #define   WIN_CTRL_TGT_MASK     0xf0
+ #define   WIN_CTRL_TGT_SHIFT    4
+@@ -102,9 +102,7 @@
+ 
+ /* Relative to mbusbridge_base */
+ #define MBUS_BRIDGE_CTRL_OFF	0x0
+-#define  MBUS_BRIDGE_SIZE_MASK  0xffff0000
+ #define MBUS_BRIDGE_BASE_OFF	0x4
+-#define  MBUS_BRIDGE_BASE_MASK  0xffff0000
+ 
+ /* Maximum number of windows, for all known platforms */
+ #define MBUS_WINS_MAX           20
+@@ -323,8 +321,9 @@ static int mvebu_mbus_setup_window(struct mvebu_mbus_state *mbus,
+ 	ctrl = ((size - 1) & WIN_CTRL_SIZE_MASK) |
+ 		(attr << WIN_CTRL_ATTR_SHIFT)    |
+ 		(target << WIN_CTRL_TGT_SHIFT)   |
+-		WIN_CTRL_SYNCBARRIER             |
+ 		WIN_CTRL_ENABLE;
++	if (mbus->hw_io_coherency)
++		ctrl |= WIN_CTRL_SYNCBARRIER;
+ 
+ 	writel(base & WIN_BASE_LOW, addr + WIN_BASE_OFF);
+ 	writel(ctrl, addr + WIN_CTRL_OFF);
+@@ -577,106 +576,36 @@ static unsigned int armada_xp_mbus_win_remap_offset(int win)
+ 		return MVEBU_MBUS_NO_REMAP;
+ }
+ 
+-/*
+- * Use the memblock information to find the MBus bridge hole in the
+- * physical address space.
+- */
+-static void __init
+-mvebu_mbus_find_bridge_hole(uint64_t *start, uint64_t *end)
+-{
+-	struct memblock_region *r;
+-	uint64_t s = 0;
+-
+-	for_each_memblock(memory, r) {
+-		/*
+-		 * This part of the memory is above 4 GB, so we don't
+-		 * care for the MBus bridge hole.
+-		 */
+-		if (r->base >= 0x100000000)
+-			continue;
+-
+-		/*
+-		 * The MBus bridge hole is at the end of the RAM under
+-		 * the 4 GB limit.
+-		 */
+-		if (r->base + r->size > s)
+-			s = r->base + r->size;
+-	}
+-
+-	*start = s;
+-	*end = 0x100000000;
+-}
+-
+ static void __init
+ mvebu_mbus_default_setup_cpu_target(struct mvebu_mbus_state *mbus)
+ {
+ 	int i;
+ 	int cs;
+-	uint64_t mbus_bridge_base, mbus_bridge_end;
+ 
+ 	mvebu_mbus_dram_info.mbus_dram_target_id = TARGET_DDR;
+ 
+-	mvebu_mbus_find_bridge_hole(&mbus_bridge_base, &mbus_bridge_end);
+-
+ 	for (i = 0, cs = 0; i < 4; i++) {
+-		u64 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
+-		u64 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
+-		u64 end;
+-		struct mbus_dram_window *w;
+-
+-		/* Ignore entries that are not enabled */
+-		if (!(size & DDR_SIZE_ENABLED))
+-			continue;
+-
+-		/*
+-		 * Ignore entries whose base address is above 2^32,
+-		 * since devices cannot DMA to such high addresses
+-		 */
+-		if (base & DDR_BASE_CS_HIGH_MASK)
+-			continue;
+-
+-		base = base & DDR_BASE_CS_LOW_MASK;
+-		size = (size | ~DDR_SIZE_MASK) + 1;
+-		end = base + size;
+-
+-		/*
+-		 * Adjust base/size of the current CS to make sure it
+-		 * doesn't overlap with the MBus bridge hole. This is
+-		 * particularly important for devices that do DMA from
+-		 * DRAM to a SRAM mapped in a MBus window, such as the
+-		 * CESA cryptographic engine.
+-		 */
++		u32 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
++		u32 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
+ 
+ 		/*
+-		 * The CS is fully enclosed inside the MBus bridge
+-		 * area, so ignore it.
++		 * We only take care of entries for which the chip
++		 * select is enabled, and that don't have high base
++		 * address bits set (devices can only access the first
++		 * 32 bits of the memory).
+ 		 */
+-		if (base >= mbus_bridge_base && end <= mbus_bridge_end)
+-			continue;
++		if ((size & DDR_SIZE_ENABLED) &&
++		    !(base & DDR_BASE_CS_HIGH_MASK)) {
++			struct mbus_dram_window *w;
+ 
+-		/*
+-		 * Beginning of CS overlaps with end of MBus, raise CS
+-		 * base address, and shrink its size.
+-		 */
+-		if (base >= mbus_bridge_base && end > mbus_bridge_end) {
+-			size -= mbus_bridge_end - base;
+-			base = mbus_bridge_end;
++			w = &mvebu_mbus_dram_info.cs[cs++];
++			w->cs_index = i;
++			w->mbus_attr = 0xf & ~(1 << i);
++			if (mbus->hw_io_coherency)
++				w->mbus_attr |= ATTR_HW_COHERENCY;
++			w->base = base & DDR_BASE_CS_LOW_MASK;
++			w->size = (size | ~DDR_SIZE_MASK) + 1;
+ 		}
+-
+-		/*
+-		 * End of CS overlaps with beginning of MBus, shrink
+-		 * CS size.
+-		 */
+-		if (base < mbus_bridge_base && end > mbus_bridge_base)
+-			size -= end - mbus_bridge_base;
+-
+-		w = &mvebu_mbus_dram_info.cs[cs++];
+-		w->cs_index = i;
+-		w->mbus_attr = 0xf & ~(1 << i);
+-		if (mbus->hw_io_coherency)
+-			w->mbus_attr |= ATTR_HW_COHERENCY;
+-		w->base = base;
+-		w->size = size;
+ 	}
+ 	mvebu_mbus_dram_info.num_cs = cs;
+ }
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index d9891d3461f6..7992164ea9ec 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -174,6 +174,8 @@
+ #define AT_XDMAC_MBR_UBC_NDV3		(0x3 << 27)	/* Next Descriptor View 3 */
+ 
+ #define AT_XDMAC_MAX_CHAN	0x20
++#define AT_XDMAC_MAX_CSIZE	16	/* 16 data */
++#define AT_XDMAC_MAX_DWIDTH	8	/* 64 bits */
+ 
+ #define AT_XDMAC_DMA_BUSWIDTHS\
+ 	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
+@@ -192,20 +194,17 @@ struct at_xdmac_chan {
+ 	struct dma_chan			chan;
+ 	void __iomem			*ch_regs;
+ 	u32				mask;		/* Channel Mask */
+-	u32				cfg[2];		/* Channel Configuration Register */
+-	#define	AT_XDMAC_DEV_TO_MEM_CFG	0		/* Predifined dev to mem channel conf */
+-	#define	AT_XDMAC_MEM_TO_DEV_CFG	1		/* Predifined mem to dev channel conf */
++	u32				cfg;		/* Channel Configuration Register */
+ 	u8				perid;		/* Peripheral ID */
+ 	u8				perif;		/* Peripheral Interface */
+ 	u8				memif;		/* Memory Interface */
+-	u32				per_src_addr;
+-	u32				per_dst_addr;
+ 	u32				save_cc;
+ 	u32				save_cim;
+ 	u32				save_cnda;
+ 	u32				save_cndc;
+ 	unsigned long			status;
+ 	struct tasklet_struct		tasklet;
++	struct dma_slave_config		sconfig;
+ 
+ 	spinlock_t			lock;
+ 
+@@ -415,8 +414,9 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	struct at_xdmac_desc	*desc = txd_to_at_desc(tx);
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(tx->chan);
+ 	dma_cookie_t		cookie;
++	unsigned long		irqflags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, irqflags);
+ 	cookie = dma_cookie_assign(tx);
+ 
+ 	dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
+@@ -425,7 +425,7 @@ static dma_cookie_t at_xdmac_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	if (list_is_singular(&atchan->xfers_list))
+ 		at_xdmac_start_xfer(atchan, desc);
+ 
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 	return cookie;
+ }
+ 
+@@ -494,61 +494,94 @@ static struct dma_chan *at_xdmac_xlate(struct of_phandle_args *dma_spec,
+ 	return chan;
+ }
+ 
++static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
++				      enum dma_transfer_direction direction)
++{
++	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
++	int			csize, dwidth;
++
++	if (direction == DMA_DEV_TO_MEM) {
++		atchan->cfg =
++			AT91_XDMAC_DT_PERID(atchan->perid)
++			| AT_XDMAC_CC_DAM_INCREMENTED_AM
++			| AT_XDMAC_CC_SAM_FIXED_AM
++			| AT_XDMAC_CC_DIF(atchan->memif)
++			| AT_XDMAC_CC_SIF(atchan->perif)
++			| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
++			| AT_XDMAC_CC_DSYNC_PER2MEM
++			| AT_XDMAC_CC_MBSIZE_SIXTEEN
++			| AT_XDMAC_CC_TYPE_PER_TRAN;
++		csize = ffs(atchan->sconfig.src_maxburst) - 1;
++		if (csize < 0) {
++			dev_err(chan2dev(chan), "invalid src maxburst value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
++		dwidth = ffs(atchan->sconfig.src_addr_width) - 1;
++		if (dwidth < 0) {
++			dev_err(chan2dev(chan), "invalid src addr width value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
++	} else if (direction == DMA_MEM_TO_DEV) {
++		atchan->cfg =
++			AT91_XDMAC_DT_PERID(atchan->perid)
++			| AT_XDMAC_CC_DAM_FIXED_AM
++			| AT_XDMAC_CC_SAM_INCREMENTED_AM
++			| AT_XDMAC_CC_DIF(atchan->perif)
++			| AT_XDMAC_CC_SIF(atchan->memif)
++			| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
++			| AT_XDMAC_CC_DSYNC_MEM2PER
++			| AT_XDMAC_CC_MBSIZE_SIXTEEN
++			| AT_XDMAC_CC_TYPE_PER_TRAN;
++		csize = ffs(atchan->sconfig.dst_maxburst) - 1;
++		if (csize < 0) {
++			dev_err(chan2dev(chan), "invalid dst maxburst value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
++		dwidth = ffs(atchan->sconfig.dst_addr_width) - 1;
++		if (dwidth < 0) {
++			dev_err(chan2dev(chan), "invalid dst addr width value\n");
++			return -EINVAL;
++		}
++		atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
++	}
++
++	dev_dbg(chan2dev(chan),	"%s: cfg=0x%08x\n", __func__, atchan->cfg);
++
++	return 0;
++}
++
++/*
++ * Only check that the maxburst and addr width values are supported by
++ * the controller but not that the configuration is good to perform the
++ * transfer since we don't know the direction at this stage.
++ */
++static int at_xdmac_check_slave_config(struct dma_slave_config *sconfig)
++{
++	if ((sconfig->src_maxburst > AT_XDMAC_MAX_CSIZE)
++	    || (sconfig->dst_maxburst > AT_XDMAC_MAX_CSIZE))
++		return -EINVAL;
++
++	if ((sconfig->src_addr_width > AT_XDMAC_MAX_DWIDTH)
++	    || (sconfig->dst_addr_width > AT_XDMAC_MAX_DWIDTH))
++		return -EINVAL;
++
++	return 0;
++}
++
+ static int at_xdmac_set_slave_config(struct dma_chan *chan,
+ 				      struct dma_slave_config *sconfig)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+-	u8 dwidth;
+-	int csize;
+ 
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] =
+-		AT91_XDMAC_DT_PERID(atchan->perid)
+-		| AT_XDMAC_CC_DAM_INCREMENTED_AM
+-		| AT_XDMAC_CC_SAM_FIXED_AM
+-		| AT_XDMAC_CC_DIF(atchan->memif)
+-		| AT_XDMAC_CC_SIF(atchan->perif)
+-		| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+-		| AT_XDMAC_CC_DSYNC_PER2MEM
+-		| AT_XDMAC_CC_MBSIZE_SIXTEEN
+-		| AT_XDMAC_CC_TYPE_PER_TRAN;
+-	csize = at_xdmac_csize(sconfig->src_maxburst);
+-	if (csize < 0) {
+-		dev_err(chan2dev(chan), "invalid src maxburst value\n");
++	if (at_xdmac_check_slave_config(sconfig)) {
++		dev_err(chan2dev(chan), "invalid slave configuration\n");
+ 		return -EINVAL;
+ 	}
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_CSIZE(csize);
+-	dwidth = ffs(sconfig->src_addr_width) - 1;
+-	atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
+-
+-
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] =
+-		AT91_XDMAC_DT_PERID(atchan->perid)
+-		| AT_XDMAC_CC_DAM_FIXED_AM
+-		| AT_XDMAC_CC_SAM_INCREMENTED_AM
+-		| AT_XDMAC_CC_DIF(atchan->perif)
+-		| AT_XDMAC_CC_SIF(atchan->memif)
+-		| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+-		| AT_XDMAC_CC_DSYNC_MEM2PER
+-		| AT_XDMAC_CC_MBSIZE_SIXTEEN
+-		| AT_XDMAC_CC_TYPE_PER_TRAN;
+-	csize = at_xdmac_csize(sconfig->dst_maxburst);
+-	if (csize < 0) {
+-		dev_err(chan2dev(chan), "invalid src maxburst value\n");
+-		return -EINVAL;
+-	}
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_CSIZE(csize);
+-	dwidth = ffs(sconfig->dst_addr_width) - 1;
+-	atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
+-
+-	/* Src and dst addr are needed to configure the link list descriptor. */
+-	atchan->per_src_addr = sconfig->src_addr;
+-	atchan->per_dst_addr = sconfig->dst_addr;
+ 
+-	dev_dbg(chan2dev(chan),
+-		"%s: cfg[dev2mem]=0x%08x, cfg[mem2dev]=0x%08x, per_src_addr=0x%08x, per_dst_addr=0x%08x\n",
+-		__func__, atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG],
+-		atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG],
+-		atchan->per_src_addr, atchan->per_dst_addr);
++	memcpy(&atchan->sconfig, sconfig, sizeof(atchan->sconfig));
+ 
+ 	return 0;
+ }
+@@ -563,6 +596,8 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 	struct scatterlist	*sg;
+ 	int			i;
+ 	unsigned int		xfer_size = 0;
++	unsigned long		irqflags;
++	struct dma_async_tx_descriptor	*ret = NULL;
+ 
+ 	if (!sgl)
+ 		return NULL;
+@@ -578,7 +613,10 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		 flags);
+ 
+ 	/* Protect dma_sconfig field that can be modified by set_slave_conf. */
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, irqflags);
++
++	if (at_xdmac_compute_chan_conf(chan, direction))
++		goto spin_unlock;
+ 
+ 	/* Prepare descriptors. */
+ 	for_each_sg(sgl, sg, sg_len, i) {
+@@ -589,8 +627,7 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		mem = sg_dma_address(sg);
+ 		if (unlikely(!len)) {
+ 			dev_err(chan2dev(chan), "sg data length is zero\n");
+-			spin_unlock_bh(&atchan->lock);
+-			return NULL;
++			goto spin_unlock;
+ 		}
+ 		dev_dbg(chan2dev(chan), "%s: * sg%d len=%u, mem=0x%08x\n",
+ 			 __func__, i, len, mem);
+@@ -600,20 +637,18 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+ 				list_splice_init(&first->descs_list, &atchan->free_descs_list);
+-			spin_unlock_bh(&atchan->lock);
+-			return NULL;
++			goto spin_unlock;
+ 		}
+ 
+ 		/* Linked list descriptor setup. */
+ 		if (direction == DMA_DEV_TO_MEM) {
+-			desc->lld.mbr_sa = atchan->per_src_addr;
++			desc->lld.mbr_sa = atchan->sconfig.src_addr;
+ 			desc->lld.mbr_da = mem;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
+ 		} else {
+ 			desc->lld.mbr_sa = mem;
+-			desc->lld.mbr_da = atchan->per_dst_addr;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
++			desc->lld.mbr_da = atchan->sconfig.dst_addr;
+ 		}
++		desc->lld.mbr_cfg = atchan->cfg;
+ 		dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
+ 		fixed_dwidth = IS_ALIGNED(len, 1 << dwidth)
+ 			       ? at_xdmac_get_dwidth(desc->lld.mbr_cfg)
+@@ -645,13 +680,15 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ 		xfer_size += len;
+ 	}
+ 
+-	spin_unlock_bh(&atchan->lock);
+ 
+ 	first->tx_dma_desc.flags = flags;
+ 	first->xfer_size = xfer_size;
+ 	first->direction = direction;
++	ret = &first->tx_dma_desc;
+ 
+-	return &first->tx_dma_desc;
++spin_unlock:
++	spin_unlock_irqrestore(&atchan->lock, irqflags);
++	return ret;
+ }
+ 
+ static struct dma_async_tx_descriptor *
+@@ -664,6 +701,7 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 	struct at_xdmac_desc	*first = NULL, *prev = NULL;
+ 	unsigned int		periods = buf_len / period_len;
+ 	int			i;
++	unsigned long		irqflags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s: buf_addr=%pad, buf_len=%zd, period_len=%zd, dir=%s, flags=0x%lx\n",
+ 		__func__, &buf_addr, buf_len, period_len,
+@@ -679,32 +717,34 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ 		return NULL;
+ 	}
+ 
++	if (at_xdmac_compute_chan_conf(chan, direction))
++		return NULL;
++
+ 	for (i = 0; i < periods; i++) {
+ 		struct at_xdmac_desc	*desc = NULL;
+ 
+-		spin_lock_bh(&atchan->lock);
++		spin_lock_irqsave(&atchan->lock, irqflags);
+ 		desc = at_xdmac_get_desc(atchan);
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+ 				list_splice_init(&first->descs_list, &atchan->free_descs_list);
+-			spin_unlock_bh(&atchan->lock);
++			spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 			return NULL;
+ 		}
+-		spin_unlock_bh(&atchan->lock);
++		spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 		dev_dbg(chan2dev(chan),
+ 			"%s: desc=0x%p, tx_dma_desc.phys=%pad\n",
+ 			__func__, desc, &desc->tx_dma_desc.phys);
+ 
+ 		if (direction == DMA_DEV_TO_MEM) {
+-			desc->lld.mbr_sa = atchan->per_src_addr;
++			desc->lld.mbr_sa = atchan->sconfig.src_addr;
+ 			desc->lld.mbr_da = buf_addr + i * period_len;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
+ 		} else {
+ 			desc->lld.mbr_sa = buf_addr + i * period_len;
+-			desc->lld.mbr_da = atchan->per_dst_addr;
+-			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
++			desc->lld.mbr_da = atchan->sconfig.dst_addr;
+ 		}
++		desc->lld.mbr_cfg = atchan->cfg;
+ 		desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
+ 			| AT_XDMAC_MBR_UBC_NDEN
+ 			| AT_XDMAC_MBR_UBC_NSEN
+@@ -766,6 +806,7 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 					| AT_XDMAC_CC_SIF(0)
+ 					| AT_XDMAC_CC_MBSIZE_SIXTEEN
+ 					| AT_XDMAC_CC_TYPE_MEM_TRAN;
++	unsigned long		irqflags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, len=%zd, flags=0x%lx\n",
+ 		__func__, &src, &dest, len, flags);
+@@ -798,9 +839,9 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 
+ 		dev_dbg(chan2dev(chan), "%s: remaining_size=%zu\n", __func__, remaining_size);
+ 
+-		spin_lock_bh(&atchan->lock);
++		spin_lock_irqsave(&atchan->lock, irqflags);
+ 		desc = at_xdmac_get_desc(atchan);
+-		spin_unlock_bh(&atchan->lock);
++		spin_unlock_irqrestore(&atchan->lock, irqflags);
+ 		if (!desc) {
+ 			dev_err(chan2dev(chan), "can't get descriptor\n");
+ 			if (first)
+@@ -886,6 +927,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	int			residue;
+ 	u32			cur_nda, mask, value;
+ 	u8			dwidth = 0;
++	unsigned long		flags;
+ 
+ 	ret = dma_cookie_status(chan, cookie, txstate);
+ 	if (ret == DMA_COMPLETE)
+@@ -894,7 +936,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	if (!txstate)
+ 		return ret;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc, xfer_node);
+ 
+@@ -904,8 +946,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	 */
+ 	if (!desc->active_xfer) {
+ 		dma_set_residue(txstate, desc->xfer_size);
+-		spin_unlock_bh(&atchan->lock);
+-		return ret;
++		goto spin_unlock;
+ 	}
+ 
+ 	residue = desc->xfer_size;
+@@ -936,14 +977,14 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ 	}
+ 	residue += at_xdmac_chan_read(atchan, AT_XDMAC_CUBC) << dwidth;
+ 
+-	spin_unlock_bh(&atchan->lock);
+-
+ 	dma_set_residue(txstate, residue);
+ 
+ 	dev_dbg(chan2dev(chan),
+ 		 "%s: desc=0x%p, tx_dma_desc.phys=%pad, tx_status=%d, cookie=%d, residue=%d\n",
+ 		 __func__, desc, &desc->tx_dma_desc.phys, ret, cookie, residue);
+ 
++spin_unlock:
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 	return ret;
+ }
+ 
+@@ -964,8 +1005,9 @@ static void at_xdmac_remove_xfer(struct at_xdmac_chan *atchan,
+ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
+ {
+ 	struct at_xdmac_desc	*desc;
++	unsigned long		flags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	/*
+ 	 * If channel is enabled, do nothing, advance_work will be triggered
+@@ -980,7 +1022,7 @@ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
+ 			at_xdmac_start_xfer(atchan, desc);
+ 	}
+ 
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+ static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
+@@ -1116,12 +1158,13 @@ static int at_xdmac_device_config(struct dma_chan *chan,
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	int ret;
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	ret = at_xdmac_set_slave_config(chan, config);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return ret;
+ }
+@@ -1130,18 +1173,19 @@ static int at_xdmac_device_pause(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+ 	if (test_and_set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status))
+ 		return 0;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
+ 	while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
+ 	       & (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
+ 		cpu_relax();
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1150,16 +1194,19 @@ static int at_xdmac_device_resume(struct dma_chan *chan)
+ {
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
+-	if (!at_xdmac_chan_is_paused(atchan))
++	spin_lock_irqsave(&atchan->lock, flags);
++	if (!at_xdmac_chan_is_paused(atchan)) {
++		spin_unlock_irqrestore(&atchan->lock, flags);
+ 		return 0;
++	}
+ 
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
+ 	clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1169,10 +1216,11 @@ static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+ 	struct at_xdmac_desc	*desc, *_desc;
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
++	unsigned long		flags;
+ 
+ 	dev_dbg(chan2dev(chan), "%s\n", __func__);
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 	at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
+ 	while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
+ 		cpu_relax();
+@@ -1182,7 +1230,7 @@ static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+ 		at_xdmac_remove_xfer(atchan, desc);
+ 
+ 	clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -1192,8 +1240,9 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
+ 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+ 	struct at_xdmac_desc	*desc;
+ 	int			i;
++	unsigned long		flags;
+ 
+-	spin_lock_bh(&atchan->lock);
++	spin_lock_irqsave(&atchan->lock, flags);
+ 
+ 	if (at_xdmac_chan_is_enabled(atchan)) {
+ 		dev_err(chan2dev(chan),
+@@ -1224,7 +1273,7 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
+ 	dev_dbg(chan2dev(chan), "%s: allocated %d descriptors\n", __func__, i);
+ 
+ spin_unlock:
+-	spin_unlock_bh(&atchan->lock);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ 	return i;
+ }
+ 
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index ac336a961dea..8e70e580c98a 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -505,7 +505,11 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
+ 	caps->directions = device->directions;
+ 	caps->residue_granularity = device->residue_granularity;
+ 
+-	caps->cmd_pause = !!device->device_pause;
++	/*
++	 * Some devices implement only pause (e.g. to get the residue) but no
++	 * resume. However, cmd_pause is advertised as pause AND resume.
++	 */
++	caps->cmd_pause = !!(device->device_pause && device->device_resume);
+ 	caps->cmd_terminate = !!device->device_terminate_all;
+ 
+ 	return 0;
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index 0e1f56772855..a2771a8d4377 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2127,6 +2127,7 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 	struct pl330_dmac *pl330 = pch->dmac;
+ 	LIST_HEAD(list);
+ 
++	pm_runtime_get_sync(pl330->ddma.dev);
+ 	spin_lock_irqsave(&pch->lock, flags);
+ 	spin_lock(&pl330->lock);
+ 	_stop(pch->thread);
+@@ -2151,6 +2152,8 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 	list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
+ 	list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
+ 	spin_unlock_irqrestore(&pch->lock, flags);
++	pm_runtime_mark_last_busy(pl330->ddma.dev);
++	pm_runtime_put_autosuspend(pl330->ddma.dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 406624a0b201..340e21918f33 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -684,8 +684,6 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 			dev->node_props.cpu_core_id_base);
+ 	sysfs_show_32bit_prop(buffer, "simd_id_base",
+ 			dev->node_props.simd_id_base);
+-	sysfs_show_32bit_prop(buffer, "capability",
+-			dev->node_props.capability);
+ 	sysfs_show_32bit_prop(buffer, "max_waves_per_simd",
+ 			dev->node_props.max_waves_per_simd);
+ 	sysfs_show_32bit_prop(buffer, "lds_size_in_kb",
+@@ -735,6 +733,8 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 				kfd2kgd->get_fw_version(
+ 						dev->gpu->kgd,
+ 						KGD_ENGINE_MEC1));
++		sysfs_show_32bit_prop(buffer, "capability",
++				dev->node_props.capability);
+ 	}
+ 
+ 	return sysfs_show_32bit_prop(buffer, "max_engine_clk_ccompute",
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 27ea6bdebce7..7a628e4cb27a 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2732,9 +2732,6 @@ void i915_gem_reset(struct drm_device *dev)
+ void
+ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
+ {
+-	if (list_empty(&ring->request_list))
+-		return;
+-
+ 	WARN_ON(i915_verify_lists(ring->dev));
+ 
+ 	/* Retire requests first as we use it above for the early return.
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 88b36a9173c9..336e8b63ca08 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -881,10 +881,8 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
+ 				      DP_AUX_CH_CTL_RECEIVE_ERROR))
+ 				continue;
+ 			if (status & DP_AUX_CH_CTL_DONE)
+-				break;
++				goto done;
+ 		}
+-		if (status & DP_AUX_CH_CTL_DONE)
+-			break;
+ 	}
+ 
+ 	if ((status & DP_AUX_CH_CTL_DONE) == 0) {
+@@ -893,6 +891,7 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
+ 		goto out;
+ 	}
+ 
++done:
+ 	/* Check for timeout or receive error.
+ 	 * Timeouts occur when the sink is not connected
+ 	 */
+diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
+index 56e437e31580..ae628001fd97 100644
+--- a/drivers/gpu/drm/i915/intel_i2c.c
++++ b/drivers/gpu/drm/i915/intel_i2c.c
+@@ -435,7 +435,7 @@ gmbus_xfer(struct i2c_adapter *adapter,
+ 					       struct intel_gmbus,
+ 					       adapter);
+ 	struct drm_i915_private *dev_priv = bus->dev_priv;
+-	int i, reg_offset;
++	int i = 0, inc, try = 0, reg_offset;
+ 	int ret = 0;
+ 
+ 	intel_aux_display_runtime_get(dev_priv);
+@@ -448,12 +448,14 @@ gmbus_xfer(struct i2c_adapter *adapter,
+ 
+ 	reg_offset = dev_priv->gpio_mmio_base;
+ 
++retry:
+ 	I915_WRITE(GMBUS0 + reg_offset, bus->reg0);
+ 
+-	for (i = 0; i < num; i++) {
++	for (; i < num; i += inc) {
++		inc = 1;
+ 		if (gmbus_is_index_read(msgs, i, num)) {
+ 			ret = gmbus_xfer_index_read(dev_priv, &msgs[i]);
+-			i += 1;  /* set i to the index of the read xfer */
++			inc = 2; /* an index read is two msgs */
+ 		} else if (msgs[i].flags & I2C_M_RD) {
+ 			ret = gmbus_xfer_read(dev_priv, &msgs[i], 0);
+ 		} else {
+@@ -525,6 +527,18 @@ clear_err:
+ 			 adapter->name, msgs[i].addr,
+ 			 (msgs[i].flags & I2C_M_RD) ? 'r' : 'w', msgs[i].len);
+ 
++	/*
++	 * Passive adapters sometimes NAK the first probe. Retry the first
++	 * message once on -ENXIO for GMBUS transfers; the bit banging algorithm
++	 * has retries internally. See also the retry loop in
++	 * drm_do_probe_ddc_edid, which bails out on the first -ENXIO.
++	 */
++	if (ret == -ENXIO && i == 0 && try++ == 0) {
++		DRM_DEBUG_KMS("GMBUS [%s] NAK on first message, retry\n",
++			      adapter->name);
++		goto retry;
++	}
++
+ 	goto out;
+ 
+ timeout:
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
+index 965a45619f6b..9bd56116fd5a 100644
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c
+@@ -580,9 +580,6 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
+ 		else
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
+ 
+-		/* if there is no audio, set MINM_OVER_MAXP  */
+-		if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
+-			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		if (rdev->family < CHIP_RV770)
+ 			radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+ 		/* use frac fb div on APUs */
+@@ -1789,9 +1786,7 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
+ 			if ((crtc->mode.clock == test_crtc->mode.clock) &&
+ 			    (adjusted_clock == test_adjusted_clock) &&
+ 			    (radeon_crtc->ss_enabled == test_radeon_crtc->ss_enabled) &&
+-			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID) &&
+-			    (drm_detect_monitor_audio(radeon_connector_edid(test_radeon_crtc->connector)) ==
+-			     drm_detect_monitor_audio(radeon_connector_edid(radeon_crtc->connector))))
++			    (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID))
+ 				return test_radeon_crtc->pll_id;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/radeon/dce3_1_afmt.c b/drivers/gpu/drm/radeon/dce3_1_afmt.c
+index f04205170b8a..cfa3a84a2af0 100644
+--- a/drivers/gpu/drm/radeon/dce3_1_afmt.c
++++ b/drivers/gpu/drm/radeon/dce3_1_afmt.c
+@@ -173,7 +173,7 @@ void dce3_2_hdmi_update_acr(struct drm_encoder *encoder, long offset,
+ 	struct drm_device *dev = encoder->dev;
+ 	struct radeon_device *rdev = dev->dev_private;
+ 
+-	WREG32(HDMI0_ACR_PACKET_CONTROL + offset,
++	WREG32(DCE3_HDMI0_ACR_PACKET_CONTROL + offset,
+ 		HDMI0_ACR_SOURCE |		/* select SW CTS value */
+ 		HDMI0_ACR_AUTO_SEND);	/* allow hw to sent ACR packets when required */
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index bd7519fdd3f4..aa232fd25992 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1458,6 +1458,21 @@ int radeon_device_init(struct radeon_device *rdev,
+ 	if (r)
+ 		DRM_ERROR("ib ring test failed (%d).\n", r);
+ 
++	/*
++	 * Turks/Thames GPUs will freeze the whole laptop if DPM is not
++	 * restarted after the CP ring has chewed on at least one packet.
++	 * Hence we stop and restart DPM here, after radeon_ib_ring_tests().
++	 */
++	if (rdev->pm.dpm_enabled &&
++	    (rdev->pm.pm_method == PM_METHOD_DPM) &&
++	    (rdev->family == CHIP_TURKS) &&
++	    (rdev->flags & RADEON_IS_MOBILITY)) {
++		mutex_lock(&rdev->pm.mutex);
++		radeon_dpm_disable(rdev);
++		radeon_dpm_enable(rdev);
++		mutex_unlock(&rdev->pm.mutex);
++	}
++
+ 	if ((radeon_testing & 1)) {
+ 		if (rdev->accel_working)
+ 			radeon_test_moves(rdev);
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index de42fc4a22b8..9c3377ca17b7 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -458,14 +458,16 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		/* make sure object fit at this offset */
+ 		eoffset = soffset + size;
+ 		if (soffset >= eoffset) {
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 
+ 		last_pfn = eoffset / RADEON_GPU_PAGE_SIZE;
+ 		if (last_pfn > rdev->vm_manager.max_pfn) {
+ 			dev_err(rdev->dev, "va above limit (0x%08X > 0x%08X)\n",
+ 				last_pfn, rdev->vm_manager.max_pfn);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 
+ 	} else {
+@@ -486,7 +488,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 				"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
+ 				soffset, tmp->bo, tmp->it.start, tmp->it.last);
+ 			mutex_unlock(&vm->mutex);
+-			return -EINVAL;
++			r = -EINVAL;
++			goto error_unreserve;
+ 		}
+ 	}
+ 
+@@ -497,7 +500,8 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 			tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
+ 			if (!tmp) {
+ 				mutex_unlock(&vm->mutex);
+-				return -ENOMEM;
++				r = -ENOMEM;
++				goto error_unreserve;
+ 			}
+ 			tmp->it.start = bo_va->it.start;
+ 			tmp->it.last = bo_va->it.last;
+@@ -555,7 +559,6 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 		r = radeon_vm_clear_bo(rdev, pt);
+ 		if (r) {
+ 			radeon_bo_unref(&pt);
+-			radeon_bo_reserve(bo_va->bo, false);
+ 			return r;
+ 		}
+ 
+@@ -575,6 +578,10 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev,
+ 
+ 	mutex_unlock(&vm->mutex);
+ 	return 0;
++
++error_unreserve:
++	radeon_bo_unreserve(bo_va->bo);
++	return r;
+ }
+ 
+ /**
+diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
+index 8fe78d08e01c..7c6966434ee7 100644
+--- a/drivers/i2c/busses/i2c-hix5hd2.c
++++ b/drivers/i2c/busses/i2c-hix5hd2.c
+@@ -554,4 +554,4 @@ module_platform_driver(hix5hd2_i2c_driver);
+ MODULE_DESCRIPTION("Hix5hd2 I2C Bus driver");
+ MODULE_AUTHOR("Wei Yan <sledge.yanwei@huawei.com>");
+ MODULE_LICENSE("GPL");
+-MODULE_ALIAS("platform:i2c-hix5hd2");
++MODULE_ALIAS("platform:hix5hd2-i2c");
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 958c8db4ec30..297e9c9ac943 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -1143,6 +1143,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	i2c->quirks = s3c24xx_get_device_quirks(pdev);
++	i2c->sysreg = ERR_PTR(-ENOENT);
+ 	if (pdata)
+ 		memcpy(i2c->pdata, pdata, sizeof(*pdata));
+ 	else
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index 89d8aa1d2818..df12c57e6ce0 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -1001,7 +1001,7 @@ static struct platform_driver twl6030_gpadc_driver = {
+ 
+ module_platform_driver(twl6030_gpadc_driver);
+ 
+-MODULE_ALIAS("platform: " DRIVER_NAME);
++MODULE_ALIAS("platform:" DRIVER_NAME);
+ MODULE_AUTHOR("Balaji T K <balajitk@ti.com>");
+ MODULE_AUTHOR("Graeme Gregory <gg@slimlogic.co.uk>");
+ MODULE_AUTHOR("Oleksandr Kozaruk <oleksandr.kozaruk@ti.com");
+diff --git a/drivers/iio/imu/adis16400.h b/drivers/iio/imu/adis16400.h
+index 0916bf6b6c31..73b189c1c0fb 100644
+--- a/drivers/iio/imu/adis16400.h
++++ b/drivers/iio/imu/adis16400.h
+@@ -139,6 +139,7 @@
+ #define ADIS16400_NO_BURST		BIT(1)
+ #define ADIS16400_HAS_SLOW_MODE		BIT(2)
+ #define ADIS16400_HAS_SERIAL_NUMBER	BIT(3)
++#define ADIS16400_BURST_DIAG_STAT	BIT(4)
+ 
+ struct adis16400_state;
+ 
+@@ -165,6 +166,7 @@ struct adis16400_state {
+ 	int				filt_int;
+ 
+ 	struct adis adis;
++	unsigned long avail_scan_mask[2];
+ };
+ 
+ /* At the moment triggers are only used for ring buffer
+diff --git a/drivers/iio/imu/adis16400_buffer.c b/drivers/iio/imu/adis16400_buffer.c
+index 6e727ffe5262..90c24a23c679 100644
+--- a/drivers/iio/imu/adis16400_buffer.c
++++ b/drivers/iio/imu/adis16400_buffer.c
+@@ -18,7 +18,8 @@ int adis16400_update_scan_mode(struct iio_dev *indio_dev,
+ {
+ 	struct adis16400_state *st = iio_priv(indio_dev);
+ 	struct adis *adis = &st->adis;
+-	uint16_t *tx;
++	unsigned int burst_length;
++	u8 *tx;
+ 
+ 	if (st->variant->flags & ADIS16400_NO_BURST)
+ 		return adis_update_scan_mode(indio_dev, scan_mask);
+@@ -26,26 +27,29 @@ int adis16400_update_scan_mode(struct iio_dev *indio_dev,
+ 	kfree(adis->xfer);
+ 	kfree(adis->buffer);
+ 
++	/* All but the timestamp channel */
++	burst_length = (indio_dev->num_channels - 1) * sizeof(u16);
++	if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
++		burst_length += sizeof(u16);
++
+ 	adis->xfer = kcalloc(2, sizeof(*adis->xfer), GFP_KERNEL);
+ 	if (!adis->xfer)
+ 		return -ENOMEM;
+ 
+-	adis->buffer = kzalloc(indio_dev->scan_bytes + sizeof(u16),
+-		GFP_KERNEL);
++	adis->buffer = kzalloc(burst_length + sizeof(u16), GFP_KERNEL);
+ 	if (!adis->buffer)
+ 		return -ENOMEM;
+ 
+-	tx = adis->buffer + indio_dev->scan_bytes;
+-
++	tx = adis->buffer + burst_length;
+ 	tx[0] = ADIS_READ_REG(ADIS16400_GLOB_CMD);
+ 	tx[1] = 0;
+ 
+ 	adis->xfer[0].tx_buf = tx;
+ 	adis->xfer[0].bits_per_word = 8;
+ 	adis->xfer[0].len = 2;
+-	adis->xfer[1].tx_buf = tx;
++	adis->xfer[1].rx_buf = adis->buffer;
+ 	adis->xfer[1].bits_per_word = 8;
+-	adis->xfer[1].len = indio_dev->scan_bytes;
++	adis->xfer[1].len = burst_length;
+ 
+ 	spi_message_init(&adis->msg);
+ 	spi_message_add_tail(&adis->xfer[0], &adis->msg);
+@@ -61,6 +65,7 @@ irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 	struct adis16400_state *st = iio_priv(indio_dev);
+ 	struct adis *adis = &st->adis;
+ 	u32 old_speed_hz = st->adis.spi->max_speed_hz;
++	void *buffer;
+ 	int ret;
+ 
+ 	if (!adis->buffer)
+@@ -81,7 +86,12 @@ irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 		spi_setup(st->adis.spi);
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, adis->buffer,
++	if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
++		buffer = adis->buffer + sizeof(u16);
++	else
++		buffer = adis->buffer;
++
++	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
+ 		pf->timestamp);
+ 
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/imu/adis16400_core.c b/drivers/iio/imu/adis16400_core.c
+index fa795dcd5f75..2fd68f2219a7 100644
+--- a/drivers/iio/imu/adis16400_core.c
++++ b/drivers/iio/imu/adis16400_core.c
+@@ -405,6 +405,11 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ 			*val = st->variant->temp_scale_nano / 1000000;
+ 			*val2 = (st->variant->temp_scale_nano % 1000000);
+ 			return IIO_VAL_INT_PLUS_MICRO;
++		case IIO_PRESSURE:
++			/* 20 uBar = 0.002kPascal */
++			*val = 0;
++			*val2 = 2000;
++			return IIO_VAL_INT_PLUS_MICRO;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -454,10 +459,10 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ 	}
+ }
+ 
+-#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si) { \
++#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si, chn) { \
+ 	.type = IIO_VOLTAGE, \
+ 	.indexed = 1, \
+-	.channel = 0, \
++	.channel = chn, \
+ 	.extend_name = name, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
+ 		BIT(IIO_CHAN_INFO_SCALE), \
+@@ -474,10 +479,10 @@ static int adis16400_read_raw(struct iio_dev *indio_dev,
+ }
+ 
+ #define ADIS16400_SUPPLY_CHAN(addr, bits) \
+-	ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY)
++	ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY, 0)
+ 
+ #define ADIS16400_AUX_ADC_CHAN(addr, bits) \
+-	ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC)
++	ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC, 1)
+ 
+ #define ADIS16400_GYRO_CHAN(mod, addr, bits) { \
+ 	.type = IIO_ANGL_VEL, \
+@@ -773,7 +778,8 @@ static struct adis16400_chip_info adis16400_chips[] = {
+ 		.channels = adis16448_channels,
+ 		.num_channels = ARRAY_SIZE(adis16448_channels),
+ 		.flags = ADIS16400_HAS_PROD_ID |
+-				ADIS16400_HAS_SERIAL_NUMBER,
++				ADIS16400_HAS_SERIAL_NUMBER |
++				ADIS16400_BURST_DIAG_STAT,
+ 		.gyro_scale_micro = IIO_DEGREE_TO_RAD(10000), /* 0.01 deg/s */
+ 		.accel_scale_micro = IIO_G_TO_M_S_2(833), /* 1/1200 g */
+ 		.temp_scale_nano = 73860000, /* 0.07386 C */
+@@ -791,11 +797,6 @@ static const struct iio_info adis16400_info = {
+ 	.debugfs_reg_access = adis_debugfs_reg_access,
+ };
+ 
+-static const unsigned long adis16400_burst_scan_mask[] = {
+-	~0UL,
+-	0,
+-};
+-
+ static const char * const adis16400_status_error_msgs[] = {
+ 	[ADIS16400_DIAG_STAT_ZACCL_FAIL] = "Z-axis accelerometer self-test failure",
+ 	[ADIS16400_DIAG_STAT_YACCL_FAIL] = "Y-axis accelerometer self-test failure",
+@@ -843,6 +844,20 @@ static const struct adis_data adis16400_data = {
+ 		BIT(ADIS16400_DIAG_STAT_POWER_LOW),
+ };
+ 
++static void adis16400_setup_chan_mask(struct adis16400_state *st)
++{
++	const struct adis16400_chip_info *chip_info = st->variant;
++	unsigned i;
++
++	for (i = 0; i < chip_info->num_channels; i++) {
++		const struct iio_chan_spec *ch = &chip_info->channels[i];
++
++		if (ch->scan_index >= 0 &&
++		    ch->scan_index != ADIS16400_SCAN_TIMESTAMP)
++			st->avail_scan_mask[0] |= BIT(ch->scan_index);
++	}
++}
++
+ static int adis16400_probe(struct spi_device *spi)
+ {
+ 	struct adis16400_state *st;
+@@ -866,8 +881,10 @@ static int adis16400_probe(struct spi_device *spi)
+ 	indio_dev->info = &adis16400_info;
+ 	indio_dev->modes = INDIO_DIRECT_MODE;
+ 
+-	if (!(st->variant->flags & ADIS16400_NO_BURST))
+-		indio_dev->available_scan_masks = adis16400_burst_scan_mask;
++	if (!(st->variant->flags & ADIS16400_NO_BURST)) {
++		adis16400_setup_chan_mask(st);
++		indio_dev->available_scan_masks = st->avail_scan_mask;
++	}
+ 
+ 	ret = adis_init(&st->adis, indio_dev, spi, &adis16400_data);
+ 	if (ret)
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index ea6cb64dfb28..d5335e664240 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1042,9 +1042,8 @@ static void alps_process_trackstick_packet_v7(struct psmouse *psmouse)
+ 	right = (packet[1] & 0x02) >> 1;
+ 	middle = (packet[1] & 0x04) >> 2;
+ 
+-	/* Divide 2 since trackpoint's speed is too fast */
+-	input_report_rel(dev2, REL_X, (char)x / 2);
+-	input_report_rel(dev2, REL_Y, -((char)y / 2));
++	input_report_rel(dev2, REL_X, (char)x);
++	input_report_rel(dev2, REL_Y, -((char)y));
+ 
+ 	input_report_key(dev2, BTN_LEFT, left);
+ 	input_report_key(dev2, BTN_RIGHT, right);
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 79363b687195..ce3d40004458 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1376,10 +1376,11 @@ static bool elantech_is_signature_valid(const unsigned char *param)
+ 		return true;
+ 
+ 	/*
+-	 * Some models have a revision higher then 20. Meaning param[2] may
+-	 * be 10 or 20, skip the rates check for these.
++	 * Some hw_version >= 4 models have a revision higher than 20, meaning
++	 * that param[2] may be 10 or 20; skip the rates check for these.
+ 	 */
+-	if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
++	if ((param[0] & 0x0f) >= 0x06 && (param[1] & 0xaf) == 0x0f &&
++	    param[2] < 40)
+ 		return true;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(rates); i++)
+@@ -1555,6 +1556,7 @@ static int elantech_set_properties(struct elantech_data *etd)
+ 		case 9:
+ 		case 10:
+ 		case 13:
++		case 14:
+ 			etd->hw_version = 4;
+ 			break;
+ 		default:
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 3b06c8a360b6..907ac9bdd763 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -148,6 +148,11 @@ static const struct min_max_quirk min_max_pnpid_table[] = {
+ 		1024, 5112, 2024, 4832
+ 	},
+ 	{
++		(const char * const []){"LEN2000", NULL},
++		{ANY_BOARD_ID, ANY_BOARD_ID},
++		1024, 5113, 2021, 4832
++	},
++	{
+ 		(const char * const []){"LEN2001", NULL},
+ 		{ANY_BOARD_ID, ANY_BOARD_ID},
+ 		1024, 5022, 2508, 4832
+@@ -188,7 +193,7 @@ static const char * const topbuttonpad_pnp_ids[] = {
+ 	"LEN0045",
+ 	"LEN0047",
+ 	"LEN0049",
+-	"LEN2000",
++	"LEN2000", /* S540 */
+ 	"LEN2001", /* Edge E431 */
+ 	"LEN2002", /* Edge E531 */
+ 	"LEN2003",
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 2d1e05bdbb53..272149d66f5b 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -50,6 +50,7 @@
+ #define CONTEXT_SIZE		VTD_PAGE_SIZE
+ 
+ #define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
++#define IS_USB_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_SERIAL_USB)
+ #define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
+ #define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e)
+ 
+@@ -672,6 +673,11 @@ static void domain_update_iommu_cap(struct dmar_domain *domain)
+ 	domain->iommu_superpage = domain_update_iommu_superpage(NULL);
+ }
+ 
++static int iommu_dummy(struct device *dev)
++{
++	return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
++}
++
+ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
+ {
+ 	struct dmar_drhd_unit *drhd = NULL;
+@@ -681,6 +687,9 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
+ 	u16 segment = 0;
+ 	int i;
+ 
++	if (iommu_dummy(dev))
++		return NULL;
++
+ 	if (dev_is_pci(dev)) {
+ 		pdev = to_pci_dev(dev);
+ 		segment = pci_domain_nr(pdev->bus);
+@@ -2554,6 +2563,10 @@ static bool device_has_rmrr(struct device *dev)
+  * In both cases we assume that PCI USB devices with RMRRs have them largely
+  * for historical reasons and that the RMRR space is not actively used post
+  * boot.  This exclusion may change if vendors begin to abuse it.
++ *
++ * The same exception is made for graphics devices, with the requirement that
++ * any use of the RMRR regions will be torn down before assigning the device
++ * to a guest.
+  */
+ static bool device_is_rmrr_locked(struct device *dev)
+ {
+@@ -2563,7 +2576,7 @@ static bool device_is_rmrr_locked(struct device *dev)
+ 	if (dev_is_pci(dev)) {
+ 		struct pci_dev *pdev = to_pci_dev(dev);
+ 
+-		if ((pdev->class >> 8) == PCI_CLASS_SERIAL_USB)
++		if (IS_USB_DEVICE(pdev) || IS_GFX_DEVICE(pdev))
+ 			return false;
+ 	}
+ 
+@@ -2969,11 +2982,6 @@ static inline struct dmar_domain *get_valid_domain_for_dev(struct device *dev)
+ 	return __get_valid_domain_for_dev(dev);
+ }
+ 
+-static int iommu_dummy(struct device *dev)
+-{
+-	return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
+-}
+-
+ /* Check if the dev needs to go through non-identity map and unmap process.*/
+ static int iommu_no_mapping(struct device *dev)
+ {
+diff --git a/drivers/irqchip/irq-sunxi-nmi.c b/drivers/irqchip/irq-sunxi-nmi.c
+index 4a9ce5b50c5b..6b2b582433bd 100644
+--- a/drivers/irqchip/irq-sunxi-nmi.c
++++ b/drivers/irqchip/irq-sunxi-nmi.c
+@@ -104,7 +104,7 @@ static int sunxi_sc_nmi_set_type(struct irq_data *data, unsigned int flow_type)
+ 	irqd_set_trigger_type(data, flow_type);
+ 	irq_setup_alt_chip(data, flow_type);
+ 
+-	for (i = 0; i <= gc->num_ct; i++, ct++)
++	for (i = 0; i < gc->num_ct; i++, ct++)
+ 		if (ct->type & flow_type)
+ 			ctrl_off = ct->regs.type;
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 907534b7f40d..b7bf8ee857fa 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3765,7 +3765,7 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
+ 				err = -EBUSY;
+ 		}
+ 		spin_unlock(&mddev->lock);
+-		return err;
++		return err ?: len;
+ 	}
+ 	err = mddev_lock(mddev);
+ 	if (err)
+@@ -4144,13 +4144,14 @@ action_store(struct mddev *mddev, const char *page, size_t len)
+ 			set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		else
+ 			clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-		flush_workqueue(md_misc_wq);
+-		if (mddev->sync_thread) {
+-			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+-			if (mddev_lock(mddev) == 0) {
++		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
++		    mddev_lock(mddev) == 0) {
++			flush_workqueue(md_misc_wq);
++			if (mddev->sync_thread) {
++				set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+ 				md_reap_sync_thread(mddev);
+-				mddev_unlock(mddev);
+ 			}
++			mddev_unlock(mddev);
+ 		}
+ 	} else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
+ 		   test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 4df28943d222..e8d3c1d35453 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -624,7 +624,7 @@ int __bond_opt_set(struct bonding *bond,
+ out:
+ 	if (ret)
+ 		bond_opt_error_interpret(bond, opt, ret, val);
+-	else
++	else if (bond->dev->reg_state == NETREG_REGISTERED)
+ 		call_netdevice_notifiers(NETDEV_CHANGEINFODATA, bond->dev);
+ 
+ 	return ret;
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 7f05f309e935..da36bcf32404 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -1773,9 +1773,9 @@ int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
+ 	total_size = buf_len;
+ 
+ 	get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60*1024;
+-	get_fat_cmd.va = pci_alloc_consistent(adapter->pdev,
+-					      get_fat_cmd.size,
+-					      &get_fat_cmd.dma);
++	get_fat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					     get_fat_cmd.size,
++					     &get_fat_cmd.dma, GFP_ATOMIC);
+ 	if (!get_fat_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+ 			"Memory allocation failure while reading FAT data\n");
+@@ -1820,8 +1820,8 @@ int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
+ 		log_offset += buf_size;
+ 	}
+ err:
+-	pci_free_consistent(adapter->pdev, get_fat_cmd.size,
+-			    get_fat_cmd.va, get_fat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size,
++			  get_fat_cmd.va, get_fat_cmd.dma);
+ 	spin_unlock_bh(&adapter->mcc_lock);
+ 	return status;
+ }
+@@ -2272,12 +2272,12 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ 		return -EINVAL;
+ 
+ 	cmd.size = sizeof(struct be_cmd_resp_port_type);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failed\n");
+ 		return -ENOMEM;
+ 	}
+-	memset(cmd.va, 0, cmd.size);
+ 
+ 	spin_lock_bh(&adapter->mcc_lock);
+ 
+@@ -2302,7 +2302,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ 	}
+ err:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+-	pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	return status;
+ }
+ 
+@@ -2777,7 +2777,8 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ 		goto err;
+ 	}
+ 	cmd.size = sizeof(struct be_cmd_req_get_phy_info);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
+ 		status = -ENOMEM;
+@@ -2811,7 +2812,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ 				BE_SUPPORTED_SPEED_1GBPS;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ err:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+ 	return status;
+@@ -2862,8 +2863,9 @@ int be_cmd_get_cntl_attributes(struct be_adapter *adapter)
+ 
+ 	memset(&attribs_cmd, 0, sizeof(struct be_dma_mem));
+ 	attribs_cmd.size = sizeof(struct be_cmd_resp_cntl_attribs);
+-	attribs_cmd.va = pci_alloc_consistent(adapter->pdev, attribs_cmd.size,
+-					      &attribs_cmd.dma);
++	attribs_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					     attribs_cmd.size,
++					     &attribs_cmd.dma, GFP_ATOMIC);
+ 	if (!attribs_cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
+ 		status = -ENOMEM;
+@@ -2890,8 +2892,8 @@ int be_cmd_get_cntl_attributes(struct be_adapter *adapter)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (attribs_cmd.va)
+-		pci_free_consistent(adapter->pdev, attribs_cmd.size,
+-				    attribs_cmd.va, attribs_cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, attribs_cmd.size,
++				  attribs_cmd.va, attribs_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3029,9 +3031,10 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ 
+ 	memset(&get_mac_list_cmd, 0, sizeof(struct be_dma_mem));
+ 	get_mac_list_cmd.size = sizeof(struct be_cmd_resp_get_mac_list);
+-	get_mac_list_cmd.va = pci_alloc_consistent(adapter->pdev,
+-						   get_mac_list_cmd.size,
+-						   &get_mac_list_cmd.dma);
++	get_mac_list_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++						  get_mac_list_cmd.size,
++						  &get_mac_list_cmd.dma,
++						  GFP_ATOMIC);
+ 
+ 	if (!get_mac_list_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -3104,8 +3107,8 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ 
+ out:
+ 	spin_unlock_bh(&adapter->mcc_lock);
+-	pci_free_consistent(adapter->pdev, get_mac_list_cmd.size,
+-			    get_mac_list_cmd.va, get_mac_list_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size,
++			  get_mac_list_cmd.va, get_mac_list_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3158,8 +3161,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_req_set_mac_list);
+-	cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size,
+-				    &cmd.dma, GFP_KERNEL);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_KERNEL);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3348,7 +3351,8 @@ int be_cmd_get_acpi_wol_cap(struct be_adapter *adapter)
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_acpi_wol_magic_config_v1);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
+ 		status = -ENOMEM;
+@@ -3383,7 +3387,8 @@ int be_cmd_get_acpi_wol_cap(struct be_adapter *adapter)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ 
+ }
+@@ -3397,8 +3402,9 @@ int be_cmd_set_fw_log_level(struct be_adapter *adapter, u32 level)
+ 
+ 	memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
+ 	extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
+-	extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
+-					     &extfat_cmd.dma);
++	extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    extfat_cmd.size, &extfat_cmd.dma,
++					    GFP_ATOMIC);
+ 	if (!extfat_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3420,8 +3426,8 @@ int be_cmd_set_fw_log_level(struct be_adapter *adapter, u32 level)
+ 
+ 	status = be_cmd_set_ext_fat_capabilites(adapter, &extfat_cmd, cfgs);
+ err:
+-	pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
+-			    extfat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
++			  extfat_cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3434,8 +3440,9 @@ int be_cmd_get_fw_log_level(struct be_adapter *adapter)
+ 
+ 	memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
+ 	extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
+-	extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
+-					     &extfat_cmd.dma);
++	extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    extfat_cmd.size, &extfat_cmd.dma,
++					    GFP_ATOMIC);
+ 
+ 	if (!extfat_cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "%s: Memory allocation failure\n",
+@@ -3453,8 +3460,8 @@ int be_cmd_get_fw_log_level(struct be_adapter *adapter)
+ 				level = cfgs->module[0].trace_lvl[j].dbg_lvl;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
+-			    extfat_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
++			  extfat_cmd.dma);
+ err:
+ 	return level;
+ }
+@@ -3652,7 +3659,8 @@ int be_cmd_get_func_config(struct be_adapter *adapter, struct be_resources *res)
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_get_func_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va) {
+ 		dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
+ 		status = -ENOMEM;
+@@ -3692,7 +3700,8 @@ int be_cmd_get_func_config(struct be_adapter *adapter, struct be_resources *res)
+ err:
+ 	mutex_unlock(&adapter->mbox_lock);
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3713,7 +3722,8 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_resp_get_profile_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3752,7 +3762,8 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 		res->vf_if_cap_flags = vf_res->cap_flags;
+ err:
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+@@ -3767,7 +3778,8 @@ static int be_cmd_set_profile_config(struct be_adapter *adapter, void *desc,
+ 
+ 	memset(&cmd, 0, sizeof(struct be_dma_mem));
+ 	cmd.size = sizeof(struct be_cmd_req_set_profile_config);
+-	cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
++	cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
++				     GFP_ATOMIC);
+ 	if (!cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -3783,7 +3795,8 @@ static int be_cmd_set_profile_config(struct be_adapter *adapter, void *desc,
+ 	status = be_cmd_notify_wait(adapter, &wrb);
+ 
+ 	if (cmd.va)
+-		pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
++		dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
++				  cmd.dma);
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 4d2de4700769..22ffcd81a6b5 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -264,8 +264,8 @@ static int lancer_cmd_read_file(struct be_adapter *adapter, u8 *file_name,
+ 	int status = 0;
+ 
+ 	read_cmd.size = LANCER_READ_FILE_CHUNK;
+-	read_cmd.va = pci_alloc_consistent(adapter->pdev, read_cmd.size,
+-					   &read_cmd.dma);
++	read_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, read_cmd.size,
++					  &read_cmd.dma, GFP_ATOMIC);
+ 
+ 	if (!read_cmd.va) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -289,8 +289,8 @@ static int lancer_cmd_read_file(struct be_adapter *adapter, u8 *file_name,
+ 			break;
+ 		}
+ 	}
+-	pci_free_consistent(adapter->pdev, read_cmd.size, read_cmd.va,
+-			    read_cmd.dma);
++	dma_free_coherent(&adapter->pdev->dev, read_cmd.size, read_cmd.va,
++			  read_cmd.dma);
+ 
+ 	return status;
+ }
+@@ -818,8 +818,9 @@ static int be_test_ddr_dma(struct be_adapter *adapter)
+ 	};
+ 
+ 	ddrdma_cmd.size = sizeof(struct be_cmd_req_ddrdma_test);
+-	ddrdma_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, ddrdma_cmd.size,
+-					   &ddrdma_cmd.dma, GFP_KERNEL);
++	ddrdma_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    ddrdma_cmd.size, &ddrdma_cmd.dma,
++					    GFP_KERNEL);
+ 	if (!ddrdma_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -941,8 +942,9 @@ static int be_read_eeprom(struct net_device *netdev,
+ 
+ 	memset(&eeprom_cmd, 0, sizeof(struct be_dma_mem));
+ 	eeprom_cmd.size = sizeof(struct be_cmd_req_seeprom_read);
+-	eeprom_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, eeprom_cmd.size,
+-					   &eeprom_cmd.dma, GFP_KERNEL);
++	eeprom_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
++					    eeprom_cmd.size, &eeprom_cmd.dma,
++					    GFP_KERNEL);
+ 
+ 	if (!eeprom_cmd.va)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index e6b790f0d9dc..893753f18098 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -4392,8 +4392,8 @@ static int lancer_fw_download(struct be_adapter *adapter,
+ 
+ 	flash_cmd.size = sizeof(struct lancer_cmd_req_write_object)
+ 				+ LANCER_FW_DOWNLOAD_CHUNK;
+-	flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size,
+-					  &flash_cmd.dma, GFP_KERNEL);
++	flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size,
++					   &flash_cmd.dma, GFP_KERNEL);
+ 	if (!flash_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -4526,8 +4526,8 @@ static int be_fw_download(struct be_adapter *adapter, const struct firmware* fw)
+ 	}
+ 
+ 	flash_cmd.size = sizeof(struct be_cmd_write_flashrom);
+-	flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
+-					  GFP_KERNEL);
++	flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
++					   GFP_KERNEL);
+ 	if (!flash_cmd.va)
+ 		return -ENOMEM;
+ 
+@@ -4941,10 +4941,10 @@ static int be_ctrl_init(struct be_adapter *adapter)
+ 		goto done;
+ 
+ 	mbox_mem_alloc->size = sizeof(struct be_mcc_mailbox) + 16;
+-	mbox_mem_alloc->va = dma_alloc_coherent(&adapter->pdev->dev,
+-						mbox_mem_alloc->size,
+-						&mbox_mem_alloc->dma,
+-						GFP_KERNEL);
++	mbox_mem_alloc->va = dma_zalloc_coherent(&adapter->pdev->dev,
++						 mbox_mem_alloc->size,
++						 &mbox_mem_alloc->dma,
++						 GFP_KERNEL);
+ 	if (!mbox_mem_alloc->va) {
+ 		status = -ENOMEM;
+ 		goto unmap_pci_bars;
+diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
+index e22e602beef3..c5789cdf7778 100644
+--- a/drivers/net/phy/dp83640.c
++++ b/drivers/net/phy/dp83640.c
+@@ -47,7 +47,7 @@
+ #define PSF_TX		0x1000
+ #define EXT_EVENT	1
+ #define CAL_EVENT	7
+-#define CAL_TRIGGER	7
++#define CAL_TRIGGER	1
+ #define DP83640_N_PINS	12
+ 
+ #define MII_DP83640_MICR 0x11
+@@ -495,7 +495,9 @@ static int ptp_dp83640_enable(struct ptp_clock_info *ptp,
+ 			else
+ 				evnt |= EVNT_RISE;
+ 		}
++		mutex_lock(&clock->extreg_lock);
+ 		ext_write(0, phydev, PAGE5, PTP_EVNT, evnt);
++		mutex_unlock(&clock->extreg_lock);
+ 		return 0;
+ 
+ 	case PTP_CLK_REQ_PEROUT:
+@@ -531,6 +533,8 @@ static u8 status_frame_src[6] = { 0x08, 0x00, 0x17, 0x0B, 0x6B, 0x0F };
+ 
+ static void enable_status_frames(struct phy_device *phydev, bool on)
+ {
++	struct dp83640_private *dp83640 = phydev->priv;
++	struct dp83640_clock *clock = dp83640->clock;
+ 	u16 cfg0 = 0, ver;
+ 
+ 	if (on)
+@@ -538,9 +542,13 @@ static void enable_status_frames(struct phy_device *phydev, bool on)
+ 
+ 	ver = (PSF_PTPVER & VERSIONPTP_MASK) << VERSIONPTP_SHIFT;
+ 
++	mutex_lock(&clock->extreg_lock);
++
+ 	ext_write(0, phydev, PAGE5, PSF_CFG0, cfg0);
+ 	ext_write(0, phydev, PAGE6, PSF_CFG1, ver);
+ 
++	mutex_unlock(&clock->extreg_lock);
++
+ 	if (!phydev->attached_dev) {
+ 		pr_warn("expected to find an attached netdevice\n");
+ 		return;
+@@ -837,7 +845,7 @@ static void decode_rxts(struct dp83640_private *dp83640,
+ 	list_del_init(&rxts->list);
+ 	phy2rxts(phy_rxts, rxts);
+ 
+-	spin_lock_irqsave(&dp83640->rx_queue.lock, flags);
++	spin_lock(&dp83640->rx_queue.lock);
+ 	skb_queue_walk(&dp83640->rx_queue, skb) {
+ 		struct dp83640_skb_info *skb_info;
+ 
+@@ -852,7 +860,7 @@ static void decode_rxts(struct dp83640_private *dp83640,
+ 			break;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&dp83640->rx_queue.lock, flags);
++	spin_unlock(&dp83640->rx_queue.lock);
+ 
+ 	if (!shhwtstamps)
+ 		list_add_tail(&rxts->list, &dp83640->rxts);
+@@ -1172,11 +1180,18 @@ static int dp83640_config_init(struct phy_device *phydev)
+ 
+ 	if (clock->chosen && !list_empty(&clock->phylist))
+ 		recalibrate(clock);
+-	else
++	else {
++		mutex_lock(&clock->extreg_lock);
+ 		enable_broadcast(phydev, clock->page, 1);
++		mutex_unlock(&clock->extreg_lock);
++	}
+ 
+ 	enable_status_frames(phydev, true);
++
++	mutex_lock(&clock->extreg_lock);
+ 	ext_write(0, phydev, PAGE4, PTP_CTL, PTP_ENABLE);
++	mutex_unlock(&clock->extreg_lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 52cd8db2c57d..757f28a4284c 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1053,13 +1053,14 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable)
+ {
+ 	/* According to 802.3az,the EEE is supported only in full duplex-mode.
+ 	 * Also EEE feature is active when core is operating with MII, GMII
+-	 * or RGMII. Internal PHYs are also allowed to proceed and should
+-	 * return an error if they do not support EEE.
++	 * or RGMII (all kinds). Internal PHYs are also allowed to proceed and
++	 * should return an error if they do not support EEE.
+ 	 */
+ 	if ((phydev->duplex == DUPLEX_FULL) &&
+ 	    ((phydev->interface == PHY_INTERFACE_MODE_MII) ||
+ 	    (phydev->interface == PHY_INTERFACE_MODE_GMII) ||
+-	    (phydev->interface == PHY_INTERFACE_MODE_RGMII) ||
++	    (phydev->interface >= PHY_INTERFACE_MODE_RGMII &&
++	     phydev->interface <= PHY_INTERFACE_MODE_RGMII_TXID) ||
+ 	     phy_is_internal(phydev))) {
+ 		int eee_lp, eee_cap, eee_adv;
+ 		u32 lp, cap, adv;
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index c3e4da9e79ca..8067b8fbb0ee 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1182,7 +1182,7 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
+ 	 * payload data instead.
+ 	 */
+ 	usbnet_set_skb_tx_stats(skb_out, n,
+-				ctx->tx_curr_frame_payload - skb_out->len);
++				(long)ctx->tx_curr_frame_payload - skb_out->len);
+ 
+ 	return skb_out;
+ 
+diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
+index 794204e34fba..152131a10047 100644
+--- a/drivers/net/xen-netback/xenbus.c
++++ b/drivers/net/xen-netback/xenbus.c
+@@ -34,6 +34,8 @@ struct backend_info {
+ 	enum xenbus_state frontend_state;
+ 	struct xenbus_watch hotplug_status_watch;
+ 	u8 have_hotplug_status_watch:1;
++
++	const char *hotplug_script;
+ };
+ 
+ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+@@ -236,6 +238,7 @@ static int netback_remove(struct xenbus_device *dev)
+ 		xenvif_free(be->vif);
+ 		be->vif = NULL;
+ 	}
++	kfree(be->hotplug_script);
+ 	kfree(be);
+ 	dev_set_drvdata(&dev->dev, NULL);
+ 	return 0;
+@@ -253,6 +256,7 @@ static int netback_probe(struct xenbus_device *dev,
+ 	struct xenbus_transaction xbt;
+ 	int err;
+ 	int sg;
++	const char *script;
+ 	struct backend_info *be = kzalloc(sizeof(struct backend_info),
+ 					  GFP_KERNEL);
+ 	if (!be) {
+@@ -345,6 +349,15 @@ static int netback_probe(struct xenbus_device *dev,
+ 	if (err)
+ 		pr_debug("Error writing multi-queue-max-queues\n");
+ 
++	script = xenbus_read(XBT_NIL, dev->nodename, "script", NULL);
++	if (IS_ERR(script)) {
++		err = PTR_ERR(script);
++		xenbus_dev_fatal(dev, err, "reading script");
++		goto fail;
++	}
++
++	be->hotplug_script = script;
++
+ 	err = xenbus_switch_state(dev, XenbusStateInitWait);
+ 	if (err)
+ 		goto fail;
+@@ -377,22 +390,14 @@ static int netback_uevent(struct xenbus_device *xdev,
+ 			  struct kobj_uevent_env *env)
+ {
+ 	struct backend_info *be = dev_get_drvdata(&xdev->dev);
+-	char *val;
+ 
+-	val = xenbus_read(XBT_NIL, xdev->nodename, "script", NULL);
+-	if (IS_ERR(val)) {
+-		int err = PTR_ERR(val);
+-		xenbus_dev_fatal(xdev, err, "reading script");
+-		return err;
+-	} else {
+-		if (add_uevent_var(env, "script=%s", val)) {
+-			kfree(val);
+-			return -ENOMEM;
+-		}
+-		kfree(val);
+-	}
++	if (!be)
++		return 0;
++
++	if (add_uevent_var(env, "script=%s", be->hotplug_script))
++		return -ENOMEM;
+ 
+-	if (!be || !be->vif)
++	if (!be->vif)
+ 		return 0;
+ 
+ 	return add_uevent_var(env, "vif=%s", be->vif->dev->name);
+@@ -736,6 +741,7 @@ static void connect(struct backend_info *be)
+ 			goto err;
+ 		}
+ 
++		queue->credit_bytes = credit_bytes;
+ 		queue->remaining_credit = credit_bytes;
+ 		queue->credit_usec = credit_usec;
+ 
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index 3351ef408125..53826b84e0ec 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -225,7 +225,7 @@ void __of_attach_node(struct device_node *np)
+ 	phandle = __of_get_property(np, "phandle", &sz);
+ 	if (!phandle)
+ 		phandle = __of_get_property(np, "linux,phandle", &sz);
+-	if (IS_ENABLED(PPC_PSERIES) && !phandle)
++	if (IS_ENABLED(CONFIG_PPC_PSERIES) && !phandle)
+ 		phandle = __of_get_property(np, "ibm,phandle", &sz);
+ 	np->phandle = (phandle && (sz >= 4)) ? be32_to_cpup(phandle) : 0;
+ 
+diff --git a/drivers/staging/ozwpan/ozhcd.c b/drivers/staging/ozwpan/ozhcd.c
+index 8543bb29a138..9737a979b8db 100644
+--- a/drivers/staging/ozwpan/ozhcd.c
++++ b/drivers/staging/ozwpan/ozhcd.c
+@@ -743,8 +743,8 @@ void oz_hcd_pd_reset(void *hpd, void *hport)
+ /*
+  * Context: softirq
+  */
+-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
+-			int length, int offset, int total_size)
++void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status, const u8 *desc,
++			u8 length, u16 offset, u16 total_size)
+ {
+ 	struct oz_port *port = hport;
+ 	struct urb *urb;
+@@ -756,8 +756,8 @@ void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
+ 	if (!urb)
+ 		return;
+ 	if (status == 0) {
+-		int copy_len;
+-		int required_size = urb->transfer_buffer_length;
++		unsigned int copy_len;
++		unsigned int required_size = urb->transfer_buffer_length;
+ 
+ 		if (required_size > total_size)
+ 			required_size = total_size;
+diff --git a/drivers/staging/ozwpan/ozusbif.h b/drivers/staging/ozwpan/ozusbif.h
+index 4249fa374012..d2a6085345be 100644
+--- a/drivers/staging/ozwpan/ozusbif.h
++++ b/drivers/staging/ozwpan/ozusbif.h
+@@ -29,8 +29,8 @@ void oz_usb_request_heartbeat(void *hpd);
+ 
+ /* Confirmation functions.
+  */
+-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status,
+-	const u8 *desc, int length, int offset, int total_size);
++void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status,
++	const u8 *desc, u8 length, u16 offset, u16 total_size);
+ void oz_hcd_control_cnf(void *hport, u8 req_id, u8 rcode,
+ 	const u8 *data, int data_len);
+ 
+diff --git a/drivers/staging/ozwpan/ozusbsvc1.c b/drivers/staging/ozwpan/ozusbsvc1.c
+index d434d8c6fff6..f660bb198c65 100644
+--- a/drivers/staging/ozwpan/ozusbsvc1.c
++++ b/drivers/staging/ozwpan/ozusbsvc1.c
+@@ -326,7 +326,11 @@ static void oz_usb_handle_ep_data(struct oz_usb_ctx *usb_ctx,
+ 			struct oz_multiple_fixed *body =
+ 				(struct oz_multiple_fixed *)data_hdr;
+ 			u8 *data = body->data;
+-			int n = (len - sizeof(struct oz_multiple_fixed)+1)
++			unsigned int n;
++			if (!body->unit_size ||
++				len < sizeof(struct oz_multiple_fixed) - 1)
++				break;
++			n = (len - (sizeof(struct oz_multiple_fixed) - 1))
+ 				/ body->unit_size;
+ 			while (n--) {
+ 				oz_hcd_data_ind(usb_ctx->hport, body->endpoint,
+@@ -390,10 +394,15 @@ void oz_usb_rx(struct oz_pd *pd, struct oz_elt *elt)
+ 	case OZ_GET_DESC_RSP: {
+ 			struct oz_get_desc_rsp *body =
+ 				(struct oz_get_desc_rsp *)usb_hdr;
+-			int data_len = elt->length -
+-					sizeof(struct oz_get_desc_rsp) + 1;
+-			u16 offs = le16_to_cpu(get_unaligned(&body->offset));
+-			u16 total_size =
++			u16 offs, total_size;
++			u8 data_len;
++
++			if (elt->length < sizeof(struct oz_get_desc_rsp) - 1)
++				break;
++			data_len = elt->length -
++					(sizeof(struct oz_get_desc_rsp) - 1);
++			offs = le16_to_cpu(get_unaligned(&body->offset));
++			total_size =
+ 				le16_to_cpu(get_unaligned(&body->total_size));
+ 			oz_dbg(ON, "USB_REQ_GET_DESCRIPTOR - cnf\n");
+ 			oz_hcd_get_desc_cnf(usb_ctx->hport, body->req_id,
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index cc57a3a6b02b..eee40b5cb025 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -162,6 +162,17 @@ static inline int tty_put_user(struct tty_struct *tty, unsigned char x,
+ 	return put_user(x, ptr);
+ }
+ 
++static inline int tty_copy_to_user(struct tty_struct *tty,
++					void __user *to,
++					const void *from,
++					unsigned long n)
++{
++	struct n_tty_data *ldata = tty->disc_data;
++
++	tty_audit_add_data(tty, to, n, ldata->icanon);
++	return copy_to_user(to, from, n);
++}
++
+ /**
+  *	n_tty_kick_worker - start input worker (if required)
+  *	@tty: terminal
+@@ -2084,12 +2095,12 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ 		    __func__, eol, found, n, c, size, more);
+ 
+ 	if (n > size) {
+-		ret = copy_to_user(*b, read_buf_addr(ldata, tail), size);
++		ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), size);
+ 		if (ret)
+ 			return -EFAULT;
+-		ret = copy_to_user(*b + size, ldata->read_buf, n - size);
++		ret = tty_copy_to_user(tty, *b + size, ldata->read_buf, n - size);
+ 	} else
+-		ret = copy_to_user(*b, read_buf_addr(ldata, tail), n);
++		ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), n);
+ 
+ 	if (ret)
+ 		return -EFAULT;
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 23061918b0e4..f74f400fcb57 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -959,6 +959,14 @@ static void dma_rx_callback(void *data)
+ 
+ 	status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
+ 	count = RX_BUF_SIZE - state.residue;
++
++	if (readl(sport->port.membase + USR2) & USR2_IDLE) {
++		/* In condition [3] the SDMA counted up too early */
++		count--;
++
++		writel(USR2_IDLE, sport->port.membase + USR2);
++	}
++
+ 	dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
+ 
+ 	if (count) {
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index d201910b892f..f176941a92dd 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -339,7 +339,7 @@
+ #define DWC3_DGCMD_SET_ENDPOINT_NRDY	0x0c
+ #define DWC3_DGCMD_RUN_SOC_BUS_LOOPBACK	0x10
+ 
+-#define DWC3_DGCMD_STATUS(n)		(((n) >> 15) & 1)
++#define DWC3_DGCMD_STATUS(n)		(((n) >> 12) & 0x0F)
+ #define DWC3_DGCMD_CMDACT		(1 << 10)
+ #define DWC3_DGCMD_CMDIOC		(1 << 8)
+ 
+@@ -355,7 +355,7 @@
+ #define DWC3_DEPCMD_PARAM_SHIFT		16
+ #define DWC3_DEPCMD_PARAM(x)		((x) << DWC3_DEPCMD_PARAM_SHIFT)
+ #define DWC3_DEPCMD_GET_RSC_IDX(x)	(((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
+-#define DWC3_DEPCMD_STATUS(x)		(((x) >> 15) & 1)
++#define DWC3_DEPCMD_STATUS(x)		(((x) >> 12) & 0x0F)
+ #define DWC3_DEPCMD_HIPRI_FORCERM	(1 << 11)
+ #define DWC3_DEPCMD_CMDACT		(1 << 10)
+ #define DWC3_DEPCMD_CMDIOC		(1 << 8)
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index ec8ac1674854..36bf089b708f 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3682,18 +3682,21 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 	unsigned long flags;
+-	int ret;
++	int ret, slot_id;
+ 	struct xhci_command *command;
+ 
+ 	command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
+ 	if (!command)
+ 		return 0;
+ 
++	/* xhci->slot_id and xhci->addr_dev are not thread-safe */
++	mutex_lock(&xhci->mutex);
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 	command->completion = &xhci->addr_dev;
+ 	ret = xhci_queue_slot_control(xhci, command, TRB_ENABLE_SLOT, 0);
+ 	if (ret) {
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
++		mutex_unlock(&xhci->mutex);
+ 		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+ 		kfree(command);
+ 		return 0;
+@@ -3702,8 +3705,10 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+ 
+ 	wait_for_completion(command->completion);
++	slot_id = xhci->slot_id;
++	mutex_unlock(&xhci->mutex);
+ 
+-	if (!xhci->slot_id || command->status != COMP_SUCCESS) {
++	if (!slot_id || command->status != COMP_SUCCESS) {
+ 		xhci_err(xhci, "Error while assigning device slot ID\n");
+ 		xhci_err(xhci, "Max number of devices this xHCI host supports is %u.\n",
+ 				HCS_MAX_SLOTS(
+@@ -3728,11 +3733,11 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	 * xhci_discover_or_reset_device(), which may be called as part of
+ 	 * mass storage driver error handling.
+ 	 */
+-	if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) {
++	if (!xhci_alloc_virt_device(xhci, slot_id, udev, GFP_NOIO)) {
+ 		xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
+ 		goto disable_slot;
+ 	}
+-	udev->slot_id = xhci->slot_id;
++	udev->slot_id = slot_id;
+ 
+ #ifndef CONFIG_USB_DEFAULT_PERSIST
+ 	/*
+@@ -3778,12 +3783,15 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	struct xhci_slot_ctx *slot_ctx;
+ 	struct xhci_input_control_ctx *ctrl_ctx;
+ 	u64 temp_64;
+-	struct xhci_command *command;
++	struct xhci_command *command = NULL;
++
++	mutex_lock(&xhci->mutex);
+ 
+ 	if (!udev->slot_id) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 				"Bad Slot ID %d", udev->slot_id);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	virt_dev = xhci->devs[udev->slot_id];
+@@ -3796,7 +3804,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		 */
+ 		xhci_warn(xhci, "Virt dev invalid for slot_id 0x%x!\n",
+ 			udev->slot_id);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	if (setup == SETUP_CONTEXT_ONLY) {
+@@ -3804,13 +3813,15 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) ==
+ 		    SLOT_STATE_DEFAULT) {
+ 			xhci_dbg(xhci, "Slot already in default state\n");
+-			return 0;
++			goto out;
+ 		}
+ 	}
+ 
+ 	command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
+-	if (!command)
+-		return -ENOMEM;
++	if (!command) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 
+ 	command->in_ctx = virt_dev->in_ctx;
+ 	command->completion = &xhci->addr_dev;
+@@ -3820,8 +3831,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	if (!ctrl_ctx) {
+ 		xhci_warn(xhci, "%s: Could not get input context, bad type.\n",
+ 				__func__);
+-		kfree(command);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	/*
+ 	 * If this is the first Set Address since device plug-in or
+@@ -3848,8 +3859,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		spin_unlock_irqrestore(&xhci->lock, flags);
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 				"FIXME: allocate a command ring segment");
+-		kfree(command);
+-		return ret;
++		goto out;
+ 	}
+ 	xhci_ring_cmd_db(xhci);
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -3896,10 +3906,8 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	if (ret) {
+-		kfree(command);
+-		return ret;
+-	}
++	if (ret)
++		goto out;
+ 	temp_64 = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 			"Op regs DCBAA ptr = %#016llx", temp_64);
+@@ -3932,8 +3940,10 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+ 		       "Internal device address = %d",
+ 		       le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK);
++out:
++	mutex_unlock(&xhci->mutex);
+ 	kfree(command);
+-	return 0;
++	return ret;
+ }
+ 
+ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
+@@ -4855,6 +4865,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+ 		return 0;
+ 	}
+ 
++	mutex_init(&xhci->mutex);
+ 	xhci->cap_regs = hcd->regs;
+ 	xhci->op_regs = hcd->regs +
+ 		HC_LENGTH(readl(&xhci->cap_regs->hc_capbase));
+@@ -5011,4 +5022,12 @@ static int __init xhci_hcd_init(void)
+ 	BUILD_BUG_ON(sizeof(struct xhci_run_regs) != (8+8*128)*32/8);
+ 	return 0;
+ }
++
++/*
++ * If an init function is provided, an exit function must also be provided
++ * to allow module unload.
++ */
++static void __exit xhci_hcd_fini(void) { }
++
+ module_init(xhci_hcd_init);
++module_exit(xhci_hcd_fini);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ea75e8ccd3c1..6977f8491fa7 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1497,6 +1497,8 @@ struct xhci_hcd {
+ 	struct list_head	lpm_failed_devs;
+ 
+ 	/* slot enabling and address device helpers */
++	/* these are not thread safe so use mutex */
++	struct mutex mutex;
+ 	struct completion	addr_dev;
+ 	int slot_id;
+ 	/* For USB 3.0 LPM enable/disable. */
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 9031750e7404..ffd739e31bfc 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -128,6 +128,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+ 	{ USB_DEVICE(0x10C4, 0x8977) },	/* CEL MeshWorks DevKit Device */
+ 	{ USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
++	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
+ 	{ USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 8eb68a31cab6..4c8b3b82103d 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -699,6 +699,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) },
++	{ USB_DEVICE(XSENS_VID, XSENS_MTDEVBOARD_PID) },
+ 	{ USB_DEVICE(XSENS_VID, XSENS_MTW_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
+ 	{ USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 4e4f46f3c89c..792e054126de 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -155,6 +155,7 @@
+ #define XSENS_AWINDA_STATION_PID 0x0101
+ #define XSENS_AWINDA_DONGLE_PID 0x0102
+ #define XSENS_MTW_PID		0x0200	/* Xsens MTw */
++#define XSENS_MTDEVBOARD_PID	0x0300	/* Motion Tracker Development Board */
+ #define XSENS_CONVERTER_PID	0xD00D	/* Xsens USB-serial converter */
+ 
+ /* Xsens devices using FTDI VID */
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index e894eb278d83..eba1b7ac7294 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -423,6 +423,7 @@ int vp_set_vq_affinity(struct virtqueue *vq, int cpu)
+ 		if (cpu == -1)
+ 			irq_set_affinity_hint(irq, NULL);
+ 		else {
++			cpumask_clear(mask);
+ 			cpumask_set_cpu(cpu, mask);
+ 			irq_set_affinity_hint(irq, mask);
+ 		}
+diff --git a/fs/aio.c b/fs/aio.c
+index a793f7023755..a1736e98c278 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -77,6 +77,11 @@ struct kioctx_cpu {
+ 	unsigned		reqs_available;
+ };
+ 
++struct ctx_rq_wait {
++	struct completion comp;
++	atomic_t count;
++};
++
+ struct kioctx {
+ 	struct percpu_ref	users;
+ 	atomic_t		dead;
+@@ -115,7 +120,7 @@ struct kioctx {
+ 	/*
+ 	 * signals when all in-flight requests are done
+ 	 */
+-	struct completion *requests_done;
++	struct ctx_rq_wait	*rq_wait;
+ 
+ 	struct {
+ 		/*
+@@ -539,8 +544,8 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
+ 	struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
+ 
+ 	/* At this point we know that there are no any in-flight requests */
+-	if (ctx->requests_done)
+-		complete(ctx->requests_done);
++	if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
++		complete(&ctx->rq_wait->comp);
+ 
+ 	INIT_WORK(&ctx->free_work, free_ioctx);
+ 	schedule_work(&ctx->free_work);
+@@ -751,7 +756,7 @@ err:
+  *	the rapid destruction of the kioctx.
+  */
+ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+-		struct completion *requests_done)
++		      struct ctx_rq_wait *wait)
+ {
+ 	struct kioctx_table *table;
+ 
+@@ -781,7 +786,7 @@ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+ 	if (ctx->mmap_size)
+ 		vm_munmap(ctx->mmap_base, ctx->mmap_size);
+ 
+-	ctx->requests_done = requests_done;
++	ctx->rq_wait = wait;
+ 	percpu_ref_kill(&ctx->users);
+ 	return 0;
+ }
+@@ -813,18 +818,24 @@ EXPORT_SYMBOL(wait_on_sync_kiocb);
+ void exit_aio(struct mm_struct *mm)
+ {
+ 	struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
+-	int i;
++	struct ctx_rq_wait wait;
++	int i, skipped;
+ 
+ 	if (!table)
+ 		return;
+ 
++	atomic_set(&wait.count, table->nr);
++	init_completion(&wait.comp);
++
++	skipped = 0;
+ 	for (i = 0; i < table->nr; ++i) {
+ 		struct kioctx *ctx = table->table[i];
+-		struct completion requests_done =
+-			COMPLETION_INITIALIZER_ONSTACK(requests_done);
+ 
+-		if (!ctx)
++		if (!ctx) {
++			skipped++;
+ 			continue;
++		}
++
+ 		/*
+ 		 * We don't need to bother with munmap() here - exit_mmap(mm)
+ 		 * is coming and it'll unmap everything. And we simply can't,
+@@ -833,10 +844,12 @@ void exit_aio(struct mm_struct *mm)
+ 		 * that it needs to unmap the area, just set it to 0.
+ 		 */
+ 		ctx->mmap_size = 0;
+-		kill_ioctx(mm, ctx, &requests_done);
++		kill_ioctx(mm, ctx, &wait);
++	}
+ 
++	if (!atomic_sub_and_test(skipped, &wait.count)) {
+ 		/* Wait until all IO for the context are done. */
+-		wait_for_completion(&requests_done);
++		wait_for_completion(&wait.comp);
+ 	}
+ 
+ 	RCU_INIT_POINTER(mm->ioctx_table, NULL);
+@@ -1321,15 +1334,17 @@ SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
+ {
+ 	struct kioctx *ioctx = lookup_ioctx(ctx);
+ 	if (likely(NULL != ioctx)) {
+-		struct completion requests_done =
+-			COMPLETION_INITIALIZER_ONSTACK(requests_done);
++		struct ctx_rq_wait wait;
+ 		int ret;
+ 
++		init_completion(&wait.comp);
++		atomic_set(&wait.count, 1);
++
+ 		/* Pass requests_done to kill_ioctx() where it can be set
+ 		 * in a thread-safe way. If we try to set it here then we have
+ 		 * a race condition if two io_destroy() called simultaneously.
+ 		 */
+-		ret = kill_ioctx(current->mm, ioctx, &requests_done);
++		ret = kill_ioctx(current->mm, ioctx, &wait);
+ 		percpu_ref_put(&ioctx->users);
+ 
+ 		/* Wait until all IO for the context are done. Otherwise kernel
+@@ -1337,7 +1352,7 @@ SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
+ 		 * is destroyed.
+ 		 */
+ 		if (!ret)
+-			wait_for_completion(&requests_done);
++			wait_for_completion(&wait.comp);
+ 
+ 		return ret;
+ 	}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8b33da6ec3dd..63be2a96ed6a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -8535,6 +8535,24 @@ int btrfs_set_block_group_ro(struct btrfs_root *root,
+ 	trans = btrfs_join_transaction(root);
+ 	if (IS_ERR(trans))
+ 		return PTR_ERR(trans);
++	/*
++	 * if we are changing raid levels, try to allocate a corresponding
++	 * block group with the new raid level.
++	 */
++	alloc_flags = update_block_group_flags(root, cache->flags);
++	if (alloc_flags != cache->flags) {
++		ret = do_chunk_alloc(trans, root, alloc_flags,
++				     CHUNK_ALLOC_FORCE);
++		/*
++		 * ENOSPC is allowed here, we may have enough space
++		 * already allocated at the new raid level to
++		 * carry on
++		 */
++		if (ret == -ENOSPC)
++			ret = 0;
++		if (ret < 0)
++			goto out;
++	}
+ 
+ 	ret = set_block_group_ro(cache, 0);
+ 	if (!ret)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index d688cfe5d496..782f3bc4651d 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4514,8 +4514,11 @@ int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		}
+ 		ret = fiemap_fill_next_extent(fieinfo, em_start, disko,
+ 					      em_len, flags);
+-		if (ret)
++		if (ret) {
++			if (ret == 1)
++				ret = 0;
+ 			goto out_free;
++		}
+ 	}
+ out_free:
+ 	free_extent_map(em);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 2b4c5423672d..64e8fb639f72 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3206,6 +3206,8 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
+ 	key.offset = off;
+ 
+ 	while (1) {
++		u64 next_key_min_offset = key.offset + 1;
++
+ 		/*
+ 		 * note the key will change type as we walk through the
+ 		 * tree.
+@@ -3286,7 +3288,7 @@ process_slot:
+ 			} else if (key.offset >= off + len) {
+ 				break;
+ 			}
+-
++			next_key_min_offset = key.offset + datal;
+ 			size = btrfs_item_size_nr(leaf, slot);
+ 			read_extent_buffer(leaf, buf,
+ 					   btrfs_item_ptr_offset(leaf, slot),
+@@ -3501,7 +3503,7 @@ process_slot:
+ 				break;
+ 		}
+ 		btrfs_release_path(path);
+-		key.offset++;
++		key.offset = next_key_min_offset;
+ 	}
+ 	ret = 0;
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index d6033f540cc7..571de5a08fe7 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5852,19 +5852,20 @@ long btrfs_ioctl_send(struct file *mnt_file, void __user *arg_)
+ 				ret = PTR_ERR(clone_root);
+ 				goto out;
+ 			}
+-			clone_sources_to_rollback = i + 1;
+ 			spin_lock(&clone_root->root_item_lock);
+-			clone_root->send_in_progress++;
+-			if (!btrfs_root_readonly(clone_root)) {
++			if (!btrfs_root_readonly(clone_root) ||
++			    btrfs_root_dead(clone_root)) {
+ 				spin_unlock(&clone_root->root_item_lock);
+ 				srcu_read_unlock(&fs_info->subvol_srcu, index);
+ 				ret = -EPERM;
+ 				goto out;
+ 			}
++			clone_root->send_in_progress++;
+ 			spin_unlock(&clone_root->root_item_lock);
+ 			srcu_read_unlock(&fs_info->subvol_srcu, index);
+ 
+ 			sctx->clone_roots[i].root = clone_root;
++			clone_sources_to_rollback = i + 1;
+ 		}
+ 		vfree(clone_sources_tmp);
+ 		clone_sources_tmp = NULL;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 05fef198ff94..e477ed67a49a 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -901,6 +901,15 @@ find_root:
+ 	if (IS_ERR(new_root))
+ 		return ERR_CAST(new_root);
+ 
++	if (!(sb->s_flags & MS_RDONLY)) {
++		int ret;
++		down_read(&fs_info->cleanup_work_sem);
++		ret = btrfs_orphan_cleanup(new_root);
++		up_read(&fs_info->cleanup_work_sem);
++		if (ret)
++			return ERR_PTR(ret);
++	}
++
+ 	dir_id = btrfs_root_dirid(&new_root->root_item);
+ setup_root:
+ 	location.objectid = dir_id;
+diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
+index aff923ae8c4b..d87d8eced064 100644
+--- a/include/linux/backing-dev.h
++++ b/include/linux/backing-dev.h
+@@ -116,7 +116,6 @@ __printf(3, 4)
+ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
+ 		const char *fmt, ...);
+ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
+-void bdi_unregister(struct backing_dev_info *bdi);
+ int __must_check bdi_setup_and_register(struct backing_dev_info *, char *);
+ void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages,
+ 			enum wb_reason reason);
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 5976bdecf58b..9fe865ccc3f3 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -98,7 +98,8 @@ struct inet_connection_sock {
+ 	const struct tcp_congestion_ops *icsk_ca_ops;
+ 	const struct inet_connection_sock_af_ops *icsk_af_ops;
+ 	unsigned int		  (*icsk_sync_mss)(struct sock *sk, u32 pmtu);
+-	__u8			  icsk_ca_state:7,
++	__u8			  icsk_ca_state:6,
++				  icsk_ca_setsockopt:1,
+ 				  icsk_ca_dst_locked:1;
+ 	__u8			  icsk_retransmits;
+ 	__u8			  icsk_pending;
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 856f01cb51dd..230775f5952a 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -571,11 +571,14 @@ static inline void sctp_v6_map_v4(union sctp_addr *addr)
+ /* Map v4 address to v4-mapped v6 address */
+ static inline void sctp_v4_map_v6(union sctp_addr *addr)
+ {
++	__be16 port;
++
++	port = addr->v4.sin_port;
++	addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
++	addr->v6.sin6_port = port;
+ 	addr->v6.sin6_family = AF_INET6;
+ 	addr->v6.sin6_flowinfo = 0;
+ 	addr->v6.sin6_scope_id = 0;
+-	addr->v6.sin6_port = addr->v4.sin_port;
+-	addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
+ 	addr->v6.sin6_addr.s6_addr32[0] = 0;
+ 	addr->v6.sin6_addr.s6_addr32[1] = 0;
+ 	addr->v6.sin6_addr.s6_addr32[2] = htonl(0x0000ffff);
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index 5a14ead59696..885d3a380451 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -233,7 +233,6 @@ DEFINE_EVENT(writeback_class, name, \
+ DEFINE_WRITEBACK_EVENT(writeback_nowork);
+ DEFINE_WRITEBACK_EVENT(writeback_wake_background);
+ DEFINE_WRITEBACK_EVENT(writeback_bdi_register);
+-DEFINE_WRITEBACK_EVENT(writeback_bdi_unregister);
+ 
+ DECLARE_EVENT_CLASS(wbc_class,
+ 	TP_PROTO(struct writeback_control *wbc, struct backing_dev_info *bdi),
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 241213be507c..486d00c408b0 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -2166,7 +2166,7 @@ void task_numa_work(struct callback_head *work)
+ 	}
+ 	for (; vma; vma = vma->vm_next) {
+ 		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+-			is_vm_hugetlb_page(vma)) {
++			is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
+ 			continue;
+ 		}
+ 
+diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c
+index 13d945c0d03f..1b28df2d9104 100644
+--- a/kernel/trace/ring_buffer_benchmark.c
++++ b/kernel/trace/ring_buffer_benchmark.c
+@@ -450,7 +450,7 @@ static int __init ring_buffer_benchmark_init(void)
+ 
+ 	if (producer_fifo >= 0) {
+ 		struct sched_param param = {
+-			.sched_priority = consumer_fifo
++			.sched_priority = producer_fifo
+ 		};
+ 		sched_setscheduler(producer, SCHED_FIFO, &param);
+ 	} else
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 6dc4580df2af..000e7b3b9896 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -359,23 +359,6 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+ 	flush_delayed_work(&bdi->wb.dwork);
+ }
+ 
+-/*
+- * Called when the device behind @bdi has been removed or ejected.
+- *
+- * We can't really do much here except for reducing the dirty ratio at
+- * the moment.  In the future we should be able to set a flag so that
+- * the filesystem can handle errors at mark_inode_dirty time instead
+- * of only at writeback time.
+- */
+-void bdi_unregister(struct backing_dev_info *bdi)
+-{
+-	if (WARN_ON_ONCE(!bdi->dev))
+-		return;
+-
+-	bdi_set_min_ratio(bdi, 0);
+-}
+-EXPORT_SYMBOL(bdi_unregister);
+-
+ static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+ {
+ 	memset(wb, 0, sizeof(*wb));
+@@ -443,6 +426,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
+ 	int i;
+ 
+ 	bdi_wb_shutdown(bdi);
++	bdi_set_min_ratio(bdi, 0);
+ 
+ 	WARN_ON(!list_empty(&bdi->work_list));
+ 	WARN_ON(delayed_work_pending(&bdi->wb.dwork));
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 65842d688b7c..93caba791cde 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1978,8 +1978,10 @@ void try_offline_node(int nid)
+ 		 * wait_table may be allocated from boot memory,
+ 		 * here only free if it's allocated by vmalloc.
+ 		 */
+-		if (is_vmalloc_addr(zone->wait_table))
++		if (is_vmalloc_addr(zone->wait_table)) {
+ 			vfree(zone->wait_table);
++			zone->wait_table = NULL;
++		}
+ 	}
+ }
+ EXPORT_SYMBOL(try_offline_node);
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index e0670d7054f9..659fb96672e4 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -796,9 +796,11 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge_port *p,
+ 	int err = 0;
+ 
+ 	if (ndm->ndm_flags & NTF_USE) {
++		local_bh_disable();
+ 		rcu_read_lock();
+ 		br_fdb_update(p->br, p, addr, vid, true);
+ 		rcu_read_unlock();
++		local_bh_enable();
+ 	} else {
+ 		spin_lock_bh(&p->br->hash_lock);
+ 		err = fdb_add_entry(p, addr, ndm->ndm_state,
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index c465876c7861..b0aee78dba41 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1071,7 +1071,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br,
+ 
+ 		err = br_ip6_multicast_add_group(br, port, &grec->grec_mca,
+ 						 vid);
+-		if (!err)
++		if (err)
+ 			break;
+ 	}
+ 
+@@ -1821,7 +1821,7 @@ static void br_multicast_query_expired(struct net_bridge *br,
+ 	if (query->startup_sent < br->multicast_startup_query_count)
+ 		query->startup_sent++;
+ 
+-	RCU_INIT_POINTER(querier, NULL);
++	RCU_INIT_POINTER(querier->port, NULL);
+ 	br_multicast_send_query(br, NULL, query);
+ 	spin_unlock(&br->multicast_lock);
+ }
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index a6e2da0bc718..982101c12258 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -330,6 +330,10 @@ static long caif_stream_data_wait(struct sock *sk, long timeo)
+ 		release_sock(sk);
+ 		timeo = schedule_timeout(timeo);
+ 		lock_sock(sk);
++
++		if (sock_flag(sk, SOCK_DEAD))
++			break;
++
+ 		clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
+ 	}
+ 
+@@ -374,6 +378,10 @@ static int caif_stream_recvmsg(struct kiocb *iocb, struct socket *sock,
+ 		struct sk_buff *skb;
+ 
+ 		lock_sock(sk);
++		if (sock_flag(sk, SOCK_DEAD)) {
++			err = -ECONNRESET;
++			goto unlock;
++		}
+ 		skb = skb_dequeue(&sk->sk_receive_queue);
+ 		caif_check_flow_release(sk);
+ 
+diff --git a/net/ceph/crush/mapper.c b/net/ceph/crush/mapper.c
+index a1ef53c04415..b1f2d1f44d37 100644
+--- a/net/ceph/crush/mapper.c
++++ b/net/ceph/crush/mapper.c
+@@ -290,6 +290,7 @@ static int is_out(const struct crush_map *map,
+  * @type: the type of item to choose
+  * @out: pointer to output vector
+  * @outpos: our position in that vector
++ * @out_size: size of the out vector
+  * @tries: number of attempts to make
+  * @recurse_tries: number of attempts to have recursive chooseleaf make
+  * @local_retries: localized retries
+@@ -304,6 +305,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 			       const __u32 *weight, int weight_max,
+ 			       int x, int numrep, int type,
+ 			       int *out, int outpos,
++			       int out_size,
+ 			       unsigned int tries,
+ 			       unsigned int recurse_tries,
+ 			       unsigned int local_retries,
+@@ -322,6 +324,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 	int item = 0;
+ 	int itemtype;
+ 	int collide, reject;
++	int count = out_size;
+ 
+ 	dprintk("CHOOSE%s bucket %d x %d outpos %d numrep %d tries %d recurse_tries %d local_retries %d local_fallback_retries %d parent_r %d\n",
+ 		recurse_to_leaf ? "_LEAF" : "",
+@@ -329,7 +332,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 		tries, recurse_tries, local_retries, local_fallback_retries,
+ 		parent_r);
+ 
+-	for (rep = outpos; rep < numrep; rep++) {
++	for (rep = outpos; rep < numrep && count > 0 ; rep++) {
+ 		/* keep trying until we get a non-out, non-colliding item */
+ 		ftotal = 0;
+ 		skip_rep = 0;
+@@ -403,7 +406,7 @@ static int crush_choose_firstn(const struct crush_map *map,
+ 							 map->buckets[-1-item],
+ 							 weight, weight_max,
+ 							 x, outpos+1, 0,
+-							 out2, outpos,
++							 out2, outpos, count,
+ 							 recurse_tries, 0,
+ 							 local_retries,
+ 							 local_fallback_retries,
+@@ -463,6 +466,7 @@ reject:
+ 		dprintk("CHOOSE got %d\n", item);
+ 		out[outpos] = item;
+ 		outpos++;
++		count--;
+ 	}
+ 
+ 	dprintk("CHOOSE returns %d\n", outpos);
+@@ -654,6 +658,7 @@ int crush_do_rule(const struct crush_map *map,
+ 	__u32 step;
+ 	int i, j;
+ 	int numrep;
++	int out_size;
+ 	/*
+ 	 * the original choose_total_tries value was off by one (it
+ 	 * counted "retries" and not "tries").  add one.
+@@ -761,6 +766,7 @@ int crush_do_rule(const struct crush_map *map,
+ 						x, numrep,
+ 						curstep->arg2,
+ 						o+osize, j,
++						result_max-osize,
+ 						choose_tries,
+ 						recurse_tries,
+ 						choose_local_retries,
+@@ -770,11 +776,13 @@ int crush_do_rule(const struct crush_map *map,
+ 						c+osize,
+ 						0);
+ 				} else {
++					out_size = ((numrep < (result_max-osize)) ?
++                                                    numrep : (result_max-osize));
+ 					crush_choose_indep(
+ 						map,
+ 						map->buckets[-1-w[i]],
+ 						weight, weight_max,
+-						x, numrep, numrep,
++						x, out_size, numrep,
+ 						curstep->arg2,
+ 						o+osize, j,
+ 						choose_tries,
+@@ -783,7 +791,7 @@ int crush_do_rule(const struct crush_map *map,
+ 						recurse_to_leaf,
+ 						c+osize,
+ 						0);
+-					osize += numrep;
++					osize += out_size;
+ 				}
+ 			}
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 22a53acdb5bb..e977e15c2ac0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5170,7 +5170,7 @@ static int __netdev_upper_dev_link(struct net_device *dev,
+ 	if (__netdev_find_adj(upper_dev, dev, &upper_dev->all_adj_list.upper))
+ 		return -EBUSY;
+ 
+-	if (__netdev_find_adj(dev, upper_dev, &dev->all_adj_list.upper))
++	if (__netdev_find_adj(dev, upper_dev, &dev->adj_list.upper))
+ 		return -EEXIST;
+ 
+ 	if (master && netdev_master_upper_dev_get(dev))
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 7ebed55b5f7d..a2b90e1fc115 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2337,6 +2337,9 @@ void rtmsg_ifinfo(int type, struct net_device *dev, unsigned int change,
+ {
+ 	struct sk_buff *skb;
+ 
++	if (dev->reg_state != NETREG_REGISTERED)
++		return;
++
+ 	skb = rtmsg_ifinfo_build_skb(type, dev, change, flags);
+ 	if (skb)
+ 		rtmsg_ifinfo_send(skb, dev, flags);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 20fc0202cbbe..e262a087050b 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -903,6 +903,10 @@ static int ip_error(struct sk_buff *skb)
+ 	bool send;
+ 	int code;
+ 
++	/* IP on this device is disabled. */
++	if (!in_dev)
++		goto out;
++
+ 	net = dev_net(rt->dst.dev);
+ 	if (!IN_DEV_FORWARD(in_dev)) {
+ 		switch (rt->dst.error) {
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 62856e185a93..9d2fbd88df93 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -187,6 +187,7 @@ static void tcp_reinit_congestion_control(struct sock *sk,
+ 
+ 	tcp_cleanup_congestion_control(sk);
+ 	icsk->icsk_ca_ops = ca;
++	icsk->icsk_ca_setsockopt = 1;
+ 
+ 	if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
+ 		icsk->icsk_ca_ops->init(sk);
+@@ -335,8 +336,10 @@ int tcp_set_congestion_control(struct sock *sk, const char *name)
+ 	rcu_read_lock();
+ 	ca = __tcp_ca_find_autoload(name);
+ 	/* No change asking for existing value */
+-	if (ca == icsk->icsk_ca_ops)
++	if (ca == icsk->icsk_ca_ops) {
++		icsk->icsk_ca_setsockopt = 1;
+ 		goto out;
++	}
+ 	if (!ca)
+ 		err = -ENOENT;
+ 	else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) ||
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index dd11ac7798c6..50277af92485 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -316,7 +316,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
+ 			tw->tw_v6_daddr = sk->sk_v6_daddr;
+ 			tw->tw_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
+ 			tw->tw_tclass = np->tclass;
+-			tw->tw_flowlabel = np->flow_label >> 12;
++			tw->tw_flowlabel = be32_to_cpu(np->flow_label & IPV6_FLOWLABEL_MASK);
+ 			tw->tw_ipv6only = sk->sk_ipv6only;
+ 		}
+ #endif
+@@ -437,7 +437,10 @@ void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst)
+ 		rcu_read_unlock();
+ 	}
+ 
+-	if (!ca_got_dst && !try_module_get(icsk->icsk_ca_ops->owner))
++	/* If no valid choice made yet, assign current system default ca. */
++	if (!ca_got_dst &&
++	    (!icsk->icsk_ca_setsockopt ||
++	     !try_module_get(icsk->icsk_ca_ops->owner)))
+ 		tcp_assign_congestion_control(sk);
+ 
+ 	tcp_set_ca_state(sk, TCP_CA_Open);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 97ef1f8b7be8..51f17454bd7b 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -90,6 +90,7 @@
+ #include <linux/socket.h>
+ #include <linux/sockios.h>
+ #include <linux/igmp.h>
++#include <linux/inetdevice.h>
+ #include <linux/in.h>
+ #include <linux/errno.h>
+ #include <linux/timer.h>
+@@ -1348,10 +1349,8 @@ csum_copy_err:
+ 	}
+ 	unlock_sock_fast(sk, slow);
+ 
+-	if (noblock)
+-		return -EAGAIN;
+-
+-	/* starting over for a new packet */
++	/* starting over for a new packet, but check if we need to yield */
++	cond_resched();
+ 	msg->msg_flags &= ~MSG_TRUNC;
+ 	goto try_again;
+ }
+@@ -1968,6 +1967,7 @@ void udp_v4_early_demux(struct sk_buff *skb)
+ 	struct sock *sk;
+ 	struct dst_entry *dst;
+ 	int dif = skb->dev->ifindex;
++	int ours;
+ 
+ 	/* validate the packet */
+ 	if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
+@@ -1977,14 +1977,24 @@ void udp_v4_early_demux(struct sk_buff *skb)
+ 	uh = udp_hdr(skb);
+ 
+ 	if (skb->pkt_type == PACKET_BROADCAST ||
+-	    skb->pkt_type == PACKET_MULTICAST)
++	    skb->pkt_type == PACKET_MULTICAST) {
++		struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
++
++		if (!in_dev)
++			return;
++
++		ours = ip_check_mc_rcu(in_dev, iph->daddr, iph->saddr,
++				       iph->protocol);
++		if (!ours)
++			return;
+ 		sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
+ 						   uh->source, iph->saddr, dif);
+-	else if (skb->pkt_type == PACKET_HOST)
++	} else if (skb->pkt_type == PACKET_HOST) {
+ 		sk = __udp4_lib_demux_lookup(net, uh->dest, iph->daddr,
+ 					     uh->source, iph->saddr, dif);
+-	else
++	} else {
+ 		return;
++	}
+ 
+ 	if (!sk)
+ 		return;
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 1f5e62229aaa..5ca3bc880fef 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -975,7 +975,7 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ 			tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
+ 			tcp_time_stamp + tcptw->tw_ts_offset,
+ 			tcptw->tw_ts_recent, tw->tw_bound_dev_if, tcp_twsk_md5_key(tcptw),
+-			tw->tw_tclass, (tw->tw_flowlabel << 12));
++			tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel));
+ 
+ 	inet_twsk_put(tw);
+ }
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index d048d46779fc..1c9512aba77e 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -528,10 +528,8 @@ csum_copy_err:
+ 	}
+ 	unlock_sock_fast(sk, slow);
+ 
+-	if (noblock)
+-		return -EAGAIN;
+-
+-	/* starting over for a new packet */
++	/* starting over for a new packet, but check if we need to yield */
++	cond_resched();
+ 	msg->msg_flags &= ~MSG_TRUNC;
+ 	goto try_again;
+ }
+@@ -734,7 +732,9 @@ static bool __udp_v6_is_mcast_sock(struct net *net, struct sock *sk,
+ 	    (inet->inet_dport && inet->inet_dport != rmt_port) ||
+ 	    (!ipv6_addr_any(&sk->sk_v6_daddr) &&
+ 		    !ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr)) ||
+-	    (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif))
++	    (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif) ||
++	    (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
++		    !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr)))
+ 		return false;
+ 	if (!inet6_mc_check(sk, loc_addr, rmt_addr))
+ 		return false;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index d1d7a8166f46..0e9c28dc86b7 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1052,7 +1052,7 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 	struct netlink_table *table = &nl_table[sk->sk_protocol];
+ 	int err;
+ 
+-	lock_sock(sk);
++	mutex_lock(&table->hash.mutex);
+ 
+ 	err = -EBUSY;
+ 	if (nlk_sk(sk)->portid)
+@@ -1069,11 +1069,12 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 	err = 0;
+ 	if (!__netlink_insert(table, sk)) {
+ 		err = -EADDRINUSE;
++		nlk_sk(sk)->portid = 0;
+ 		sock_put(sk);
+ 	}
+ 
+ err:
+-	release_sock(sk);
++	mutex_unlock(&table->hash.mutex);
+ 	return err;
+ }
+ 
+@@ -1082,10 +1083,12 @@ static void netlink_remove(struct sock *sk)
+ 	struct netlink_table *table;
+ 
+ 	table = &nl_table[sk->sk_protocol];
++	mutex_lock(&table->hash.mutex);
+ 	if (rhashtable_remove(&table->hash, &nlk_sk(sk)->node)) {
+ 		WARN_ON(atomic_read(&sk->sk_refcnt) == 1);
+ 		__sock_put(sk);
+ 	}
++	mutex_unlock(&table->hash.mutex);
+ 
+ 	netlink_table_grab();
+ 	if (nlk_sk(sk)->subscriptions) {
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index baef987fe2c0..d3328a19f5b2 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -81,6 +81,11 @@ int unregister_tcf_proto_ops(struct tcf_proto_ops *ops)
+ 	struct tcf_proto_ops *t;
+ 	int rc = -ENOENT;
+ 
++	/* Wait for outstanding call_rcu()s, if any, from a
++	 * tcf_proto_ops's destroy() handler.
++	 */
++	rcu_barrier();
++
+ 	write_lock(&cls_mod_lock);
+ 	list_for_each_entry(t, &tcf_proto_base, head) {
+ 		if (t == ops) {
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 243b7d169d61..d9c2ee6d2959 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -815,10 +815,8 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 		if (dev->flags & IFF_UP)
+ 			dev_deactivate(dev);
+ 
+-		if (new && new->ops->attach) {
+-			new->ops->attach(new);
+-			num_q = 0;
+-		}
++		if (new && new->ops->attach)
++			goto skip;
+ 
+ 		for (i = 0; i < num_q; i++) {
+ 			struct netdev_queue *dev_queue = dev_ingress_queue(dev);
+@@ -834,12 +832,16 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
+ 				qdisc_destroy(old);
+ 		}
+ 
++skip:
+ 		if (!ingress) {
+ 			notify_and_destroy(net, skb, n, classid,
+ 					   dev->qdisc, new);
+ 			if (new && !new->ops->attach)
+ 				atomic_inc(&new->refcnt);
+ 			dev->qdisc = new ? : &noop_qdisc;
++
++			if (new && new->ops->attach)
++				new->ops->attach(new);
+ 		} else {
+ 			notify_and_destroy(net, skb, n, classid, old, new);
+ 		}
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 526b6edab018..146881f068e2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1887,6 +1887,10 @@ static long unix_stream_data_wait(struct sock *sk, long timeo,
+ 		unix_state_unlock(sk);
+ 		timeo = freezable_schedule_timeout(timeo);
+ 		unix_state_lock(sk);
++
++		if (sock_flag(sk, SOCK_DEAD))
++			break;
++
+ 		clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
+ 	}
+ 
+@@ -1947,6 +1951,10 @@ static int unix_stream_recvmsg(struct kiocb *iocb, struct socket *sock,
+ 		struct sk_buff *skb, *last;
+ 
+ 		unix_state_lock(sk);
++		if (sock_flag(sk, SOCK_DEAD)) {
++			err = -ECONNRESET;
++			goto unlock;
++		}
+ 		last = skb = skb_peek(&sk->sk_receive_queue);
+ again:
+ 		if (skb == NULL) {
+diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
+index 5b24d39d7903..318026617b57 100644
+--- a/net/wireless/wext-compat.c
++++ b/net/wireless/wext-compat.c
+@@ -1333,6 +1333,8 @@ static struct iw_statistics *cfg80211_wireless_stats(struct net_device *dev)
+ 	memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
+ 	wdev_unlock(wdev);
+ 
++	memset(&sinfo, 0, sizeof(sinfo));
++
+ 	if (rdev_get_station(rdev, dev, bssid, &sinfo))
+ 		return NULL;
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 93c78c3c4b95..a556d63564e6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2167,6 +2167,7 @@ static const struct hda_fixup alc882_fixups[] = {
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
++	SND_PCI_QUIRK(0x1025, 0x0107, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x010a, "Acer Ferrari 5000", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0110, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ 	SND_PCI_QUIRK(0x1025, 0x0112, "Acer Aspire 9303", ALC883_FIXUP_ACER_EAPD),
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 3e2ef61c627b..8b7e391dd0b8 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -918,6 +918,7 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 	case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
+ 	case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
+ 	case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
++	case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+ 	case USB_ID(0x046d, 0x0991):
+ 	/* Most audio usb devices lie about volume resolution.
+ 	 * Most Logitech webcams have res = 384.
+@@ -1582,12 +1583,6 @@ static int parse_audio_mixer_unit(struct mixer_build *state, int unitid,
+ 			      unitid);
+ 		return -EINVAL;
+ 	}
+-	/* no bmControls field (e.g. Maya44) -> ignore */
+-	if (desc->bLength <= 10 + input_pins) {
+-		usb_audio_dbg(state->chip, "MU %d has no bmControls field\n",
+-			      unitid);
+-		return 0;
+-	}
+ 
+ 	num_ins = 0;
+ 	ich = 0;
+@@ -1595,6 +1590,9 @@ static int parse_audio_mixer_unit(struct mixer_build *state, int unitid,
+ 		err = parse_audio_unit(state, desc->baSourceID[pin]);
+ 		if (err < 0)
+ 			continue;
++		/* no bmControls field (e.g. Maya44) -> ignore */
++		if (desc->bLength <= 10 + input_pins)
++			continue;
+ 		err = check_input_term(state, desc->baSourceID[pin], &iterm);
+ 		if (err < 0)
+ 			return err;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index b703cb3cda19..e5000da9e9d7 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -437,6 +437,11 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.map = ebox44_map,
+ 	},
+ 	{
++		/* MAYA44 USB+ */
++		.id = USB_ID(0x2573, 0x0008),
++		.map = maya44_map,
++	},
++	{
+ 		/* KEF X300A */
+ 		.id = USB_ID(0x27ac, 0x1000),
+ 		.map = scms_usb3318_map,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index e21ec5abcc3a..2a408c60114b 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1120,6 +1120,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
+ 	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
++	case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
+ 		return true;
+ 	}
+ 	return false;
+@@ -1266,8 +1267,9 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 		if (fp->altsetting == 2)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;
+-	/* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
+-	case USB_ID(0x20b1, 0x2009):
++
++	case USB_ID(0x20b1, 0x2009): /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
++	case USB_ID(0x20b1, 0x2023): /* JLsounds I2SoverUSB */
+ 		if (fp->altsetting == 3)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;



* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-06-30 15:01 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-06-30 15:01 UTC (permalink / raw
  To: gentoo-commits

commit:     ff006a9d7ad689c96ec4a56f89cab306dc08a6c2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 30 14:58:28 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 30 14:58:28 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ff006a9d

Linux patch 4.0.7

 0000_README            |   4 +
 1006_linux-4.0.7.patch | 707 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 711 insertions(+)

diff --git a/0000_README b/0000_README
index 8761846..077a9de 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.0.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.6
 
+Patch:  1006_linux-4.0.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.0.7.patch b/1006_linux-4.0.7.patch
new file mode 100644
index 0000000..ba486f4
--- /dev/null
+++ b/1006_linux-4.0.7.patch
@@ -0,0 +1,707 @@
+diff --git a/Makefile b/Makefile
+index af6da040b952..bd76a8e94395 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/mach-exynos/common.h b/arch/arm/mach-exynos/common.h
+index f70eca7ee705..0ef8d4b47102 100644
+--- a/arch/arm/mach-exynos/common.h
++++ b/arch/arm/mach-exynos/common.h
+@@ -153,6 +153,8 @@ extern void exynos_enter_aftr(void);
+ 
+ extern struct cpuidle_exynos_data cpuidle_coupled_exynos_data;
+ 
++extern void exynos_set_delayed_reset_assertion(bool enable);
++
+ extern void s5p_init_cpu(void __iomem *cpuid_addr);
+ extern unsigned int samsung_rev(void);
+ extern void __iomem *cpu_boot_reg_base(void);
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 9e9dfdfad9d7..1081ff1f03c6 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -166,6 +166,33 @@ static void __init exynos_init_io(void)
+ 	exynos_map_io();
+ }
+ 
++/*
++ * Set or clear the USE_DELAYED_RESET_ASSERTION option. Used by smp code
++ * and suspend.
++ *
++ * This is necessary only on Exynos4 SoCs. When system is running
++ * USE_DELAYED_RESET_ASSERTION should be set so the ARM CLK clock down
++ * feature could properly detect global idle state when secondary CPU is
++ * powered down.
++ *
++ * However this should not be set when such system is going into suspend.
++ */
++void exynos_set_delayed_reset_assertion(bool enable)
++{
++	if (soc_is_exynos4()) {
++		unsigned int tmp, core_id;
++
++		for (core_id = 0; core_id < num_possible_cpus(); core_id++) {
++			tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
++			if (enable)
++				tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
++			else
++				tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
++			pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
++		}
++	}
++}
++
+ static const struct of_device_id exynos_dt_pmu_match[] = {
+ 	{ .compatible = "samsung,exynos3250-pmu" },
+ 	{ .compatible = "samsung,exynos4210-pmu" },
+diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
+index d2e9f12d12f1..d45e8cd23925 100644
+--- a/arch/arm/mach-exynos/platsmp.c
++++ b/arch/arm/mach-exynos/platsmp.c
+@@ -34,30 +34,6 @@
+ 
+ extern void exynos4_secondary_startup(void);
+ 
+-/*
+- * Set or clear the USE_DELAYED_RESET_ASSERTION option, set on Exynos4 SoCs
+- * during hot-(un)plugging CPUx.
+- *
+- * The feature can be cleared safely during first boot of secondary CPU.
+- *
+- * Exynos4 SoCs require setting USE_DELAYED_RESET_ASSERTION during powering
+- * down a CPU so the CPU idle clock down feature could properly detect global
+- * idle state when CPUx is off.
+- */
+-static void exynos_set_delayed_reset_assertion(u32 core_id, bool enable)
+-{
+-	if (soc_is_exynos4()) {
+-		unsigned int tmp;
+-
+-		tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
+-		if (enable)
+-			tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
+-		else
+-			tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
+-		pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
+-	}
+-}
+-
+ #ifdef CONFIG_HOTPLUG_CPU
+ static inline void cpu_leave_lowpower(u32 core_id)
+ {
+@@ -73,8 +49,6 @@ static inline void cpu_leave_lowpower(u32 core_id)
+ 	  : "=&r" (v)
+ 	  : "Ir" (CR_C), "Ir" (0x40)
+ 	  : "cc");
+-
+-	 exynos_set_delayed_reset_assertion(core_id, false);
+ }
+ 
+ static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
+@@ -87,14 +61,6 @@ static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
+ 		/* Turn the CPU off on next WFI instruction. */
+ 		exynos_cpu_power_down(core_id);
+ 
+-		/*
+-		 * Exynos4 SoCs require setting
+-		 * USE_DELAYED_RESET_ASSERTION so the CPU idle
+-		 * clock down feature could properly detect
+-		 * global idle state when CPUx is off.
+-		 */
+-		exynos_set_delayed_reset_assertion(core_id, true);
+-
+ 		wfi();
+ 
+ 		if (pen_release == core_id) {
+@@ -354,9 +320,6 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 		udelay(10);
+ 	}
+ 
+-	/* No harm if this is called during first boot of secondary CPU */
+-	exynos_set_delayed_reset_assertion(core_id, false);
+-
+ 	/*
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+@@ -403,6 +366,8 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus)
+ 
+ 	exynos_sysram_init();
+ 
++	exynos_set_delayed_reset_assertion(true);
++
+ 	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
+ 		scu_enable(scu_base_addr());
+ 
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index 318d127df147..582ef2df960d 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -235,6 +235,8 @@ static void exynos_pm_enter_sleep_mode(void)
+ 
+ static void exynos_pm_prepare(void)
+ {
++	exynos_set_delayed_reset_assertion(false);
++
+ 	/* Set wake-up mask registers */
+ 	exynos_pm_set_wakeup_mask();
+ 
+@@ -383,6 +385,7 @@ early_wakeup:
+ 
+ 	/* Clear SLEEP mode set in INFORM1 */
+ 	pmu_raw_writel(0x0, S5P_INFORM1);
++	exynos_set_delayed_reset_assertion(true);
+ }
+ 
+ static void exynos3250_pm_resume(void)
+diff --git a/arch/powerpc/kernel/idle_power7.S b/arch/powerpc/kernel/idle_power7.S
+index 05adc8bbdef8..401d8d0085aa 100644
+--- a/arch/powerpc/kernel/idle_power7.S
++++ b/arch/powerpc/kernel/idle_power7.S
+@@ -500,9 +500,11 @@ BEGIN_FTR_SECTION
+ 	CHECK_HMI_INTERRUPT
+ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ 	ld	r1,PACAR1(r13)
++	ld	r6,_CCR(r1)
+ 	ld	r4,_MSR(r1)
+ 	ld	r5,_NIP(r1)
+ 	addi	r1,r1,INT_FRAME_SIZE
++	mtcr	r6
+ 	mtspr	SPRN_SRR1,r4
+ 	mtspr	SPRN_SRR0,r5
+ 	rfid
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 4e3d5a9621fe..03189d86357d 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -354,6 +354,7 @@ int __copy_instruction(u8 *dest, u8 *src)
+ {
+ 	struct insn insn;
+ 	kprobe_opcode_t buf[MAX_INSN_SIZE];
++	int length;
+ 	unsigned long recovered_insn =
+ 		recover_probed_instruction(buf, (unsigned long)src);
+ 
+@@ -361,16 +362,18 @@ int __copy_instruction(u8 *dest, u8 *src)
+ 		return 0;
+ 	kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
+ 	insn_get_length(&insn);
++	length = insn.length;
++
+ 	/* Another subsystem puts a breakpoint, failed to recover */
+ 	if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
+ 		return 0;
+-	memcpy(dest, insn.kaddr, insn.length);
++	memcpy(dest, insn.kaddr, length);
+ 
+ #ifdef CONFIG_X86_64
+ 	if (insn_rip_relative(&insn)) {
+ 		s64 newdisp;
+ 		u8 *disp;
+-		kernel_insn_init(&insn, dest, insn.length);
++		kernel_insn_init(&insn, dest, length);
+ 		insn_get_displacement(&insn);
+ 		/*
+ 		 * The copied instruction uses the %rip-relative addressing
+@@ -394,7 +397,7 @@ int __copy_instruction(u8 *dest, u8 *src)
+ 		*(s32 *) disp = (s32) newdisp;
+ 	}
+ #endif
+-	return insn.length;
++	return length;
+ }
+ 
+ static int arch_copy_kprobe(struct kprobe *p)
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 4ee827d7bf36..3cb2b58fa26b 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1064,6 +1064,17 @@ static void update_divide_count(struct kvm_lapic *apic)
+ 				   apic->divide_count);
+ }
+ 
++static void apic_update_lvtt(struct kvm_lapic *apic)
++{
++	u32 timer_mode = kvm_apic_get_reg(apic, APIC_LVTT) &
++			apic->lapic_timer.timer_mode_mask;
++
++	if (apic->lapic_timer.timer_mode != timer_mode) {
++		apic->lapic_timer.timer_mode = timer_mode;
++		hrtimer_cancel(&apic->lapic_timer.timer);
++	}
++}
++
+ static void apic_timer_expired(struct kvm_lapic *apic)
+ {
+ 	struct kvm_vcpu *vcpu = apic->vcpu;
+@@ -1272,6 +1283,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ 				apic_set_reg(apic, APIC_LVTT + 0x10 * i,
+ 					     lvt_val | APIC_LVT_MASKED);
+ 			}
++			apic_update_lvtt(apic);
+ 			atomic_set(&apic->lapic_timer.pending, 0);
+ 
+ 		}
+@@ -1304,20 +1316,13 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ 
+ 		break;
+ 
+-	case APIC_LVTT: {
+-		u32 timer_mode = val & apic->lapic_timer.timer_mode_mask;
+-
+-		if (apic->lapic_timer.timer_mode != timer_mode) {
+-			apic->lapic_timer.timer_mode = timer_mode;
+-			hrtimer_cancel(&apic->lapic_timer.timer);
+-		}
+-
++	case APIC_LVTT:
+ 		if (!kvm_apic_sw_enabled(apic))
+ 			val |= APIC_LVT_MASKED;
+ 		val &= (apic_lvt_mask[0] | apic->lapic_timer.timer_mode_mask);
+ 		apic_set_reg(apic, APIC_LVTT, val);
++		apic_update_lvtt(apic);
+ 		break;
+-	}
+ 
+ 	case APIC_TMICT:
+ 		if (apic_lvtt_tscdeadline(apic))
+@@ -1552,7 +1557,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu)
+ 
+ 	for (i = 0; i < APIC_LVT_NUM; i++)
+ 		apic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
+-	apic->lapic_timer.timer_mode = 0;
++	apic_update_lvtt(apic);
+ 	apic_set_reg(apic, APIC_LVT0,
+ 		     SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));
+ 
+@@ -1778,6 +1783,7 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
+ 
+ 	apic_update_ppr(apic);
+ 	hrtimer_cancel(&apic->lapic_timer.timer);
++	apic_update_lvtt(apic);
+ 	update_divide_count(apic);
+ 	start_apic_timer(apic);
+ 	apic->irr_pending = true;
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index 288547a3c566..f26ebc5e0be6 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -80,6 +80,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe057) },
+ 	{ USB_DEVICE(0x0489, 0xe056) },
+ 	{ USB_DEVICE(0x0489, 0xe05f) },
++	{ USB_DEVICE(0x0489, 0xe076) },
+ 	{ USB_DEVICE(0x0489, 0xe078) },
+ 	{ USB_DEVICE(0x04c5, 0x1330) },
+ 	{ USB_DEVICE(0x04CA, 0x3004) },
+@@ -111,6 +112,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3408) },
+ 	{ USB_DEVICE(0x13d3, 0x3423) },
+ 	{ USB_DEVICE(0x13d3, 0x3432) },
++	{ USB_DEVICE(0x13d3, 0x3474) },
+ 
+ 	/* Atheros AR5BBU12 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xE02C) },
+@@ -135,6 +137,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ 	{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+@@ -166,6 +169,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ 
+ 	/* Atheros AR5BBU22 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xE036), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2c527da668ae..4fc415703ffc 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -174,6 +174,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+@@ -205,6 +206,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ 
+ 	/* Atheros AR5BBU12 with sflash firmware */
+ 	{ USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
+diff --git a/drivers/clk/at91/clk-pll.c b/drivers/clk/at91/clk-pll.c
+index 6ec79dbc0840..cbbe40377ad6 100644
+--- a/drivers/clk/at91/clk-pll.c
++++ b/drivers/clk/at91/clk-pll.c
+@@ -173,8 +173,7 @@ static long clk_pll_get_best_div_mul(struct clk_pll *pll, unsigned long rate,
+ 	int i = 0;
+ 
+ 	/* Check if parent_rate is a valid input rate */
+-	if (parent_rate < characteristics->input.min ||
+-	    parent_rate > characteristics->input.max)
++	if (parent_rate < characteristics->input.min)
+ 		return -ERANGE;
+ 
+ 	/*
+@@ -187,6 +186,15 @@ static long clk_pll_get_best_div_mul(struct clk_pll *pll, unsigned long rate,
+ 	if (!mindiv)
+ 		mindiv = 1;
+ 
++	if (parent_rate > characteristics->input.max) {
++		tmpdiv = DIV_ROUND_UP(parent_rate, characteristics->input.max);
++		if (tmpdiv > PLL_DIV_MAX)
++			return -ERANGE;
++
++		if (tmpdiv > mindiv)
++			mindiv = tmpdiv;
++	}
++
+ 	/*
+ 	 * Calculate the maximum divider which is limited by PLL register
+ 	 * layout (limited by the MUL or DIV field size).
+diff --git a/drivers/clk/at91/pmc.h b/drivers/clk/at91/pmc.h
+index 69abb08cf146..eb8e5dc9076d 100644
+--- a/drivers/clk/at91/pmc.h
++++ b/drivers/clk/at91/pmc.h
+@@ -121,7 +121,7 @@ extern void __init of_at91sam9x5_clk_smd_setup(struct device_node *np,
+ 					       struct at91_pmc *pmc);
+ #endif
+ 
+-#if defined(CONFIG_HAVE_AT91_SMD)
++#if defined(CONFIG_HAVE_AT91_H32MX)
+ extern void __init of_sama5d4_clk_h32mx_setup(struct device_node *np,
+ 					      struct at91_pmc *pmc);
+ #endif
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index f347ab7eea95..08b0da23c4ab 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -1543,6 +1543,8 @@ static int ahash_init(struct ahash_request *req)
+ 
+ 	state->current_buf = 0;
+ 	state->buf_dma = 0;
++	state->buflen_0 = 0;
++	state->buflen_1 = 0;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
+index ae31e555793c..a48dc251b14f 100644
+--- a/drivers/crypto/caam/caamrng.c
++++ b/drivers/crypto/caam/caamrng.c
+@@ -56,7 +56,7 @@
+ 
+ /* Buffer, its dma address and lock */
+ struct buf_data {
+-	u8 buf[RN_BUF_SIZE];
++	u8 buf[RN_BUF_SIZE] ____cacheline_aligned;
+ 	dma_addr_t addr;
+ 	struct completion filled;
+ 	u32 hw_desc[DESC_JOB_O_LEN];
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index ec4d932f8be4..169123a6ad0e 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -693,6 +693,16 @@ static int i915_drm_resume(struct drm_device *dev)
+ 		intel_init_pch_refclk(dev);
+ 		drm_mode_config_reset(dev);
+ 
++		/*
++		 * Interrupts have to be enabled before any batches are run.
++		 * If not the GPU will hang. i915_gem_init_hw() will initiate
++		 * batches to update/restore the context.
++		 *
++		 * Modeset enabling in intel_modeset_init_hw() also needs
++		 * working interrupts.
++		 */
++		intel_runtime_pm_enable_interrupts(dev_priv);
++
+ 		mutex_lock(&dev->struct_mutex);
+ 		if (i915_gem_init_hw(dev)) {
+ 			DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
+@@ -700,9 +710,6 @@ static int i915_drm_resume(struct drm_device *dev)
+ 		}
+ 		mutex_unlock(&dev->struct_mutex);
+ 
+-		/* We need working interrupts for modeset enabling ... */
+-		intel_runtime_pm_enable_interrupts(dev_priv);
+-
+ 		intel_modeset_init_hw(dev);
+ 
+ 		spin_lock_irq(&dev_priv->irq_lock);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 7a628e4cb27a..9536ec390614 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2732,6 +2732,9 @@ void i915_gem_reset(struct drm_device *dev)
+ void
+ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
+ {
++	if (list_empty(&ring->request_list))
++		return;
++
+ 	WARN_ON(i915_verify_lists(ring->dev));
+ 
+ 	/* Retire requests first as we use it above for the early return.
+@@ -3088,8 +3091,8 @@ int i915_vma_unbind(struct i915_vma *vma)
+ 		} else if (vma->ggtt_view.pages) {
+ 			sg_free_table(vma->ggtt_view.pages);
+ 			kfree(vma->ggtt_view.pages);
+-			vma->ggtt_view.pages = NULL;
+ 		}
++		vma->ggtt_view.pages = NULL;
+ 	}
+ 
+ 	drm_mm_remove_node(&vma->node);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index 9872ba9abf1a..2ffeda3589c2 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -1526,6 +1526,11 @@ static int mga_vga_mode_valid(struct drm_connector *connector,
+ 		return MODE_BANDWIDTH;
+ 	}
+ 
++	if ((mode->hdisplay % 8) != 0 || (mode->hsync_start % 8) != 0 ||
++	    (mode->hsync_end % 8) != 0 || (mode->htotal % 8) != 0) {
++		return MODE_H_ILLEGAL;
++	}
++
+ 	if (mode->crtc_hdisplay > 2048 || mode->crtc_hsync_start > 4096 ||
+ 	    mode->crtc_hsync_end > 4096 || mode->crtc_htotal > 4096 ||
+ 	    mode->crtc_vdisplay > 2048 || mode->crtc_vsync_start > 4096 ||
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 686411e4e4f6..b82f2dd1fc32 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -547,6 +547,9 @@ static int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 		else
+ 			*value = 1;
+ 		break;
++	case RADEON_INFO_VA_UNMAP_WORKING:
++		*value = true;
++		break;
+ 	default:
+ 		DRM_DEBUG_KMS("Invalid request %d\n", info->request);
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 147029adb885..ac72ece70160 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -2316,7 +2316,6 @@ isert_build_rdma_wr(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
+ 	page_off = offset % PAGE_SIZE;
+ 
+ 	send_wr->sg_list = ib_sge;
+-	send_wr->num_sge = sg_nents;
+ 	send_wr->wr_id = (uintptr_t)&isert_cmd->tx_desc;
+ 	/*
+ 	 * Perform mapping of TCM scatterlist memory ib_sge dma_addr.
+@@ -2336,14 +2335,17 @@ isert_build_rdma_wr(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
+ 			  ib_sge->addr, ib_sge->length, ib_sge->lkey);
+ 		page_off = 0;
+ 		data_left -= ib_sge->length;
++		if (!data_left)
++			break;
+ 		ib_sge++;
+ 		isert_dbg("Incrementing ib_sge pointer to %p\n", ib_sge);
+ 	}
+ 
++	send_wr->num_sge = ++i;
+ 	isert_dbg("Set outgoing sg_list: %p num_sg: %u from TCM SGLs\n",
+ 		  send_wr->sg_list, send_wr->num_sge);
+ 
+-	return sg_nents;
++	return send_wr->num_sge;
+ }
+ 
+ static int
+@@ -3311,6 +3313,7 @@ static void isert_free_conn(struct iscsi_conn *conn)
+ {
+ 	struct isert_conn *isert_conn = conn->context;
+ 
++	isert_wait4flush(isert_conn);
+ 	isert_put_conn(isert_conn);
+ }
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 9b4e30a82e4a..beda011cb741 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1889,8 +1889,8 @@ static int map_request(struct dm_target *ti, struct request *rq,
+ 			dm_kill_unmapped_request(rq, r);
+ 			return r;
+ 		}
+-		if (IS_ERR(clone))
+-			return DM_MAPIO_REQUEUE;
++		if (r != DM_MAPIO_REMAPPED)
++			return r;
+ 		if (setup_clone(clone, rq, tio, GFP_KERNEL)) {
+ 			/* -ENOMEM */
+ 			ti->type->release_clone_rq(clone);
+diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c
+index 75345c1e8c34..5c91df5c1f4f 100644
+--- a/drivers/net/wireless/b43/main.c
++++ b/drivers/net/wireless/b43/main.c
+@@ -5365,6 +5365,10 @@ static void b43_supported_bands(struct b43_wldev *dev, bool *have_2ghz_phy,
+ 		*have_5ghz_phy = true;
+ 		return;
+ 	case 0x4321: /* BCM4306 */
++		/* There are 14e4:4321 PCI devs with 2.4 GHz BCM4321 (N-PHY) */
++		if (dev->phy.type != B43_PHYTYPE_G)
++			break;
++		/* fall through */
+ 	case 0x4313: /* BCM4311 */
+ 	case 0x431a: /* BCM4318 */
+ 	case 0x432a: /* BCM4321 */
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 220c0fd059bb..50faef4f056f 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1468,6 +1468,11 @@ skip_countries:
+ 		goto alloc_fail8;
+ 	}
+ 
++	if (quirks & CLEAR_HALT_CONDITIONS) {
++		usb_clear_halt(usb_dev, usb_rcvbulkpipe(usb_dev, epread->bEndpointAddress));
++		usb_clear_halt(usb_dev, usb_sndbulkpipe(usb_dev, epwrite->bEndpointAddress));
++	}
++
+ 	return 0;
+ alloc_fail8:
+ 	if (acm->country_codes) {
+@@ -1747,6 +1752,10 @@ static const struct usb_device_id acm_ids[] = {
+ 	.driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
+ 	},
+ 
++	{ USB_DEVICE(0x2912, 0x0001), /* ATOL FPrint */
++	.driver_info = CLEAR_HALT_CONDITIONS,
++	},
++
+ 	/* Nokia S60 phones expose two ACM channels. The first is
+ 	 * a modem and is picked up by the standard AT-command
+ 	 * information below. The second is 'vendor-specific' but
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index ffeb3c83941f..b3b6c9db6fe5 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -133,3 +133,4 @@ struct acm {
+ #define NO_DATA_INTERFACE		BIT(4)
+ #define IGNORE_DEVICE			BIT(5)
+ #define QUIRK_CONTROL_LINE_STATE	BIT(6)
++#define CLEAR_HALT_CONDITIONS		BIT(7)
+diff --git a/include/uapi/drm/radeon_drm.h b/include/uapi/drm/radeon_drm.h
+index 50d0fb41a3bf..76d2edea5bd1 100644
+--- a/include/uapi/drm/radeon_drm.h
++++ b/include/uapi/drm/radeon_drm.h
+@@ -1034,6 +1034,7 @@ struct drm_radeon_cs {
+ #define RADEON_INFO_VRAM_USAGE		0x1e
+ #define RADEON_INFO_GTT_USAGE		0x1f
+ #define RADEON_INFO_ACTIVE_CU_COUNT	0x20
++#define RADEON_INFO_VA_UNMAP_WORKING	0x25
+ 
+ struct drm_radeon_info {
+ 	uint32_t		request;
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index ced69da0ff55..7f2e97ce71a7 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1369,19 +1369,26 @@ static int check_preds(struct filter_parse_state *ps)
+ {
+ 	int n_normal_preds = 0, n_logical_preds = 0;
+ 	struct postfix_elt *elt;
++	int cnt = 0;
+ 
+ 	list_for_each_entry(elt, &ps->postfix, list) {
+-		if (elt->op == OP_NONE)
++		if (elt->op == OP_NONE) {
++			cnt++;
+ 			continue;
++		}
+ 
+ 		if (elt->op == OP_AND || elt->op == OP_OR) {
+ 			n_logical_preds++;
++			cnt--;
+ 			continue;
+ 		}
++		if (elt->op != OP_NOT)
++			cnt--;
+ 		n_normal_preds++;
++		WARN_ON_ONCE(cnt < 0);
+ 	}
+ 
+-	if (!n_normal_preds || n_logical_preds >= n_normal_preds) {
++	if (cnt != 1 || !n_normal_preds || n_logical_preds >= n_normal_preds) {
+ 		parse_error(ps, FILT_ERR_INVALID_FILTER, 0);
+ 		return -EINVAL;
+ 	}
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 87eff3173ce9..60b3100a2120 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -100,6 +100,7 @@ enum {
+ 	STAC_HP_ENVY_BASS,
+ 	STAC_HP_BNB13_EQ,
+ 	STAC_HP_ENVY_TS_BASS,
++	STAC_HP_ENVY_TS_DAC_BIND,
+ 	STAC_92HD83XXX_GPIO10_EAPD,
+ 	STAC_92HD83XXX_MODELS
+ };
+@@ -2170,6 +2171,22 @@ static void stac92hd83xxx_fixup_gpio10_eapd(struct hda_codec *codec,
+ 	spec->eapd_switch = 0;
+ }
+ 
++static void hp_envy_ts_fixup_dac_bind(struct hda_codec *codec,
++					    const struct hda_fixup *fix,
++					    int action)
++{
++	struct sigmatel_spec *spec = codec->spec;
++	static hda_nid_t preferred_pairs[] = {
++		0xd, 0x13,
++		0
++	};
++
++	if (action != HDA_FIXUP_ACT_PRE_PROBE)
++		return;
++
++	spec->gen.preferred_dacs = preferred_pairs;
++}
++
+ static const struct hda_verb hp_bnb13_eq_verbs[] = {
+ 	/* 44.1KHz base */
+ 	{ 0x22, 0x7A6, 0x3E },
+@@ -2685,6 +2702,12 @@ static const struct hda_fixup stac92hd83xxx_fixups[] = {
+ 			{}
+ 		},
+ 	},
++	[STAC_HP_ENVY_TS_DAC_BIND] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = hp_envy_ts_fixup_dac_bind,
++		.chained = true,
++		.chain_id = STAC_HP_ENVY_TS_BASS,
++	},
+ 	[STAC_92HD83XXX_GPIO10_EAPD] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = stac92hd83xxx_fixup_gpio10_eapd,
+@@ -2763,6 +2786,8 @@ static const struct snd_pci_quirk stac92hd83xxx_fixup_tbl[] = {
+ 			  "HP bNB13", STAC_HP_BNB13_EQ),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x190e,
+ 			  "HP ENVY TS", STAC_HP_ENVY_TS_BASS),
++	SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1967,
++			  "HP ENVY TS", STAC_HP_ENVY_TS_DAC_BIND),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1940,
+ 			  "HP bNB13", STAC_HP_BNB13_EQ),
+ 	SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1941,

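One fix above deserves a note for tracing users: the trace_events_filter
change adds operand counting to check_preds(), so filters whose postfix
form does not reduce to a single value (for example, two parenthesized
predicates with no logical operator between them) are now rejected
cleanly instead of being silently accepted. A rough illustration; the
event and field names assume the block_rq_complete tracepoint:

    $ cd /sys/kernel/debug/tracing
    $ echo '((dev == 1)(dev == 2))' > events/block/block_rq_complete/filter
    bash: echo: write error: Invalid argument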


* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-07-02 12:28 Mike Pagano
From: Mike Pagano @ 2015-07-02 12:28 UTC
  To: gentoo-commits

commit:     ac1cc90498ebd52ec27442424e08ace5bef33921
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  2 12:28:45 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul  2 12:28:45 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ac1cc904

Version bump for BFQ Scheduler patchset.
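
For anyone verifying the bump, a quick sanity check (not part of the
patchset itself): v7r8 changes the banner printed when the elevator
registers, so a kernel built with these patches should log the new
version string, and BFQ can then be selected per device as usual
(the sda path below is only an example):

    $ dmesg | grep "BFQ I/O-scheduler"
    BFQ I/O-scheduler: v7r8

    $ echo bfq > /sys/block/sda/queue/scheduler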

 0000_README                                        |  12 +-
 ...roups-kconfig-build-bits-for-BFQ-v7r8-4.0.patch |   6 +-
 ...introduce-the-BFQ-v7r8-I-O-sched-for-4.0.patch1 | 198 ++++++++++-----------
 ...rly-Queue-Merge-EQM-to-BFQ-v7r8-for-4.0.0.patch |  96 +++++-----
 4 files changed, 148 insertions(+), 164 deletions(-)

diff --git a/0000_README b/0000_README
index 077a9de..32ebe25 100644
--- a/0000_README
+++ b/0000_README
@@ -111,17 +111,17 @@ Patch:  5000_enable-additional-cpu-optimizations-for-gcc.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
 
-Patch:  5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
+Patch:  5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.0.patch
 From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
-Desc:   BFQ v7r7 patch 1 for 4.0: Build, cgroups and kconfig bits
+Desc:   BFQ v7r8 patch 1 for 4.0: Build, cgroups and kconfig bits
 
-Patch:  5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
+Patch:  5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.0.patch1
 From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
-Desc:   BFQ v7r7 patch 2 for 4.0: BFQ Scheduler
+Desc:   BFQ v7r8 patch 2 for 4.0: BFQ Scheduler
 
-Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
+Patch:  5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.0.0.patch
 From:   http://algo.ing.unimo.it/people/paolo/disk_sched/
-Desc:   BFQ v7r7 patch 3 for 4.0: Early Queue Merge (EQM)
+Desc:   BFQ v7r8 patch 3 for 4.0: Early Queue Merge (EQM)
 
 Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/

diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.0.patch
similarity index 97%
rename from 5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
rename to 5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.0.patch
index 468d157..d0eebb8 100644
--- a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r7-4.0.patch
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.0.patch
@@ -1,7 +1,7 @@
-From 63e26848e2df36a3c29d2d38ce8b008539d64a5d Mon Sep 17 00:00:00 2001
+From 3da922f94aa64cb77ef8942a0bcb5ffbc29ff3ff Mon Sep 17 00:00:00 2001
 From: Paolo Valente <paolo.valente@unimore.it>
 Date: Tue, 7 Apr 2015 13:39:12 +0200
-Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v7r7-4.0
+Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v7r8-4.0
 
 Update Kconfig.iosched and do the related Makefile changes to include
 kernel configuration options for BFQ. Also add the bfqio controller
@@ -100,5 +100,5 @@ index e4a96fb..267d681 100644
  SUBSYS(perf_event)
  #endif
 -- 
-2.1.0
+2.1.4
 

diff --git a/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1 b/5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.0.patch1
similarity index 98%
rename from 5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
rename to 5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.0.patch1
index a6cfc58..f3c91ed 100644
--- a/5002_block-introduce-the-BFQ-v7r7-I-O-sched-for-4.0.patch1
+++ b/5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.0.patch1
@@ -1,9 +1,9 @@
-From 8cdf2dae6ee87049c7bb086d34e2ce981b545813 Mon Sep 17 00:00:00 2001
+From 2a8eeb849e2fecc7d9d3c8317d43904aab585eab Mon Sep 17 00:00:00 2001
 From: Paolo Valente <paolo.valente@unimore.it>
 Date: Thu, 9 May 2013 19:10:02 +0200
-Subject: [PATCH 2/3] block: introduce the BFQ-v7r7 I/O sched for 4.0
+Subject: [PATCH 2/3] block: introduce the BFQ-v7r8 I/O sched for 4.0
 
-Add the BFQ-v7r7 I/O scheduler to 4.0.
+Add the BFQ-v7r8 I/O scheduler to 4.0.
 The general structure is borrowed from CFQ, as much of the code for
 handling I/O contexts. Over time, several useful features have been
 ported from CFQ as well (details in the changelog in README.BFQ). A
@@ -56,12 +56,12 @@ until it expires.
 Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
 Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
 ---
- block/bfq-cgroup.c  |  936 ++++++++++++
+ block/bfq-cgroup.c  |  936 +++++++++++++
  block/bfq-ioc.c     |   36 +
- block/bfq-iosched.c | 3902 +++++++++++++++++++++++++++++++++++++++++++++++++++
- block/bfq-sched.c   | 1214 ++++++++++++++++
- block/bfq.h         |  775 ++++++++++
- 5 files changed, 6863 insertions(+)
+ block/bfq-iosched.c | 3898 +++++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c   | 1208 ++++++++++++++++
+ block/bfq.h         |  771 ++++++++++
+ 5 files changed, 6849 insertions(+)
  create mode 100644 block/bfq-cgroup.c
  create mode 100644 block/bfq-ioc.c
  create mode 100644 block/bfq-iosched.c
@@ -1054,10 +1054,10 @@ index 0000000..7f6b000
 +}
 diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
 new file mode 100644
-index 0000000..97ee934
+index 0000000..773b2ee
 --- /dev/null
 +++ b/block/bfq-iosched.c
-@@ -0,0 +1,3902 @@
+@@ -0,0 +1,3898 @@
 +/*
 + * Budget Fair Queueing (BFQ) disk scheduler.
 + *
@@ -1130,9 +1130,6 @@ index 0000000..97ee934
 +#include "bfq.h"
 +#include "blk.h"
 +
-+/* Max number of dispatches in one round of service. */
-+static const int bfq_quantum = 4;
-+
 +/* Expiration time of sync (0) and async (1) requests, in jiffies. */
 +static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
 +
@@ -1240,6 +1237,20 @@ index 0000000..97ee934
 +#define bfq_sample_valid(samples)	((samples) > 80)
 +
 +/*
++ * The following macro groups conditions that need to be evaluated when
++ * checking if existing queues and groups form a symmetric scenario
++ * and therefore idling can be reduced or disabled for some of the
++ * queues. See the comment to the function bfq_bfqq_must_not_expire()
++ * for further details.
++ */
++#ifdef CONFIG_CGROUP_BFQIO
++#define symmetric_scenario	  (!bfqd->active_numerous_groups && \
++				   !bfq_differentiated_weights(bfqd))
++#else
++#define symmetric_scenario	  (!bfq_differentiated_weights(bfqd))
++#endif
++
++/*
 + * We regard a request as SYNC, if either it's a read or has the SYNC bit
 + * set (in which case it could also be a direct WRITE).
 + */
@@ -1429,7 +1440,6 @@ index 0000000..97ee934
 + */
 +static inline bool bfq_differentiated_weights(struct bfq_data *bfqd)
 +{
-+	BUG_ON(!bfqd->hw_tag);
 +	/*
 +	 * For weights to differ, at least one of the trees must contain
 +	 * at least two nodes.
@@ -1466,19 +1476,19 @@ index 0000000..97ee934
 +	struct rb_node **new = &(root->rb_node), *parent = NULL;
 +
 +	/*
-+	 * Do not insert if:
-+	 * - the device does not support queueing;
-+	 * - the entity is already associated with a counter, which happens if:
-+	 *   1) the entity is associated with a queue, 2) a request arrival
-+	 *   has caused the queue to become both non-weight-raised, and hence
-+	 *   change its weight, and backlogged; in this respect, each
-+	 *   of the two events causes an invocation of this function,
-+	 *   3) this is the invocation of this function caused by the second
-+	 *   event. This second invocation is actually useless, and we handle
-+	 *   this fact by exiting immediately. More efficient or clearer
-+	 *   solutions might possibly be adopted.
++	 * Do not insert if the entity is already associated with a
++	 * counter, which happens if:
++	 *   1) the entity is associated with a queue,
++	 *   2) a request arrival has caused the queue to become both
++	 *      non-weight-raised, and hence change its weight, and
++	 *      backlogged; in this respect, each of the two events
++	 *      causes an invocation of this function,
++	 *   3) this is the invocation of this function caused by the
++	 *      second event. This second invocation is actually useless,
++	 *      and we handle this fact by exiting immediately. More
++	 *      efficient or clearer solutions might possibly be adopted.
 +	 */
-+	if (!bfqd->hw_tag || entity->weight_counter)
++	if (entity->weight_counter)
 +		return;
 +
 +	while (*new) {
@@ -1517,14 +1527,6 @@ index 0000000..97ee934
 +				    struct bfq_entity *entity,
 +				    struct rb_root *root)
 +{
-+	/*
-+	 * Check whether the entity is actually associated with a counter.
-+	 * In fact, the device may not be considered NCQ-capable for a while,
-+	 * which implies that no insertion in the weight trees is performed,
-+	 * after which the device may start to be deemed NCQ-capable, and hence
-+	 * this function may start to be invoked. This may cause the function
-+	 * to be invoked for entities that are not associated with any counter.
-+	 */
 +	if (!entity->weight_counter)
 +		return;
 +
@@ -2084,7 +2086,8 @@ index 0000000..97ee934
 +		bfq_updated_next_req(bfqd, bfqq);
 +	}
 +
-+	list_del_init(&rq->queuelist);
++	if (rq->queuelist.prev != &rq->queuelist)
++		list_del_init(&rq->queuelist);
 +	BUG_ON(bfqq->queued[sync] == 0);
 +	bfqq->queued[sync]--;
 +	bfqd->queued--;
@@ -2159,14 +2162,22 @@ index 0000000..97ee934
 +static void bfq_merged_requests(struct request_queue *q, struct request *rq,
 +				struct request *next)
 +{
-+	struct bfq_queue *bfqq = RQ_BFQQ(rq);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
 +
 +	/*
-+	 * Reposition in fifo if next is older than rq.
++	 * If next and rq belong to the same bfq_queue and next is older
++	 * than rq, then reposition rq in the fifo (by substituting next
++	 * with rq). Otherwise, if next and rq belong to different
++	 * bfq_queues, never reposition rq: in fact, we would have to
++	 * reposition it with respect to next's position in its own fifo,
++	 * which would most certainly be too expensive with respect to
++	 * the benefits.
 +	 */
-+	if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++	if (bfqq == next_bfqq &&
++	    !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
 +	    time_before(next->fifo_time, rq->fifo_time)) {
-+		list_move(&rq->queuelist, &next->queuelist);
++		list_del_init(&rq->queuelist);
++		list_replace_init(&next->queuelist, &rq->queuelist);
 +		rq->fifo_time = next->fifo_time;
 +	}
 +
@@ -2444,14 +2455,16 @@ index 0000000..97ee934
 +	 */
 +	sl = bfqd->bfq_slice_idle;
 +	/*
-+	 * Unless the queue is being weight-raised, grant only minimum idle
-+	 * time if the queue either has been seeky for long enough or has
-+	 * already proved to be constantly seeky.
++	 * Unless the queue is being weight-raised or the scenario is
++	 * asymmetric, grant only minimum idle time if the queue either
++	 * has been seeky for long enough or has already proved to be
++	 * constantly seeky.
 +	 */
 +	if (bfq_sample_valid(bfqq->seek_samples) &&
 +	    ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
 +				  bfq_max_budget(bfqq->bfqd) / 8) ||
-+	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1)
++	      bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++	    symmetric_scenario)
 +		sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
 +	else if (bfqq->wr_coeff > 1)
 +		sl = sl * 3;
@@ -3265,12 +3278,6 @@ index 0000000..97ee934
 +static inline bool bfq_bfqq_must_not_expire(struct bfq_queue *bfqq)
 +{
 +	struct bfq_data *bfqd = bfqq->bfqd;
-+#ifdef CONFIG_CGROUP_BFQIO
-+#define symmetric_scenario	  (!bfqd->active_numerous_groups && \
-+				   !bfq_differentiated_weights(bfqd))
-+#else
-+#define symmetric_scenario	  (!bfq_differentiated_weights(bfqd))
-+#endif
 +#define cond_for_seeky_on_ncq_hdd (bfq_bfqq_constantly_seeky(bfqq) && \
 +				   bfqd->busy_in_flight_queues == \
 +				   bfqd->const_seeky_busy_in_flight_queues)
@@ -3286,13 +3293,12 @@ index 0000000..97ee934
 + */
 +#define cond_for_expiring_non_wr  (bfqd->hw_tag && \
 +				   (bfqd->wr_busy_queues > 0 || \
-+				    (symmetric_scenario && \
-+				     (blk_queue_nonrot(bfqd->queue) || \
-+				      cond_for_seeky_on_ncq_hdd))))
++				    (blk_queue_nonrot(bfqd->queue) || \
++				      cond_for_seeky_on_ncq_hdd)))
 +
 +	return bfq_bfqq_sync(bfqq) &&
 +		!cond_for_expiring_in_burst &&
-+		(bfqq->wr_coeff > 1 ||
++		(bfqq->wr_coeff > 1 || !symmetric_scenario ||
 +		 (bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_idle_window(bfqq) &&
 +		  !cond_for_expiring_non_wr)
 +	);
@@ -3390,9 +3396,9 @@ index 0000000..97ee934
 +	}
 +
 +	/*
-+	 * No requests pending.  If the in-service queue still has requests
-+	 * in flight (possibly waiting for a completion) or is idling for a
-+	 * new request, then keep it.
++	 * No requests pending. However, if the in-service queue is idling
++	 * for a new request, or has requests waiting for a completion and
++	 * may idle after their completion, then keep it anyway.
 +	 */
 +	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
 +	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
@@ -3595,14 +3601,13 @@ index 0000000..97ee934
 +	if (bfqq == NULL)
 +		return 0;
 +
-+	max_dispatch = bfqd->bfq_quantum;
 +	if (bfq_class_idle(bfqq))
 +		max_dispatch = 1;
 +
 +	if (!bfq_bfqq_sync(bfqq))
 +		max_dispatch = bfqd->bfq_max_budget_async_rq;
 +
-+	if (bfqq->dispatched >= max_dispatch) {
++	if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
 +		if (bfqd->busy_queues > 1)
 +			return 0;
 +		if (bfqq->dispatched >= 4 * max_dispatch)
@@ -3618,8 +3623,8 @@ index 0000000..97ee934
 +	if (!bfq_dispatch_request(bfqd, bfqq))
 +		return 0;
 +
-+	bfq_log_bfqq(bfqd, bfqq, "dispatched one request of %d (max_disp %d)",
-+			bfqq->pid, max_dispatch);
++	bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
++			bfq_bfqq_sync(bfqq) ? "sync" : "async");
 +
 +	return 1;
 +}
@@ -3724,14 +3729,11 @@ index 0000000..97ee934
 + * Update the entity prio values; note that the new values will not
 + * be used until the next (re)activation.
 + */
-+static void bfq_init_prio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
 +{
 +	struct task_struct *tsk = current;
 +	int ioprio_class;
 +
-+	if (!bfq_bfqq_prio_changed(bfqq))
-+		return;
-+
 +	ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
 +	switch (ioprio_class) {
 +	default:
@@ -3761,17 +3763,16 @@ index 0000000..97ee934
 +
 +	if (bfqq->entity.new_ioprio < 0 ||
 +	    bfqq->entity.new_ioprio >= IOPRIO_BE_NR) {
-+		printk(KERN_CRIT "bfq_init_prio_data: new_ioprio %d\n",
++		printk(KERN_CRIT "bfq_set_next_ioprio_data: new_ioprio %d\n",
 +				 bfqq->entity.new_ioprio);
 +		BUG();
 +	}
 +
++	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->entity.new_ioprio);
 +	bfqq->entity.ioprio_changed = 1;
-+
-+	bfq_clear_bfqq_prio_changed(bfqq);
 +}
 +
-+static void bfq_changed_ioprio(struct bfq_io_cq *bic)
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic)
 +{
 +	struct bfq_data *bfqd;
 +	struct bfq_queue *bfqq, *new_bfqq;
@@ -3788,6 +3789,8 @@ index 0000000..97ee934
 +	if (unlikely(bfqd == NULL) || likely(bic->ioprio == ioprio))
 +		goto out;
 +
++	bic->ioprio = ioprio;
++
 +	bfqq = bic->bfqq[BLK_RW_ASYNC];
 +	if (bfqq != NULL) {
 +		bfqg = container_of(bfqq->entity.sched_data, struct bfq_group,
@@ -3797,7 +3800,7 @@ index 0000000..97ee934
 +		if (new_bfqq != NULL) {
 +			bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
 +			bfq_log_bfqq(bfqd, bfqq,
-+				     "changed_ioprio: bfqq %p %d",
++				     "check_ioprio_change: bfqq %p %d",
 +				     bfqq, atomic_read(&bfqq->ref));
 +			bfq_put_queue(bfqq);
 +		}
@@ -3805,16 +3808,14 @@ index 0000000..97ee934
 +
 +	bfqq = bic->bfqq[BLK_RW_SYNC];
 +	if (bfqq != NULL)
-+		bfq_mark_bfqq_prio_changed(bfqq);
-+
-+	bic->ioprio = ioprio;
++		bfq_set_next_ioprio_data(bfqq, bic);
 +
 +out:
 +	bfq_put_bfqd_unlock(bfqd, &flags);
 +}
 +
 +static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-+			  pid_t pid, int is_sync)
++			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
 +{
 +	RB_CLEAR_NODE(&bfqq->entity.rb_node);
 +	INIT_LIST_HEAD(&bfqq->fifo);
@@ -3823,7 +3824,8 @@ index 0000000..97ee934
 +	atomic_set(&bfqq->ref, 0);
 +	bfqq->bfqd = bfqd;
 +
-+	bfq_mark_bfqq_prio_changed(bfqq);
++	if (bic)
++		bfq_set_next_ioprio_data(bfqq, bic);
 +
 +	if (is_sync) {
 +		if (!bfq_class_idle(bfqq))
@@ -3881,8 +3883,8 @@ index 0000000..97ee934
 +		}
 +
 +		if (bfqq != NULL) {
-+			bfq_init_bfqq(bfqd, bfqq, current->pid, is_sync);
-+			bfq_init_prio_data(bfqq, bic);
++			bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++                                      is_sync);
 +			bfq_init_entity(&bfqq->entity, bfqg);
 +			bfq_log_bfqq(bfqd, bfqq, "allocated");
 +		} else {
@@ -4120,7 +4122,6 @@ index 0000000..97ee934
 +	struct bfq_queue *bfqq = RQ_BFQQ(rq);
 +
 +	assert_spin_locked(bfqd->queue->queue_lock);
-+	bfq_init_prio_data(bfqq, RQ_BIC(rq));
 +
 +	bfq_add_request(rq);
 +
@@ -4257,11 +4258,8 @@ index 0000000..97ee934
 +		return ELV_MQUEUE_MAY;
 +
 +	bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
-+	if (bfqq != NULL) {
-+		bfq_init_prio_data(bfqq, bic);
-+
++	if (bfqq != NULL)
 +		return __bfq_may_queue(bfqq);
-+	}
 +
 +	return ELV_MQUEUE_MAY;
 +}
@@ -4339,7 +4337,7 @@ index 0000000..97ee934
 +
 +	might_sleep_if(gfp_mask & __GFP_WAIT);
 +
-+	bfq_changed_ioprio(bic);
++	bfq_check_ioprio_change(bic);
 +
 +	spin_lock_irqsave(q->queue_lock, flags);
 +
@@ -4543,10 +4541,12 @@ index 0000000..97ee934
 +	 * Grab a permanent reference to it, so that the normal code flow
 +	 * will not attempt to free it.
 +	 */
-+	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, 1, 0);
++	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
 +	atomic_inc(&bfqd->oom_bfqq.ref);
 +	bfqd->oom_bfqq.entity.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
 +	bfqd->oom_bfqq.entity.new_ioprio_class = IOPRIO_CLASS_BE;
++	bfqd->oom_bfqq.entity.new_weight =
++		bfq_ioprio_to_weight(bfqd->oom_bfqq.entity.new_ioprio);
 +	/*
 +	 * Trigger weight initialization, according to ioprio, at the
 +	 * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
@@ -4591,7 +4591,6 @@ index 0000000..97ee934
 +
 +	bfqd->bfq_max_budget = bfq_default_max_budget;
 +
-+	bfqd->bfq_quantum = bfq_quantum;
 +	bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
 +	bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
 +	bfqd->bfq_back_max = bfq_back_max;
@@ -4725,7 +4724,6 @@ index 0000000..97ee934
 +		__data = jiffies_to_msecs(__data);			\
 +	return bfq_var_show(__data, (page));				\
 +}
-+SHOW_FUNCTION(bfq_quantum_show, bfqd->bfq_quantum, 0);
 +SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
 +SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
 +SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
@@ -4762,7 +4760,6 @@ index 0000000..97ee934
 +		*(__PTR) = __data;					\
 +	return ret;							\
 +}
-+STORE_FUNCTION(bfq_quantum_store, &bfqd->bfq_quantum, 1, INT_MAX, 0);
 +STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
 +		INT_MAX, 1);
 +STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
@@ -4863,7 +4860,6 @@ index 0000000..97ee934
 +	__ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
 +
 +static struct elv_fs_entry bfq_attrs[] = {
-+	BFQ_ATTR(quantum),
 +	BFQ_ATTR(fifo_expire_sync),
 +	BFQ_ATTR(fifo_expire_async),
 +	BFQ_ATTR(back_seek_max),
@@ -4944,7 +4940,7 @@ index 0000000..97ee934
 +	device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
 +
 +	elv_register(&iosched_bfq);
-+	pr_info("BFQ I/O-scheduler version: v7r7");
++	pr_info("BFQ I/O-scheduler: v7r8");
 +
 +	return 0;
 +}
@@ -4962,10 +4958,10 @@ index 0000000..97ee934
 +MODULE_LICENSE("GPL");
 diff --git a/block/bfq-sched.c b/block/bfq-sched.c
 new file mode 100644
-index 0000000..2931563
+index 0000000..c343099
 --- /dev/null
 +++ b/block/bfq-sched.c
-@@ -0,0 +1,1214 @@
+@@ -0,0 +1,1208 @@
 +/*
 + * BFQ: Hierarchical B-WF2Q+ scheduler.
 + *
@@ -5604,13 +5600,7 @@ index 0000000..2931563
 +			entity->orig_weight = entity->new_weight;
 +			entity->ioprio =
 +				bfq_weight_to_ioprio(entity->orig_weight);
-+		} else if (entity->new_ioprio != entity->ioprio) {
-+			entity->ioprio = entity->new_ioprio;
-+			entity->orig_weight =
-+					bfq_ioprio_to_weight(entity->ioprio);
-+		} else
-+			entity->new_weight = entity->orig_weight =
-+				bfq_ioprio_to_weight(entity->ioprio);
++		}
 +
 +		entity->ioprio_class = entity->new_ioprio_class;
 +		entity->ioprio_changed = 0;
@@ -6182,12 +6172,12 @@ index 0000000..2931563
 +}
 diff --git a/block/bfq.h b/block/bfq.h
 new file mode 100644
-index 0000000..518f2ac
+index 0000000..81a89c3
 --- /dev/null
 +++ b/block/bfq.h
-@@ -0,0 +1,775 @@
+@@ -0,0 +1,771 @@
 +/*
-+ * BFQ-v7r7 for 4.0.0: data structures and common functions prototypes.
++ * BFQ-v7r8 for 4.0.0: data structures and common functions prototypes.
 + *
 + * Based on ideas and code from CFQ:
 + * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
@@ -6573,7 +6563,6 @@ index 0000000..518f2ac
 + * @group_list: list of all the bfq_groups active on the device.
 + * @active_list: list of all the bfq_queues active on the device.
 + * @idle_list: list of all the bfq_queues idle on the device.
-+ * @bfq_quantum: max number of requests dispatched per dispatch round.
 + * @bfq_fifo_expire: timeout for async/sync requests; when it expires
 + *                   requests are served in fifo order.
 + * @bfq_back_penalty: weight of backward seeks wrt forward ones.
@@ -6681,7 +6670,6 @@ index 0000000..518f2ac
 +	struct list_head active_list;
 +	struct list_head idle_list;
 +
-+	unsigned int bfq_quantum;
 +	unsigned int bfq_fifo_expire[2];
 +	unsigned int bfq_back_penalty;
 +	unsigned int bfq_back_max;
@@ -6724,7 +6712,6 @@ index 0000000..518f2ac
 +	BFQ_BFQQ_FLAG_must_alloc,	/* must be allowed rq alloc */
 +	BFQ_BFQQ_FLAG_fifo_expire,	/* FIFO checked in this slice */
 +	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
-+	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
 +	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
 +	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
 +	BFQ_BFQQ_FLAG_IO_bound,         /*
@@ -6767,7 +6754,6 @@ index 0000000..518f2ac
 +BFQ_BFQQ_FNS(must_alloc);
 +BFQ_BFQQ_FNS(fifo_expire);
 +BFQ_BFQQ_FNS(idle_window);
-+BFQ_BFQQ_FNS(prio_changed);
 +BFQ_BFQQ_FNS(sync);
 +BFQ_BFQQ_FNS(budget_new);
 +BFQ_BFQQ_FNS(IO_bound);
@@ -6949,7 +6935,7 @@ index 0000000..518f2ac
 +	spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
 +}
 +
-+static void bfq_changed_ioprio(struct bfq_io_cq *bic);
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic);
 +static void bfq_put_queue(struct bfq_queue *bfqq);
 +static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
 +static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
@@ -6962,5 +6948,5 @@ index 0000000..518f2ac
 +
 +#endif /* _BFQ_H */
 -- 
-2.1.0
+2.1.4
 

diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.0.0.patch
similarity index 94%
rename from 5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
rename to 5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.0.0.patch
index 53267cd..421750b 100644
--- a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r7-for-4.0.0.patch
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.0.0.patch
@@ -1,7 +1,7 @@
-From d49cf2e7913ec1c4b86a9de657140d9ec5fa8c19 Mon Sep 17 00:00:00 2001
+From e7fc26f7742ad582a7fa70bd31f23da9f48dacff Mon Sep 17 00:00:00 2001
 From: Mauro Andreolini <mauro.andreolini@unimore.it>
-Date: Thu, 18 Dec 2014 21:32:08 +0100
-Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r7 for
+Date: Fri, 5 Jun 2015 17:45:40 +0200
+Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r8 for
  4.0.0
 
 A set of processes may happen  to  perform interleaved reads, i.e.,requests
@@ -34,16 +34,16 @@ Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
 Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
 Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
 ---
- block/bfq-iosched.c | 751 +++++++++++++++++++++++++++++++++++++---------------
+ block/bfq-iosched.c | 750 +++++++++++++++++++++++++++++++++++++---------------
  block/bfq-sched.c   |  28 --
  block/bfq.h         |  54 +++-
- 3 files changed, 581 insertions(+), 252 deletions(-)
+ 3 files changed, 580 insertions(+), 252 deletions(-)
 
 diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
-index 97ee934..328f33c 100644
+index 773b2ee..71b51c1 100644
 --- a/block/bfq-iosched.c
 +++ b/block/bfq-iosched.c
-@@ -571,6 +571,57 @@ static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+@@ -573,6 +573,57 @@ static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
  	return dur;
  }
  
@@ -101,7 +101,7 @@ index 97ee934..328f33c 100644
  /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
  static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
  					struct bfq_queue *bfqq)
-@@ -815,7 +866,7 @@ static void bfq_add_request(struct request *rq)
+@@ -817,7 +868,7 @@ static void bfq_add_request(struct request *rq)
  		bfq_rq_pos_tree_add(bfqd, bfqq);
  
  	if (!bfq_bfqq_busy(bfqq)) {
@@ -110,7 +110,7 @@ index 97ee934..328f33c 100644
  		     idle_for_long_time = time_is_before_jiffies(
  						bfqq->budget_timeout +
  						bfqd->bfq_wr_min_idle_time);
-@@ -839,11 +890,12 @@ static void bfq_add_request(struct request *rq)
+@@ -841,11 +892,12 @@ static void bfq_add_request(struct request *rq)
  				bfqd->last_ins_in_burst = jiffies;
  		}
  
@@ -126,7 +126,7 @@ index 97ee934..328f33c 100644
  		entity->budget = max_t(unsigned long, bfqq->max_budget,
  				       bfq_serv_to_charge(next_rq, bfqq));
  
-@@ -862,11 +914,20 @@ static void bfq_add_request(struct request *rq)
+@@ -864,11 +916,20 @@ static void bfq_add_request(struct request *rq)
  		if (!bfqd->low_latency)
  			goto add_bfqq_busy;
  
@@ -150,7 +150,7 @@ index 97ee934..328f33c 100644
  			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
  			if (interactive)
  				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
-@@ -880,7 +941,7 @@ static void bfq_add_request(struct request *rq)
+@@ -882,7 +943,7 @@ static void bfq_add_request(struct request *rq)
  		} else if (old_wr_coeff > 1) {
  			if (interactive)
  				bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
@@ -159,7 +159,7 @@ index 97ee934..328f33c 100644
  				 (bfqq->wr_cur_max_time ==
  				  bfqd->bfq_wr_rt_max_time &&
  				  !soft_rt)) {
-@@ -899,18 +960,18 @@ static void bfq_add_request(struct request *rq)
+@@ -901,18 +962,18 @@ static void bfq_add_request(struct request *rq)
  				/*
  				 *
  				 * The remaining weight-raising time is lower
@@ -190,7 +190,7 @@ index 97ee934..328f33c 100644
  				 *
  				 * In addition, the application is now meeting
  				 * the requirements for being deemed soft rt.
-@@ -945,6 +1006,7 @@ static void bfq_add_request(struct request *rq)
+@@ -947,6 +1008,7 @@ static void bfq_add_request(struct request *rq)
  					bfqd->bfq_wr_rt_max_time;
  			}
  		}
@@ -198,7 +198,7 @@ index 97ee934..328f33c 100644
  		if (old_wr_coeff != bfqq->wr_coeff)
  			entity->ioprio_changed = 1;
  add_bfqq_busy:
-@@ -1156,90 +1218,35 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+@@ -1167,90 +1229,35 @@ static void bfq_end_wr(struct bfq_data *bfqd)
  	spin_unlock_irq(bfqd->queue->queue_lock);
  }
  
@@ -303,7 +303,7 @@ index 97ee934..328f33c 100644
  
  	if (RB_EMPTY_ROOT(root))
  		return NULL;
-@@ -1258,7 +1265,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+@@ -1269,7 +1276,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
  	 * next_request position).
  	 */
  	__bfqq = rb_entry(parent, struct bfq_queue, pos_node);
@@ -312,7 +312,7 @@ index 97ee934..328f33c 100644
  		return __bfqq;
  
  	if (blk_rq_pos(__bfqq->next_rq) < sector)
-@@ -1269,7 +1276,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+@@ -1280,7 +1287,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
  		return NULL;
  
  	__bfqq = rb_entry(node, struct bfq_queue, pos_node);
@@ -321,7 +321,7 @@ index 97ee934..328f33c 100644
  		return __bfqq;
  
  	return NULL;
-@@ -1278,14 +1285,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+@@ -1289,14 +1296,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
  /*
   * bfqd - obvious
   * cur_bfqq - passed in so that we don't decide that the current queue
@@ -340,7 +340,7 @@ index 97ee934..328f33c 100644
  {
  	struct bfq_queue *bfqq;
  
-@@ -1305,7 +1310,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+@@ -1316,7 +1321,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
  	 * working closely on the same area of the disk. In that case,
  	 * we can group them together and don't waste time idling.
  	 */
@@ -349,7 +349,7 @@ index 97ee934..328f33c 100644
  	if (bfqq == NULL || bfqq == cur_bfqq)
  		return NULL;
  
-@@ -1332,6 +1337,315 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+@@ -1343,6 +1348,315 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
  	return bfqq;
  }
  
@@ -665,7 +665,7 @@ index 97ee934..328f33c 100644
  /*
   * If enough samples have been computed, return the current max budget
   * stored in bfqd, which is dynamically updated according to the
-@@ -1475,61 +1789,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
+@@ -1488,61 +1802,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
  	return rq;
  }
  
@@ -727,7 +727,7 @@ index 97ee934..328f33c 100644
  static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
  {
  	struct bfq_entity *entity = &bfqq->entity;
-@@ -2263,7 +2522,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
+@@ -2269,7 +2528,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
   */
  static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
  {
@@ -736,7 +736,7 @@ index 97ee934..328f33c 100644
  	struct request *next_rq;
  	enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
  
-@@ -2273,17 +2532,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+@@ -2279,17 +2538,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
  
  	bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
  
@@ -754,7 +754,7 @@ index 97ee934..328f33c 100644
  	if (bfq_may_expire_for_budg_timeout(bfqq) &&
  	    !timer_pending(&bfqd->idle_slice_timer) &&
  	    !bfq_bfqq_must_idle(bfqq))
-@@ -2322,10 +2570,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+@@ -2328,10 +2576,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
  				bfq_clear_bfqq_wait_request(bfqq);
  				del_timer(&bfqd->idle_slice_timer);
  			}
@@ -766,9 +766,9 @@ index 97ee934..328f33c 100644
  		}
  	}
  
-@@ -2334,40 +2579,30 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
- 	 * in flight (possibly waiting for a completion) or is idling for a
- 	 * new request, then keep it.
+@@ -2340,40 +2585,30 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ 	 * for a new request, or has requests waiting for a completion and
+ 	 * may idle after their completion, then keep it anyway.
  	 */
 -	if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
 -	    (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
@@ -814,7 +814,7 @@ index 97ee934..328f33c 100644
  			jiffies_to_msecs(bfqq->wr_cur_max_time),
  			bfqq->wr_coeff,
  			bfqq->entity.weight, bfqq->entity.orig_weight);
-@@ -2376,12 +2611,16 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+@@ -2382,12 +2617,16 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
  		       entity->orig_weight * bfqq->wr_coeff);
  		if (entity->ioprio_changed)
  			bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
@@ -832,7 +832,7 @@ index 97ee934..328f33c 100644
  		    time_is_before_jiffies(bfqq->last_wr_start_finish +
  					   bfqq->wr_cur_max_time)) {
  			bfqq->last_wr_start_finish = jiffies;
-@@ -2390,11 +2629,13 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+@@ -2396,11 +2635,13 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
  				     bfqq->last_wr_start_finish,
  				     jiffies_to_msecs(bfqq->wr_cur_max_time));
  			bfq_bfqq_end_wr(bfqq);
@@ -849,7 +849,7 @@ index 97ee934..328f33c 100644
  }
  
  /*
-@@ -2642,6 +2883,25 @@ static inline void bfq_init_icq(struct io_cq *icq)
+@@ -2647,6 +2888,25 @@ static inline void bfq_init_icq(struct io_cq *icq)
  	struct bfq_io_cq *bic = icq_to_bic(icq);
  
  	bic->ttime.last_end_request = jiffies;
@@ -875,7 +875,7 @@ index 97ee934..328f33c 100644
  }
  
  static void bfq_exit_icq(struct io_cq *icq)
-@@ -2655,6 +2915,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2660,6 +2920,13 @@ static void bfq_exit_icq(struct io_cq *icq)
  	}
  
  	if (bic->bfqq[BLK_RW_SYNC]) {
@@ -889,7 +889,7 @@ index 97ee934..328f33c 100644
  		bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
  		bic->bfqq[BLK_RW_SYNC] = NULL;
  	}
-@@ -2950,6 +3217,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+@@ -2952,6 +3219,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
  	if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
  		return;
  
@@ -900,7 +900,7 @@ index 97ee934..328f33c 100644
  	enable_idle = bfq_bfqq_idle_window(bfqq);
  
  	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
-@@ -2997,6 +3268,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -2999,6 +3270,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
  	if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
  	    !BFQQ_SEEKY(bfqq))
  		bfq_update_idle_window(bfqd, bfqq, bic);
@@ -908,7 +908,7 @@ index 97ee934..328f33c 100644
  
  	bfq_log_bfqq(bfqd, bfqq,
  		     "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
-@@ -3057,13 +3329,49 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -3059,12 +3331,47 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
  static void bfq_insert_request(struct request_queue *q, struct request *rq)
  {
  	struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -916,7 +916,7 @@ index 97ee934..328f33c 100644
 +	struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
  
  	assert_spin_locked(bfqd->queue->queue_lock);
-+
+ 
 +	/*
 +	 * An unplug may trigger a requeue of a request from the device
 +	 * driver: make sure we are in process context while trying to
@@ -944,8 +944,6 @@ index 97ee934..328f33c 100644
 +			bfq_bfqq_increase_failed_cooperations(bfqq);
 +	}
 +
- 	bfq_init_prio_data(bfqq, RQ_BIC(rq));
- 
  	bfq_add_request(rq);
  
 +	/*
@@ -959,7 +957,7 @@ index 97ee934..328f33c 100644
  	rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
  	list_add_tail(&rq->queuelist, &bfqq->fifo);
  
-@@ -3228,18 +3536,6 @@ static void bfq_put_request(struct request *rq)
+@@ -3226,18 +3533,6 @@ static void bfq_put_request(struct request *rq)
  	}
  }
  
@@ -978,7 +976,7 @@ index 97ee934..328f33c 100644
  /*
   * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
   * was the last process referring to said bfqq.
-@@ -3248,6 +3544,9 @@ static struct bfq_queue *
+@@ -3246,6 +3541,9 @@ static struct bfq_queue *
  bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
  {
  	bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
@@ -988,7 +986,7 @@ index 97ee934..328f33c 100644
  	if (bfqq_process_refs(bfqq) == 1) {
  		bfqq->pid = current->pid;
  		bfq_clear_bfqq_coop(bfqq);
-@@ -3276,6 +3575,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+@@ -3274,6 +3572,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
  	struct bfq_queue *bfqq;
  	struct bfq_group *bfqg;
  	unsigned long flags;
@@ -996,7 +994,7 @@ index 97ee934..328f33c 100644
  
  	might_sleep_if(gfp_mask & __GFP_WAIT);
  
-@@ -3293,25 +3593,26 @@ new_queue:
+@@ -3291,25 +3590,26 @@ new_queue:
  	if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
  		bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
  		bic_set_bfqq(bic, bfqq, is_sync);
@@ -1035,7 +1033,7 @@ index 97ee934..328f33c 100644
  	}
  
  	bfqq->allocated[rw]++;
-@@ -3322,6 +3623,26 @@ new_queue:
+@@ -3320,6 +3620,26 @@ new_queue:
  	rq->elv.priv[0] = bic;
  	rq->elv.priv[1] = bfqq;
  
@@ -1063,10 +1061,10 @@ index 97ee934..328f33c 100644
  
  	return 0;
 diff --git a/block/bfq-sched.c b/block/bfq-sched.c
-index 2931563..6764a7e 100644
+index c343099..d0890c6 100644
 --- a/block/bfq-sched.c
 +++ b/block/bfq-sched.c
-@@ -1091,34 +1091,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+@@ -1085,34 +1085,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
  	return bfqq;
  }
  
@@ -1102,7 +1100,7 @@ index 2931563..6764a7e 100644
  {
  	if (bfqd->in_service_bic != NULL) {
 diff --git a/block/bfq.h b/block/bfq.h
-index 518f2ac..4f519ea 100644
+index 81a89c3..0ea164d 100644
 --- a/block/bfq.h
 +++ b/block/bfq.h
 @@ -218,18 +218,21 @@ struct bfq_group;
@@ -1184,8 +1182,8 @@ index 518f2ac..4f519ea 100644
  };
  
  enum bfq_device_speed {
-@@ -539,7 +573,7 @@ enum bfqq_state_flags {
- 	BFQ_BFQQ_FLAG_prio_changed,	/* task priority has changed */
+@@ -536,7 +570,7 @@ enum bfqq_state_flags {
+ 	BFQ_BFQQ_FLAG_idle_window,	/* slice idling enabled */
  	BFQ_BFQQ_FLAG_sync,		/* synchronous queue */
  	BFQ_BFQQ_FLAG_budget_new,	/* no completion with this budget */
 -	BFQ_BFQQ_FLAG_IO_bound,         /*
@@ -1193,7 +1191,7 @@ index 518f2ac..4f519ea 100644
  					 * bfqq has timed-out at least once
  					 * having consumed at most 2/10 of
  					 * its budget
-@@ -552,12 +586,13 @@ enum bfqq_state_flags {
+@@ -549,12 +583,13 @@ enum bfqq_state_flags {
  					 * bfqq has proved to be slow and
  					 * seeky until budget timeout
  					 */
@@ -1209,7 +1207,7 @@ index 518f2ac..4f519ea 100644
  };
  
  #define BFQ_BFQQ_FNS(name)						\
-@@ -587,6 +622,7 @@ BFQ_BFQQ_FNS(in_large_burst);
+@@ -583,6 +618,7 @@ BFQ_BFQQ_FNS(in_large_burst);
  BFQ_BFQQ_FNS(constantly_seeky);
  BFQ_BFQQ_FNS(coop);
  BFQ_BFQQ_FNS(split_coop);
@@ -1218,5 +1216,5 @@ index 518f2ac..4f519ea 100644
  #undef BFQ_BFQQ_FNS
  
 -- 
-2.1.0
+2.1.4
 


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-07-10 23:45 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-07-10 23:45 UTC (permalink / raw
  To: gentoo-commits

commit:     3594f6ef73513ee5c6adefc2a074b8b310dc4de3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 10 23:33:38 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jul 10 23:33:38 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3594f6ef

Linux patch 4.0.8

 0000_README            |    4 +
 1007_linux-4.0.8.patch | 2139 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2143 insertions(+)

diff --git a/0000_README b/0000_README
index 32ebe25..6a1359e 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-4.0.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.7
 
+Patch:  1007_linux-4.0.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-4.0.8.patch b/1007_linux-4.0.8.patch
new file mode 100644
index 0000000..88c73a0
--- /dev/null
+++ b/1007_linux-4.0.8.patch
@@ -0,0 +1,2139 @@
+diff --git a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
+index 750d577e8083..f5a8ca29aff0 100644
+--- a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
++++ b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
+@@ -1,7 +1,7 @@
+ * Marvell Armada 370 / Armada XP Ethernet Controller (NETA)
+ 
+ Required properties:
+-- compatible: should be "marvell,armada-370-neta".
++- compatible: "marvell,armada-370-neta" or "marvell,armada-xp-neta".
+ - reg: address and length of the register set for the device.
+ - interrupts: interrupt for the device
+ - phy: See ethernet.txt file in the same directory.
+diff --git a/Makefile b/Makefile
+index bd76a8e94395..0e315d6e1a41 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arm/boot/dts/armada-370-xp.dtsi b/arch/arm/boot/dts/armada-370-xp.dtsi
+index 8a322ad57e5f..a038c201ffba 100644
+--- a/arch/arm/boot/dts/armada-370-xp.dtsi
++++ b/arch/arm/boot/dts/armada-370-xp.dtsi
+@@ -265,7 +265,6 @@
+ 			};
+ 
+ 			eth0: ethernet@70000 {
+-				compatible = "marvell,armada-370-neta";
+ 				reg = <0x70000 0x4000>;
+ 				interrupts = <8>;
+ 				clocks = <&gateclk 4>;
+@@ -281,7 +280,6 @@
+ 			};
+ 
+ 			eth1: ethernet@74000 {
+-				compatible = "marvell,armada-370-neta";
+ 				reg = <0x74000 0x4000>;
+ 				interrupts = <10>;
+ 				clocks = <&gateclk 3>;
+diff --git a/arch/arm/boot/dts/armada-370.dtsi b/arch/arm/boot/dts/armada-370.dtsi
+index 27397f151def..37730254e667 100644
+--- a/arch/arm/boot/dts/armada-370.dtsi
++++ b/arch/arm/boot/dts/armada-370.dtsi
+@@ -306,6 +306,14 @@
+ 					dmacap,memset;
+ 				};
+ 			};
++
++			ethernet@70000 {
++				compatible = "marvell,armada-370-neta";
++			};
++
++			ethernet@74000 {
++				compatible = "marvell,armada-370-neta";
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/armada-xp-mv78260.dtsi b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+index 4a7cbed79b07..1676d30e9be2 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78260.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+@@ -319,7 +319,7 @@
+ 			};
+ 
+ 			eth3: ethernet@34000 {
+-				compatible = "marvell,armada-370-neta";
++				compatible = "marvell,armada-xp-neta";
+ 				reg = <0x34000 0x4000>;
+ 				interrupts = <14>;
+ 				clocks = <&gateclk 1>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78460.dtsi b/arch/arm/boot/dts/armada-xp-mv78460.dtsi
+index 36ce63a96cc9..d41fe88ffea4 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78460.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78460.dtsi
+@@ -357,7 +357,7 @@
+ 			};
+ 
+ 			eth3: ethernet@34000 {
+-				compatible = "marvell,armada-370-neta";
++				compatible = "marvell,armada-xp-neta";
+ 				reg = <0x34000 0x4000>;
+ 				interrupts = <14>;
+ 				clocks = <&gateclk 1>;
+diff --git a/arch/arm/boot/dts/armada-xp.dtsi b/arch/arm/boot/dts/armada-xp.dtsi
+index 82917236a2fb..9ce7d5fd8a34 100644
+--- a/arch/arm/boot/dts/armada-xp.dtsi
++++ b/arch/arm/boot/dts/armada-xp.dtsi
+@@ -175,7 +175,7 @@
+ 			};
+ 
+ 			eth2: ethernet@30000 {
+-				compatible = "marvell,armada-370-neta";
++				compatible = "marvell,armada-xp-neta";
+ 				reg = <0x30000 0x4000>;
+ 				interrupts = <12>;
+ 				clocks = <&gateclk 2>;
+@@ -218,6 +218,14 @@
+ 				};
+ 			};
+ 
++			ethernet@70000 {
++				compatible = "marvell,armada-xp-neta";
++			};
++
++			ethernet@74000 {
++				compatible = "marvell,armada-xp-neta";
++			};
++
+ 			xor@f0900 {
+ 				compatible = "marvell,orion-xor";
+ 				reg = <0xF0900 0x100
+diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
+index 79caf79b304a..f7db3a5d80e3 100644
+--- a/arch/arm/kvm/interrupts.S
++++ b/arch/arm/kvm/interrupts.S
+@@ -170,13 +170,9 @@ __kvm_vcpu_return:
+ 	@ Don't trap coprocessor accesses for host kernel
+ 	set_hstr vmexit
+ 	set_hdcr vmexit
+-	set_hcptr vmexit, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11))
++	set_hcptr vmexit, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11)), after_vfp_restore
+ 
+ #ifdef CONFIG_VFPv3
+-	@ Save floating point registers we if let guest use them.
+-	tst	r2, #(HCPTR_TCP(10) | HCPTR_TCP(11))
+-	bne	after_vfp_restore
+-
+ 	@ Switch VFP/NEON hardware state to the host's
+ 	add	r7, vcpu, #VCPU_VFP_GUEST
+ 	store_vfp_state r7
+@@ -188,6 +184,8 @@ after_vfp_restore:
+ 	@ Restore FPEXC_EN which we clobbered on entry
+ 	pop	{r2}
+ 	VFPFMXR FPEXC, r2
++#else
++after_vfp_restore:
+ #endif
+ 
+ 	@ Reset Hyp-role
+@@ -483,7 +481,7 @@ switch_to_guest_vfp:
+ 	push	{r3-r7}
+ 
+ 	@ NEON/VFP used.  Turn on VFP access.
+-	set_hcptr vmexit, (HCPTR_TCP(10) | HCPTR_TCP(11))
++	set_hcptr vmtrap, (HCPTR_TCP(10) | HCPTR_TCP(11))
+ 
+ 	@ Switch VFP/NEON hardware state to the guest's
+ 	add	r7, r0, #VCPU_VFP_HOST
+diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
+index 14d488388480..f6f14812d106 100644
+--- a/arch/arm/kvm/interrupts_head.S
++++ b/arch/arm/kvm/interrupts_head.S
+@@ -599,8 +599,13 @@ ARM_BE8(rev	r6, r6  )
+ .endm
+ 
+ /* Configures the HCPTR (Hyp Coprocessor Trap Register) on entry/return
+- * (hardware reset value is 0). Keep previous value in r2. */
+-.macro set_hcptr operation, mask
++ * (hardware reset value is 0). Keep previous value in r2.
++ * An ISB is emitted on vmexit/vmtrap, but executed on vmexit only if
++ * VFP wasn't already enabled (always executed on vmtrap).
++ * If a label is specified with vmexit, it is branched to if VFP wasn't
++ * enabled.
++ */
++.macro set_hcptr operation, mask, label = none
+ 	mrc	p15, 4, r2, c1, c1, 2
+ 	ldr	r3, =\mask
+ 	.if \operation == vmentry
+@@ -609,6 +614,17 @@ ARM_BE8(rev	r6, r6  )
+ 	bic	r3, r2, r3		@ Don't trap defined coproc-accesses
+ 	.endif
+ 	mcr	p15, 4, r3, c1, c1, 2
++	.if \operation != vmentry
++	.if \operation == vmexit
++	tst	r2, #(HCPTR_TCP(10) | HCPTR_TCP(11))
++	beq	1f
++	.endif
++	isb
++	.if \label != none
++	b	\label
++	.endif
++1:
++	.endif
+ .endm
+ 
+ /* Configures the HDCR (Hyp Debug Configuration Register) on entry/return
+diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
+index 02fa8eff6ae1..531e922486b2 100644
+--- a/arch/arm/kvm/psci.c
++++ b/arch/arm/kvm/psci.c
+@@ -230,10 +230,6 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ 	case PSCI_0_2_FN64_AFFINITY_INFO:
+ 		val = kvm_psci_vcpu_affinity_info(vcpu);
+ 		break;
+-	case PSCI_0_2_FN_MIGRATE:
+-	case PSCI_0_2_FN64_MIGRATE:
+-		val = PSCI_RET_NOT_SUPPORTED;
+-		break;
+ 	case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+ 		/*
+ 		 * Trusted OS is MP hence does not require migration
+@@ -242,10 +238,6 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ 		 */
+ 		val = PSCI_0_2_TOS_MP;
+ 		break;
+-	case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
+-	case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU:
+-		val = PSCI_RET_NOT_SUPPORTED;
+-		break;
+ 	case PSCI_0_2_FN_SYSTEM_OFF:
+ 		kvm_psci_system_off(vcpu);
+ 		/*
+@@ -271,7 +263,8 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ 		ret = 0;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		val = PSCI_RET_NOT_SUPPORTED;
++		break;
+ 	}
+ 
+ 	*vcpu_reg(vcpu, 0) = val;
+@@ -291,12 +284,9 @@ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ 	case KVM_PSCI_FN_CPU_ON:
+ 		val = kvm_psci_vcpu_on(vcpu);
+ 		break;
+-	case KVM_PSCI_FN_CPU_SUSPEND:
+-	case KVM_PSCI_FN_MIGRATE:
++	default:
+ 		val = PSCI_RET_NOT_SUPPORTED;
+ 		break;
+-	default:
+-		return -EINVAL;
+ 	}
+ 
+ 	*vcpu_reg(vcpu, 0) = val;
+diff --git a/arch/arm/mach-imx/clk-imx6q.c b/arch/arm/mach-imx/clk-imx6q.c
+index d04a430607b8..3a3f88c04e8f 100644
+--- a/arch/arm/mach-imx/clk-imx6q.c
++++ b/arch/arm/mach-imx/clk-imx6q.c
+@@ -439,7 +439,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 	clk[IMX6QDL_CLK_GPMI_IO]      = imx_clk_gate2("gpmi_io",       "enfc",              base + 0x78, 28);
+ 	clk[IMX6QDL_CLK_GPMI_APB]     = imx_clk_gate2("gpmi_apb",      "usdhc3",            base + 0x78, 30);
+ 	clk[IMX6QDL_CLK_ROM]          = imx_clk_gate2("rom",           "ahb",               base + 0x7c, 0);
+-	clk[IMX6QDL_CLK_SATA]         = imx_clk_gate2("sata",          "ipg",               base + 0x7c, 4);
++	clk[IMX6QDL_CLK_SATA]         = imx_clk_gate2("sata",          "ahb",               base + 0x7c, 4);
+ 	clk[IMX6QDL_CLK_SDMA]         = imx_clk_gate2("sdma",          "ahb",               base + 0x7c, 6);
+ 	clk[IMX6QDL_CLK_SPBA]         = imx_clk_gate2("spba",          "ipg",               base + 0x7c, 12);
+ 	clk[IMX6QDL_CLK_SPDIF]        = imx_clk_gate2("spdif",         "spdif_podf",        base + 0x7c, 14);
+diff --git a/arch/arm/mach-mvebu/pm-board.c b/arch/arm/mach-mvebu/pm-board.c
+index 6dfd4ab97b2a..301ab38d38ba 100644
+--- a/arch/arm/mach-mvebu/pm-board.c
++++ b/arch/arm/mach-mvebu/pm-board.c
+@@ -43,6 +43,9 @@ static void mvebu_armada_xp_gp_pm_enter(void __iomem *sdram_reg, u32 srcmd)
+ 	for (i = 0; i < ARMADA_XP_GP_PIC_NR_GPIOS; i++)
+ 		ackcmd |= BIT(pic_raw_gpios[i]);
+ 
++	srcmd = cpu_to_le32(srcmd);
++	ackcmd = cpu_to_le32(ackcmd);
++
+ 	/*
+ 	 * Wait a while, the PIC needs quite a bit of time between the
+ 	 * two GPIO commands.
+diff --git a/arch/arm/mach-tegra/cpuidle-tegra20.c b/arch/arm/mach-tegra/cpuidle-tegra20.c
+index 4f25a7c7ca0f..a351eff089f3 100644
+--- a/arch/arm/mach-tegra/cpuidle-tegra20.c
++++ b/arch/arm/mach-tegra/cpuidle-tegra20.c
+@@ -35,6 +35,7 @@
+ #include "iomap.h"
+ #include "irq.h"
+ #include "pm.h"
++#include "reset.h"
+ #include "sleep.h"
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -71,15 +72,13 @@ static struct cpuidle_driver tegra_idle_driver = {
+ 
+ #ifdef CONFIG_PM_SLEEP
+ #ifdef CONFIG_SMP
+-static void __iomem *pmc = IO_ADDRESS(TEGRA_PMC_BASE);
+-
+ static int tegra20_reset_sleeping_cpu_1(void)
+ {
+ 	int ret = 0;
+ 
+ 	tegra_pen_lock();
+ 
+-	if (readl(pmc + PMC_SCRATCH41) == CPU_RESETTABLE)
++	if (readb(tegra20_cpu1_resettable_status) == CPU_RESETTABLE)
+ 		tegra20_cpu_shutdown(1);
+ 	else
+ 		ret = -EINVAL;
+diff --git a/arch/arm/mach-tegra/reset-handler.S b/arch/arm/mach-tegra/reset-handler.S
+index 71be4af5e975..e3070fdab80b 100644
+--- a/arch/arm/mach-tegra/reset-handler.S
++++ b/arch/arm/mach-tegra/reset-handler.S
+@@ -169,10 +169,10 @@ after_errata:
+ 	cmp	r6, #TEGRA20
+ 	bne	1f
+ 	/* If not CPU0, don't let CPU0 reset CPU1 now that CPU1 is coming up. */
+-	mov32	r5, TEGRA_PMC_BASE
+-	mov	r0, #0
++	mov32	r5, TEGRA_IRAM_BASE + TEGRA_IRAM_RESET_HANDLER_OFFSET
++	mov	r0, #CPU_NOT_RESETTABLE
+ 	cmp	r10, #0
+-	strne	r0, [r5, #PMC_SCRATCH41]
++	strneb	r0, [r5, #__tegra20_cpu1_resettable_status_offset]
+ 1:
+ #endif
+ 
+@@ -281,6 +281,10 @@ __tegra_cpu_reset_handler_data:
+ 	.rept	TEGRA_RESET_DATA_SIZE
+ 	.long	0
+ 	.endr
++	.globl	__tegra20_cpu1_resettable_status_offset
++	.equ	__tegra20_cpu1_resettable_status_offset, \
++					. - __tegra_cpu_reset_handler_start
++	.byte	0
+ 	.align L1_CACHE_SHIFT
+ 
+ ENTRY(__tegra_cpu_reset_handler_end)
+diff --git a/arch/arm/mach-tegra/reset.h b/arch/arm/mach-tegra/reset.h
+index 76a93434c6ee..29c3dec0126a 100644
+--- a/arch/arm/mach-tegra/reset.h
++++ b/arch/arm/mach-tegra/reset.h
+@@ -35,6 +35,7 @@ extern unsigned long __tegra_cpu_reset_handler_data[TEGRA_RESET_DATA_SIZE];
+ 
+ void __tegra_cpu_reset_handler_start(void);
+ void __tegra_cpu_reset_handler(void);
++void __tegra20_cpu1_resettable_status_offset(void);
+ void __tegra_cpu_reset_handler_end(void);
+ void tegra_secondary_startup(void);
+ 
+@@ -47,6 +48,9 @@ void tegra_secondary_startup(void);
+ 	(IO_ADDRESS(TEGRA_IRAM_BASE + TEGRA_IRAM_RESET_HANDLER_OFFSET + \
+ 	((u32)&__tegra_cpu_reset_handler_data[TEGRA_RESET_MASK_LP2] - \
+ 	 (u32)__tegra_cpu_reset_handler_start)))
++#define tegra20_cpu1_resettable_status \
++	(IO_ADDRESS(TEGRA_IRAM_BASE + TEGRA_IRAM_RESET_HANDLER_OFFSET + \
++	 (u32)__tegra20_cpu1_resettable_status_offset))
+ #endif
+ 
+ #define tegra_cpu_reset_handler_offset \
+diff --git a/arch/arm/mach-tegra/sleep-tegra20.S b/arch/arm/mach-tegra/sleep-tegra20.S
+index be4bc5f853f5..e6b684e14322 100644
+--- a/arch/arm/mach-tegra/sleep-tegra20.S
++++ b/arch/arm/mach-tegra/sleep-tegra20.S
+@@ -97,9 +97,10 @@ ENDPROC(tegra20_hotplug_shutdown)
+ ENTRY(tegra20_cpu_shutdown)
+ 	cmp	r0, #0
+ 	reteq	lr			@ must not be called for CPU 0
+-	mov32	r1, TEGRA_PMC_VIRT + PMC_SCRATCH41
++	mov32	r1, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r2, =__tegra20_cpu1_resettable_status_offset
+ 	mov	r12, #CPU_RESETTABLE
+-	str	r12, [r1]
++	strb	r12, [r1, r2]
+ 
+ 	cpu_to_halt_reg r1, r0
+ 	ldr	r3, =TEGRA_FLOW_CTRL_VIRT
+@@ -182,38 +183,41 @@ ENDPROC(tegra_pen_unlock)
+ /*
+  * tegra20_cpu_clear_resettable(void)
+  *
+- * Called to clear the "resettable soon" flag in PMC_SCRATCH41 when
++ * Called to clear the "resettable soon" flag in IRAM variable when
+  * it is expected that the secondary CPU will be idle soon.
+  */
+ ENTRY(tegra20_cpu_clear_resettable)
+-	mov32	r1, TEGRA_PMC_VIRT + PMC_SCRATCH41
++	mov32	r1, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r2, =__tegra20_cpu1_resettable_status_offset
+ 	mov	r12, #CPU_NOT_RESETTABLE
+-	str	r12, [r1]
++	strb	r12, [r1, r2]
+ 	ret	lr
+ ENDPROC(tegra20_cpu_clear_resettable)
+ 
+ /*
+  * tegra20_cpu_set_resettable_soon(void)
+  *
+- * Called to set the "resettable soon" flag in PMC_SCRATCH41 when
++ * Called to set the "resettable soon" flag in IRAM variable when
+  * it is expected that the secondary CPU will be idle soon.
+  */
+ ENTRY(tegra20_cpu_set_resettable_soon)
+-	mov32	r1, TEGRA_PMC_VIRT + PMC_SCRATCH41
++	mov32	r1, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r2, =__tegra20_cpu1_resettable_status_offset
+ 	mov	r12, #CPU_RESETTABLE_SOON
+-	str	r12, [r1]
++	strb	r12, [r1, r2]
+ 	ret	lr
+ ENDPROC(tegra20_cpu_set_resettable_soon)
+ 
+ /*
+  * tegra20_cpu_is_resettable_soon(void)
+  *
+- * Returns true if the "resettable soon" flag in PMC_SCRATCH41 has been
++ * Returns true if the "resettable soon" flag in IRAM variable has been
+  * set because it is expected that the secondary CPU will be idle soon.
+  */
+ ENTRY(tegra20_cpu_is_resettable_soon)
+-	mov32	r1, TEGRA_PMC_VIRT + PMC_SCRATCH41
+-	ldr	r12, [r1]
++	mov32	r1, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r2, =__tegra20_cpu1_resettable_status_offset
++	ldrb	r12, [r1, r2]
+ 	cmp	r12, #CPU_RESETTABLE_SOON
+ 	moveq	r0, #1
+ 	movne	r0, #0
+@@ -256,9 +260,10 @@ ENTRY(tegra20_sleep_cpu_secondary_finish)
+ 	mov	r0, #TEGRA_FLUSH_CACHE_LOUIS
+ 	bl	tegra_disable_clean_inv_dcache
+ 
+-	mov32	r0, TEGRA_PMC_VIRT + PMC_SCRATCH41
++	mov32	r0, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r4, =__tegra20_cpu1_resettable_status_offset
+ 	mov	r3, #CPU_RESETTABLE
+-	str	r3, [r0]
++	strb	r3, [r0, r4]
+ 
+ 	bl	tegra_cpu_do_idle
+ 
+@@ -274,10 +279,10 @@ ENTRY(tegra20_sleep_cpu_secondary_finish)
+ 
+ 	bl	tegra_pen_lock
+ 
+-	mov32	r3, TEGRA_PMC_VIRT
+-	add	r0, r3, #PMC_SCRATCH41
++	mov32	r0, TEGRA_IRAM_RESET_BASE_VIRT
++	ldr	r4, =__tegra20_cpu1_resettable_status_offset
+ 	mov	r3, #CPU_NOT_RESETTABLE
+-	str	r3, [r0]
++	strb	r3, [r0, r4]
+ 
+ 	bl	tegra_pen_unlock
+ 
+diff --git a/arch/arm/mach-tegra/sleep.h b/arch/arm/mach-tegra/sleep.h
+index 92d46ec1361a..0d59360d891d 100644
+--- a/arch/arm/mach-tegra/sleep.h
++++ b/arch/arm/mach-tegra/sleep.h
+@@ -18,6 +18,7 @@
+ #define __MACH_TEGRA_SLEEP_H
+ 
+ #include "iomap.h"
++#include "irammap.h"
+ 
+ #define TEGRA_ARM_PERIF_VIRT (TEGRA_ARM_PERIF_BASE - IO_CPU_PHYS \
+ 					+ IO_CPU_VIRT)
+@@ -29,6 +30,9 @@
+ 					+ IO_APB_VIRT)
+ #define TEGRA_PMC_VIRT	(TEGRA_PMC_BASE - IO_APB_PHYS + IO_APB_VIRT)
+ 
++#define TEGRA_IRAM_RESET_BASE_VIRT (IO_IRAM_VIRT + \
++				TEGRA_IRAM_RESET_HANDLER_OFFSET)
++
+ /* PMC_SCRATCH37-39 and 41 are used for tegra_pen_lock and idle */
+ #define PMC_SCRATCH37	0x130
+ #define PMC_SCRATCH38	0x134
+diff --git a/arch/mips/include/asm/mach-generic/spaces.h b/arch/mips/include/asm/mach-generic/spaces.h
+index 9488fa5f8866..afc96ecb9004 100644
+--- a/arch/mips/include/asm/mach-generic/spaces.h
++++ b/arch/mips/include/asm/mach-generic/spaces.h
+@@ -94,7 +94,11 @@
+ #endif
+ 
+ #ifndef FIXADDR_TOP
++#ifdef CONFIG_KVM_GUEST
++#define FIXADDR_TOP		((unsigned long)(long)(int)0x7ffe0000)
++#else
+ #define FIXADDR_TOP		((unsigned long)(long)(int)0xfffe0000)
+ #endif
++#endif
+ 
+ #endif /* __ASM_MACH_GENERIC_SPACES_H */
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index f5e7ddab02f7..adf38868b006 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -785,7 +785,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+ 
+ 	/* If nothing is dirty, don't bother messing with page tables. */
+ 	if (is_dirty) {
+-		memslot = &kvm->memslots->memslots[log->slot];
++		memslot = id_to_memslot(kvm->memslots, log->slot);
+ 
+ 		ga = memslot->base_gfn << PAGE_SHIFT;
+ 		ga_end = ga + (memslot->npages << PAGE_SHIFT);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 7c4f6690533a..3cb25fdbc468 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -131,7 +131,16 @@ static void pmao_restore_workaround(bool ebb) { }
+ 
+ static bool regs_use_siar(struct pt_regs *regs)
+ {
+-	return !!regs->result;
++	/*
++	 * When we take a performance monitor exception the regs are setup
++	 * using perf_read_regs() which overloads some fields, in particular
++	 * regs->result to tell us whether to use SIAR.
++	 *
++	 * However if the regs are from another exception, eg. a syscall, then
++	 * they have not been setup using perf_read_regs() and so regs->result
++	 * is something random.
++	 */
++	return ((TRAP(regs) == 0xf00) && regs->result);
+ }
+ 
+ /*
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 9f73c8059022..49b74454d7ee 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -415,7 +415,7 @@ static void *nt_s390_vx_low(void *ptr, __vector128 *vx_regs)
+ 	ptr += len;
+ 	/* Copy lower halves of SIMD registers 0-15 */
+ 	for (i = 0; i < 16; i++) {
+-		memcpy(ptr, &vx_regs[i], 8);
++		memcpy(ptr, &vx_regs[i].u[2], 8);
+ 		ptr += 8;
+ 	}
+ 	return ptr;
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index e7bc2fdb6f67..b2b7ddfe864c 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -1037,7 +1037,7 @@ static int __inject_extcall(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq)
+ 	if (sclp_has_sigpif())
+ 		return __inject_extcall_sigpif(vcpu, src_id);
+ 
+-	if (!test_and_set_bit(IRQ_PEND_EXT_EXTERNAL, &li->pending_irqs))
++	if (test_and_set_bit(IRQ_PEND_EXT_EXTERNAL, &li->pending_irqs))
+ 		return -EBUSY;
+ 	*extcall = irq->u.extcall;
+ 	atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+diff --git a/arch/sparc/kernel/ldc.c b/arch/sparc/kernel/ldc.c
+index 274a9f59d95c..591f119fcd99 100644
+--- a/arch/sparc/kernel/ldc.c
++++ b/arch/sparc/kernel/ldc.c
+@@ -2313,7 +2313,7 @@ void *ldc_alloc_exp_dring(struct ldc_channel *lp, unsigned int len,
+ 	if (len & (8UL - 1))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	buf = kzalloc(len, GFP_KERNEL);
++	buf = kzalloc(len, GFP_ATOMIC);
+ 	if (!buf)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index b7d31ca55187..570c71dd4b63 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -177,7 +177,7 @@ config SBUS
+ 
+ config NEED_DMA_MAP_STATE
+ 	def_bool y
+-	depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG
++	depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB
+ 
+ config NEED_SG_DMA_LENGTH
+ 	def_bool y
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 1c0fb570b5c2..e02589dd215a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -583,7 +583,7 @@ struct kvm_arch {
+ 	struct kvm_pic *vpic;
+ 	struct kvm_ioapic *vioapic;
+ 	struct kvm_pit *vpit;
+-	int vapics_in_nmi_mode;
++	atomic_t vapics_in_nmi_mode;
+ 	struct mutex apic_map_lock;
+ 	struct kvm_apic_map *apic_map;
+ 
+diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
+index 298781d4cfb4..1406ffde3e35 100644
+--- a/arch/x86/kvm/i8254.c
++++ b/arch/x86/kvm/i8254.c
+@@ -305,7 +305,7 @@ static void pit_do_work(struct kthread_work *work)
+ 		 * LVT0 to NMI delivery. Other PIC interrupts are just sent to
+ 		 * VCPU0, and only if its LVT0 is in EXTINT mode.
+ 		 */
+-		if (kvm->arch.vapics_in_nmi_mode > 0)
++		if (atomic_read(&kvm->arch.vapics_in_nmi_mode) > 0)
+ 			kvm_for_each_vcpu(i, vcpu, kvm)
+ 				kvm_apic_nmi_wd_deliver(vcpu);
+ 	}
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 3cb2b58fa26b..8ee4aa7f567d 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1224,10 +1224,10 @@ static void apic_manage_nmi_watchdog(struct kvm_lapic *apic, u32 lvt0_val)
+ 		if (!nmi_wd_enabled) {
+ 			apic_debug("Receive NMI setting on APIC_LVT0 "
+ 				   "for cpu %d\n", apic->vcpu->vcpu_id);
+-			apic->vcpu->kvm->arch.vapics_in_nmi_mode++;
++			atomic_inc(&apic->vcpu->kvm->arch.vapics_in_nmi_mode);
+ 		}
+ 	} else if (nmi_wd_enabled)
+-		apic->vcpu->kvm->arch.vapics_in_nmi_mode--;
++		atomic_dec(&apic->vcpu->kvm->arch.vapics_in_nmi_mode);
+ }
+ 
+ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+@@ -1784,6 +1784,7 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
+ 	apic_update_ppr(apic);
+ 	hrtimer_cancel(&apic->lapic_timer.timer);
+ 	apic_update_lvtt(apic);
++	apic_manage_nmi_watchdog(apic, kvm_apic_get_reg(apic, APIC_LVT0));
+ 	update_divide_count(apic);
+ 	start_apic_timer(apic);
+ 	apic->irr_pending = true;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index a4e62fcfabcb..1b32e2979de9 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -511,8 +511,10 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+-	if (svm->vmcb->control.next_rip != 0)
++	if (svm->vmcb->control.next_rip != 0) {
++		WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));
+ 		svm->next_rip = svm->vmcb->control.next_rip;
++	}
+ 
+ 	if (!svm->next_rip) {
+ 		if (emulate_instruction(vcpu, EMULTYPE_SKIP) !=
+@@ -4310,7 +4312,9 @@ static int svm_check_intercept(struct kvm_vcpu *vcpu,
+ 		break;
+ 	}
+ 
+-	vmcb->control.next_rip  = info->next_rip;
++	/* TODO: Advertise NRIPS to guest hypervisor unconditionally */
++	if (static_cpu_has(X86_FEATURE_NRIPS))
++		vmcb->control.next_rip  = info->next_rip;
+ 	vmcb->control.exit_code = icpt_info.exit_code;
+ 	vmexit = nested_svm_exit_handled(svm);
+ 
+diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
+index d93963340c3c..b33615f1efc5 100644
+--- a/arch/x86/pci/acpi.c
++++ b/arch/x86/pci/acpi.c
+@@ -81,6 +81,17 @@ static const struct dmi_system_id pci_crs_quirks[] __initconst = {
+ 			DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"),
+ 		},
+ 	},
++	/* https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/931368 */
++	/* https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/1033299 */
++	{
++		.callback = set_use_crs,
++		.ident = "Foxconn K8M890-8237A",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Foxconn"),
++			DMI_MATCH(DMI_BOARD_NAME, "K8M890-8237A"),
++			DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"),
++		},
++	},
+ 
+ 	/* Now for the blacklist.. */
+ 
+@@ -121,8 +132,10 @@ void __init pci_acpi_crs_quirks(void)
+ {
+ 	int year;
+ 
+-	if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) && year < 2008)
+-		pci_use_crs = false;
++	if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) && year < 2008) {
++		if (iomem_resource.end <= 0xffffffff)
++			pci_use_crs = false;
++	}
+ 
+ 	dmi_check_system(pci_crs_quirks);
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 872c5772c5d3..2c867a6a1b1a 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -534,7 +534,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate)
+ 
+ 	val |= vid;
+ 
+-	wrmsrl(MSR_IA32_PERF_CTL, val);
++	wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val);
+ }
+ 
+ #define BYT_BCLK_FREQS 5
+diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
+index 59372077ec7c..3442764a5293 100644
+--- a/drivers/cpuidle/cpuidle-powernv.c
++++ b/drivers/cpuidle/cpuidle-powernv.c
+@@ -60,6 +60,8 @@ static int nap_loop(struct cpuidle_device *dev,
+ 	return index;
+ }
+ 
++/* Register for fastsleep only in oneshot mode of broadcast */
++#ifdef CONFIG_TICK_ONESHOT
+ static int fastsleep_loop(struct cpuidle_device *dev,
+ 				struct cpuidle_driver *drv,
+ 				int index)
+@@ -83,7 +85,7 @@ static int fastsleep_loop(struct cpuidle_device *dev,
+ 
+ 	return index;
+ }
+-
++#endif
+ /*
+  * States for dedicated partition case.
+  */
+@@ -209,7 +211,14 @@ static int powernv_add_idle_states(void)
+ 			powernv_states[nr_idle_states].flags = 0;
+ 			powernv_states[nr_idle_states].target_residency = 100;
+ 			powernv_states[nr_idle_states].enter = &nap_loop;
+-		} else if (flags[i] & OPAL_PM_SLEEP_ENABLED ||
++		}
++
++		/*
++		 * All cpuidle states with CPUIDLE_FLAG_TIMER_STOP set must come
++		 * within this config dependency check.
++		 */
++#ifdef CONFIG_TICK_ONESHOT
++		if (flags[i] & OPAL_PM_SLEEP_ENABLED ||
+ 			flags[i] & OPAL_PM_SLEEP_ENABLED_ER1) {
+ 			/* Add FASTSLEEP state */
+ 			strcpy(powernv_states[nr_idle_states].name, "FastSleep");
+@@ -218,7 +227,7 @@ static int powernv_add_idle_states(void)
+ 			powernv_states[nr_idle_states].target_residency = 300000;
+ 			powernv_states[nr_idle_states].enter = &fastsleep_loop;
+ 		}
+-
++#endif
+ 		powernv_states[nr_idle_states].exit_latency =
+ 				((unsigned int)latency_ns[i]) / 1000;
+ 
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index ebbae8d3ce0d..9f7333abb821 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -927,7 +927,8 @@ static int sg_to_link_tbl(struct scatterlist *sg, int sg_count,
+ 		sg_count--;
+ 		link_tbl_ptr--;
+ 	}
+-	be16_add_cpu(&link_tbl_ptr->len, cryptlen);
++	link_tbl_ptr->len = cpu_to_be16(be16_to_cpu(link_tbl_ptr->len)
++					+ cryptlen);
+ 
+ 	/* tag end of link table */
+ 	link_tbl_ptr->j_extent = DESC_PTR_LNKTBL_RETURN;
+@@ -2563,6 +2564,7 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
+ 		break;
+ 	default:
+ 		dev_err(dev, "unknown algorithm type %d\n", t_alg->algt.type);
++		kfree(t_alg);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 48882c126245..13cfbf470925 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -1870,9 +1870,15 @@ static void free_pt_##LVL (unsigned long __pt)			\
+ 	pt = (u64 *)__pt;					\
+ 								\
+ 	for (i = 0; i < 512; ++i) {				\
++		/* PTE present? */				\
+ 		if (!IOMMU_PTE_PRESENT(pt[i]))			\
+ 			continue;				\
+ 								\
++		/* Large PTE? */				\
++		if (PM_PTE_LEVEL(pt[i]) == 0 ||			\
++		    PM_PTE_LEVEL(pt[i]) == 7)			\
++			continue;				\
++								\
+ 		p = (unsigned long)IOMMU_PTE_PAGE(pt[i]);	\
+ 		FN(p);						\
+ 	}							\
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index bd6252b01510..2d1b203280d0 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -1533,7 +1533,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
+ 		return -ENODEV;
+ 	}
+ 
+-	if ((id & ID0_S1TS) && ((smmu->version == 1) || (id & ID0_ATOSNS))) {
++	if ((id & ID0_S1TS) && ((smmu->version == 1) || !(id & ID0_ATOSNS))) {
+ 		smmu->features |= ARM_SMMU_FEAT_TRANS_OPS;
+ 		dev_notice(smmu->dev, "\taddress translation ops\n");
+ 	}
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 0ad412a4876f..d3a7bff4d230 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -846,7 +846,7 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ 			int sg_cnt;
+ 
+ 			sg_cnt = sdhci_pre_dma_transfer(host, data, NULL);
+-			if (sg_cnt == 0) {
++			if (sg_cnt <= 0) {
+ 				/*
+ 				 * This only happens when someone fed
+ 				 * us an invalid request.
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index d81fc6bd4759..5c92fb71b37e 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -263,7 +263,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+ 	int ret;
+ 
+ 	/* Try to obtain pages, decreasing order if necessary */
+-	gfp |= __GFP_COLD | __GFP_COMP;
++	gfp |= __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
+ 	while (order >= 0) {
+ 		pages = alloc_pages(gfp, order);
+ 		if (pages)
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index 1ec635f54994..196474fc54c0 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -9323,7 +9323,8 @@ unload_error:
+ 	 * function stop ramrod is sent, since as part of this ramrod FW access
+ 	 * PTP registers.
+ 	 */
+-	bnx2x_stop_ptp(bp);
++	if (bp->flags & PTP_SUPPORTED)
++		bnx2x_stop_ptp(bp);
+ 
+ 	/* Disable HW interrupts, NAPI */
+ 	bnx2x_netif_stop(bp, 1);
+diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
+index d20fc8ed11f1..c3657652b631 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
++++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
+@@ -540,8 +540,8 @@ static int igb_ptp_feature_enable_i210(struct ptp_clock_info *ptp,
+ 			igb->perout[i].start.tv_nsec = rq->perout.start.nsec;
+ 			igb->perout[i].period.tv_sec = ts.tv_sec;
+ 			igb->perout[i].period.tv_nsec = ts.tv_nsec;
+-			wr32(trgttiml, rq->perout.start.sec);
+-			wr32(trgttimh, rq->perout.start.nsec);
++			wr32(trgttimh, rq->perout.start.sec);
++			wr32(trgttiml, rq->perout.start.nsec);
+ 			tsauxc |= tsauxc_mask;
+ 			tsim |= tsim_mask;
+ 		} else {
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 2db653225a0e..87c7f52c3419 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -304,6 +304,7 @@ struct mvneta_port {
+ 	unsigned int link;
+ 	unsigned int duplex;
+ 	unsigned int speed;
++	unsigned int tx_csum_limit;
+ };
+ 
+ /* The mvneta_tx_desc and mvneta_rx_desc structures describe the
+@@ -2441,8 +2442,10 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
+ 
+ 	dev->mtu = mtu;
+ 
+-	if (!netif_running(dev))
++	if (!netif_running(dev)) {
++		netdev_update_features(dev);
+ 		return 0;
++	}
+ 
+ 	/* The interface is running, so we have to force a
+ 	 * reallocation of the queues
+@@ -2471,9 +2474,26 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
+ 	mvneta_start_dev(pp);
+ 	mvneta_port_up(pp);
+ 
++	netdev_update_features(dev);
++
+ 	return 0;
+ }
+ 
++static netdev_features_t mvneta_fix_features(struct net_device *dev,
++					     netdev_features_t features)
++{
++	struct mvneta_port *pp = netdev_priv(dev);
++
++	if (pp->tx_csum_limit && dev->mtu > pp->tx_csum_limit) {
++		features &= ~(NETIF_F_IP_CSUM | NETIF_F_TSO);
++		netdev_info(dev,
++			    "Disable IP checksum for MTU greater than %dB\n",
++			    pp->tx_csum_limit);
++	}
++
++	return features;
++}
++
+ /* Get mac address */
+ static void mvneta_get_mac_addr(struct mvneta_port *pp, unsigned char *addr)
+ {
+@@ -2785,6 +2805,7 @@ static const struct net_device_ops mvneta_netdev_ops = {
+ 	.ndo_set_rx_mode     = mvneta_set_rx_mode,
+ 	.ndo_set_mac_address = mvneta_set_mac_addr,
+ 	.ndo_change_mtu      = mvneta_change_mtu,
++	.ndo_fix_features    = mvneta_fix_features,
+ 	.ndo_get_stats64     = mvneta_get_stats64,
+ 	.ndo_do_ioctl        = mvneta_ioctl,
+ };
+@@ -3023,6 +3044,9 @@ static int mvneta_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	if (of_device_is_compatible(dn, "marvell,armada-370-neta"))
++		pp->tx_csum_limit = 1600;
++
+ 	pp->tx_ring_size = MVNETA_MAX_TXD;
+ 	pp->rx_ring_size = MVNETA_MAX_RXD;
+ 
+@@ -3095,6 +3119,7 @@ static int mvneta_remove(struct platform_device *pdev)
+ 
+ static const struct of_device_id mvneta_match[] = {
+ 	{ .compatible = "marvell,armada-370-neta" },
++	{ .compatible = "marvell,armada-xp-neta" },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, mvneta_match);
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 2f1324bed7b3..f30c32241580 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -1971,10 +1971,6 @@ void mlx4_en_free_resources(struct mlx4_en_priv *priv)
+ 			mlx4_en_destroy_cq(priv, &priv->rx_cq[i]);
+ 	}
+ 
+-	if (priv->base_tx_qpn) {
+-		mlx4_qp_release_range(priv->mdev->dev, priv->base_tx_qpn, priv->tx_ring_num);
+-		priv->base_tx_qpn = 0;
+-	}
+ }
+ 
+ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index 05ec5e151ded..3478c87f10e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -723,7 +723,7 @@ static int get_fixed_ipv6_csum(__wsum hw_checksum, struct sk_buff *skb,
+ }
+ #endif
+ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va,
+-		      int hwtstamp_rx_filter)
++		      netdev_features_t dev_features)
+ {
+ 	__wsum hw_checksum = 0;
+ 
+@@ -731,14 +731,8 @@ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va,
+ 
+ 	hw_checksum = csum_unfold((__force __sum16)cqe->checksum);
+ 
+-	if (((struct ethhdr *)va)->h_proto == htons(ETH_P_8021Q) &&
+-	    hwtstamp_rx_filter != HWTSTAMP_FILTER_NONE) {
+-		/* next protocol non IPv4 or IPv6 */
+-		if (((struct vlan_hdr *)hdr)->h_vlan_encapsulated_proto
+-		    != htons(ETH_P_IP) &&
+-		    ((struct vlan_hdr *)hdr)->h_vlan_encapsulated_proto
+-		    != htons(ETH_P_IPV6))
+-			return -1;
++	if (cqe->vlan_my_qpn & cpu_to_be32(MLX4_CQE_VLAN_PRESENT_MASK) &&
++	    !(dev_features & NETIF_F_HW_VLAN_CTAG_RX)) {
+ 		hw_checksum = get_fixed_vlan_csum(hw_checksum, hdr);
+ 		hdr += sizeof(struct vlan_hdr);
+ 	}
+@@ -901,7 +895,8 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
+ 
+ 			if (ip_summed == CHECKSUM_COMPLETE) {
+ 				void *va = skb_frag_address(skb_shinfo(gro_skb)->frags);
+-				if (check_csum(cqe, gro_skb, va, ring->hwtstamp_rx_filter)) {
++				if (check_csum(cqe, gro_skb, va,
++					       dev->features)) {
+ 					ip_summed = CHECKSUM_NONE;
+ 					ring->csum_none++;
+ 					ring->csum_complete--;
+@@ -956,7 +951,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
+ 		}
+ 
+ 		if (ip_summed == CHECKSUM_COMPLETE) {
+-			if (check_csum(cqe, skb, skb->data, ring->hwtstamp_rx_filter)) {
++			if (check_csum(cqe, skb, skb->data, dev->features)) {
+ 				ip_summed = CHECKSUM_NONE;
+ 				ring->csum_complete--;
+ 				ring->csum_none++;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 8c234ec1d8aa..35dd887447d6 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -66,6 +66,7 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
+ 	ring->size = size;
+ 	ring->size_mask = size - 1;
+ 	ring->stride = stride;
++	ring->full_size = ring->size - HEADROOM - MAX_DESC_TXBBS;
+ 
+ 	tmp = size * sizeof(struct mlx4_en_tx_info);
+ 	ring->tx_info = kmalloc_node(tmp, GFP_KERNEL | __GFP_NOWARN, node);
+@@ -180,6 +181,7 @@ void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
+ 		mlx4_bf_free(mdev->dev, &ring->bf);
+ 	mlx4_qp_remove(mdev->dev, &ring->qp);
+ 	mlx4_qp_free(mdev->dev, &ring->qp);
++	mlx4_qp_release_range(priv->mdev->dev, ring->qpn, 1);
+ 	mlx4_en_unmap_buffer(&ring->wqres.buf);
+ 	mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
+ 	kfree(ring->bounce_buf);
+@@ -231,6 +233,11 @@ void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
+ 		       MLX4_QP_STATE_RST, NULL, 0, 0, &ring->qp);
+ }
+ 
++static inline bool mlx4_en_is_tx_ring_full(struct mlx4_en_tx_ring *ring)
++{
++	return ring->prod - ring->cons > ring->full_size;
++}
++
+ static void mlx4_en_stamp_wqe(struct mlx4_en_priv *priv,
+ 			      struct mlx4_en_tx_ring *ring, int index,
+ 			      u8 owner)
+@@ -473,11 +480,10 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
+ 
+ 	netdev_tx_completed_queue(ring->tx_queue, packets, bytes);
+ 
+-	/*
+-	 * Wakeup Tx queue if this stopped, and at least 1 packet
+-	 * was completed
++	/* Wakeup Tx queue if this stopped, and ring is not full.
+ 	 */
+-	if (netif_tx_queue_stopped(ring->tx_queue) && txbbs_skipped > 0) {
++	if (netif_tx_queue_stopped(ring->tx_queue) &&
++	    !mlx4_en_is_tx_ring_full(ring)) {
+ 		netif_tx_wake_queue(ring->tx_queue);
+ 		ring->wake_queue++;
+ 	}
+@@ -921,8 +927,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	skb_tx_timestamp(skb);
+ 
+ 	/* Check available TXBBs And 2K spare for prefetch */
+-	stop_queue = (int)(ring->prod - ring_cons) >
+-		      ring->size - HEADROOM - MAX_DESC_TXBBS;
++	stop_queue = mlx4_en_is_tx_ring_full(ring);
+ 	if (unlikely(stop_queue)) {
+ 		netif_tx_stop_queue(ring->tx_queue);
+ 		ring->queue_stopped++;
+@@ -991,8 +996,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		smp_rmb();
+ 
+ 		ring_cons = ACCESS_ONCE(ring->cons);
+-		if (unlikely(((int)(ring->prod - ring_cons)) <=
+-			     ring->size - HEADROOM - MAX_DESC_TXBBS)) {
++		if (unlikely(!mlx4_en_is_tx_ring_full(ring))) {
+ 			netif_tx_wake_queue(ring->tx_queue);
+ 			ring->wake_queue++;
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/intf.c b/drivers/net/ethernet/mellanox/mlx4/intf.c
+index 6fce58718837..0d80aed59043 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/intf.c
++++ b/drivers/net/ethernet/mellanox/mlx4/intf.c
+@@ -93,8 +93,14 @@ int mlx4_register_interface(struct mlx4_interface *intf)
+ 	mutex_lock(&intf_mutex);
+ 
+ 	list_add_tail(&intf->list, &intf_list);
+-	list_for_each_entry(priv, &dev_list, dev_list)
++	list_for_each_entry(priv, &dev_list, dev_list) {
++		if (mlx4_is_mfunc(&priv->dev) && (intf->flags & MLX4_INTFF_BONDING)) {
++			mlx4_dbg(&priv->dev,
++				 "SRIOV, disabling HA mode for intf proto %d\n", intf->protocol);
++			intf->flags &= ~MLX4_INTFF_BONDING;
++		}
+ 		mlx4_add_device(intf, priv);
++	}
+ 
+ 	mutex_unlock(&intf_mutex);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index 8687c8d54227..0bf0fdd5d2b5 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -280,6 +280,7 @@ struct mlx4_en_tx_ring {
+ 	u32			size; /* number of TXBBs */
+ 	u32			size_mask;
+ 	u16			stride;
++	u32			full_size;
+ 	u16			cqn;	/* index of port CQ associated with this ring */
+ 	u32			buf_size;
+ 	__be32			doorbell_qpn;
+@@ -601,7 +602,6 @@ struct mlx4_en_priv {
+ 	int vids[128];
+ 	bool wol;
+ 	struct device *ddev;
+-	int base_tx_qpn;
+ 	struct hlist_head mac_hash[MLX4_EN_MAC_HASH_SIZE];
+ 	struct hwtstamp_config hwtstamp_config;
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index bdfe51fc3a65..d551df62e61a 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -796,10 +796,11 @@ static int genphy_config_advert(struct phy_device *phydev)
+ 	if (phydev->supported & (SUPPORTED_1000baseT_Half |
+ 				 SUPPORTED_1000baseT_Full)) {
+ 		adv |= ethtool_adv_to_mii_ctrl1000_t(advertise);
+-		if (adv != oldadv)
+-			changed = 1;
+ 	}
+ 
++	if (adv != oldadv)
++		changed = 1;
++
+ 	err = phy_write(phydev, MII_CTRL1000, adv);
+ 	if (err < 0)
+ 		return err;
+diff --git a/drivers/s390/kvm/virtio_ccw.c b/drivers/s390/kvm/virtio_ccw.c
+index 71d7802aa8b4..57171173739f 100644
+--- a/drivers/s390/kvm/virtio_ccw.c
++++ b/drivers/s390/kvm/virtio_ccw.c
+@@ -65,6 +65,7 @@ struct virtio_ccw_device {
+ 	bool is_thinint;
+ 	bool going_away;
+ 	bool device_lost;
++	unsigned int config_ready;
+ 	void *airq_info;
+ };
+ 
+@@ -833,8 +834,11 @@ static void virtio_ccw_get_config(struct virtio_device *vdev,
+ 	if (ret)
+ 		goto out_free;
+ 
+-	memcpy(vcdev->config, config_area, sizeof(vcdev->config));
+-	memcpy(buf, &vcdev->config[offset], len);
++	memcpy(vcdev->config, config_area, offset + len);
++	if (buf)
++		memcpy(buf, &vcdev->config[offset], len);
++	if (vcdev->config_ready < offset + len)
++		vcdev->config_ready = offset + len;
+ 
+ out_free:
+ 	kfree(config_area);
+@@ -857,6 +861,9 @@ static void virtio_ccw_set_config(struct virtio_device *vdev,
+ 	if (!config_area)
+ 		goto out_free;
+ 
++	/* Make sure we don't overwrite fields. */
++	if (vcdev->config_ready < offset)
++		virtio_ccw_get_config(vdev, 0, NULL, offset);
+ 	memcpy(&vcdev->config[offset], buf, len);
+ 	/* Write the config area to the host. */
+ 	memcpy(config_area, vcdev->config, sizeof(vcdev->config));
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 175c9956cbe3..ce3b40734a86 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -845,7 +845,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 				ret = ep->status;
+ 				if (io_data->read && ret > 0) {
+ 					ret = copy_to_iter(data, ret, &io_data->data);
+-					if (unlikely(iov_iter_count(&io_data->data)))
++					if (!ret)
+ 						ret = -EFAULT;
+ 				}
+ 			}
+@@ -3433,6 +3433,7 @@ done:
+ static void ffs_closed(struct ffs_data *ffs)
+ {
+ 	struct ffs_dev *ffs_obj;
++	struct f_fs_opts *opts;
+ 
+ 	ENTER();
+ 	ffs_dev_lock();
+@@ -3446,8 +3447,13 @@ static void ffs_closed(struct ffs_data *ffs)
+ 	if (ffs_obj->ffs_closed_callback)
+ 		ffs_obj->ffs_closed_callback(ffs);
+ 
+-	if (!ffs_obj->opts || ffs_obj->opts->no_configfs
+-	    || !ffs_obj->opts->func_inst.group.cg_item.ci_parent)
++	if (ffs_obj->opts)
++		opts = ffs_obj->opts;
++	else
++		goto done;
++
++	if (opts->no_configfs || !opts->func_inst.group.cg_item.ci_parent
++	    || !atomic_read(&opts->func_inst.group.cg_item.ci_kref.refcount))
+ 		goto done;
+ 
+ 	unregister_gadget_item(ffs_obj->opts->
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 922f23ef6041..b05c557d0422 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -2896,17 +2896,6 @@ restart:
+ 				vfsmnt = &mnt->mnt;
+ 				continue;
+ 			}
+-			/*
+-			 * Filesystems needing to implement special "root names"
+-			 * should do so with ->d_dname()
+-			 */
+-			if (IS_ROOT(dentry) &&
+-			   (dentry->d_name.len != 1 ||
+-			    dentry->d_name.name[0] != '/')) {
+-				WARN(1, "Root dentry has weird name <%.*s>\n",
+-				     (int) dentry->d_name.len,
+-				     dentry->d_name.name);
+-			}
+ 			if (!error)
+ 				error = is_mounted(vfsmnt) ? 1 : 2;
+ 			break;
+diff --git a/fs/inode.c b/fs/inode.c
+index f00b16f45507..c60671d556bc 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1693,8 +1693,8 @@ int file_remove_suid(struct file *file)
+ 		error = security_inode_killpriv(dentry);
+ 	if (!error && killsuid)
+ 		error = __remove_suid(dentry, killsuid);
+-	if (!error && (inode->i_sb->s_flags & MS_NOSEC))
+-		inode->i_flags |= S_NOSEC;
++	if (!error)
++		inode_has_no_xattr(inode);
+ 
+ 	return error;
+ }
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 13b0f7bfc096..f07c7691ace1 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -3187,11 +3187,15 @@ bool fs_fully_visible(struct file_system_type *type)
+ 		if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
+ 			continue;
+ 
+-		/* This mount is not fully visible if there are any child mounts
+-		 * that cover anything except for empty directories.
++		/* This mount is not fully visible if there are any
++		 * locked child mounts that cover anything except for
++		 * empty directories.
+ 		 */
+ 		list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
+ 			struct inode *inode = child->mnt_mountpoint->d_inode;
++			/* Only worry about locked mounts */
++			if (!(mnt->mnt.mnt_flags & MNT_LOCKED))
++				continue;
+ 			if (!S_ISDIR(inode->i_mode))
+ 				goto next;
+ 			if (inode->i_nlink > 2)
+diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
+index 2c1036080d52..a7106eda5024 100644
+--- a/fs/ufs/balloc.c
++++ b/fs/ufs/balloc.c
+@@ -51,8 +51,8 @@ void ufs_free_fragments(struct inode *inode, u64 fragment, unsigned count)
+ 	
+ 	if (ufs_fragnum(fragment) + count > uspi->s_fpg)
+ 		ufs_error (sb, "ufs_free_fragments", "internal error");
+-	
+-	lock_ufs(sb);
++
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 	
+ 	cgno = ufs_dtog(uspi, fragment);
+ 	bit = ufs_dtogd(uspi, fragment);
+@@ -115,13 +115,13 @@ void ufs_free_fragments(struct inode *inode, u64 fragment, unsigned count)
+ 	if (sb->s_flags & MS_SYNCHRONOUS)
+ 		ubh_sync_block(UCPI_UBH(ucpi));
+ 	ufs_mark_sb_dirty(sb);
+-	
+-	unlock_ufs(sb);
++
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	UFSD("EXIT\n");
+ 	return;
+ 
+ failed:
+-	unlock_ufs(sb);
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	UFSD("EXIT (FAILED)\n");
+ 	return;
+ }
+@@ -151,7 +151,7 @@ void ufs_free_blocks(struct inode *inode, u64 fragment, unsigned count)
+ 		goto failed;
+ 	}
+ 
+-	lock_ufs(sb);
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 	
+ do_more:
+ 	overflow = 0;
+@@ -211,12 +211,12 @@ do_more:
+ 	}
+ 
+ 	ufs_mark_sb_dirty(sb);
+-	unlock_ufs(sb);
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	UFSD("EXIT\n");
+ 	return;
+ 
+ failed_unlock:
+-	unlock_ufs(sb);
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ failed:
+ 	UFSD("EXIT (FAILED)\n");
+ 	return;
+@@ -357,7 +357,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 	usb1 = ubh_get_usb_first(uspi);
+ 	*err = -ENOSPC;
+ 
+-	lock_ufs(sb);
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 	tmp = ufs_data_ptr_to_cpu(sb, p);
+ 
+ 	if (count + ufs_fragnum(fragment) > uspi->s_fpb) {
+@@ -378,19 +378,19 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 				  "fragment %llu, tmp %llu\n",
+ 				  (unsigned long long)fragment,
+ 				  (unsigned long long)tmp);
+-			unlock_ufs(sb);
++			mutex_unlock(&UFS_SB(sb)->s_lock);
+ 			return INVBLOCK;
+ 		}
+ 		if (fragment < UFS_I(inode)->i_lastfrag) {
+ 			UFSD("EXIT (ALREADY ALLOCATED)\n");
+-			unlock_ufs(sb);
++			mutex_unlock(&UFS_SB(sb)->s_lock);
+ 			return 0;
+ 		}
+ 	}
+ 	else {
+ 		if (tmp) {
+ 			UFSD("EXIT (ALREADY ALLOCATED)\n");
+-			unlock_ufs(sb);
++			mutex_unlock(&UFS_SB(sb)->s_lock);
+ 			return 0;
+ 		}
+ 	}
+@@ -399,7 +399,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 	 * There is not enough space for user on the device
+ 	 */
+ 	if (!capable(CAP_SYS_RESOURCE) && ufs_freespace(uspi, UFS_MINFREE) <= 0) {
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		UFSD("EXIT (FAILED)\n");
+ 		return 0;
+ 	}
+@@ -424,7 +424,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 			ufs_clear_frags(inode, result + oldcount,
+ 					newcount - oldcount, locked_page != NULL);
+ 		}
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		UFSD("EXIT, result %llu\n", (unsigned long long)result);
+ 		return result;
+ 	}
+@@ -439,7 +439,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 						fragment + count);
+ 		ufs_clear_frags(inode, result + oldcount, newcount - oldcount,
+ 				locked_page != NULL);
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		UFSD("EXIT, result %llu\n", (unsigned long long)result);
+ 		return result;
+ 	}
+@@ -477,7 +477,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 		*err = 0;
+ 		UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag,
+ 						fragment + count);
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		if (newcount < request)
+ 			ufs_free_fragments (inode, result + newcount, request - newcount);
+ 		ufs_free_fragments (inode, tmp, oldcount);
+@@ -485,7 +485,7 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
+ 		return result;
+ 	}
+ 
+-	unlock_ufs(sb);
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	UFSD("EXIT (FAILED)\n");
+ 	return 0;
+ }		
+diff --git a/fs/ufs/ialloc.c b/fs/ufs/ialloc.c
+index 7caa01652888..fd0203ce1f7f 100644
+--- a/fs/ufs/ialloc.c
++++ b/fs/ufs/ialloc.c
+@@ -69,11 +69,11 @@ void ufs_free_inode (struct inode * inode)
+ 	
+ 	ino = inode->i_ino;
+ 
+-	lock_ufs(sb);
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 
+ 	if (!((ino > 1) && (ino < (uspi->s_ncg * uspi->s_ipg )))) {
+ 		ufs_warning(sb, "ufs_free_inode", "reserved inode or nonexistent inode %u\n", ino);
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		return;
+ 	}
+ 	
+@@ -81,7 +81,7 @@ void ufs_free_inode (struct inode * inode)
+ 	bit = ufs_inotocgoff (ino);
+ 	ucpi = ufs_load_cylinder (sb, cg);
+ 	if (!ucpi) {
+-		unlock_ufs(sb);
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		return;
+ 	}
+ 	ucg = ubh_get_ucg(UCPI_UBH(ucpi));
+@@ -115,7 +115,7 @@ void ufs_free_inode (struct inode * inode)
+ 		ubh_sync_block(UCPI_UBH(ucpi));
+ 	
+ 	ufs_mark_sb_dirty(sb);
+-	unlock_ufs(sb);
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	UFSD("EXIT\n");
+ }
+ 
+@@ -193,7 +193,7 @@ struct inode *ufs_new_inode(struct inode *dir, umode_t mode)
+ 	sbi = UFS_SB(sb);
+ 	uspi = sbi->s_uspi;
+ 
+-	lock_ufs(sb);
++	mutex_lock(&sbi->s_lock);
+ 
+ 	/*
+ 	 * Try to place the inode in its parent directory
+@@ -331,21 +331,21 @@ cg_found:
+ 			sync_dirty_buffer(bh);
+ 		brelse(bh);
+ 	}
+-	unlock_ufs(sb);
++	mutex_unlock(&sbi->s_lock);
+ 
+ 	UFSD("allocating inode %lu\n", inode->i_ino);
+ 	UFSD("EXIT\n");
+ 	return inode;
+ 
+ fail_remove_inode:
+-	unlock_ufs(sb);
++	mutex_unlock(&sbi->s_lock);
+ 	clear_nlink(inode);
+ 	unlock_new_inode(inode);
+ 	iput(inode);
+ 	UFSD("EXIT (FAILED): err %d\n", err);
+ 	return ERR_PTR(err);
+ failed:
+-	unlock_ufs(sb);
++	mutex_unlock(&sbi->s_lock);
+ 	make_bad_inode(inode);
+ 	iput (inode);
+ 	UFSD("EXIT (FAILED): err %d\n", err);
+diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c
+index be7d42c7d938..2d93ab07da8a 100644
+--- a/fs/ufs/inode.c
++++ b/fs/ufs/inode.c
+@@ -902,6 +902,9 @@ void ufs_evict_inode(struct inode * inode)
+ 	invalidate_inode_buffers(inode);
+ 	clear_inode(inode);
+ 
+-	if (want_delete)
++	if (want_delete) {
++		lock_ufs(inode->i_sb);
+ 		ufs_free_inode(inode);
++		unlock_ufs(inode->i_sb);
++	}
+ }
+diff --git a/fs/ufs/namei.c b/fs/ufs/namei.c
+index fd65deb4b5f0..e8ee2985b068 100644
+--- a/fs/ufs/namei.c
++++ b/fs/ufs/namei.c
+@@ -128,12 +128,12 @@ static int ufs_symlink (struct inode * dir, struct dentry * dentry,
+ 	if (l > sb->s_blocksize)
+ 		goto out_notlocked;
+ 
++	lock_ufs(dir->i_sb);
+ 	inode = ufs_new_inode(dir, S_IFLNK | S_IRWXUGO);
+ 	err = PTR_ERR(inode);
+ 	if (IS_ERR(inode))
+-		goto out_notlocked;
++		goto out;
+ 
+-	lock_ufs(dir->i_sb);
+ 	if (l > UFS_SB(sb)->s_uspi->s_maxsymlinklen) {
+ 		/* slow symlink */
+ 		inode->i_op = &ufs_symlink_inode_operations;
+@@ -174,7 +174,12 @@ static int ufs_link (struct dentry * old_dentry, struct inode * dir,
+ 	inode_inc_link_count(inode);
+ 	ihold(inode);
+ 
+-	error = ufs_add_nondir(dentry, inode);
++	error = ufs_add_link(dentry, inode);
++	if (error) {
++		inode_dec_link_count(inode);
++		iput(inode);
++	} else
++		d_instantiate(dentry, inode);
+ 	unlock_ufs(dir->i_sb);
+ 	return error;
+ }
+@@ -184,9 +189,13 @@ static int ufs_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode)
+ 	struct inode * inode;
+ 	int err;
+ 
++	lock_ufs(dir->i_sb);
++	inode_inc_link_count(dir);
++
+ 	inode = ufs_new_inode(dir, S_IFDIR|mode);
++	err = PTR_ERR(inode);
+ 	if (IS_ERR(inode))
+-		return PTR_ERR(inode);
++		goto out_dir;
+ 
+ 	inode->i_op = &ufs_dir_inode_operations;
+ 	inode->i_fop = &ufs_dir_operations;
+@@ -194,9 +203,6 @@ static int ufs_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode)
+ 
+ 	inode_inc_link_count(inode);
+ 
+-	lock_ufs(dir->i_sb);
+-	inode_inc_link_count(dir);
+-
+ 	err = ufs_make_empty(inode, dir);
+ 	if (err)
+ 		goto out_fail;
+@@ -206,6 +212,7 @@ static int ufs_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode)
+ 		goto out_fail;
+ 	unlock_ufs(dir->i_sb);
+ 
++	unlock_new_inode(inode);
+ 	d_instantiate(dentry, inode);
+ out:
+ 	return err;
+@@ -215,6 +222,7 @@ out_fail:
+ 	inode_dec_link_count(inode);
+ 	unlock_new_inode(inode);
+ 	iput (inode);
++out_dir:
+ 	inode_dec_link_count(dir);
+ 	unlock_ufs(dir->i_sb);
+ 	goto out;
+diff --git a/fs/ufs/super.c b/fs/ufs/super.c
+index 8092d3759a5e..eb1679176cbc 100644
+--- a/fs/ufs/super.c
++++ b/fs/ufs/super.c
+@@ -694,6 +694,7 @@ static int ufs_sync_fs(struct super_block *sb, int wait)
+ 	unsigned flags;
+ 
+ 	lock_ufs(sb);
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 
+ 	UFSD("ENTER\n");
+ 
+@@ -711,6 +712,7 @@ static int ufs_sync_fs(struct super_block *sb, int wait)
+ 	ufs_put_cstotal(sb);
+ 
+ 	UFSD("EXIT\n");
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	unlock_ufs(sb);
+ 
+ 	return 0;
+@@ -799,6 +801,7 @@ static int ufs_fill_super(struct super_block *sb, void *data, int silent)
+ 	UFSD("flag %u\n", (int)(sb->s_flags & MS_RDONLY));
+ 	
+ 	mutex_init(&sbi->mutex);
++	mutex_init(&sbi->s_lock);
+ 	spin_lock_init(&sbi->work_lock);
+ 	INIT_DELAYED_WORK(&sbi->sync_work, delayed_sync_fs);
+ 	/*
+@@ -1277,6 +1280,7 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ 
+ 	sync_filesystem(sb);
+ 	lock_ufs(sb);
++	mutex_lock(&UFS_SB(sb)->s_lock);
+ 	uspi = UFS_SB(sb)->s_uspi;
+ 	flags = UFS_SB(sb)->s_flags;
+ 	usb1 = ubh_get_usb_first(uspi);
+@@ -1290,6 +1294,7 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ 	new_mount_opt = 0;
+ 	ufs_set_opt (new_mount_opt, ONERROR_LOCK);
+ 	if (!ufs_parse_options (data, &new_mount_opt)) {
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		unlock_ufs(sb);
+ 		return -EINVAL;
+ 	}
+@@ -1297,12 +1302,14 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ 		new_mount_opt |= ufstype;
+ 	} else if ((new_mount_opt & UFS_MOUNT_UFSTYPE) != ufstype) {
+ 		pr_err("ufstype can't be changed during remount\n");
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		unlock_ufs(sb);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if ((*mount_flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) {
+ 		UFS_SB(sb)->s_mount_opt = new_mount_opt;
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		unlock_ufs(sb);
+ 		return 0;
+ 	}
+@@ -1326,6 +1333,7 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ 	 */
+ #ifndef CONFIG_UFS_FS_WRITE
+ 		pr_err("ufs was compiled with read-only support, can't be mounted as read-write\n");
++		mutex_unlock(&UFS_SB(sb)->s_lock);
+ 		unlock_ufs(sb);
+ 		return -EINVAL;
+ #else
+@@ -1335,11 +1343,13 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ 		    ufstype != UFS_MOUNT_UFSTYPE_SUNx86 &&
+ 		    ufstype != UFS_MOUNT_UFSTYPE_UFS2) {
+ 			pr_err("this ufstype is read-only supported\n");
++			mutex_unlock(&UFS_SB(sb)->s_lock);
+ 			unlock_ufs(sb);
+ 			return -EINVAL;
+ 		}
+ 		if (!ufs_read_cylinder_structures(sb)) {
+ 			pr_err("failed during remounting\n");
++			mutex_unlock(&UFS_SB(sb)->s_lock);
+ 			unlock_ufs(sb);
+ 			return -EPERM;
+ 		}
+@@ -1347,6 +1357,7 @@ static int ufs_remount (struct super_block *sb, int *mount_flags, char *data)
+ #endif
+ 	}
+ 	UFS_SB(sb)->s_mount_opt = new_mount_opt;
++	mutex_unlock(&UFS_SB(sb)->s_lock);
+ 	unlock_ufs(sb);
+ 	return 0;
+ }
+diff --git a/fs/ufs/ufs.h b/fs/ufs/ufs.h
+index 2a07396d5f9e..cf6368d42d4a 100644
+--- a/fs/ufs/ufs.h
++++ b/fs/ufs/ufs.h
+@@ -30,6 +30,7 @@ struct ufs_sb_info {
+ 	int work_queued; /* non-zero if the delayed work is queued */
+ 	struct delayed_work sync_work; /* FS sync delayed work */
+ 	spinlock_t work_lock; /* protects sync_work and work_queued */
++	struct mutex s_lock;
+ };
+ 
+ struct ufs_inode_info {
+diff --git a/include/net/netns/sctp.h b/include/net/netns/sctp.h
+index 3573a81815ad..8ba379f9e467 100644
+--- a/include/net/netns/sctp.h
++++ b/include/net/netns/sctp.h
+@@ -31,6 +31,7 @@ struct netns_sctp {
+ 	struct list_head addr_waitq;
+ 	struct timer_list addr_wq_timer;
+ 	struct list_head auto_asconf_splist;
++	/* Lock that protects both addr_waitq and auto_asconf_splist */
+ 	spinlock_t addr_wq_lock;
+ 
+ 	/* Lock that protects the local_addr_list writers */
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 2bb2fcf5b11f..495c87e367b3 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -223,6 +223,10 @@ struct sctp_sock {
+ 	atomic_t pd_mode;
+ 	/* Receive to here while partial delivery is in effect. */
+ 	struct sk_buff_head pd_lobby;
++
++	/* These must be the last fields, as they will be skipped on copies,
++	 * such as on accept and peeloff operations.
++	 */
+ 	struct list_head auto_asconf_list;
+ 	int do_auto_asconf;
+ };
+diff --git a/net/bridge/br_ioctl.c b/net/bridge/br_ioctl.c
+index a9a4a1b7863d..8d423bc649b9 100644
+--- a/net/bridge/br_ioctl.c
++++ b/net/bridge/br_ioctl.c
+@@ -247,9 +247,7 @@ static int old_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ 		if (!ns_capable(dev_net(dev)->user_ns, CAP_NET_ADMIN))
+ 			return -EPERM;
+ 
+-		spin_lock_bh(&br->lock);
+ 		br_stp_set_bridge_priority(br, args[1]);
+-		spin_unlock_bh(&br->lock);
+ 		return 0;
+ 
+ 	case BRCTL_SET_PORT_PRIORITY:
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index b0aee78dba41..c08f510fce30 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1166,6 +1166,9 @@ static void br_multicast_add_router(struct net_bridge *br,
+ 	struct net_bridge_port *p;
+ 	struct hlist_node *slot = NULL;
+ 
++	if (!hlist_unhashed(&port->rlist))
++		return;
++
+ 	hlist_for_each_entry(p, &br->router_list, rlist) {
+ 		if ((unsigned long) port >= (unsigned long) p)
+ 			break;
+@@ -1193,12 +1196,8 @@ static void br_multicast_mark_router(struct net_bridge *br,
+ 	if (port->multicast_router != 1)
+ 		return;
+ 
+-	if (!hlist_unhashed(&port->rlist))
+-		goto timer;
+-
+ 	br_multicast_add_router(br, port);
+ 
+-timer:
+ 	mod_timer(&port->multicast_router_timer,
+ 		  now + br->multicast_querier_interval);
+ }
+diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c
+index 41146872c1b4..7832d07f48f6 100644
+--- a/net/bridge/br_stp_if.c
++++ b/net/bridge/br_stp_if.c
+@@ -243,12 +243,13 @@ bool br_stp_recalculate_bridge_id(struct net_bridge *br)
+ 	return true;
+ }
+ 
+-/* called under bridge lock */
++/* Acquires and releases bridge lock */
+ void br_stp_set_bridge_priority(struct net_bridge *br, u16 newprio)
+ {
+ 	struct net_bridge_port *p;
+ 	int wasroot;
+ 
++	spin_lock_bh(&br->lock);
+ 	wasroot = br_is_root_bridge(br);
+ 
+ 	list_for_each_entry(p, &br->port_list, list) {
+@@ -266,6 +267,7 @@ void br_stp_set_bridge_priority(struct net_bridge *br, u16 newprio)
+ 	br_port_state_selection(br);
+ 	if (br_is_root_bridge(br) && !wasroot)
+ 		br_become_root_bridge(br);
++	spin_unlock_bh(&br->lock);
+ }
+ 
+ /* called under bridge lock */
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 70fe9e10ac86..d0e5d6613b1b 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -971,6 +971,8 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
+ 	rc = 0;
+ 	if (neigh->nud_state & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE))
+ 		goto out_unlock_bh;
++	if (neigh->dead)
++		goto out_dead;
+ 
+ 	if (!(neigh->nud_state & (NUD_STALE | NUD_INCOMPLETE))) {
+ 		if (NEIGH_VAR(neigh->parms, MCAST_PROBES) +
+@@ -1027,6 +1029,13 @@ out_unlock_bh:
+ 		write_unlock(&neigh->lock);
+ 	local_bh_enable();
+ 	return rc;
++
++out_dead:
++	if (neigh->nud_state & NUD_STALE)
++		goto out_unlock_bh;
++	write_unlock_bh(&neigh->lock);
++	kfree_skb(skb);
++	return 1;
+ }
+ EXPORT_SYMBOL(__neigh_event_send);
+ 
+@@ -1090,6 +1099,8 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 	if (!(flags & NEIGH_UPDATE_F_ADMIN) &&
+ 	    (old & (NUD_NOARP | NUD_PERMANENT)))
+ 		goto out;
++	if (neigh->dead)
++		goto out;
+ 
+ 	if (!(new & NUD_VALID)) {
+ 		neigh_del_timer(neigh);
+@@ -1239,6 +1250,8 @@ EXPORT_SYMBOL(neigh_update);
+  */
+ void __neigh_set_probe_once(struct neighbour *neigh)
+ {
++	if (neigh->dead)
++		return;
+ 	neigh->updated = jiffies;
+ 	if (!(neigh->nud_state & NUD_FAILED))
+ 		return;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e9f9a15fce4e..1e3abb8ac2ef 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4443,7 +4443,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ 
+ 		while (order) {
+ 			if (npages >= 1 << order) {
+-				page = alloc_pages(gfp_mask |
++				page = alloc_pages((gfp_mask & ~__GFP_WAIT) |
+ 						   __GFP_COMP |
+ 						   __GFP_NOWARN |
+ 						   __GFP_NORETRY,
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 71e3e5f1eaa0..c77d5d21a85f 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1895,7 +1895,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
+ 
+ 	pfrag->offset = 0;
+ 	if (SKB_FRAG_PAGE_ORDER) {
+-		pfrag->page = alloc_pages(gfp | __GFP_COMP |
++		pfrag->page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
+ 					  __GFP_NOWARN | __GFP_NORETRY,
+ 					  SKB_FRAG_PAGE_ORDER);
+ 		if (likely(pfrag->page)) {
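
Both allocation sites above mask out __GFP_WAIT so the high-order attempt stays opportunistic: it may not sleep or kick reclaim, and on failure the code falls back to smaller orders. A hedged sketch of that pattern (the helper name is hypothetical; alloc_skb_with_frags() actually steps down through several orders):

#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *try_big_then_small(gfp_t gfp, unsigned int order)
{
	struct page *page;

	/* Opportunistic high-order attempt: no reclaim, no warnings,
	 * no retries -- fail fast instead of stalling the caller. */
	page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
			   __GFP_NOWARN | __GFP_NORETRY, order);
	if (page)
		return page;

	/* Fall back to a single page with the caller's original flags,
	 * which may block if the caller allowed it. */
	return alloc_pages(gfp, 0);
}
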
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index d2e49baaff63..61edc496b7d0 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -228,6 +228,8 @@ int inet_listen(struct socket *sock, int backlog)
+ 				err = 0;
+ 			if (err)
+ 				goto out;
++
++			tcp_fastopen_init_key_once(true);
+ 		}
+ 		err = inet_csk_listen_start(sk, backlog);
+ 		if (err)
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 5cd99271d3a6..d9e8ff31aba0 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -432,6 +432,15 @@ void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 port, u32 inf
+ 		kfree_skb(skb);
+ }
+ 
++/* For some errors we have valid addr_offset even with zero payload and
++ * zero port. Also, addr_offset should be supported if port is set.
++ */
++static inline bool ipv4_datagram_support_addr(struct sock_exterr_skb *serr)
++{
++	return serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||
++	       serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL || serr->port;
++}
++
+ /* IPv4 supports cmsg on all icmp errors and some timestamps
+  *
+  * Timestamp code paths do not initialize the fields expected by cmsg:
+@@ -498,7 +507,7 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ 
+ 	serr = SKB_EXT_ERR(skb);
+ 
+-	if (sin && serr->port) {
++	if (sin && ipv4_datagram_support_addr(serr)) {
+ 		sin->sin_family = AF_INET;
+ 		sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) +
+ 						   serr->addr_offset);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 995a2259bcfc..d03a344210aa 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2541,10 +2541,13 @@ static int do_tcp_setsockopt(struct sock *sk, int level,
+ 
+ 	case TCP_FASTOPEN:
+ 		if (val >= 0 && ((1 << sk->sk_state) & (TCPF_CLOSE |
+-		    TCPF_LISTEN)))
++		    TCPF_LISTEN))) {
++			tcp_fastopen_init_key_once(true);
++
+ 			err = fastopen_init_queue(sk, val);
+-		else
++		} else {
+ 			err = -EINVAL;
++		}
+ 		break;
+ 	case TCP_TIMESTAMP:
+ 		if (!tp->repair)
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index ea82fd492c1b..9c371815092a 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -78,8 +78,6 @@ static bool __tcp_fastopen_cookie_gen(const void *path,
+ 	struct tcp_fastopen_context *ctx;
+ 	bool ok = false;
+ 
+-	tcp_fastopen_init_key_once(true);
+-
+ 	rcu_read_lock();
+ 	ctx = rcu_dereference(tcp_fastopen_ctx);
+ 	if (ctx) {
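
After the hunks above, the one-time key setup runs when a socket becomes fastopen-capable, at listen() or via the TCP_FASTOPEN socket option, rather than inside __tcp_fastopen_cookie_gen(). A userspace sketch of the same move, with pthread_once() standing in for the kernel's init-once behavior (toy_listen and init_fastopen_key are illustrative):

#include <pthread.h>

static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void init_fastopen_key(void)
{
	/* generate the server's fastopen secret exactly once */
}

static int toy_listen(void)
{
	/* moved here from the per-cookie path ... */
	pthread_once(&key_once, init_fastopen_key);
	/* ... so the cookie-generation path no longer does it */
	return 0;
}
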
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index ace8daca5c83..d174b914fc77 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -325,6 +325,16 @@ void ipv6_local_rxpmtu(struct sock *sk, struct flowi6 *fl6, u32 mtu)
+ 	kfree_skb(skb);
+ }
+ 
++/* For some errors we have valid addr_offset even with zero payload and
++ * zero port. Also, addr_offset should be supported if port is set.
++ */
++static inline bool ipv6_datagram_support_addr(struct sock_exterr_skb *serr)
++{
++	return serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6 ||
++	       serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||
++	       serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL || serr->port;
++}
++
+ /* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL.
+  *
+  * At one point, excluding local errors was a quick test to identify icmp/icmp6
+@@ -389,7 +399,7 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ 
+ 	serr = SKB_EXT_ERR(skb);
+ 
+-	if (sin && serr->port) {
++	if (sin && ipv6_datagram_support_addr(serr)) {
+ 		const unsigned char *nh = skb_network_header(skb);
+ 		sin->sin6_family = AF_INET6;
+ 		sin->sin6_flowinfo = 0;
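
The two helpers above, ipv4_datagram_support_addr() and ipv6_datagram_support_addr(), widen the cases in which the sender address is reported to userspace from the error queue: ICMP-originated and locally generated errors now qualify even with a zero port. A sketch of the userspace side that benefits, assuming a connected UDP socket fd (read_queued_error is a hypothetical helper):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int read_queued_error(int fd, struct sockaddr_in *from)
{
	char data[512], ctrl[512];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg;

	memset(&msg, 0, sizeof(msg));
	msg.msg_name = from;		/* filled in by ip_recv_error() */
	msg.msg_namelen = sizeof(*from);
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = ctrl;		/* cmsgs describe the error */
	msg.msg_controllen = sizeof(ctrl);

	/* Returns the original datagram; with the patch, msg_name is
	 * also populated for ICMP and local errors without a port. */
	return (int)recvmsg(fd, &msg, MSG_ERRQUEUE);
}
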
+diff --git a/net/netfilter/nft_rbtree.c b/net/netfilter/nft_rbtree.c
+index 46214f245665..2c75361077f7 100644
+--- a/net/netfilter/nft_rbtree.c
++++ b/net/netfilter/nft_rbtree.c
+@@ -37,10 +37,11 @@ static bool nft_rbtree_lookup(const struct nft_set *set,
+ {
+ 	const struct nft_rbtree *priv = nft_set_priv(set);
+ 	const struct nft_rbtree_elem *rbe, *interval = NULL;
+-	const struct rb_node *parent = priv->root.rb_node;
++	const struct rb_node *parent;
+ 	int d;
+ 
+ 	spin_lock_bh(&nft_rbtree_lock);
++	parent = priv->root.rb_node;
+ 	while (parent != NULL) {
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ 
+@@ -158,7 +159,6 @@ static int nft_rbtree_get(const struct nft_set *set, struct nft_set_elem *elem)
+ 	struct nft_rbtree_elem *rbe;
+ 	int d;
+ 
+-	spin_lock_bh(&nft_rbtree_lock);
+ 	while (parent != NULL) {
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ 
+@@ -173,11 +173,9 @@ static int nft_rbtree_get(const struct nft_set *set, struct nft_set_elem *elem)
+ 			    !(rbe->flags & NFT_SET_ELEM_INTERVAL_END))
+ 				nft_data_copy(&elem->data, rbe->data);
+ 			elem->flags = rbe->flags;
+-			spin_unlock_bh(&nft_rbtree_lock);
+ 			return 0;
+ 		}
+ 	}
+-	spin_unlock_bh(&nft_rbtree_lock);
+ 	return -ENOENT;
+ }
+ 
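
The first nft_rbtree hunk defers reading priv->root.rb_node until nft_rbtree_lock is held, so a concurrent insertion cannot hand the walker a stale root. The ordering, sketched with a toy binary tree and pthreads (illustrative types only):

#include <pthread.h>
#include <stddef.h>

struct toy_node { int key; struct toy_node *left, *right; };

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static struct toy_node *tree_root;

static struct toy_node *toy_lookup(int key)
{
	struct toy_node *n;

	pthread_mutex_lock(&tree_lock);
	n = tree_root;			/* read the root under the lock */
	while (n && n->key != key)
		n = (key < n->key) ? n->left : n->right;
	pthread_mutex_unlock(&tree_lock);
	return n;
}
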
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index f8db7064d81c..bfe5c6916dac 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1266,16 +1266,6 @@ static void packet_sock_destruct(struct sock *sk)
+ 	sk_refcnt_debug_dec(sk);
+ }
+ 
+-static int fanout_rr_next(struct packet_fanout *f, unsigned int num)
+-{
+-	int x = atomic_read(&f->rr_cur) + 1;
+-
+-	if (x >= num)
+-		x = 0;
+-
+-	return x;
+-}
+-
+ static unsigned int fanout_demux_hash(struct packet_fanout *f,
+ 				      struct sk_buff *skb,
+ 				      unsigned int num)
+@@ -1287,13 +1277,9 @@ static unsigned int fanout_demux_lb(struct packet_fanout *f,
+ 				    struct sk_buff *skb,
+ 				    unsigned int num)
+ {
+-	int cur, old;
++	unsigned int val = atomic_inc_return(&f->rr_cur);
+ 
+-	cur = atomic_read(&f->rr_cur);
+-	while ((old = atomic_cmpxchg(&f->rr_cur, cur,
+-				     fanout_rr_next(f, num))) != cur)
+-		cur = old;
+-	return cur;
++	return val % num;
+ }
+ 
+ static unsigned int fanout_demux_cpu(struct packet_fanout *f,
+@@ -1347,7 +1333,7 @@ static int packet_rcv_fanout(struct sk_buff *skb, struct net_device *dev,
+ 			     struct packet_type *pt, struct net_device *orig_dev)
+ {
+ 	struct packet_fanout *f = pt->af_packet_priv;
+-	unsigned int num = f->num_members;
++	unsigned int num = READ_ONCE(f->num_members);
+ 	struct packet_sock *po;
+ 	unsigned int idx;
+ 
+diff --git a/net/sctp/output.c b/net/sctp/output.c
+index fc5e45b8a832..abe7c2db2412 100644
+--- a/net/sctp/output.c
++++ b/net/sctp/output.c
+@@ -599,7 +599,9 @@ out:
+ 	return err;
+ no_route:
+ 	kfree_skb(nskb);
+-	IP_INC_STATS(sock_net(asoc->base.sk), IPSTATS_MIB_OUTNOROUTES);
++
++	if (asoc)
++		IP_INC_STATS(sock_net(asoc->base.sk), IPSTATS_MIB_OUTNOROUTES);
+ 
+ 	/* FIXME: Returning the 'err' will affect all the associations
+ 	 * associated with a socket, although only one of the paths of the
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index aafe94bf292e..4e565715d016 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1533,8 +1533,10 @@ static void sctp_close(struct sock *sk, long timeout)
+ 
+ 	/* Supposedly, no process has access to the socket, but
+ 	 * the net layers still may.
++	 * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
++	 * held, and that lock should be grabbed before the socket lock.
+ 	 */
+-	local_bh_disable();
++	spin_lock_bh(&net->sctp.addr_wq_lock);
+ 	bh_lock_sock(sk);
+ 
+ 	/* Hold the sock, since sk_common_release() will put sock_put()
+@@ -1544,7 +1546,7 @@ static void sctp_close(struct sock *sk, long timeout)
+ 	sk_common_release(sk);
+ 
+ 	bh_unlock_sock(sk);
+-	local_bh_enable();
++	spin_unlock_bh(&net->sctp.addr_wq_lock);
+ 
+ 	sock_put(sk);
+ 
+@@ -3587,6 +3589,7 @@ static int sctp_setsockopt_auto_asconf(struct sock *sk, char __user *optval,
+ 	if ((val && sp->do_auto_asconf) || (!val && !sp->do_auto_asconf))
+ 		return 0;
+ 
++	spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 	if (val == 0 && sp->do_auto_asconf) {
+ 		list_del(&sp->auto_asconf_list);
+ 		sp->do_auto_asconf = 0;
+@@ -3595,6 +3598,7 @@ static int sctp_setsockopt_auto_asconf(struct sock *sk, char __user *optval,
+ 		    &sock_net(sk)->sctp.auto_asconf_splist);
+ 		sp->do_auto_asconf = 1;
+ 	}
++	spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 	return 0;
+ }
+ 
+@@ -4128,18 +4132,28 @@ static int sctp_init_sock(struct sock *sk)
+ 	local_bh_disable();
+ 	percpu_counter_inc(&sctp_sockets_allocated);
+ 	sock_prot_inuse_add(net, sk->sk_prot, 1);
++
++	/* Nothing can fail after this block, otherwise
++	 * sctp_destroy_sock() will be called without addr_wq_lock held
++	 */
+ 	if (net->sctp.default_auto_asconf) {
++		spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
+ 		list_add_tail(&sp->auto_asconf_list,
+ 		    &net->sctp.auto_asconf_splist);
+ 		sp->do_auto_asconf = 1;
+-	} else
++		spin_unlock(&sock_net(sk)->sctp.addr_wq_lock);
++	} else {
+ 		sp->do_auto_asconf = 0;
++	}
++
+ 	local_bh_enable();
+ 
+ 	return 0;
+ }
+ 
+-/* Cleanup any SCTP per socket resources.  */
++/* Cleanup any SCTP per socket resources. Must be called with
++ * sock_net(sk)->sctp.addr_wq_lock held if sp->do_auto_asconf is true
++ */
+ static void sctp_destroy_sock(struct sock *sk)
+ {
+ 	struct sctp_sock *sp;
+@@ -7202,6 +7216,19 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ 	newinet->mc_list = NULL;
+ }
+ 
++static inline void sctp_copy_descendant(struct sock *sk_to,
++					const struct sock *sk_from)
++{
++	int ancestor_size = sizeof(struct inet_sock) +
++			    sizeof(struct sctp_sock) -
++			    offsetof(struct sctp_sock, auto_asconf_list);
++
++	if (sk_from->sk_family == PF_INET6)
++		ancestor_size += sizeof(struct ipv6_pinfo);
++
++	__inet_sk_copy_descendant(sk_to, sk_from, ancestor_size);
++}
++
+ /* Populate the fields of the newsk from the oldsk and migrate the assoc
+  * and its messages to the newsk.
+  */
+@@ -7216,7 +7243,6 @@ static void sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
+ 	struct sk_buff *skb, *tmp;
+ 	struct sctp_ulpevent *event;
+ 	struct sctp_bind_hashbucket *head;
+-	struct list_head tmplist;
+ 
+ 	/* Migrate socket buffer sizes and all the socket level options to the
+ 	 * new socket.
+@@ -7224,12 +7250,7 @@ static void sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
+ 	newsk->sk_sndbuf = oldsk->sk_sndbuf;
+ 	newsk->sk_rcvbuf = oldsk->sk_rcvbuf;
+ 	/* Brute force copy old sctp opt. */
+-	if (oldsp->do_auto_asconf) {
+-		memcpy(&tmplist, &newsp->auto_asconf_list, sizeof(tmplist));
+-		inet_sk_copy_descendant(newsk, oldsk);
+-		memcpy(&newsp->auto_asconf_list, &tmplist, sizeof(tmplist));
+-	} else
+-		inet_sk_copy_descendant(newsk, oldsk);
++	sctp_copy_descendant(newsk, oldsk);
+ 
+ 	/* Restore the ep value that was overwritten with the above structure
+ 	 * copy.
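
sctp_copy_descendant() above leans on the struct layout called out earlier: every field up to auto_asconf_list is copied in one shot and the tail is left alone, which is why the structure comment insists those fields stay last. The offsetof() technique, sketched with a toy struct:

#include <stddef.h>
#include <string.h>

struct toy_sock {
	int a, b, c;			/* copied on accept/peeloff */
	void *auto_asconf_list;		/* must keep per-socket value */
	int do_auto_asconf;		/* must keep per-socket value */
};

static void toy_copy(struct toy_sock *to, const struct toy_sock *from)
{
	/* Copy only the leading fields; the tail keeps its own state,
	 * so no save/restore dance around the copy is needed. */
	memcpy(to, from, offsetof(struct toy_sock, auto_asconf_list));
}
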
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 4d1a54190388..2588e083c202 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -404,6 +404,7 @@ static int selinux_is_sblabel_mnt(struct super_block *sb)
+ 	return sbsec->behavior == SECURITY_FS_USE_XATTR ||
+ 		sbsec->behavior == SECURITY_FS_USE_TRANS ||
+ 		sbsec->behavior == SECURITY_FS_USE_TASK ||
++		sbsec->behavior == SECURITY_FS_USE_NATIVE ||
+ 		/* Special handling. Genfs but also in-core setxattr handler */
+ 		!strcmp(sb->s_type->name, "sysfs") ||
+ 		!strcmp(sb->s_type->name, "pstore") ||


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-07-22 10:11 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-07-22 10:11 UTC (permalink / raw
  To: gentoo-commits

commit:     10847d7288e01eff75faa43573cdc6252e1a3987
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 22 10:11:24 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 22 10:11:24 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=10847d72

Linux patch 4.0.9

 0000_README            |    4 +
 1008_linux-4.0.9.patch | 3482 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3486 insertions(+)

diff --git a/0000_README b/0000_README
index 6a1359e..3ff77bb 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-4.0.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.0.8
 
+Patch:  1008_linux-4.0.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.0.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-4.0.9.patch b/1008_linux-4.0.9.patch
new file mode 100644
index 0000000..c0777ca
--- /dev/null
+++ b/1008_linux-4.0.9.patch
@@ -0,0 +1,3482 @@
+diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
+index 0f7afb2bb442..aef8cc5a677b 100644
+--- a/Documentation/DMA-API-HOWTO.txt
++++ b/Documentation/DMA-API-HOWTO.txt
+@@ -25,13 +25,18 @@ physical addresses.  These are the addresses in /proc/iomem.  The physical
+ address is not directly useful to a driver; it must use ioremap() to map
+ the space and produce a virtual address.
+ 
+-I/O devices use a third kind of address: a "bus address" or "DMA address".
+-If a device has registers at an MMIO address, or if it performs DMA to read
+-or write system memory, the addresses used by the device are bus addresses.
+-In some systems, bus addresses are identical to CPU physical addresses, but
+-in general they are not.  IOMMUs and host bridges can produce arbitrary
++I/O devices use a third kind of address: a "bus address".  If a device has
++registers at an MMIO address, or if it performs DMA to read or write system
++memory, the addresses used by the device are bus addresses.  In some
++systems, bus addresses are identical to CPU physical addresses, but in
++general they are not.  IOMMUs and host bridges can produce arbitrary
+ mappings between physical and bus addresses.
+ 
++From a device's point of view, DMA uses the bus address space, but it may
++be restricted to a subset of that space.  For example, even if a system
++supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
++so devices only need to use 32-bit DMA addresses.
++
+ Here's a picture and some examples:
+ 
+                CPU                  CPU                  Bus
+@@ -72,11 +77,11 @@ can use virtual address X to access the buffer, but the device itself
+ cannot because DMA doesn't go through the CPU virtual memory system.
+ 
+ In some simple systems, the device can do DMA directly to physical address
+-Y.  But in many others, there is IOMMU hardware that translates bus
++Y.  But in many others, there is IOMMU hardware that translates DMA
+ addresses to physical addresses, e.g., it translates Z to Y.  This is part
+ of the reason for the DMA API: the driver can give a virtual address X to
+ an interface like dma_map_single(), which sets up any required IOMMU
+-mapping and returns the bus address Z.  The driver then tells the device to
++mapping and returns the DMA address Z.  The driver then tells the device to
+ do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
+ RAM.
+ 
+@@ -98,7 +103,7 @@ First of all, you should make sure
+ #include <linux/dma-mapping.h>
+ 
+ is in your driver, which provides the definition of dma_addr_t.  This type
+-can hold any valid DMA or bus address for the platform and should be used
++can hold any valid DMA address for the platform and should be used
+ everywhere you hold a DMA address returned from the DMA mapping functions.
+ 
+ 			 What memory is DMA'able?
+@@ -316,7 +321,7 @@ There are two types of DMA mappings:
+   Think of "consistent" as "synchronous" or "coherent".
+ 
+   The current default is to return consistent memory in the low 32
+-  bits of the bus space.  However, for future compatibility you should
++  bits of the DMA space.  However, for future compatibility you should
+   set the consistent mask even if this default is fine for your
+   driver.
+ 
+@@ -403,7 +408,7 @@ dma_alloc_coherent() returns two values: the virtual address which you
+ can use to access it from the CPU and dma_handle which you pass to the
+ card.
+ 
+-The CPU virtual address and the DMA bus address are both
++The CPU virtual address and the DMA address are both
+ guaranteed to be aligned to the smallest PAGE_SIZE order which
+ is greater than or equal to the requested size.  This invariant
+ exists (for example) to guarantee that if you allocate a chunk
+@@ -645,8 +650,8 @@ PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
+               dma_map_sg call.
+ 
+ Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+-counterpart, because the bus address space is a shared resource and
+-you could render the machine unusable by consuming all bus addresses.
++counterpart, because the DMA address space is a shared resource and
++you could render the machine unusable by consuming all DMA addresses.
+ 
+ If you need to use the same streaming DMA region multiple times and touch
+ the data in between the DMA transfers, the buffer needs to be synced
+diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
+index 52088408668a..7eba542eff7c 100644
+--- a/Documentation/DMA-API.txt
++++ b/Documentation/DMA-API.txt
+@@ -18,10 +18,10 @@ Part I - dma_ API
+ To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
+ provides dma_addr_t and the interfaces described below.
+ 
+-A dma_addr_t can hold any valid DMA or bus address for the platform.  It
+-can be given to a device to use as a DMA source or target.  A CPU cannot
+-reference a dma_addr_t directly because there may be translation between
+-its physical address space and the bus address space.
++A dma_addr_t can hold any valid DMA address for the platform.  It can be
++given to a device to use as a DMA source or target.  A CPU cannot reference
++a dma_addr_t directly because there may be translation between its physical
++address space and the DMA address space.
+ 
+ Part Ia - Using large DMA-coherent buffers
+ ------------------------------------------
+@@ -42,7 +42,7 @@ It returns a pointer to the allocated region (in the processor's virtual
+ address space) or NULL if the allocation failed.
+ 
+ It also returns a <dma_handle> which may be cast to an unsigned integer the
+-same width as the bus and given to the device as the bus address base of
++same width as the bus and given to the device as the DMA address base of
+ the region.
+ 
+ Note: consistent memory can be expensive on some platforms, and the
+@@ -193,7 +193,7 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+ 		      enum dma_data_direction direction)
+ 
+ Maps a piece of processor virtual memory so it can be accessed by the
+-device and returns the bus address of the memory.
++device and returns the DMA address of the memory.
+ 
+ The direction for both APIs may be converted freely by casting.
+ However the dma_ API uses a strongly typed enumerator for its
+@@ -212,20 +212,20 @@ contiguous piece of memory.  For this reason, memory to be mapped by
+ this API should be obtained from sources which guarantee it to be
+ physically contiguous (like kmalloc).
+ 
+-Further, the bus address of the memory must be within the
++Further, the DMA address of the memory must be within the
+ dma_mask of the device (the dma_mask is a bit mask of the
+-addressable region for the device, i.e., if the bus address of
+-the memory ANDed with the dma_mask is still equal to the bus
++addressable region for the device, i.e., if the DMA address of
++the memory ANDed with the dma_mask is still equal to the DMA
+ address, then the device can perform DMA to the memory).  To
+ ensure that the memory allocated by kmalloc is within the dma_mask,
+ the driver may specify various platform-dependent flags to restrict
+-the bus address range of the allocation (e.g., on x86, GFP_DMA
+-guarantees to be within the first 16MB of available bus addresses,
++the DMA address range of the allocation (e.g., on x86, GFP_DMA
++guarantees to be within the first 16MB of available DMA addresses,
+ as required by ISA devices).
+ 
+ Note also that the above constraints on physical contiguity and
+ dma_mask may not apply if the platform has an IOMMU (a device which
+-maps an I/O bus address to a physical memory address).  However, to be
++maps an I/O DMA address to a physical memory address).  However, to be
+ portable, device driver writers may *not* assume that such an IOMMU
+ exists.
+ 
+@@ -296,7 +296,7 @@ reduce current DMA mapping usage or delay and try again later).
+ 	dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 		int nents, enum dma_data_direction direction)
+ 
+-Returns: the number of bus address segments mapped (this may be shorter
++Returns: the number of DMA address segments mapped (this may be shorter
+ than <nents> passed in if some elements of the scatter/gather list are
+ physically or virtually adjacent and an IOMMU maps them with a single
+ entry).
+@@ -340,7 +340,7 @@ must be the same as those and passed in to the scatter/gather mapping
+ API.
+ 
+ Note: <nents> must be the number you passed in, *not* the number of
+-bus address entries returned.
++DMA address entries returned.
+ 
+ void
+ dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
+@@ -507,7 +507,7 @@ it's asked for coherent memory for this device.
+ phys_addr is the CPU physical address to which the memory is currently
+ assigned (this will be ioremapped so the CPU can access the region).
+ 
+-device_addr is the bus address the device needs to be programmed
++device_addr is the DMA address the device needs to be programmed
+ with to actually address this memory (this will be handed out as the
+ dma_addr_t in dma_alloc_coherent()).
+ 
+diff --git a/Documentation/devicetree/bindings/spi/spi_pl022.txt b/Documentation/devicetree/bindings/spi/spi_pl022.txt
+index 22ed6797216d..4d1673ca8cf8 100644
+--- a/Documentation/devicetree/bindings/spi/spi_pl022.txt
++++ b/Documentation/devicetree/bindings/spi/spi_pl022.txt
+@@ -4,9 +4,9 @@ Required properties:
+ - compatible : "arm,pl022", "arm,primecell"
+ - reg : Offset and length of the register set for the device
+ - interrupts : Should contain SPI controller interrupt
++- num-cs : total number of chipselects
+ 
+ Optional properties:
+-- num-cs : total number of chipselects
+ - cs-gpios : should specify GPIOs used for chipselects.
+   The gpios will be referred to as reg = <index> in the SPI child nodes.
+   If unspecified, a single SPI device without a chip select can be used.
+diff --git a/Makefile b/Makefile
+index 0e315d6e1a41..ba8875e2db6b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 0
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma sheep
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 9917a45fc430..20b7dc17979e 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -43,6 +43,12 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ {									\
+ 	unsigned int temp;						\
+ 									\
++	/*								\
++	 * Explicit full memory barrier needed before/after as		\
++	 * LLOCK/SCOND themselves don't provide any such semantics	\
++	 */								\
++	smp_mb();							\
++									\
+ 	__asm__ __volatile__(						\
+ 	"1:	llock   %0, [%1]	\n"				\
+ 	"	" #asm_op " %0, %0, %2	\n"				\
+@@ -52,6 +58,8 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ 	: "r"(&v->counter), "ir"(i)					\
+ 	: "cc");							\
+ 									\
++	smp_mb();							\
++									\
+ 	return temp;							\
+ }
+ 
+@@ -105,6 +113,9 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ 	unsigned long flags;						\
+ 	unsigned long temp;						\
+ 									\
++	/*								\
++	 * spin lock/unlock provide the needed smp_mb() before/after	\
++	 */								\
+ 	atomic_ops_lock(flags);						\
+ 	temp = v->counter;						\
+ 	temp c_op i;							\
+@@ -142,9 +153,19 @@ ATOMIC_OP(and, &=, and)
+ #define __atomic_add_unless(v, a, u)					\
+ ({									\
+ 	int c, old;							\
++									\
++	/*								\
++	 * Explicit full memory barrier needed before/after as		\
++	 * LLOCK/SCOND themselves don't provide any such semantics	\
++	 */								\
++	smp_mb();							\
++									\
+ 	c = atomic_read(v);						\
+ 	while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\
+ 		c = old;						\
++									\
++	smp_mb();							\
++									\
+ 	c;								\
+ })
+ 
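
The ARC hunks bracket every LLOCK/SCOND sequence with smp_mb() because those instructions order nothing by themselves. The guarantee being restored, sketched with C11 atomics purely for illustration (a sequentially consistent RMW stands in for the explicit barrier pairs):

#include <stdatomic.h>

static atomic_int counter;

/* Fully ordered add-and-return, like the patched atomic_add_return():
 * memory_order_seq_cst supplies the ordering that the kernel patch
 * adds with smp_mb() before and after the LLOCK/SCOND loop. */
static int add_return_fully_ordered(int i)
{
	return atomic_fetch_add_explicit(&counter, i,
					 memory_order_seq_cst) + i;
}
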
+diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
+index 1a5bf07eefe2..89fbbb0db51b 100644
+--- a/arch/arc/include/asm/bitops.h
++++ b/arch/arc/include/asm/bitops.h
+@@ -103,6 +103,12 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	/*
++	 * Explicit full memory barrier needed before/after as
++	 * LLOCK/SCOND themselves don't provide any such semantics
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bset    %1, %0, %3	\n"
+@@ -112,6 +118,8 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -125,6 +133,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bclr    %1, %0, %3	\n"
+@@ -134,6 +144,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -147,6 +159,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bxor    %1, %0, %3	\n"
+@@ -156,6 +170,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -235,6 +251,9 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	/*
++	 * spin lock/unlock provide the needed smp_mb() before/after
++	 */
+ 	bitops_lock(flags);
+ 
+ 	old = *m;
+diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
+index 03cd6894855d..44fd531f4d7b 100644
+--- a/arch/arc/include/asm/cmpxchg.h
++++ b/arch/arc/include/asm/cmpxchg.h
+@@ -10,6 +10,8 @@
+ #define __ASM_ARC_CMPXCHG_H
+ 
+ #include <linux/types.h>
++
++#include <asm/barrier.h>
+ #include <asm/smp.h>
+ 
+ #ifdef CONFIG_ARC_HAS_LLSC
+@@ -19,16 +21,25 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+ {
+ 	unsigned long prev;
+ 
++	/*
++	 * Explicit full memory barrier needed before/after as
++	 * LLOCK/SCOND themselves don't provide any such semantics
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%1]	\n"
+ 	"	brne    %0, %2, 2f	\n"
+ 	"	scond   %3, [%1]	\n"
+ 	"	bnz     1b		\n"
+ 	"2:				\n"
+-	: "=&r"(prev)
+-	: "r"(ptr), "ir"(expected),
+-	  "r"(new) /* can't be "ir". scond can't take limm for "b" */
+-	: "cc");
++	: "=&r"(prev)	/* Early clobber, to prevent reg reuse */
++	: "r"(ptr),	/* Not "m": llock only supports reg direct addr mode */
++	  "ir"(expected),
++	  "r"(new)	/* can't be "ir". scond can't take LIMM for "b" */
++	: "cc", "memory"); /* so that gcc knows memory is being written here */
++
++	smp_mb();
+ 
+ 	return prev;
+ }
+@@ -42,6 +53,9 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+ 	int prev;
+ 	volatile unsigned long *p = ptr;
+ 
++	/*
++	 * spin lock/unlock provide the needed smp_mb() before/after
++	 */
+ 	atomic_ops_lock(flags);
+ 	prev = *p;
+ 	if (prev == expected)
+@@ -77,12 +91,16 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
+ 
+ 	switch (size) {
+ 	case 4:
++		smp_mb();
++
+ 		__asm__ __volatile__(
+ 		"	ex  %0, [%1]	\n"
+ 		: "+r"(val)
+ 		: "r"(ptr)
+ 		: "memory");
+ 
++		smp_mb();
++
+ 		return val;
+ 	}
+ 	return __xchg_bad_pointer();
+diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
+index b6a8c2dfbe6e..e1651df6a93d 100644
+--- a/arch/arc/include/asm/spinlock.h
++++ b/arch/arc/include/asm/spinlock.h
+@@ -22,24 +22,46 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
+ 
++	/*
++	 * This smp_mb() is technically superfluous; we only need the one
++	 * after the lock to provide the ACQUIRE semantics.
++	 * However, doing the "right" thing was regressing hackbench,
++	 * so it is kept pending further investigation.
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	ex  %0, [%1]		\n"
+ 	"	breq  %0, %2, 1b	\n"
+ 	: "+&r" (tmp)
+ 	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
+ 	: "memory");
++
++	/*
++	 * ACQUIRE barrier to ensure loads/stores after taking the lock
++	 * don't "bleed-up" out of the critical section (leak-in is allowed)
++	 * http://www.spinics.net/lists/kernel/msg2010409.html
++	 *
++	 * ARCv2 only has load-load, store-store and all-all barriers,
++	 * thus we need the full all-all barrier
++	 */
++	smp_mb();
+ }
+ 
+ static inline int arch_spin_trylock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	ex  %0, [%1]		\n"
+ 	: "+r" (tmp)
+ 	: "r"(&(lock->slock))
+ 	: "memory");
+ 
++	smp_mb();
++
+ 	return (tmp == __ARCH_SPIN_LOCK_UNLOCKED__);
+ }
+ 
+@@ -47,12 +69,22 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
+ 
++	/*
++	 * RELEASE barrier: given the instructions avail on ARCv2, full barrier
++	 * is the only option
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"	ex  %0, [%1]		\n"
+ 	: "+r" (tmp)
+ 	: "r"(&(lock->slock))
+ 	: "memory");
+ 
++	/*
++	 * superfluous, but kept for now; see the pairing version in
++	 * arch_spin_lock above
++	 */
+ 	smp_mb();
+ }
+ 
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 959fe8733560..bddd04d031db 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -517,6 +517,7 @@ el0_sp_pc:
+ 	mrs	x26, far_el1
+ 	// enable interrupts before calling the main handler
+ 	enable_dbg_and_irq
++	ct_user_exit
+ 	mov	x0, x26
+ 	mov	x1, x25
+ 	mov	x2, sp
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index ff3bddea482d..f6fe17d88da5 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -15,6 +15,10 @@ ccflags-y := -shared -fno-common -fno-builtin
+ ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+ 		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+ 
++# Workaround for bare-metal (ELF) toolchains that neglect to pass -shared
++# down to collect2, resulting in silent corruption of the vDSO image.
++ccflags-y += -Wl,-shared
++
+ obj-y += vdso.o
+ extra-y += vdso.lds vdso-offsets.h
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index baa758d37021..76c1e6cd36fc 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -92,6 +92,14 @@ static void reset_context(void *info)
+ 	unsigned int cpu = smp_processor_id();
+ 	struct mm_struct *mm = current->active_mm;
+ 
++	/*
++	 * current->active_mm could be init_mm for the idle thread immediately
++	 * after secondary CPU boot or hotplug. TTBR0_EL1 is already set to
++	 * the reserved value, so no need to reset any context.
++	 */
++	if (mm == &init_mm)
++		return;
++
+ 	smp_rmb();
+ 	asid = cpu_last_asid + cpu;
+ 
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index ae85da6307bb..8f57e0d30d69 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -260,7 +260,7 @@ static void __init free_unused_memmap(void)
+ 		 * memmap entries are valid from the bank end aligned to
+ 		 * MAX_ORDER_NR_PAGES.
+ 		 */
+-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
++		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
+ 				 MAX_ORDER_NR_PAGES);
+ 	}
+ 
+diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
+index 99824ff8dd35..a36ce4d237e5 100644
+--- a/arch/s390/hypfs/inode.c
++++ b/arch/s390/hypfs/inode.c
+@@ -458,8 +458,6 @@ static const struct super_operations hypfs_s_ops = {
+ 	.show_options	= hypfs_show_options,
+ };
+ 
+-static struct kobject *s390_kobj;
+-
+ static int __init hypfs_init(void)
+ {
+ 	int rc;
+@@ -483,18 +481,16 @@ static int __init hypfs_init(void)
+ 		rc = -ENODATA;
+ 		goto fail_hypfs_sprp_exit;
+ 	}
+-	s390_kobj = kobject_create_and_add("s390", hypervisor_kobj);
+-	if (!s390_kobj) {
+-		rc = -ENOMEM;
++	rc = sysfs_create_mount_point(hypervisor_kobj, "s390");
++	if (rc)
+ 		goto fail_hypfs_diag0c_exit;
+-	}
+ 	rc = register_filesystem(&hypfs_type);
+ 	if (rc)
+ 		goto fail_filesystem;
+ 	return 0;
+ 
+ fail_filesystem:
+-	kobject_put(s390_kobj);
++	sysfs_remove_mount_point(hypervisor_kobj, "s390");
+ fail_hypfs_diag0c_exit:
+ 	hypfs_diag0c_exit();
+ fail_hypfs_sprp_exit:
+@@ -512,7 +508,7 @@ fail_dbfs_exit:
+ static void __exit hypfs_exit(void)
+ {
+ 	unregister_filesystem(&hypfs_type);
+-	kobject_put(s390_kobj);
++	sysfs_remove_mount_point(hypervisor_kobj, "s390");
+ 	hypfs_diag0c_exit();
+ 	hypfs_sprp_exit();
+ 	hypfs_vm_exit();
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 8b67bd0f6bb5..cd4598b2038d 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -467,6 +467,16 @@ static int __init acpi_bus_init_irq(void)
+ 	return 0;
+ }
+ 
++/**
++ * acpi_early_init - Initialize ACPICA and populate the ACPI namespace.
++ *
++ * The ACPI tables are accessible after this, but the handling of events has not
++ * been initialized and the global lock is not available yet, so AML should not
++ * be executed at this point.
++ *
++ * Doing this before switching the EFI runtime services to virtual mode allows
++ * the EfiBootServices memory to be freed slightly earlier on boot.
++ */
+ void __init acpi_early_init(void)
+ {
+ 	acpi_status status;
+@@ -530,26 +540,42 @@ void __init acpi_early_init(void)
+ 		acpi_gbl_FADT.sci_interrupt = acpi_sci_override_gsi;
+ 	}
+ #endif
++	return;
++
++ error0:
++	disable_acpi();
++}
++
++/**
++ * acpi_subsystem_init - Finalize the early initialization of ACPI.
++ *
++ * Switch over the platform to the ACPI mode (if possible), initialize the
++ * handling of ACPI events, install the interrupt and global lock handlers.
++ *
++ * Doing this too early is generally unsafe, but at the same time it needs to be
++ * done before all things that really depend on ACPI.  The right spot appears to
++ * be before finalizing the EFI initialization.
++ */
++void __init acpi_subsystem_init(void)
++{
++	acpi_status status;
++
++	if (acpi_disabled)
++		return;
+ 
+ 	status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE);
+ 	if (ACPI_FAILURE(status)) {
+ 		printk(KERN_ERR PREFIX "Unable to enable ACPI\n");
+-		goto error0;
++		disable_acpi();
++	} else {
++		/*
++		 * If the system is using ACPI then we can be reasonably
++		 * confident that any regulators are managed by the firmware
++		 * so tell the regulator core it has everything it needs to
++		 * know.
++		 */
++		regulator_has_full_constraints();
+ 	}
+-
+-	/*
+-	 * If the system is using ACPI then we can be reasonably
+-	 * confident that any regulators are managed by the firmware
+-	 * so tell the regulator core it has everything it needs to
+-	 * know.
+-	 */
+-	regulator_has_full_constraints();
+-
+-	return;
+-
+-      error0:
+-	disable_acpi();
+-	return;
+ }
+ 
+ static int __init acpi_bus_init(void)
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 735db11a9b00..8217e0bda60f 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -953,6 +953,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
+  */
+ void acpi_subsys_complete(struct device *dev)
+ {
++	pm_generic_complete(dev);
+ 	/*
+ 	 * If the device had been runtime-suspended before the system went into
+ 	 * the sleep state it is going out of and it has never been resumed till
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index dbfe6a69c3da..7c6e8948ac2c 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -945,11 +945,10 @@ EXPORT_SYMBOL_GPL(devm_regmap_init);
+ static void regmap_field_init(struct regmap_field *rm_field,
+ 	struct regmap *regmap, struct reg_field reg_field)
+ {
+-	int field_bits = reg_field.msb - reg_field.lsb + 1;
+ 	rm_field->regmap = regmap;
+ 	rm_field->reg = reg_field.reg;
+ 	rm_field->shift = reg_field.lsb;
+-	rm_field->mask = ((BIT(field_bits) - 1) << reg_field.lsb);
++	rm_field->mask = GENMASK(reg_field.msb, reg_field.lsb);
+ 	rm_field->id_size = reg_field.id_size;
+ 	rm_field->id_offset = reg_field.id_offset;
+ }
+@@ -2318,7 +2317,7 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
+ 					  &ival);
+ 			if (ret != 0)
+ 				return ret;
+-			memcpy(val + (i * val_bytes), &ival, val_bytes);
++			map->format.format_val(val + (i * val_bytes), ival, 0);
+ 		}
+ 	}
+ 
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 3061bb8629dc..e14363d12690 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -65,7 +65,6 @@ static int __init parse_efi_cmdline(char *str)
+ early_param("efi", parse_efi_cmdline);
+ 
+ static struct kobject *efi_kobj;
+-static struct kobject *efivars_kobj;
+ 
+ /*
+  * Let's not leave out systab information that snuck into
+@@ -212,10 +211,9 @@ static int __init efisubsys_init(void)
+ 		goto err_remove_group;
+ 
+ 	/* and the standard mountpoint for efivarfs */
+-	efivars_kobj = kobject_create_and_add("efivars", efi_kobj);
+-	if (!efivars_kobj) {
++	error = sysfs_create_mount_point(efi_kobj, "efivars");
++	if (error) {
+ 		pr_err("efivars: Subsystem registration failed.\n");
+-		error = -ENOMEM;
+ 		goto err_remove_group;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c
+index 3d9e08f7e823..57cd089e8caf 100644
+--- a/drivers/gpio/gpio-crystalcove.c
++++ b/drivers/gpio/gpio-crystalcove.c
+@@ -250,6 +250,7 @@ static struct irq_chip crystalcove_irqchip = {
+ 	.irq_set_type		= crystalcove_irq_type,
+ 	.irq_bus_lock		= crystalcove_bus_lock,
+ 	.irq_bus_sync_unlock	= crystalcove_bus_sync_unlock,
++	.flags			= IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static irqreturn_t crystalcove_gpio_irq_handler(int irq, void *data)
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 1a6379525fa4..204589856e8c 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1422,6 +1422,7 @@ static const struct dev_pm_ops kxcjk1013_pm_ops = {
+ static const struct acpi_device_id kx_acpi_match[] = {
+ 	{"KXCJ1013", KXCJK1013},
+ 	{"KXCJ1008", KXCJ91008},
++	{"KXCJ9000", KXCJ91008},
+ 	{"KXTJ1009", KXTJ21009},
+ 	{"SMO8500",  KXCJ91008},
+ 	{ },
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index ac72ece70160..18b4e58818c9 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -65,6 +65,8 @@ static int
+ isert_rdma_accept(struct isert_conn *isert_conn);
+ struct rdma_cm_id *isert_setup_id(struct isert_np *isert_np);
+ 
++static void isert_release_work(struct work_struct *work);
++
+ static inline bool
+ isert_prot_cmd(struct isert_conn *conn, struct se_cmd *cmd)
+ {
+@@ -604,6 +606,7 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	mutex_init(&isert_conn->conn_mutex);
+ 	spin_lock_init(&isert_conn->conn_lock);
+ 	INIT_LIST_HEAD(&isert_conn->conn_fr_pool);
++	INIT_WORK(&isert_conn->release_work, isert_release_work);
+ 
+ 	isert_conn->conn_cm_id = cma_id;
+ 
+@@ -863,6 +866,7 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ {
+ 	struct isert_np *isert_np = cma_id->context;
+ 	struct isert_conn *isert_conn;
++	bool terminating = false;
+ 
+ 	if (isert_np->np_cm_id == cma_id)
+ 		return isert_np_cma_handler(cma_id->context, event);
+@@ -870,12 +874,25 @@ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ 	isert_conn = cma_id->qp->qp_context;
+ 
+ 	mutex_lock(&isert_conn->conn_mutex);
++	terminating = (isert_conn->state == ISER_CONN_TERMINATING);
+ 	isert_conn_terminate(isert_conn);
+ 	mutex_unlock(&isert_conn->conn_mutex);
+ 
+ 	isert_info("conn %p completing conn_wait\n", isert_conn);
+ 	complete(&isert_conn->conn_wait);
+ 
++	if (terminating)
++		goto out;
++
++	mutex_lock(&isert_np->np_accept_mutex);
++	if (!list_empty(&isert_conn->conn_accept_node)) {
++		list_del_init(&isert_conn->conn_accept_node);
++		isert_put_conn(isert_conn);
++		queue_work(isert_release_wq, &isert_conn->release_work);
++	}
++	mutex_unlock(&isert_np->np_accept_mutex);
++
++out:
+ 	return 0;
+ }
+ 
+@@ -3305,7 +3322,6 @@ static void isert_wait_conn(struct iscsi_conn *conn)
+ 	isert_wait4flush(isert_conn);
+ 	isert_wait4logout(isert_conn);
+ 
+-	INIT_WORK(&isert_conn->release_work, isert_release_work);
+ 	queue_work(isert_release_wq, &isert_conn->release_work);
+ }
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 0747c0595a9d..313dfada7810 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -464,14 +464,13 @@ static struct srp_fr_pool *srp_alloc_fr_pool(struct srp_target_port *target)
+  */
+ static void srp_destroy_qp(struct srp_rdma_ch *ch)
+ {
+-	struct srp_target_port *target = ch->target;
+ 	static struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+ 	static struct ib_recv_wr wr = { .wr_id = SRP_LAST_WR_ID };
+ 	struct ib_recv_wr *bad_wr;
+ 	int ret;
+ 
+ 	/* Destroying a QP and reusing ch->done is only safe if not connected */
+-	WARN_ON_ONCE(target->connected);
++	WARN_ON_ONCE(ch->connected);
+ 
+ 	ret = ib_modify_qp(ch->qp, &attr, IB_QP_STATE);
+ 	WARN_ONCE(ret, "ib_cm_init_qp_attr() returned %d\n", ret);
+@@ -810,35 +809,19 @@ static bool srp_queue_remove_work(struct srp_target_port *target)
+ 	return changed;
+ }
+ 
+-static bool srp_change_conn_state(struct srp_target_port *target,
+-				  bool connected)
+-{
+-	bool changed = false;
+-
+-	spin_lock_irq(&target->lock);
+-	if (target->connected != connected) {
+-		target->connected = connected;
+-		changed = true;
+-	}
+-	spin_unlock_irq(&target->lock);
+-
+-	return changed;
+-}
+-
+ static void srp_disconnect_target(struct srp_target_port *target)
+ {
+ 	struct srp_rdma_ch *ch;
+ 	int i;
+ 
+-	if (srp_change_conn_state(target, false)) {
+-		/* XXX should send SRP_I_LOGOUT request */
++	/* XXX should send SRP_I_LOGOUT request */
+ 
+-		for (i = 0; i < target->ch_count; i++) {
+-			ch = &target->ch[i];
+-			if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
+-				shost_printk(KERN_DEBUG, target->scsi_host,
+-					     PFX "Sending CM DREQ failed\n");
+-			}
++	for (i = 0; i < target->ch_count; i++) {
++		ch = &target->ch[i];
++		ch->connected = false;
++		if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
++			shost_printk(KERN_DEBUG, target->scsi_host,
++				     PFX "Sending CM DREQ failed\n");
+ 		}
+ 	}
+ }
+@@ -985,14 +968,26 @@ static void srp_rport_delete(struct srp_rport *rport)
+ 	srp_queue_remove_work(target);
+ }
+ 
++/**
++ * srp_connected_ch() - number of connected channels
++ * @target: SRP target port.
++ */
++static int srp_connected_ch(struct srp_target_port *target)
++{
++	int i, c = 0;
++
++	for (i = 0; i < target->ch_count; i++)
++		c += target->ch[i].connected;
++
++	return c;
++}
++
+ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
+ {
+ 	struct srp_target_port *target = ch->target;
+ 	int ret;
+ 
+-	WARN_ON_ONCE(!multich && target->connected);
+-
+-	target->qp_in_error = false;
++	WARN_ON_ONCE(!multich && srp_connected_ch(target) > 0);
+ 
+ 	ret = srp_lookup_path(ch);
+ 	if (ret)
+@@ -1015,7 +1010,7 @@ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
+ 		 */
+ 		switch (ch->status) {
+ 		case 0:
+-			srp_change_conn_state(target, true);
++			ch->connected = true;
+ 			return 0;
+ 
+ 		case SRP_PORT_REDIRECT:
+@@ -1242,13 +1237,13 @@ static int srp_rport_reconnect(struct srp_rport *rport)
+ 		for (j = 0; j < target->queue_size; ++j)
+ 			list_add(&ch->tx_ring[j]->list, &ch->free_tx);
+ 	}
++
++	target->qp_in_error = false;
++
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+-		if (ret || !ch->target) {
+-			if (i > 1)
+-				ret = 0;
++		if (ret || !ch->target)
+ 			break;
+-		}
+ 		ret = srp_connect_ch(ch, multich);
+ 		multich = true;
+ 	}
+@@ -1928,7 +1923,7 @@ static void srp_handle_qp_err(u64 wr_id, enum ib_wc_status wc_status,
+ 		return;
+ 	}
+ 
+-	if (target->connected && !target->qp_in_error) {
++	if (ch->connected && !target->qp_in_error) {
+ 		if (wr_id & LOCAL_INV_WR_ID_MASK) {
+ 			shost_printk(KERN_ERR, target->scsi_host, PFX
+ 				     "LOCAL_INV failed with status %d\n",
+@@ -2366,7 +2361,7 @@ static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
+ 	case IB_CM_DREQ_RECEIVED:
+ 		shost_printk(KERN_WARNING, target->scsi_host,
+ 			     PFX "DREQ received - connection closed\n");
+-		srp_change_conn_state(target, false);
++		ch->connected = false;
+ 		if (ib_send_cm_drep(cm_id, NULL, 0))
+ 			shost_printk(KERN_ERR, target->scsi_host,
+ 				     PFX "Sending CM DREP failed\n");
+@@ -2422,7 +2417,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag,
+ 	struct srp_iu *iu;
+ 	struct srp_tsk_mgmt *tsk_mgmt;
+ 
+-	if (!target->connected || target->qp_in_error)
++	if (!ch->connected || target->qp_in_error)
+ 		return -1;
+ 
+ 	init_completion(&ch->tsk_mgmt_done);
+@@ -2796,7 +2791,8 @@ static int srp_add_target(struct srp_host *host, struct srp_target_port *target)
+ 	scsi_scan_target(&target->scsi_host->shost_gendev,
+ 			 0, target->scsi_id, SCAN_WILD_CARD, 0);
+ 
+-	if (!target->connected || target->qp_in_error) {
++	if (srp_connected_ch(target) < target->ch_count ||
++	    target->qp_in_error) {
+ 		shost_printk(KERN_INFO, target->scsi_host,
+ 			     PFX "SCSI scan failed - removing SCSI host\n");
+ 		srp_queue_remove_work(target);
+@@ -3171,11 +3167,11 @@ static ssize_t srp_create_target(struct device *dev,
+ 
+ 	ret = srp_parse_options(buf, target);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	ret = scsi_init_shared_tag_map(target_host, target_host->can_queue);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	target->req_ring_size = target->queue_size - SRP_TSK_MGMT_SQ_SIZE;
+ 
+@@ -3186,7 +3182,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 			     be64_to_cpu(target->ioc_guid),
+ 			     be64_to_cpu(target->initiator_ext));
+ 		ret = -EEXIST;
+-		goto err;
++		goto out;
+ 	}
+ 
+ 	if (!srp_dev->has_fmr && !srp_dev->has_fr && !target->allow_ext_sg &&
+@@ -3207,7 +3203,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 	spin_lock_init(&target->lock);
+ 	ret = ib_query_gid(ibdev, host->port, 0, &target->sgid);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	ret = -ENOMEM;
+ 	target->ch_count = max_t(unsigned, num_online_nodes(),
+@@ -3218,7 +3214,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 	target->ch = kcalloc(target->ch_count, sizeof(*target->ch),
+ 			     GFP_KERNEL);
+ 	if (!target->ch)
+-		goto err;
++		goto out;
+ 
+ 	node_idx = 0;
+ 	for_each_online_node(node) {
+@@ -3314,9 +3310,6 @@ err_disconnect:
+ 	}
+ 
+ 	kfree(target->ch);
+-
+-err:
+-	scsi_host_put(target_host);
+ 	goto out;
+ }
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index a611556406ac..e690847a46dd 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -170,6 +170,7 @@ struct srp_rdma_ch {
+ 
+ 	struct completion	tsk_mgmt_done;
+ 	u8			tsk_mgmt_status;
++	bool			connected;
+ };
+ 
+ /**
+@@ -214,7 +215,6 @@ struct srp_target_port {
+ 	__be16			pkey;
+ 
+ 	u32			rq_tmo_jiffies;
+-	bool			connected;
+ 
+ 	int			zero_req_lim;
+ 
+diff --git a/drivers/input/touchscreen/pixcir_i2c_ts.c b/drivers/input/touchscreen/pixcir_i2c_ts.c
+index 2c2107147319..8f3e243a62bf 100644
+--- a/drivers/input/touchscreen/pixcir_i2c_ts.c
++++ b/drivers/input/touchscreen/pixcir_i2c_ts.c
+@@ -78,7 +78,7 @@ static void pixcir_ts_parse(struct pixcir_i2c_ts_data *tsdata,
+ 	}
+ 
+ 	ret = i2c_master_recv(tsdata->client, rdbuf, readsize);
+-	if (ret != sizeof(rdbuf)) {
++	if (ret != readsize) {
+ 		dev_err(&tsdata->client->dev,
+ 			"%s: i2c_master_recv failed(), ret=%d\n",
+ 			__func__, ret);
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 795ec994c663..b155f88d69b8 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -187,6 +187,7 @@ void led_classdev_resume(struct led_classdev *led_cdev)
+ }
+ EXPORT_SYMBOL_GPL(led_classdev_resume);
+ 
++#ifdef CONFIG_PM_SLEEP
+ static int led_suspend(struct device *dev)
+ {
+ 	struct led_classdev *led_cdev = dev_get_drvdata(dev);
+@@ -206,11 +207,9 @@ static int led_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-static const struct dev_pm_ops leds_class_dev_pm_ops = {
+-	.suspend        = led_suspend,
+-	.resume         = led_resume,
+-};
++static SIMPLE_DEV_PM_OPS(leds_class_dev_pm_ops, led_suspend, led_resume);
+ 
+ /**
+  * led_classdev_register - register a new object of led_classdev class.
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index dfbddfe1c7a0..8f639ad4ce6a 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -573,7 +573,7 @@ void mei_host_client_init(struct work_struct *work)
+ bool mei_hbuf_acquire(struct mei_device *dev)
+ {
+ 	if (mei_pg_state(dev) == MEI_PG_ON ||
+-	    dev->pg_event == MEI_PG_EVENT_WAIT) {
++	    mei_pg_in_transition(dev)) {
+ 		dev_dbg(dev->dev, "device is in pg\n");
+ 		return false;
+ 	}
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index f8fd503dfbd6..19e0cdbd155f 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -629,11 +629,27 @@ int mei_me_pg_unset_sync(struct mei_device *dev)
+ 	mutex_lock(&dev->device_lock);
+ 
+ reply:
+-	if (dev->pg_event == MEI_PG_EVENT_RECEIVED)
+-		ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
++	if (dev->pg_event != MEI_PG_EVENT_RECEIVED) {
++		ret = -ETIME;
++		goto out;
++	}
++
++	dev->pg_event = MEI_PG_EVENT_INTR_WAIT;
++	ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
++	if (ret)
++		return ret;
++
++	mutex_unlock(&dev->device_lock);
++	wait_event_timeout(dev->wait_pg,
++		dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED, timeout);
++	mutex_lock(&dev->device_lock);
++
++	if (dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED)
++		ret = 0;
+ 	else
+ 		ret = -ETIME;
+ 
++out:
+ 	dev->pg_event = MEI_PG_EVENT_IDLE;
+ 	hw->pg_state = MEI_PG_OFF;
+ 
+@@ -641,6 +657,19 @@ reply:
+ }
+ 
+ /**
++ * mei_me_pg_in_transition - is device now in pg transition
++ *
++ * @dev: the device structure
++ *
++ * Return: true if in pg transition, false otherwise
++ */
++static bool mei_me_pg_in_transition(struct mei_device *dev)
++{
++	return dev->pg_event >= MEI_PG_EVENT_WAIT &&
++	       dev->pg_event <= MEI_PG_EVENT_INTR_WAIT;
++}
++
++/**
+  * mei_me_pg_is_enabled - detect if PG is supported by HW
+  *
+  * @dev: the device structure
+@@ -672,6 +701,24 @@ notsupported:
+ }
+ 
+ /**
++ * mei_me_pg_intr - perform pg processing in interrupt thread handler
++ *
++ * @dev: the device structure
++ */
++static void mei_me_pg_intr(struct mei_device *dev)
++{
++	struct mei_me_hw *hw = to_me_hw(dev);
++
++	if (dev->pg_event != MEI_PG_EVENT_INTR_WAIT)
++		return;
++
++	dev->pg_event = MEI_PG_EVENT_INTR_RECEIVED;
++	hw->pg_state = MEI_PG_OFF;
++	if (waitqueue_active(&dev->wait_pg))
++		wake_up(&dev->wait_pg);
++}
++
++/**
+  * mei_me_irq_quick_handler - The ISR of the MEI device
+  *
+  * @irq: The irq number
+@@ -729,6 +776,8 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
+ 		goto end;
+ 	}
+ 
++	mei_me_pg_intr(dev);
++
+ 	/*  check if we need to start the dev */
+ 	if (!mei_host_is_ready(dev)) {
+ 		if (mei_hw_is_ready(dev)) {
+@@ -765,9 +814,10 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
+ 	/*
+ 	 * During the PG handshake the only allowed write is the reply to
+ 	 * the PG exit message, so block calling the write function
+-	 * if the pg state is not idle
++	 * if the pg event is in PG handshake
+ 	 */
+-	if (dev->pg_event == MEI_PG_EVENT_IDLE) {
++	if (dev->pg_event != MEI_PG_EVENT_WAIT &&
++	    dev->pg_event != MEI_PG_EVENT_RECEIVED) {
+ 		rets = mei_irq_write_handler(dev, &complete_list);
+ 		dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
+ 	}
+@@ -792,6 +842,7 @@ static const struct mei_hw_ops mei_me_hw_ops = {
+ 	.hw_config = mei_me_hw_config,
+ 	.hw_start = mei_me_hw_start,
+ 
++	.pg_in_transition = mei_me_pg_in_transition,
+ 	.pg_is_enabled = mei_me_pg_is_enabled,
+ 
+ 	.intr_clear = mei_me_intr_clear,
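
The handshake added above grows the power-gating event machine from three states to five. mei_me_pg_in_transition() classifies everything from WAIT through INTR_WAIT as "handshake in progress", while INTR_RECEIVED means the confirming interrupt has already landed. A small userspace sketch of that range check, with enum values mirroring the driver's:

#include <stdio.h>

enum pg_event {
	PG_EVENT_IDLE,
	PG_EVENT_WAIT,
	PG_EVENT_RECEIVED,
	PG_EVENT_INTR_WAIT,
	PG_EVENT_INTR_RECEIVED,
};

/* Mirrors mei_me_pg_in_transition(): WAIT..INTR_WAIT is "in progress". */
static int in_transition(enum pg_event e)
{
	return e >= PG_EVENT_WAIT && e <= PG_EVENT_INTR_WAIT;
}

int main(void)
{
	for (enum pg_event e = PG_EVENT_IDLE; e <= PG_EVENT_INTR_RECEIVED; e++)
		printf("event %d: in_transition=%d\n", e, in_transition(e));
	return 0;
}
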
+diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
+index 618ea721aca8..50bd6e90a2bd 100644
+--- a/drivers/misc/mei/hw-txe.c
++++ b/drivers/misc/mei/hw-txe.c
+@@ -16,6 +16,7 @@
+ 
+ #include <linux/pci.h>
+ #include <linux/jiffies.h>
++#include <linux/ktime.h>
+ #include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/irqreturn.h>
+@@ -218,26 +219,25 @@ static u32 mei_txe_aliveness_get(struct mei_device *dev)
+  *
+  * Polls for HICR_HOST_ALIVENESS_RESP.ALIVENESS_RESP to be set
+  *
+- * Return: > 0 if the expected value was received, -ETIME otherwise
++ * Return: 0 if the expected value was received, -ETIME otherwise
+  */
+ static int mei_txe_aliveness_poll(struct mei_device *dev, u32 expected)
+ {
+ 	struct mei_txe_hw *hw = to_txe_hw(dev);
+-	int t = 0;
++	ktime_t stop, start;
+ 
++	start = ktime_get();
++	stop = ktime_add(start, ms_to_ktime(SEC_ALIVENESS_WAIT_TIMEOUT));
+ 	do {
+ 		hw->aliveness = mei_txe_aliveness_get(dev);
+ 		if (hw->aliveness == expected) {
+ 			dev->pg_event = MEI_PG_EVENT_IDLE;
+-			dev_dbg(dev->dev,
+-				"aliveness settled after %d msecs\n", t);
+-			return t;
++			dev_dbg(dev->dev, "aliveness settled after %lld usecs\n",
++				ktime_to_us(ktime_sub(ktime_get(), start)));
++			return 0;
+ 		}
+-		mutex_unlock(&dev->device_lock);
+-		msleep(MSEC_PER_SEC / 5);
+-		mutex_lock(&dev->device_lock);
+-		t += MSEC_PER_SEC / 5;
+-	} while (t < SEC_ALIVENESS_WAIT_TIMEOUT);
++		usleep_range(20, 50);
++	} while (ktime_before(ktime_get(), stop));
+ 
+ 	dev->pg_event = MEI_PG_EVENT_IDLE;
+ 	dev_err(dev->dev, "aliveness timed out\n");
+@@ -302,6 +302,18 @@ int mei_txe_aliveness_set_sync(struct mei_device *dev, u32 req)
+ }
+ 
+ /**
++ * mei_txe_pg_in_transition - is device now in pg transition
++ *
++ * @dev: the device structure
++ *
++ * Return: true if in pg transition, false otherwise
++ */
++static bool mei_txe_pg_in_transition(struct mei_device *dev)
++{
++	return dev->pg_event == MEI_PG_EVENT_WAIT;
++}
++
++/**
+  * mei_txe_pg_is_enabled - detect if PG is supported by HW
+  *
+  * @dev: the device structure
+@@ -1138,6 +1150,7 @@ static const struct mei_hw_ops mei_txe_hw_ops = {
+ 	.hw_config = mei_txe_hw_config,
+ 	.hw_start = mei_txe_hw_start,
+ 
++	.pg_in_transition = mei_txe_pg_in_transition,
+ 	.pg_is_enabled = mei_txe_pg_is_enabled,
+ 
+ 	.intr_clear = mei_txe_intr_clear,
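
The hw-txe conversion above swaps a counter of 200ms msleep() chunks for an absolute ktime deadline plus usleep_range(20, 50), so the poll notices aliveness within tens of microseconds instead of up to a fifth of a second. The same deadline pattern in plain userspace C, with CLOCK_MONOTONIC standing in for ktime_get() (the 2-second budget and 300us settle time are illustrative):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static int64_t now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/* Stand-in for mei_txe_aliveness_get() == expected. */
static int condition_met(int64_t start)
{
	return now_us() - start > 300;	/* "settles" after ~300 us */
}

int main(void)
{
	int64_t start = now_us();
	int64_t stop = start + 2000000;	/* 2 s budget */

	do {
		if (condition_met(start)) {
			printf("settled after %lld usecs\n",
			       (long long)(now_us() - start));
			return 0;
		}
		usleep(50);		/* short nap, not msleep(200) */
	} while (now_us() < stop);

	fprintf(stderr, "timed out\n");
	return 1;
}
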
+diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
+index 6c6ce9381535..1ae524c2ae67 100644
+--- a/drivers/misc/mei/mei_dev.h
++++ b/drivers/misc/mei/mei_dev.h
+@@ -269,6 +269,7 @@ struct mei_cl {
+ 
+  * @fw_status        : get fw status registers
+  * @pg_state         : power gating state of the device
++ * @pg_in_transition : is device now in pg transition
+  * @pg_is_enabled    : is power gating enabled
+ 
+  * @intr_clear       : clear pending interrupts
+@@ -298,6 +299,7 @@ struct mei_hw_ops {
+ 
+ 	int (*fw_status)(struct mei_device *dev, struct mei_fw_status *fw_sts);
+ 	enum mei_pg_state (*pg_state)(struct mei_device *dev);
++	bool (*pg_in_transition)(struct mei_device *dev);
+ 	bool (*pg_is_enabled)(struct mei_device *dev);
+ 
+ 	void (*intr_clear)(struct mei_device *dev);
+@@ -396,11 +398,15 @@ struct mei_cl_device {
+  * @MEI_PG_EVENT_IDLE: the driver is not in power gating transition
+  * @MEI_PG_EVENT_WAIT: the driver is waiting for a pg event to complete
+  * @MEI_PG_EVENT_RECEIVED: the driver received pg event
++ * @MEI_PG_EVENT_INTR_WAIT: the driver is waiting for a pg event interrupt
++ * @MEI_PG_EVENT_INTR_RECEIVED: the driver received pg event interrupt
+  */
+ enum mei_pg_event {
+ 	MEI_PG_EVENT_IDLE,
+ 	MEI_PG_EVENT_WAIT,
+ 	MEI_PG_EVENT_RECEIVED,
++	MEI_PG_EVENT_INTR_WAIT,
++	MEI_PG_EVENT_INTR_RECEIVED,
+ };
+ 
+ /**
+@@ -727,6 +733,11 @@ static inline enum mei_pg_state mei_pg_state(struct mei_device *dev)
+ 	return dev->ops->pg_state(dev);
+ }
+ 
++static inline bool mei_pg_in_transition(struct mei_device *dev)
++{
++	return dev->ops->pg_in_transition(dev);
++}
++
+ static inline bool mei_pg_is_enabled(struct mei_device *dev)
+ {
+ 	return dev->ops->pg_is_enabled(dev);
+diff --git a/drivers/mtd/maps/dc21285.c b/drivers/mtd/maps/dc21285.c
+index f8a7dd14cee0..70a3db3ab856 100644
+--- a/drivers/mtd/maps/dc21285.c
++++ b/drivers/mtd/maps/dc21285.c
+@@ -38,9 +38,9 @@ static void nw_en_write(void)
+ 	 * we want to write a bit pattern XXX1 to Xilinx to enable
+ 	 * the write gate, which will be open for about the next 2ms.
+ 	 */
+-	spin_lock_irqsave(&nw_gpio_lock, flags);
++	raw_spin_lock_irqsave(&nw_gpio_lock, flags);
+ 	nw_cpld_modify(CPLD_FLASH_WR_ENABLE, CPLD_FLASH_WR_ENABLE);
+-	spin_unlock_irqrestore(&nw_gpio_lock, flags);
++	raw_spin_unlock_irqrestore(&nw_gpio_lock, flags);
+ 
+ 	/*
+ 	 * let the ISA bus catch on...
+diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
+index d08229eb44d8..3a69b1e56908 100644
+--- a/drivers/mtd/mtd_blkdevs.c
++++ b/drivers/mtd/mtd_blkdevs.c
+@@ -200,6 +200,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+ 		return -ERESTARTSYS; /* FIXME: busy loop! -arnd*/
+ 
+ 	mutex_lock(&dev->lock);
++	mutex_lock(&mtd_table_mutex);
+ 
+ 	if (dev->open)
+ 		goto unlock;
+@@ -223,6 +224,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+ 
+ unlock:
+ 	dev->open++;
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ 	return ret;
+@@ -233,6 +235,7 @@ error_release:
+ error_put:
+ 	module_put(dev->tr->owner);
+ 	kref_put(&dev->ref, blktrans_dev_release);
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ 	return ret;
+@@ -246,6 +249,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ 		return;
+ 
+ 	mutex_lock(&dev->lock);
++	mutex_lock(&mtd_table_mutex);
+ 
+ 	if (--dev->open)
+ 		goto unlock;
+@@ -259,6 +263,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ 		__put_mtd_device(dev->mtd);
+ 	}
+ unlock:
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ }
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 78a7dcbec7d8..6906a3f61bd8 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -765,7 +765,7 @@ unsigned long __weak pci_address_to_pio(phys_addr_t address)
+ 	spin_lock(&io_range_lock);
+ 	list_for_each_entry(res, &io_range_list, list) {
+ 		if (address >= res->start && address < res->start + res->size) {
+-			addr = res->start - address + offset;
++			addr = address - res->start + offset;
+ 			break;
+ 		}
+ 		offset += res->size;
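
The one-line of/address.c fix above inverts a subtraction: for a CPU address inside a registered I/O range, the port number is the offset into that range plus the cumulative size of the ranges before it, i.e. (address - res->start) + offset; the old formula computed res->start - address, which underflows for any address past the range start. A worked example with illustrative values:

#include <stdio.h>

int main(void)
{
	unsigned long start = 0x1000, offset = 0x40;	/* illustrative values */
	unsigned long address = 0x1010;			/* 0x10 into the range */

	unsigned long old = start - address + offset;	/* underflows: huge value */
	unsigned long fixed = address - start + offset;	/* 0x50, as intended */

	printf("old=%#lx fixed=%#lx\n", old, fixed);
	return 0;
}
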
+diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
+index 7a8f1c5e65af..73de4efcbe6e 100644
+--- a/drivers/pci/Kconfig
++++ b/drivers/pci/Kconfig
+@@ -1,6 +1,10 @@
+ #
+ # PCI configuration
+ #
++config PCI_BUS_ADDR_T_64BIT
++	def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT)
++	depends on PCI
++
+ config PCI_MSI
+ 	bool "Message Signaled Interrupts (MSI and MSI-X)"
+ 	depends on PCI
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 90fa3a78fb7c..6fbd3f2b5992 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -92,11 +92,11 @@ void pci_bus_remove_resources(struct pci_bus *bus)
+ }
+ 
+ static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
+-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
+ static struct pci_bus_region pci_64_bit = {0,
+-				(dma_addr_t) 0xffffffffffffffffULL};
+-static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL,
+-				(dma_addr_t) 0xffffffffffffffffULL};
++				(pci_bus_addr_t) 0xffffffffffffffffULL};
++static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL,
++				(pci_bus_addr_t) 0xffffffffffffffffULL};
+ #endif
+ 
+ /*
+@@ -200,7 +200,7 @@ int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
+ 					  resource_size_t),
+ 		void *alignf_data)
+ {
+-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
+ 	int rc;
+ 
+ 	if (res->flags & IORESOURCE_MEM_64) {
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 0ebf754fc177..6d6868811e56 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -176,20 +176,17 @@ static void pcie_wait_cmd(struct controller *ctrl)
+ 			  jiffies_to_msecs(jiffies - ctrl->cmd_started));
+ }
+ 
+-/**
+- * pcie_write_cmd - Issue controller command
+- * @ctrl: controller to which the command is issued
+- * @cmd:  command value written to slot control register
+- * @mask: bitmask of slot control register to be modified
+- */
+-static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
++static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
++			      u16 mask, bool wait)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+ 	u16 slot_ctrl;
+ 
+ 	mutex_lock(&ctrl->ctrl_lock);
+ 
+-	/* Wait for any previous command that might still be in progress */
++	/*
++	 * Always wait for any previous command that might still be in progress
++	 */
+ 	pcie_wait_cmd(ctrl);
+ 
+ 	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
+@@ -201,9 +198,33 @@ static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
+ 	ctrl->cmd_started = jiffies;
+ 	ctrl->slot_ctrl = slot_ctrl;
+ 
++	/*
++	 * Optionally wait for the hardware to be ready for a new command,
++	 * indicating completion of the above issued command.
++	 */
++	if (wait)
++		pcie_wait_cmd(ctrl);
++
+ 	mutex_unlock(&ctrl->ctrl_lock);
+ }
+ 
++/**
++ * pcie_write_cmd - Issue controller command
++ * @ctrl: controller to which the command is issued
++ * @cmd:  command value written to slot control register
++ * @mask: bitmask of slot control register to be modified
++ */
++static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
++{
++	pcie_do_write_cmd(ctrl, cmd, mask, true);
++}
++
++/* Same as above without waiting for the hardware to latch */
++static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask)
++{
++	pcie_do_write_cmd(ctrl, cmd, mask, false);
++}
++
+ bool pciehp_check_link_active(struct controller *ctrl)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+@@ -422,7 +443,7 @@ void pciehp_set_attention_status(struct slot *slot, u8 value)
+ 	default:
+ 		return;
+ 	}
+-	pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
++	pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
+ }
+@@ -434,7 +455,8 @@ void pciehp_green_led_on(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_ON);
+@@ -447,7 +469,8 @@ void pciehp_green_led_off(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_OFF);
+@@ -460,7 +483,8 @@ void pciehp_green_led_blink(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_BLINK);
+@@ -613,7 +637,7 @@ void pcie_enable_notification(struct controller *ctrl)
+ 		PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
+ 		PCI_EXP_SLTCTL_DLLSCE);
+ 
+-	pcie_write_cmd(ctrl, cmd, mask);
++	pcie_write_cmd_nowait(ctrl, cmd, mask);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
+ }
+@@ -664,7 +688,7 @@ int pciehp_reset_slot(struct slot *slot, int probe)
+ 	pci_reset_bridge_secondary_bus(ctrl->pcie->port);
+ 
+ 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
+-	pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask);
++	pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
+ 	if (pciehp_poll_mode)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 81f06e8dcc04..75712909ec04 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4319,6 +4319,17 @@ bool pci_device_is_present(struct pci_dev *pdev)
+ }
+ EXPORT_SYMBOL_GPL(pci_device_is_present);
+ 
++void pci_ignore_hotplug(struct pci_dev *dev)
++{
++	struct pci_dev *bridge = dev->bus->self;
++
++	dev->ignore_hotplug = 1;
++	/* Propagate the "ignore hotplug" setting to the parent bridge. */
++	if (bridge)
++		bridge->ignore_hotplug = 1;
++}
++EXPORT_SYMBOL_GPL(pci_ignore_hotplug);
++
+ #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
+ static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
+ static DEFINE_SPINLOCK(resource_alignment_lock);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 8d2f400e96cb..f71cb7cb2abc 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -253,8 +253,8 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ 	}
+ 
+ 	if (res->flags & IORESOURCE_MEM_64) {
+-		if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
+-		    sz64 > 0x100000000ULL) {
++		if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
++		    && sz64 > 0x100000000ULL) {
+ 			res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
+ 			res->start = 0;
+ 			res->end = 0;
+@@ -263,7 +263,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ 			goto out;
+ 		}
+ 
+-		if ((sizeof(dma_addr_t) < 8) && l) {
++		if ((sizeof(pci_bus_addr_t) < 8) && l) {
+ 			/* Above 32-bit boundary; try to reallocate */
+ 			res->flags |= IORESOURCE_UNSET;
+ 			res->start = 0;
+@@ -398,7 +398,7 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
+ 	struct pci_dev *dev = child->self;
+ 	u16 mem_base_lo, mem_limit_lo;
+ 	u64 base64, limit64;
+-	dma_addr_t base, limit;
++	pci_bus_addr_t base, limit;
+ 	struct pci_bus_region region;
+ 	struct resource *res;
+ 
+@@ -425,8 +425,8 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
+ 		}
+ 	}
+ 
+-	base = (dma_addr_t) base64;
+-	limit = (dma_addr_t) limit64;
++	base = (pci_bus_addr_t) base64;
++	limit = (pci_bus_addr_t) limit64;
+ 
+ 	if (base != base64) {
+ 		dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n",
+diff --git a/drivers/pcmcia/topic.h b/drivers/pcmcia/topic.h
+index 615a45a8fe86..582688fe7505 100644
+--- a/drivers/pcmcia/topic.h
++++ b/drivers/pcmcia/topic.h
+@@ -104,6 +104,9 @@
+ #define TOPIC_EXCA_IF_CONTROL		0x3e	/* 8 bit */
+ #define TOPIC_EXCA_IFC_33V_ENA		0x01
+ 
++#define TOPIC_PCI_CFG_PPBCN		0x3e	/* 16-bit */
++#define TOPIC_PCI_CFG_PPBCN_WBEN	0x0400
++
+ static void topic97_zoom_video(struct pcmcia_socket *sock, int onoff)
+ {
+ 	struct yenta_socket *socket = container_of(sock, struct yenta_socket, socket);
+@@ -138,6 +141,7 @@ static int topic97_override(struct yenta_socket *socket)
+ static int topic95_override(struct yenta_socket *socket)
+ {
+ 	u8 fctrl;
++	u16 ppbcn;
+ 
+ 	/* enable 3.3V support for 16bit cards */
+ 	fctrl = exca_readb(socket, TOPIC_EXCA_IF_CONTROL);
+@@ -146,6 +150,18 @@ static int topic95_override(struct yenta_socket *socket)
+ 	/* tell yenta to use exca registers to power 16bit cards */
+ 	socket->flags |= YENTA_16BIT_POWER_EXCA | YENTA_16BIT_POWER_DF;
+ 
++	/* Disable write buffers to prevent lockups under load with numerous
++	   Cardbus cards, observed on Tecra 500CDT and reported elsewhere on the
++	   net.  This is not a power-on default according to the datasheet
++	   but some BIOSes seem to set it. */
++	if (pci_read_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, &ppbcn) == 0
++	    && socket->dev->revision <= 7
++	    && (ppbcn & TOPIC_PCI_CFG_PPBCN_WBEN)) {
++		ppbcn &= ~TOPIC_PCI_CFG_PPBCN_WBEN;
++		pci_write_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, ppbcn);
++		dev_info(&socket->dev->dev, "Disabled ToPIC95 Cardbus write buffers.\n");
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index a4a8a6dc60c4..c860233b9f53 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -770,7 +770,7 @@ static int suspend_prepare(struct regulator_dev *rdev, suspend_state_t state)
+ static void print_constraints(struct regulator_dev *rdev)
+ {
+ 	struct regulation_constraints *constraints = rdev->constraints;
+-	char buf[80] = "";
++	char buf[160] = "";
+ 	int count = 0;
+ 	int ret;
+ 
+diff --git a/drivers/regulator/max77686.c b/drivers/regulator/max77686.c
+index 15fb1416bfbd..c064e32fb3b9 100644
+--- a/drivers/regulator/max77686.c
++++ b/drivers/regulator/max77686.c
+@@ -88,7 +88,7 @@ enum max77686_ramp_rate {
+ };
+ 
+ struct max77686_data {
+-	u64 gpio_enabled:MAX77686_REGULATORS;
++	DECLARE_BITMAP(gpio_enabled, MAX77686_REGULATORS);
+ 
+ 	/* Array indexed by regulator id */
+ 	unsigned int opmode[MAX77686_REGULATORS];
+@@ -121,7 +121,7 @@ static unsigned int max77686_map_normal_mode(struct max77686_data *max77686,
+ 	case MAX77686_BUCK8:
+ 	case MAX77686_BUCK9:
+ 	case MAX77686_LDO20 ... MAX77686_LDO22:
+-		if (max77686->gpio_enabled & (1 << id))
++		if (test_bit(id, max77686->gpio_enabled))
+ 			return MAX77686_GPIO_CONTROL;
+ 	}
+ 
+@@ -277,7 +277,7 @@ static int max77686_of_parse_cb(struct device_node *np,
+ 	}
+ 
+ 	if (gpio_is_valid(config->ena_gpio)) {
+-		max77686->gpio_enabled |= (1 << desc->id);
++		set_bit(desc->id, max77686->gpio_enabled);
+ 
+ 		return regmap_update_bits(config->regmap, desc->enable_reg,
+ 					  desc->enable_mask,
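
The max77686 change above replaces a u64 bitfield with DECLARE_BITMAP() plus set_bit()/test_bit(), so the bookkeeping no longer silently truncates if MAX77686_REGULATORS ever grows past 64. A simplified userspace model of those helpers (the real ones live in include/linux/types.h and asm/bitops.h):

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG		(sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define DECLARE_BITMAP(name, bits)	unsigned long name[BITS_TO_LONGS(bits)]

static void set_bit(unsigned nr, unsigned long *map)
{
	map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static int test_bit(unsigned nr, const unsigned long *map)
{
	return !!(map[nr / BITS_PER_LONG] & (1UL << (nr % BITS_PER_LONG)));
}

int main(void)
{
	DECLARE_BITMAP(gpio_enabled, 100) = { 0 };	/* >64 ids keep working */

	set_bit(70, gpio_enabled);
	printf("id 70: %d, id 3: %d\n",
	       test_bit(70, gpio_enabled), test_bit(3, gpio_enabled));
	return 0;
}
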
+diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
+index ec03b42fa2b9..70b064709ad8 100644
+--- a/drivers/scsi/ipr.h
++++ b/drivers/scsi/ipr.h
+@@ -268,7 +268,7 @@
+ #define IPR_RUNTIME_RESET				0x40000000
+ 
+ #define IPR_IPL_INIT_MIN_STAGE_TIME			5
+-#define IPR_IPL_INIT_DEFAULT_STAGE_TIME                 15
++#define IPR_IPL_INIT_DEFAULT_STAGE_TIME                 30
+ #define IPR_IPL_INIT_STAGE_UNKNOWN			0x0
+ #define IPR_IPL_INIT_STAGE_TRANSOP			0xB0000000
+ #define IPR_IPL_INIT_STAGE_MASK				0xff000000
+diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
+index ae45bd99baed..f115f67a6ba5 100644
+--- a/drivers/scsi/scsi_transport_srp.c
++++ b/drivers/scsi/scsi_transport_srp.c
+@@ -396,6 +396,36 @@ static void srp_reconnect_work(struct work_struct *work)
+ 	}
+ }
+ 
++/**
++ * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
++ * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
++ *
++ * To do: add support for scsi-mq in this function.
++ */
++static int scsi_request_fn_active(struct Scsi_Host *shost)
++{
++	struct scsi_device *sdev;
++	struct request_queue *q;
++	int request_fn_active = 0;
++
++	shost_for_each_device(sdev, shost) {
++		q = sdev->request_queue;
++
++		spin_lock_irq(q->queue_lock);
++		request_fn_active += q->request_fn_active;
++		spin_unlock_irq(q->queue_lock);
++	}
++
++	return request_fn_active;
++}
++
++/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
++static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
++{
++	while (scsi_request_fn_active(shost))
++		msleep(20);
++}
++
+ static void __rport_fail_io_fast(struct srp_rport *rport)
+ {
+ 	struct Scsi_Host *shost = rport_to_shost(rport);
+@@ -409,8 +439,10 @@ static void __rport_fail_io_fast(struct srp_rport *rport)
+ 
+ 	/* Involve the LLD if possible to terminate all I/O on the rport. */
+ 	i = to_srp_internal(shost->transportt);
+-	if (i->f->terminate_rport_io)
++	if (i->f->terminate_rport_io) {
++		srp_wait_for_queuecommand(shost);
+ 		i->f->terminate_rport_io(rport);
++	}
+ }
+ 
+ /**
+@@ -504,27 +536,6 @@ void srp_start_tl_fail_timers(struct srp_rport *rport)
+ EXPORT_SYMBOL(srp_start_tl_fail_timers);
+ 
+ /**
+- * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
+- * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
+- */
+-static int scsi_request_fn_active(struct Scsi_Host *shost)
+-{
+-	struct scsi_device *sdev;
+-	struct request_queue *q;
+-	int request_fn_active = 0;
+-
+-	shost_for_each_device(sdev, shost) {
+-		q = sdev->request_queue;
+-
+-		spin_lock_irq(q->queue_lock);
+-		request_fn_active += q->request_fn_active;
+-		spin_unlock_irq(q->queue_lock);
+-	}
+-
+-	return request_fn_active;
+-}
+-
+-/**
+  * srp_reconnect_rport() - reconnect to an SRP target port
+  * @rport: SRP target port.
+  *
+@@ -559,8 +570,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
+ 	if (res)
+ 		goto out;
+ 	scsi_target_block(&shost->shost_gendev);
+-	while (scsi_request_fn_active(shost))
+-		msleep(20);
++	srp_wait_for_queuecommand(shost);
+ 	res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV;
+ 	pr_debug("%s (state %d): transport.reconnect() returned %d\n",
+ 		 dev_name(&shost->shost_gendev), rport->state, res);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index 861664776672..ff97cabdaa81 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -61,6 +61,12 @@ enum orion_spi_type {
+ 
+ struct orion_spi_dev {
+ 	enum orion_spi_type	typ;
++	/*
++	 * min_divisor and max_hz should be exclusive; the only case where
++	 * we can have both is when managing the armada-370-spi binding
++	 * with an old device tree
++	 */
++	unsigned long		max_hz;
+ 	unsigned int		min_divisor;
+ 	unsigned int		max_divisor;
+ 	u32			prescale_mask;
+@@ -387,8 +393,9 @@ static const struct orion_spi_dev orion_spi_dev_data = {
+ 
+ static const struct orion_spi_dev armada_spi_dev_data = {
+ 	.typ = ARMADA_SPI,
+-	.min_divisor = 1,
++	.min_divisor = 4,
+ 	.max_divisor = 1920,
++	.max_hz = 50000000,
+ 	.prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
+ };
+ 
+@@ -454,7 +461,21 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 		goto out;
+ 
+ 	tclk_hz = clk_get_rate(spi->clk);
+-	master->max_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
++
++	/*
++	 * With an old device tree, armada-370-spi could also be used
++	 * on Armada XP; however, for that SoC the maximum frequency is
++	 * 50MHz instead of tclk/4. On Armada 370, tclk cannot be
++	 * higher than 200MHz. So, to be able to handle both SoCs, take
++	 * the minimum of 50MHz and tclk/4.
++	 */
++	if (of_device_is_compatible(pdev->dev.of_node,
++					"marvell,armada-370-spi"))
++		master->max_speed_hz = min(devdata->max_hz,
++				DIV_ROUND_UP(tclk_hz, devdata->min_divisor));
++	else
++		master->max_speed_hz =
++			DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
+ 	master->min_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->max_divisor);
+ 
+ 	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
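
To make the clamping above concrete: on an Armada XP whose tclk were, say, 250MHz, an old armada-370-spi binding would otherwise advertise tclk/4 = 62.5MHz, past the SoC's real 50MHz ceiling, while on Armada 370 (tclk at most 200MHz) the min() leaves tclk/4 untouched. A quick check with illustrative clock rates:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned long max_speed(unsigned long tclk_hz)
{
	unsigned long max_hz = 50000000, min_divisor = 4;
	unsigned long per_div = DIV_ROUND_UP(tclk_hz, min_divisor);

	return per_div < max_hz ? per_div : max_hz;
}

int main(void)
{
	printf("tclk=250MHz -> %lu Hz\n", max_speed(250000000));  /* clamped to 50M */
	printf("tclk=200MHz -> %lu Hz\n", max_speed(200000000));  /* exactly 50M */
	printf("tclk=160MHz -> %lu Hz\n", max_speed(160000000));  /* 40M, unclamped */
	return 0;
}
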
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 57a195041dc7..75d6e99ac706 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1091,9 +1091,6 @@ void spi_finalize_current_message(struct spi_master *master)
+ 
+ 	spin_lock_irqsave(&master->queue_lock, flags);
+ 	mesg = master->cur_msg;
+-	master->cur_msg = NULL;
+-
+-	queue_kthread_work(&master->kworker, &master->pump_messages);
+ 	spin_unlock_irqrestore(&master->queue_lock, flags);
+ 
+ 	spi_unmap_msg(master, mesg);
+@@ -1106,9 +1103,13 @@ void spi_finalize_current_message(struct spi_master *master)
+ 		}
+ 	}
+ 
+-	trace_spi_message_done(mesg);
+-
++	spin_lock_irqsave(&master->queue_lock, flags);
++	master->cur_msg = NULL;
+ 	master->cur_msg_prepared = false;
++	queue_kthread_work(&master->kworker, &master->pump_messages);
++	spin_unlock_irqrestore(&master->queue_lock, flags);
++
++	trace_spi_message_done(mesg);
+ 
+ 	mesg->state = NULL;
+ 	if (mesg->complete)
+diff --git a/drivers/video/fbdev/mxsfb.c b/drivers/video/fbdev/mxsfb.c
+index f8ac4a452f26..0f64165b0147 100644
+--- a/drivers/video/fbdev/mxsfb.c
++++ b/drivers/video/fbdev/mxsfb.c
+@@ -316,6 +316,18 @@ static int mxsfb_check_var(struct fb_var_screeninfo *var,
+ 	return 0;
+ }
+ 
++static inline void mxsfb_enable_axi_clk(struct mxsfb_info *host)
++{
++	if (host->clk_axi)
++		clk_prepare_enable(host->clk_axi);
++}
++
++static inline void mxsfb_disable_axi_clk(struct mxsfb_info *host)
++{
++	if (host->clk_axi)
++		clk_disable_unprepare(host->clk_axi);
++}
++
+ static void mxsfb_enable_controller(struct fb_info *fb_info)
+ {
+ 	struct mxsfb_info *host = to_imxfb_host(fb_info);
+@@ -333,14 +345,13 @@ static void mxsfb_enable_controller(struct fb_info *fb_info)
+ 		}
+ 	}
+ 
+-	if (host->clk_axi)
+-		clk_prepare_enable(host->clk_axi);
+-
+ 	if (host->clk_disp_axi)
+ 		clk_prepare_enable(host->clk_disp_axi);
+ 	clk_prepare_enable(host->clk);
+ 	clk_set_rate(host->clk, PICOS2KHZ(fb_info->var.pixclock) * 1000U);
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* if it was disabled, re-enable the mode again */
+ 	writel(CTRL_DOTCLK_MODE, host->base + LCDC_CTRL + REG_SET);
+ 
+@@ -380,11 +391,11 @@ static void mxsfb_disable_controller(struct fb_info *fb_info)
+ 	reg = readl(host->base + LCDC_VDCTRL4);
+ 	writel(reg & ~VDCTRL4_SYNC_SIGNALS_ON, host->base + LCDC_VDCTRL4);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	clk_disable_unprepare(host->clk);
+ 	if (host->clk_disp_axi)
+ 		clk_disable_unprepare(host->clk_disp_axi);
+-	if (host->clk_axi)
+-		clk_disable_unprepare(host->clk_axi);
+ 
+ 	host->enabled = 0;
+ 
+@@ -421,6 +432,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		mxsfb_disable_controller(fb_info);
+ 	}
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* clear the FIFOs */
+ 	writel(CTRL1_FIFO_CLEAR, host->base + LCDC_CTRL1 + REG_SET);
+ 
+@@ -438,6 +451,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		ctrl |= CTRL_SET_WORD_LENGTH(3);
+ 		switch (host->ld_intf_width) {
+ 		case STMLCDIF_8BIT:
++			mxsfb_disable_axi_clk(host);
+ 			dev_err(&host->pdev->dev,
+ 					"Unsupported LCD bus width mapping\n");
+ 			return -EINVAL;
+@@ -451,6 +465,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		writel(CTRL1_SET_BYTE_PACKAGING(0x7), host->base + LCDC_CTRL1);
+ 		break;
+ 	default:
++		mxsfb_disable_axi_clk(host);
+ 		dev_err(&host->pdev->dev, "Unhandled color depth of %u\n",
+ 				fb_info->var.bits_per_pixel);
+ 		return -EINVAL;
+@@ -504,6 +519,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 			fb_info->fix.line_length * fb_info->var.yoffset,
+ 			host->base + host->devdata->next_buf);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	if (reenable)
+ 		mxsfb_enable_controller(fb_info);
+ 
+@@ -582,10 +599,14 @@ static int mxsfb_pan_display(struct fb_var_screeninfo *var,
+ 
+ 	offset = fb_info->fix.line_length * var->yoffset;
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* update on next VSYNC */
+ 	writel(fb_info->fix.smem_start + offset,
+ 			host->base + host->devdata->next_buf);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	return 0;
+ }
+ 
+@@ -608,13 +629,17 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 	unsigned line_count;
+ 	unsigned period;
+ 	unsigned long pa, fbsize;
+-	int bits_per_pixel, ofs;
++	int bits_per_pixel, ofs, ret = 0;
+ 	u32 transfer_count, vdctrl0, vdctrl2, vdctrl3, vdctrl4, ctrl;
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* Only restore the mode when the controller is running */
+ 	ctrl = readl(host->base + LCDC_CTRL);
+-	if (!(ctrl & CTRL_RUN))
+-		return -EINVAL;
++	if (!(ctrl & CTRL_RUN)) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	vdctrl0 = readl(host->base + LCDC_VDCTRL0);
+ 	vdctrl2 = readl(host->base + LCDC_VDCTRL2);
+@@ -635,7 +660,8 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 		break;
+ 	case 1:
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	fb_info->var.bits_per_pixel = bits_per_pixel;
+@@ -673,10 +699,14 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 
+ 	pa = readl(host->base + host->devdata->cur_buf);
+ 	fbsize = fb_info->fix.line_length * vmode->yres;
+-	if (pa < fb_info->fix.smem_start)
+-		return -EINVAL;
+-	if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len)
+-		return -EINVAL;
++	if (pa < fb_info->fix.smem_start) {
++		ret = -EINVAL;
++		goto err;
++	}
++	if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 	ofs = pa - fb_info->fix.smem_start;
+ 	if (ofs) {
+ 		memmove(fb_info->screen_base, fb_info->screen_base + ofs, fbsize);
+@@ -689,7 +719,11 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 	clk_prepare_enable(host->clk);
+ 	host->enabled = 1;
+ 
+-	return 0;
++err:
++	if (ret)
++		mxsfb_disable_axi_clk(host);
++
++	return ret;
+ }
+ 
+ static int mxsfb_init_fbinfo_dt(struct mxsfb_info *host,
+@@ -915,7 +949,9 @@ static int mxsfb_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (!host->enabled) {
++		mxsfb_enable_axi_clk(host);
+ 		writel(0, host->base + LCDC_CTRL);
++		mxsfb_disable_axi_clk(host);
+ 		mxsfb_set_par(fb_info);
+ 		mxsfb_enable_controller(fb_info);
+ 	}
+@@ -954,11 +990,15 @@ static void mxsfb_shutdown(struct platform_device *pdev)
+ 	struct fb_info *fb_info = platform_get_drvdata(pdev);
+ 	struct mxsfb_info *host = to_imxfb_host(fb_info);
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/*
+ 	 * Force stop the LCD controller as keeping it running during reboot
+ 	 * might interfere with the BootROM's boot mode pads sampling.
+ 	 */
+ 	writel(CTRL_RUN, host->base + LCDC_CTRL + REG_CLR);
++
++	mxsfb_disable_axi_clk(host);
+ }
+ 
+ static struct platform_driver mxsfb_driver = {
+diff --git a/fs/configfs/mount.c b/fs/configfs/mount.c
+index da94e41bdbf6..bca58da65e2b 100644
+--- a/fs/configfs/mount.c
++++ b/fs/configfs/mount.c
+@@ -129,8 +129,6 @@ void configfs_release_fs(void)
+ }
+ 
+ 
+-static struct kobject *config_kobj;
+-
+ static int __init configfs_init(void)
+ {
+ 	int err = -ENOMEM;
+@@ -141,8 +139,8 @@ static int __init configfs_init(void)
+ 	if (!configfs_dir_cachep)
+ 		goto out;
+ 
+-	config_kobj = kobject_create_and_add("config", kernel_kobj);
+-	if (!config_kobj)
++	err = sysfs_create_mount_point(kernel_kobj, "config");
++	if (err)
+ 		goto out2;
+ 
+ 	err = register_filesystem(&configfs_fs_type);
+@@ -152,7 +150,7 @@ static int __init configfs_init(void)
+ 	return 0;
+ out3:
+ 	pr_err("Unable to register filesystem!\n");
+-	kobject_put(config_kobj);
++	sysfs_remove_mount_point(kernel_kobj, "config");
+ out2:
+ 	kmem_cache_destroy(configfs_dir_cachep);
+ 	configfs_dir_cachep = NULL;
+@@ -163,7 +161,7 @@ out:
+ static void __exit configfs_exit(void)
+ {
+ 	unregister_filesystem(&configfs_fs_type);
+-	kobject_put(config_kobj);
++	sysfs_remove_mount_point(kernel_kobj, "config");
+ 	kmem_cache_destroy(configfs_dir_cachep);
+ 	configfs_dir_cachep = NULL;
+ }
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 96400ab42d13..7c14ab423d54 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -713,20 +713,17 @@ bool debugfs_initialized(void)
+ }
+ EXPORT_SYMBOL_GPL(debugfs_initialized);
+ 
+-
+-static struct kobject *debug_kobj;
+-
+ static int __init debugfs_init(void)
+ {
+ 	int retval;
+ 
+-	debug_kobj = kobject_create_and_add("debug", kernel_kobj);
+-	if (!debug_kobj)
+-		return -EINVAL;
++	retval = sysfs_create_mount_point(kernel_kobj, "debug");
++	if (retval)
++		return retval;
+ 
+ 	retval = register_filesystem(&debug_fs_type);
+ 	if (retval)
+-		kobject_put(debug_kobj);
++		sysfs_remove_mount_point(kernel_kobj, "debug");
+ 	else
+ 		debugfs_registered = true;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index e8799c11424b..59eabceb01df 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1238,7 +1238,6 @@ static void fuse_fs_cleanup(void)
+ }
+ 
+ static struct kobject *fuse_kobj;
+-static struct kobject *connections_kobj;
+ 
+ static int fuse_sysfs_init(void)
+ {
+@@ -1250,11 +1249,9 @@ static int fuse_sysfs_init(void)
+ 		goto out_err;
+ 	}
+ 
+-	connections_kobj = kobject_create_and_add("connections", fuse_kobj);
+-	if (!connections_kobj) {
+-		err = -ENOMEM;
++	err = sysfs_create_mount_point(fuse_kobj, "connections");
++	if (err)
+ 		goto out_fuse_unregister;
+-	}
+ 
+ 	return 0;
+ 
+@@ -1266,7 +1263,7 @@ static int fuse_sysfs_init(void)
+ 
+ static void fuse_sysfs_cleanup(void)
+ {
+-	kobject_put(connections_kobj);
++	sysfs_remove_mount_point(fuse_kobj, "connections");
+ 	kobject_put(fuse_kobj);
+ }
+ 
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 345b35fd329d..470b29e396ab 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -592,6 +592,9 @@ int kernfs_add_one(struct kernfs_node *kn)
+ 		goto out_unlock;
+ 
+ 	ret = -ENOENT;
++	if (parent->flags & KERNFS_EMPTY_DIR)
++		goto out_unlock;
++
+ 	if ((parent->flags & KERNFS_ACTIVATED) && !kernfs_active(parent))
+ 		goto out_unlock;
+ 
+@@ -783,6 +786,38 @@ struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
+ 	return ERR_PTR(rc);
+ }
+ 
++/**
++ * kernfs_create_empty_dir - create an always empty directory
++ * @parent: parent in which to create a new directory
++ * @name: name of the new directory
++ *
++ * Returns the created node on success, ERR_PTR() value on failure.
++ */
++struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
++					    const char *name)
++{
++	struct kernfs_node *kn;
++	int rc;
++
++	/* allocate */
++	kn = kernfs_new_node(parent, name, S_IRUGO|S_IXUGO|S_IFDIR, KERNFS_DIR);
++	if (!kn)
++		return ERR_PTR(-ENOMEM);
++
++	kn->flags |= KERNFS_EMPTY_DIR;
++	kn->dir.root = parent->dir.root;
++	kn->ns = NULL;
++	kn->priv = NULL;
++
++	/* link in */
++	rc = kernfs_add_one(kn);
++	if (!rc)
++		return kn;
++
++	kernfs_put(kn);
++	return ERR_PTR(rc);
++}
++
+ static struct dentry *kernfs_iop_lookup(struct inode *dir,
+ 					struct dentry *dentry,
+ 					unsigned int flags)
+@@ -1254,7 +1289,8 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
+ 	mutex_lock(&kernfs_mutex);
+ 
+ 	error = -ENOENT;
+-	if (!kernfs_active(kn) || !kernfs_active(new_parent))
++	if (!kernfs_active(kn) || !kernfs_active(new_parent) ||
++	    (new_parent->flags & KERNFS_EMPTY_DIR))
+ 		goto out;
+ 
+ 	error = 0;
+diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
+index 9000874a945b..2119bf06ce14 100644
+--- a/fs/kernfs/inode.c
++++ b/fs/kernfs/inode.c
+@@ -296,6 +296,8 @@ static void kernfs_init_inode(struct kernfs_node *kn, struct inode *inode)
+ 	case KERNFS_DIR:
+ 		inode->i_op = &kernfs_dir_iops;
+ 		inode->i_fop = &kernfs_dir_fops;
++		if (kn->flags & KERNFS_EMPTY_DIR)
++			make_empty_dir_inode(inode);
+ 		break;
+ 	case KERNFS_FILE:
+ 		inode->i_size = kn->attr.size;
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 0ab65122ee45..7ccb71b5d4fc 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -1093,3 +1093,99 @@ simple_nosetlease(struct file *filp, long arg, struct file_lock **flp,
+ 	return -EINVAL;
+ }
+ EXPORT_SYMBOL(simple_nosetlease);
++
++
++/*
++ * Operations for a permanently empty directory.
++ */
++static struct dentry *empty_dir_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
++{
++	return ERR_PTR(-ENOENT);
++}
++
++static int empty_dir_getattr(struct vfsmount *mnt, struct dentry *dentry,
++				 struct kstat *stat)
++{
++	struct inode *inode = d_inode(dentry);
++	generic_fillattr(inode, stat);
++	return 0;
++}
++
++static int empty_dir_setattr(struct dentry *dentry, struct iattr *attr)
++{
++	return -EPERM;
++}
++
++static int empty_dir_setxattr(struct dentry *dentry, const char *name,
++			      const void *value, size_t size, int flags)
++{
++	return -EOPNOTSUPP;
++}
++
++static ssize_t empty_dir_getxattr(struct dentry *dentry, const char *name,
++				  void *value, size_t size)
++{
++	return -EOPNOTSUPP;
++}
++
++static int empty_dir_removexattr(struct dentry *dentry, const char *name)
++{
++	return -EOPNOTSUPP;
++}
++
++static ssize_t empty_dir_listxattr(struct dentry *dentry, char *list, size_t size)
++{
++	return -EOPNOTSUPP;
++}
++
++static const struct inode_operations empty_dir_inode_operations = {
++	.lookup		= empty_dir_lookup,
++	.permission	= generic_permission,
++	.setattr	= empty_dir_setattr,
++	.getattr	= empty_dir_getattr,
++	.setxattr	= empty_dir_setxattr,
++	.getxattr	= empty_dir_getxattr,
++	.removexattr	= empty_dir_removexattr,
++	.listxattr	= empty_dir_listxattr,
++};
++
++static loff_t empty_dir_llseek(struct file *file, loff_t offset, int whence)
++{
++	/* An empty directory has two entries . and .. at offsets 0 and 1 */
++	return generic_file_llseek_size(file, offset, whence, 2, 2);
++}
++
++static int empty_dir_readdir(struct file *file, struct dir_context *ctx)
++{
++	dir_emit_dots(file, ctx);
++	return 0;
++}
++
++static const struct file_operations empty_dir_operations = {
++	.llseek		= empty_dir_llseek,
++	.read		= generic_read_dir,
++	.iterate	= empty_dir_readdir,
++	.fsync		= noop_fsync,
++};
++
++
++void make_empty_dir_inode(struct inode *inode)
++{
++	set_nlink(inode, 2);
++	inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
++	inode->i_uid = GLOBAL_ROOT_UID;
++	inode->i_gid = GLOBAL_ROOT_GID;
++	inode->i_rdev = 0;
++	inode->i_size = 2;
++	inode->i_blkbits = PAGE_SHIFT;
++	inode->i_blocks = 0;
++
++	inode->i_op = &empty_dir_inode_operations;
++	inode->i_fop = &empty_dir_operations;
++}
++
++bool is_empty_dir_inode(struct inode *inode)
++{
++	return (inode->i_fop == &empty_dir_operations) &&
++		(inode->i_op == &empty_dir_inode_operations);
++}
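
Note the idiom in is_empty_dir_inode() above: the inode carries no dedicated flag and is recognized purely by comparing its i_op/i_fop pointers against the permanently-empty operation tables. A toy illustration of classifying an object by function-table identity:

#include <stdio.h>

struct ops { const char *name; };

static const struct ops empty_dir_ops  = { "empty_dir" };
static const struct ops normal_dir_ops = { "normal_dir" };

struct inode { const struct ops *i_op; };

/* Same trick as is_empty_dir_inode(): identity, not a flag. */
static int is_empty_dir(const struct inode *inode)
{
	return inode->i_op == &empty_dir_ops;
}

int main(void)
{
	struct inode a = { &empty_dir_ops };
	struct inode b = { &normal_dir_ops };

	printf("a: %d, b: %d\n", is_empty_dir(&a), is_empty_dir(&b));
	return 0;
}
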
+diff --git a/fs/namespace.c b/fs/namespace.c
+index f07c7691ace1..91d787f3a844 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2334,6 +2334,8 @@ unlock:
+ 	return err;
+ }
+ 
++static bool fs_fully_visible(struct file_system_type *fs_type, int *new_mnt_flags);
++
+ /*
+  * create a new mount for userspace and request it to be added into the
+  * namespace's tree
+@@ -2365,6 +2367,10 @@ static int do_new_mount(struct path *path, const char *fstype, int flags,
+ 			flags |= MS_NODEV;
+ 			mnt_flags |= MNT_NODEV | MNT_LOCK_NODEV;
+ 		}
++		if (type->fs_flags & FS_USERNS_VISIBLE) {
++			if (!fs_fully_visible(type, &mnt_flags))
++				return -EPERM;
++		}
+ 	}
+ 
+ 	mnt = vfs_kern_mount(type, flags, name, data);
+@@ -3166,9 +3172,10 @@ bool current_chrooted(void)
+ 	return chrooted;
+ }
+ 
+-bool fs_fully_visible(struct file_system_type *type)
++static bool fs_fully_visible(struct file_system_type *type, int *new_mnt_flags)
+ {
+ 	struct mnt_namespace *ns = current->nsproxy->mnt_ns;
++	int new_flags = *new_mnt_flags;
+ 	struct mount *mnt;
+ 	bool visible = false;
+ 
+@@ -3187,6 +3194,19 @@ bool fs_fully_visible(struct file_system_type *type)
+ 		if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
+ 			continue;
+ 
++		/* Verify the mount flags are equal to or more permissive
++		 * than the proposed new mount.
++		 */
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_READONLY) &&
++		    !(new_flags & MNT_READONLY))
++			continue;
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_NODEV) &&
++		    !(new_flags & MNT_NODEV))
++			continue;
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_ATIME) &&
++		    ((mnt->mnt.mnt_flags & MNT_ATIME_MASK) != (new_flags & MNT_ATIME_MASK)))
++			continue;
++
+ 		/* This mount is not fully visible if there are any
+ 		 * locked child mounts that cover anything except for
+ 		 * empty directories.
+@@ -3196,11 +3216,14 @@ bool fs_fully_visible(struct file_system_type *type)
+ 			/* Only worry about locked mounts */
+ 			if (!(mnt->mnt.mnt_flags & MNT_LOCKED))
+ 				continue;
+-			if (!S_ISDIR(inode->i_mode))
+-				goto next;
+-			if (inode->i_nlink > 2)
++			/* Is the directory permanently empty? */
++			if (!is_empty_dir_inode(inode))
+ 				goto next;
+ 		}
++		/* Preserve the locked attributes */
++		*new_mnt_flags |= mnt->mnt.mnt_flags & (MNT_LOCK_READONLY | \
++							MNT_LOCK_NODEV    | \
++							MNT_LOCK_ATIME);
+ 		visible = true;
+ 		goto found;
+ 	next:	;
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index be65b2082135..286a1a7ddbf3 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -373,6 +373,10 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
+ 		WARN(1, "create '/proc/%s' by hand\n", qstr.name);
+ 		return NULL;
+ 	}
++	if (is_empty_pde(*parent)) {
++		WARN(1, "attempt to add to permanently empty directory");
++		return NULL;
++	}
+ 
+ 	ent = kzalloc(sizeof(struct proc_dir_entry) + qstr.len + 1, GFP_KERNEL);
+ 	if (!ent)
+@@ -455,6 +459,25 @@ struct proc_dir_entry *proc_mkdir(const char *name,
+ }
+ EXPORT_SYMBOL(proc_mkdir);
+ 
++struct proc_dir_entry *proc_create_mount_point(const char *name)
++{
++	umode_t mode = S_IFDIR | S_IRUGO | S_IXUGO;
++	struct proc_dir_entry *ent, *parent = NULL;
++
++	ent = __proc_create(&parent, name, mode, 2);
++	if (ent) {
++		ent->data = NULL;
++		ent->proc_fops = NULL;
++		ent->proc_iops = NULL;
++		if (proc_register(parent, ent) < 0) {
++			kfree(ent);
++			parent->nlink--;
++			ent = NULL;
++		}
++	}
++	return ent;
++}
++
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 					struct proc_dir_entry *parent,
+ 					const struct file_operations *proc_fops,
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 7697b6621cfd..a981068fac2b 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -423,6 +423,10 @@ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
+ 		inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ 		PROC_I(inode)->pde = de;
+ 
++		if (is_empty_pde(de)) {
++			make_empty_dir_inode(inode);
++			return inode;
++		}
+ 		if (de->mode) {
+ 			inode->i_mode = de->mode;
+ 			inode->i_uid = de->uid;
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index c835b94c0cd3..aa2781095bd1 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -191,6 +191,12 @@ static inline struct proc_dir_entry *pde_get(struct proc_dir_entry *pde)
+ }
+ extern void pde_put(struct proc_dir_entry *);
+ 
++static inline bool is_empty_pde(const struct proc_dir_entry *pde)
++{
++	return S_ISDIR(pde->mode) && !pde->proc_iops;
++}
++struct proc_dir_entry *proc_create_mount_point(const char *name);
++
+ /*
+  * inode.c
+  */
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index f92d5dd578a4..3f7dc3e8fdcb 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -19,6 +19,28 @@ static const struct inode_operations proc_sys_inode_operations;
+ static const struct file_operations proc_sys_dir_file_operations;
+ static const struct inode_operations proc_sys_dir_operations;
+ 
++/* Support for permanently empty directories */
++
++struct ctl_table sysctl_mount_point[] = {
++	{ }
++};
++
++static bool is_empty_dir(struct ctl_table_header *head)
++{
++	return head->ctl_table[0].child == sysctl_mount_point;
++}
++
++static void set_empty_dir(struct ctl_dir *dir)
++{
++	dir->header.ctl_table[0].child = sysctl_mount_point;
++}
++
++static void clear_empty_dir(struct ctl_dir *dir)
++{
++	dir->header.ctl_table[0].child = NULL;
++}
++
+ void proc_sys_poll_notify(struct ctl_table_poll *poll)
+ {
+ 	if (!poll)
+@@ -187,6 +209,17 @@ static int insert_header(struct ctl_dir *dir, struct ctl_table_header *header)
+ 	struct ctl_table *entry;
+ 	int err;
+ 
++	/* Is this a permanently empty directory? */
++	if (is_empty_dir(&dir->header))
++		return -EROFS;
++
++	/* Am I creating a permanently empty directory? */
++	if (header->ctl_table == sysctl_mount_point) {
++		if (!RB_EMPTY_ROOT(&dir->root))
++			return -EINVAL;
++		set_empty_dir(dir);
++	}
++
+ 	dir->header.nreg++;
+ 	header->parent = dir;
+ 	err = insert_links(header);
+@@ -202,6 +235,8 @@ fail:
+ 	erase_header(header);
+ 	put_links(header);
+ fail_links:
++	if (header->ctl_table == sysctl_mount_point)
++		clear_empty_dir(dir);
+ 	header->parent = NULL;
+ 	drop_sysctl_table(&dir->header);
+ 	return err;
+@@ -419,6 +454,8 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
+ 		inode->i_mode |= S_IFDIR;
+ 		inode->i_op = &proc_sys_dir_operations;
+ 		inode->i_fop = &proc_sys_dir_file_operations;
++		if (is_empty_dir(head))
++			make_empty_dir_inode(inode);
+ 	}
+ out:
+ 	return inode;
+diff --git a/fs/proc/root.c b/fs/proc/root.c
+index e74ac9f1a2c0..6934ce8420d4 100644
+--- a/fs/proc/root.c
++++ b/fs/proc/root.c
+@@ -112,9 +112,6 @@ static struct dentry *proc_mount(struct file_system_type *fs_type,
+ 		ns = task_active_pid_ns(current);
+ 		options = data;
+ 
+-		if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
+-			return ERR_PTR(-EPERM);
+-
+ 		/* Does the mounter have privilege over the pid namespace? */
+ 		if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
+ 			return ERR_PTR(-EPERM);
+@@ -159,7 +156,7 @@ static struct file_system_type proc_fs_type = {
+ 	.name		= "proc",
+ 	.mount		= proc_mount,
+ 	.kill_sb	= proc_kill_sb,
+-	.fs_flags	= FS_USERNS_MOUNT,
++	.fs_flags	= FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
+ };
+ 
+ void __init proc_root_init(void)
+@@ -182,10 +179,10 @@ void __init proc_root_init(void)
+ #endif
+ 	proc_mkdir("fs", NULL);
+ 	proc_mkdir("driver", NULL);
+-	proc_mkdir("fs/nfsd", NULL); /* somewhere for the nfsd filesystem to be mounted */
++	proc_create_mount_point("fs/nfsd"); /* somewhere for the nfsd filesystem to be mounted */
+ #if defined(CONFIG_SUN_OPENPROMFS) || defined(CONFIG_SUN_OPENPROMFS_MODULE)
+ 	/* just give it a mountpoint */
+-	proc_mkdir("openprom", NULL);
++	proc_create_mount_point("openprom");
+ #endif
+ 	proc_tty_init();
+ 	proc_mkdir("bus", NULL);
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index b32ce53d24ee..a8fe4f42c9bd 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -458,22 +458,18 @@ static struct file_system_type pstore_fs_type = {
+ 	.kill_sb	= pstore_kill_sb,
+ };
+ 
+-static struct kobject *pstore_kobj;
+-
+ static int __init init_pstore_fs(void)
+ {
+-	int err = 0;
++	int err;
+ 
+ 	/* Create a convenient mount point for people to access pstore */
+-	pstore_kobj = kobject_create_and_add("pstore", fs_kobj);
+-	if (!pstore_kobj) {
+-		err = -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "pstore");
++	if (err)
+ 		goto out;
+-	}
+ 
+ 	err = register_filesystem(&pstore_fs_type);
+ 	if (err < 0)
+-		kobject_put(pstore_kobj);
++		sysfs_remove_mount_point(fs_kobj, "pstore");
+ 
+ out:
+ 	return err;
+diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
+index 0b45ff42f374..94374e435025 100644
+--- a/fs/sysfs/dir.c
++++ b/fs/sysfs/dir.c
+@@ -121,3 +121,37 @@ int sysfs_move_dir_ns(struct kobject *kobj, struct kobject *new_parent_kobj,
+ 
+ 	return kernfs_rename_ns(kn, new_parent, kn->name, new_ns);
+ }
++
++/**
++ * sysfs_create_mount_point - create an always empty directory
++ * @parent_kobj:  kobject that will contain this always empty directory
++ * @name: The name of the always empty directory to add
++ */
++int sysfs_create_mount_point(struct kobject *parent_kobj, const char *name)
++{
++	struct kernfs_node *kn, *parent = parent_kobj->sd;
++
++	kn = kernfs_create_empty_dir(parent, name);
++	if (IS_ERR(kn)) {
++		if (PTR_ERR(kn) == -EEXIST)
++			sysfs_warn_dup(parent, name);
++		return PTR_ERR(kn);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(sysfs_create_mount_point);
++
++/**
++ *	sysfs_remove_mount_point - remove an always empty directory.
++ *	@parent_kobj: kobject that contains this always empty directory
++ *	@name: The name of the always empty directory to remove
++ *
++ */
++void sysfs_remove_mount_point(struct kobject *parent_kobj, const char *name)
++{
++	struct kernfs_node *parent = parent_kobj->sd;
++
++	kernfs_remove_by_name_ns(parent, name, NULL);
++}
++EXPORT_SYMBOL_GPL(sysfs_remove_mount_point);
+diff --git a/fs/sysfs/mount.c b/fs/sysfs/mount.c
+index 8a49486bf30c..1c6ac6fcee9f 100644
+--- a/fs/sysfs/mount.c
++++ b/fs/sysfs/mount.c
+@@ -31,9 +31,6 @@ static struct dentry *sysfs_mount(struct file_system_type *fs_type,
+ 	bool new_sb;
+ 
+ 	if (!(flags & MS_KERNMOUNT)) {
+-		if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
+-			return ERR_PTR(-EPERM);
+-
+ 		if (!kobj_ns_current_may_mount(KOBJ_NS_TYPE_NET))
+ 			return ERR_PTR(-EPERM);
+ 	}
+@@ -58,7 +55,7 @@ static struct file_system_type sysfs_fs_type = {
+ 	.name		= "sysfs",
+ 	.mount		= sysfs_mount,
+ 	.kill_sb	= sysfs_kill_sb,
+-	.fs_flags	= FS_USERNS_MOUNT,
++	.fs_flags	= FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
+ };
+ 
+ int __init sysfs_init(void)
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 24c7aa8b1d20..1b2689f34d0b 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -428,6 +428,7 @@ extern acpi_status acpi_pci_osc_control_set(acpi_handle handle,
+ #define ACPI_OST_SC_INSERT_NOT_SUPPORTED	0x82
+ 
+ extern void acpi_early_init(void);
++extern void acpi_subsystem_init(void);
+ 
+ extern int acpi_nvs_register(__u64 start, __u64 size);
+ 
+@@ -477,6 +478,7 @@ static inline const char *acpi_dev_name(struct acpi_device *adev)
+ }
+ 
+ static inline void acpi_early_init(void) { }
++static inline void acpi_subsystem_init(void) { }
+ 
+ static inline int early_acpi_boot_init(void)
+ {
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 52cc4492cb3a..b5b52598e7b4 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1863,6 +1863,7 @@ struct file_system_type {
+ #define FS_HAS_SUBTYPE		4
+ #define FS_USERNS_MOUNT		8	/* Can be mounted by userns root */
+ #define FS_USERNS_DEV_MOUNT	16 /* A userns mount does not imply MNT_NODEV */
++#define FS_USERNS_VISIBLE	32	/* FS must already be visible */
+ #define FS_RENAME_DOES_D_MOVE	32768	/* FS will handle d_move() during rename() internally. */
+ 	struct dentry *(*mount) (struct file_system_type *, int,
+ 		       const char *, void *);
+@@ -1950,7 +1951,6 @@ extern int vfs_ustat(dev_t, struct kstatfs *);
+ extern int freeze_super(struct super_block *super);
+ extern int thaw_super(struct super_block *super);
+ extern bool our_mnt(struct vfsmount *mnt);
+-extern bool fs_fully_visible(struct file_system_type *);
+ 
+ extern int current_umask(void);
+ 
+@@ -2721,6 +2721,8 @@ extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned in
+ extern ssize_t generic_read_dir(struct file *, char __user *, size_t, loff_t *);
+ extern const struct file_operations simple_dir_operations;
+ extern const struct inode_operations simple_dir_inode_operations;
++extern void make_empty_dir_inode(struct inode *inode);
++extern bool is_empty_dir_inode(struct inode *inode);
+ struct tree_descr { char *name; const struct file_operations *ops; int mode; };
+ struct dentry *d_alloc_name(struct dentry *, const char *);
+ extern int simple_fill_super(struct super_block *, unsigned long, struct tree_descr *);
+diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
+index 71ecdab1671b..29d1896c3ba5 100644
+--- a/include/linux/kernfs.h
++++ b/include/linux/kernfs.h
+@@ -45,6 +45,7 @@ enum kernfs_node_flag {
+ 	KERNFS_LOCKDEP		= 0x0100,
+ 	KERNFS_SUICIDAL		= 0x0400,
+ 	KERNFS_SUICIDED		= 0x0800,
++	KERNFS_EMPTY_DIR	= 0x1000,
+ };
+ 
+ /* @flags for kernfs_create_root() */
+@@ -285,6 +286,8 @@ void kernfs_destroy_root(struct kernfs_root *root);
+ struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
+ 					 const char *name, umode_t mode,
+ 					 void *priv, const void *ns);
++struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
++					    const char *name);
+ struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
+ 					 const char *name,
+ 					 umode_t mode, loff_t size,
+diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
+index e705467ddb47..d0a1f99e24e3 100644
+--- a/include/linux/kmemleak.h
++++ b/include/linux/kmemleak.h
+@@ -28,7 +28,8 @@
+ extern void kmemleak_init(void) __ref;
+ extern void kmemleak_alloc(const void *ptr, size_t size, int min_count,
+ 			   gfp_t gfp) __ref;
+-extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size) __ref;
++extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++				  gfp_t gfp) __ref;
+ extern void kmemleak_free(const void *ptr) __ref;
+ extern void kmemleak_free_part(const void *ptr, size_t size) __ref;
+ extern void kmemleak_free_percpu(const void __percpu *ptr) __ref;
+@@ -71,7 +72,8 @@ static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
+ 					    gfp_t gfp)
+ {
+ }
+-static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
++static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++					 gfp_t gfp)
+ {
+ }
+ static inline void kmemleak_free(const void *ptr)
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 211e9da8a7d7..409794503aed 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -573,9 +573,15 @@ int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
+ int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
+ 		  int reg, int len, u32 val);
+ 
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
++typedef u64 pci_bus_addr_t;
++#else
++typedef u32 pci_bus_addr_t;
++#endif
++
+ struct pci_bus_region {
+-	dma_addr_t start;
+-	dma_addr_t end;
++	pci_bus_addr_t start;
++	pci_bus_addr_t end;
+ };
+ 
+ struct pci_dynids {
+@@ -1002,6 +1008,7 @@ int __must_check pci_assign_resource(struct pci_dev *dev, int i);
+ int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
+ int pci_select_bars(struct pci_dev *dev, unsigned long flags);
+ bool pci_device_is_present(struct pci_dev *pdev);
++void pci_ignore_hotplug(struct pci_dev *dev);
+ 
+ /* ROM control related routines */
+ int pci_enable_rom(struct pci_dev *pdev);
+@@ -1039,11 +1046,6 @@ bool pci_dev_run_wake(struct pci_dev *dev);
+ bool pci_check_pme_status(struct pci_dev *dev);
+ void pci_pme_wakeup_bus(struct pci_bus *bus);
+ 
+-static inline void pci_ignore_hotplug(struct pci_dev *dev)
+-{
+-	dev->ignore_hotplug = 1;
+-}
+-
+ static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
+ 				  bool enable)
+ {
+@@ -1124,7 +1126,7 @@ int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
+ 
+ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
+ 
+-static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
++static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
+ {
+ 	struct pci_bus_region region;
+ 
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index b7361f831226..d8926fbf6c58 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -188,6 +188,9 @@ struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
+ void unregister_sysctl_table(struct ctl_table_header * table);
+ 
+ extern int sysctl_init(void);
++
++extern struct ctl_table sysctl_mount_point[];
++
+ #else /* CONFIG_SYSCTL */
+ static inline struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
+ {
+diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
+index ddad16148bd6..68c3b0fa1185 100644
+--- a/include/linux/sysfs.h
++++ b/include/linux/sysfs.h
+@@ -195,6 +195,10 @@ int __must_check sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name,
+ int __must_check sysfs_move_dir_ns(struct kobject *kobj,
+ 				   struct kobject *new_parent_kobj,
+ 				   const void *new_ns);
++int __must_check sysfs_create_mount_point(struct kobject *parent_kobj,
++					  const char *name);
++void sysfs_remove_mount_point(struct kobject *parent_kobj,
++			      const char *name);
+ 
+ int __must_check sysfs_create_file_ns(struct kobject *kobj,
+ 				      const struct attribute *attr,
+@@ -283,6 +287,17 @@ static inline int sysfs_move_dir_ns(struct kobject *kobj,
+ 	return 0;
+ }
+ 
++static inline int sysfs_create_mount_point(struct kobject *parent_kobj,
++					   const char *name)
++{
++	return 0;
++}
++
++static inline void sysfs_remove_mount_point(struct kobject *parent_kobj,
++					    const char *name)
++{
++}
++
+ static inline int sysfs_create_file_ns(struct kobject *kobj,
+ 				       const struct attribute *attr,
+ 				       const void *ns)
+diff --git a/include/linux/types.h b/include/linux/types.h
+index 6747247e3f9f..00a127e89752 100644
+--- a/include/linux/types.h
++++ b/include/linux/types.h
+@@ -139,12 +139,20 @@ typedef unsigned long blkcnt_t;
+  */
+ #define pgoff_t unsigned long
+ 
+-/* A dma_addr_t can hold any valid DMA or bus address for the platform */
++/*
++ * A dma_addr_t can hold any valid DMA address, i.e., any address returned
++ * by the DMA API.
++ *
++ * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
++ * bits wide.  Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
++ * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
++ * so they don't care about the size of the actual bus addresses.
++ */
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ typedef u64 dma_addr_t;
+ #else
+ typedef u32 dma_addr_t;
+-#endif /* dma_addr_t */
++#endif
+ 
+ #ifdef __CHECKER__
+ #else
+diff --git a/init/main.c b/init/main.c
+index 6f0f1c5ff8cc..198e265dddd9 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -661,6 +661,7 @@ asmlinkage __visible void __init start_kernel(void)
+ 
+ 	check_bugs();
+ 
++	acpi_subsystem_init();
+ 	sfi_init_late();
+ 
+ 	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index 29a7b2cc593e..b2bd0b17c9f3 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -1924,8 +1924,6 @@ static struct file_system_type cgroup_fs_type = {
+ 	.kill_sb = cgroup_kill_sb,
+ };
+ 
+-static struct kobject *cgroup_kobj;
+-
+ /**
+  * task_cgroup_path - cgroup path of a task in the first cgroup hierarchy
+  * @task: target task
+@@ -5042,13 +5040,13 @@ int __init cgroup_init(void)
+ 		}
+ 	}
+ 
+-	cgroup_kobj = kobject_create_and_add("cgroup", fs_kobj);
+-	if (!cgroup_kobj)
+-		return -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "cgroup");
++	if (err)
++		return err;
+ 
+ 	err = register_filesystem(&cgroup_fs_type);
+ 	if (err < 0) {
+-		kobject_put(cgroup_kobj);
++		sysfs_remove_mount_point(fs_kobj, "cgroup");
+ 		return err;
+ 	}
+ 
+diff --git a/kernel/irq/devres.c b/kernel/irq/devres.c
+index d5d0f7345c54..74d90a754268 100644
+--- a/kernel/irq/devres.c
++++ b/kernel/irq/devres.c
+@@ -104,7 +104,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
+ 		return -ENOMEM;
+ 
+ 	rc = request_any_context_irq(irq, handler, irqflags, devname, dev_id);
+-	if (rc) {
++	if (rc < 0) {
+ 		devres_free(dr);
+ 		return rc;
+ 	}
+@@ -113,7 +113,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
+ 	dr->dev_id = dev_id;
+ 	devres_add(dev, dr);
+ 
+-	return 0;
++	return rc;
+ }
+ EXPORT_SYMBOL(devm_request_any_context_irq);
+ 
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 3f9f1d6b4c2e..8ba4f9f2e44e 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -179,7 +179,9 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ 		.count = 0
+ 	};
+ 
++	mutex_lock(&module_mutex);
+ 	kallsyms_on_each_symbol(klp_find_callback, &args);
++	mutex_unlock(&module_mutex);
+ 
+ 	if (args.count == 0)
+ 		pr_err("symbol '%s' not found in symbol table\n", name);
+@@ -219,13 +221,19 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
+ 		.name = name,
+ 		.addr = addr,
+ 	};
++	int ret;
+ 
+-	if (kallsyms_on_each_symbol(klp_verify_callback, &args))
+-		return 0;
++	mutex_lock(&module_mutex);
++	ret = kallsyms_on_each_symbol(klp_verify_callback, &args);
++	mutex_unlock(&module_mutex);
+ 
+-	pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
+-		name, addr);
+-	return -EINVAL;
++	if (!ret) {
++		pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
++			name, addr);
++		return -EINVAL;
++	}
++
++	return 0;
+ }
+ 
+ static int klp_find_verify_func_addr(struct klp_object *obj,
+diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
+index cc9ceca7bde1..d45f12e48c15 100644
+--- a/kernel/rcu/tiny.c
++++ b/kernel/rcu/tiny.c
+@@ -182,6 +182,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
+ 
+ 	/* Move the ready-to-invoke callbacks to a local list. */
+ 	local_irq_save(flags);
++	if (rcp->donetail == &rcp->rcucblist) {
++		/* No callbacks ready, so just leave. */
++		local_irq_restore(flags);
++		return;
++	}
+ 	RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
+ 	list = rcp->rcucblist;
+ 	rcp->rcucblist = *rcp->donetail;
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index ce410bb9f2e1..afdd5263cced 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -1510,12 +1510,6 @@ static struct ctl_table vm_table[] = {
+ 	{ }
+ };
+ 
+-#if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
+-static struct ctl_table binfmt_misc_table[] = {
+-	{ }
+-};
+-#endif
+-
+ static struct ctl_table fs_table[] = {
+ 	{
+ 		.procname	= "inode-nr",
+@@ -1669,7 +1663,7 @@ static struct ctl_table fs_table[] = {
+ 	{
+ 		.procname	= "binfmt_misc",
+ 		.mode		= 0555,
+-		.child		= binfmt_misc_table,
++		.child		= sysctl_mount_point,
+ 	},
+ #endif
+ 	{
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index f0fe4f2c1fa7..3716cdb8ba42 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -195,6 +195,8 @@ static struct kmem_cache *scan_area_cache;
+ 
+ /* set if tracing memory operations is enabled */
+ static int kmemleak_enabled;
++/* same as above but only for the kmemleak_free() callback */
++static int kmemleak_free_enabled;
+ /* set in the late_initcall if there were no errors */
+ static int kmemleak_initialized;
+ /* enables or disables early logging of the memory operations */
+@@ -907,12 +909,13 @@ EXPORT_SYMBOL_GPL(kmemleak_alloc);
+  * kmemleak_alloc_percpu - register a newly allocated __percpu object
+  * @ptr:	__percpu pointer to beginning of the object
+  * @size:	size of the object
++ * @gfp:	flags used for kmemleak internal memory allocations
+  *
+  * This function is called from the kernel percpu allocator when a new object
+- * (memory block) is allocated (alloc_percpu). It assumes GFP_KERNEL
+- * allocation.
++ * (memory block) is allocated (alloc_percpu).
+  */
+-void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
++void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++				 gfp_t gfp)
+ {
+ 	unsigned int cpu;
+ 
+@@ -925,7 +928,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
+ 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
+ 		for_each_possible_cpu(cpu)
+ 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
+-				      size, 0, GFP_KERNEL);
++				      size, 0, gfp);
+ 	else if (kmemleak_early_log)
+ 		log_early(KMEMLEAK_ALLOC_PERCPU, ptr, size, 0);
+ }
+@@ -942,7 +945,7 @@ void __ref kmemleak_free(const void *ptr)
+ {
+ 	pr_debug("%s(0x%p)\n", __func__, ptr);
+ 
+-	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
++	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
+ 		delete_object_full((unsigned long)ptr);
+ 	else if (kmemleak_early_log)
+ 		log_early(KMEMLEAK_FREE, ptr, 0, 0);
+@@ -982,7 +985,7 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
+ 
+ 	pr_debug("%s(0x%p)\n", __func__, ptr);
+ 
+-	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
++	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
+ 		for_each_possible_cpu(cpu)
+ 			delete_object_full((unsigned long)per_cpu_ptr(ptr,
+ 								      cpu));
+@@ -1750,6 +1753,13 @@ static void kmemleak_do_cleanup(struct work_struct *work)
+ 	mutex_lock(&scan_mutex);
+ 	stop_scan_thread();
+ 
++	/*
++	 * Once the scan thread has stopped, it is safe to no longer track
++	 * object freeing. Ordering of the scan thread stopping and the memory
++	 * accesses below is guaranteed by the kthread_stop() function.
++	 */
++	kmemleak_free_enabled = 0;
++
+ 	if (!kmemleak_found_leaks)
+ 		__kmemleak_do_cleanup();
+ 	else
+@@ -1776,6 +1786,8 @@ static void kmemleak_disable(void)
+ 	/* check whether it is too early for a kernel thread */
+ 	if (kmemleak_initialized)
+ 		schedule_work(&cleanup_work);
++	else
++		kmemleak_free_enabled = 0;
+ 
+ 	pr_info("Kernel memory leak detector disabled\n");
+ }
+@@ -1840,8 +1852,10 @@ void __init kmemleak_init(void)
+ 	if (kmemleak_error) {
+ 		local_irq_restore(flags);
+ 		return;
+-	} else
++	} else {
+ 		kmemleak_enabled = 1;
++		kmemleak_free_enabled = 1;
++	}
+ 	local_irq_restore(flags);
+ 
+ 	/*
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 0f7d73b3e4b1..36abe529ad6e 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1971,35 +1971,41 @@ retry_cpuset:
+ 	pol = get_vma_policy(vma, addr);
+ 	cpuset_mems_cookie = read_mems_allowed_begin();
+ 
+-	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage &&
+-					pol->mode != MPOL_INTERLEAVE)) {
++	if (pol->mode == MPOL_INTERLEAVE) {
++		unsigned nid;
++
++		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
++		mpol_cond_put(pol);
++		page = alloc_page_interleave(gfp, order, nid);
++		goto out;
++	}
++
++	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
++		int hpage_node = node;
++
+ 		/*
+ 		 * For hugepage allocation and non-interleave policy which
+-		 * allows the current node, we only try to allocate from the
+-		 * current node and don't fall back to other nodes, as the
+-		 * cost of remote accesses would likely offset THP benefits.
++		 * allows the current node (or other explicitly preferred
++		 * node) we only try to allocate from the current/preferred
++		 * node and don't fall back to other nodes, as the cost of
++		 * remote accesses would likely offset THP benefits.
+ 		 *
+ 		 * If the policy is interleave, or does not allow the current
+ 		 * node in its nodemask, we allocate the standard way.
+ 		 */
++		if (pol->mode == MPOL_PREFERRED &&
++						!(pol->flags & MPOL_F_LOCAL))
++			hpage_node = pol->v.preferred_node;
++
+ 		nmask = policy_nodemask(gfp, pol);
+-		if (!nmask || node_isset(node, *nmask)) {
++		if (!nmask || node_isset(hpage_node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = alloc_pages_exact_node(node,
++			page = alloc_pages_exact_node(hpage_node,
+ 						gfp | __GFP_THISNODE, order);
+ 			goto out;
+ 		}
+ 	}
+ 
+-	if (pol->mode == MPOL_INTERLEAVE) {
+-		unsigned nid;
+-
+-		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
+-		mpol_cond_put(pol);
+-		page = alloc_page_interleave(gfp, order, nid);
+-		goto out;
+-	}
+-
+ 	nmask = policy_nodemask(gfp, pol);
+ 	zl = policy_zonelist(gfp, pol, node);
+ 	mpol_cond_put(pol);
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 73c97a5f4495..12e125c0aa90 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1030,7 +1030,7 @@ area_found:
+ 		memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
+ 
+ 	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
+-	kmemleak_alloc_percpu(ptr, size);
++	kmemleak_alloc_percpu(ptr, size, gfp);
+ 	return ptr;
+ 
+ fail_unlock:
+diff --git a/security/inode.c b/security/inode.c
+index 131a3c49f766..d0b1a88da557 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -215,19 +215,17 @@ void securityfs_remove(struct dentry *dentry)
+ }
+ EXPORT_SYMBOL_GPL(securityfs_remove);
+ 
+-static struct kobject *security_kobj;
+-
+ static int __init securityfs_init(void)
+ {
+ 	int retval;
+ 
+-	security_kobj = kobject_create_and_add("security", kernel_kobj);
+-	if (!security_kobj)
+-		return -EINVAL;
++	retval = sysfs_create_mount_point(kernel_kobj, "security");
++	if (retval)
++		return retval;
+ 
+ 	retval = register_filesystem(&fs_type);
+ 	if (retval)
+-		kobject_put(security_kobj);
++		sysfs_remove_mount_point(kernel_kobj, "security");
+ 	return retval;
+ }
+ 
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 5fde34326dcf..ea8153391f62 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -1853,7 +1853,6 @@ static struct file_system_type sel_fs_type = {
+ };
+ 
+ struct vfsmount *selinuxfs_mount;
+-static struct kobject *selinuxfs_kobj;
+ 
+ static int __init init_sel_fs(void)
+ {
+@@ -1862,13 +1861,13 @@ static int __init init_sel_fs(void)
+ 	if (!selinux_enabled)
+ 		return 0;
+ 
+-	selinuxfs_kobj = kobject_create_and_add("selinux", fs_kobj);
+-	if (!selinuxfs_kobj)
+-		return -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "selinux");
++	if (err)
++		return err;
+ 
+ 	err = register_filesystem(&sel_fs_type);
+ 	if (err) {
+-		kobject_put(selinuxfs_kobj);
++		sysfs_remove_mount_point(fs_kobj, "selinux");
+ 		return err;
+ 	}
+ 
+@@ -1887,7 +1886,7 @@ __initcall(init_sel_fs);
+ #ifdef CONFIG_SECURITY_SELINUX_DISABLE
+ void exit_sel_fs(void)
+ {
+-	kobject_put(selinuxfs_kobj);
++	sysfs_remove_mount_point(fs_kobj, "selinux");
+ 	kern_unmount(selinuxfs_mount);
+ 	unregister_filesystem(&sel_fs_type);
+ }
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index bce4e8f1b267..fb28f74a0a87 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -2150,16 +2150,16 @@ static const struct file_operations smk_revoke_subj_ops = {
+ 	.llseek		= generic_file_llseek,
+ };
+ 
+-static struct kset *smackfs_kset;
+ /**
+  * smk_init_sysfs - initialize /sys/fs/smackfs
+  *
+  */
+ static int smk_init_sysfs(void)
+ {
+-	smackfs_kset = kset_create_and_add("smackfs", NULL, fs_kobj);
+-	if (!smackfs_kset)
+-		return -ENOMEM;
++	int err;
++	err = sysfs_create_mount_point(fs_kobj, "smackfs");
++	if (err)
++		return err;
+ 	return 0;
+ }
+ 
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 0345e53a340c..546166c0c51e 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -1044,7 +1044,8 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ static ssize_t show_pcm_class(struct device *dev,
+ 			      struct device_attribute *attr, char *buf)
+ {
+-	struct snd_pcm *pcm;
++	struct snd_pcm_str *pstr = container_of(dev, struct snd_pcm_str, dev);
++	struct snd_pcm *pcm = pstr->pcm;
+ 	const char *str;
+ 	static const char *strs[SNDRV_PCM_CLASS_LAST + 1] = {
+ 		[SNDRV_PCM_CLASS_GENERIC] = "generic",
+@@ -1053,8 +1054,7 @@ static ssize_t show_pcm_class(struct device *dev,
+ 		[SNDRV_PCM_CLASS_DIGITIZER] = "digitizer",
+ 	};
+ 
+-	if (! (pcm = dev_get_drvdata(dev)) ||
+-	    pcm->dev_class > SNDRV_PCM_CLASS_LAST)
++	if (pcm->dev_class > SNDRV_PCM_CLASS_LAST)
+ 		str = "none";
+ 	else
+ 		str = strs[pcm->dev_class];
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index a002a6d1e6da..bdb24b537f42 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2056,6 +2056,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1022, 0x780d),
+ 	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ 	/* ATI HDMI */
++	{ PCI_DEVICE(0x1002, 0x1308),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0x793b),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0x7919),
+@@ -2064,6 +2066,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0x970f),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
++	{ PCI_DEVICE(0x1002, 0x9840),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0xaa00),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0xaa08),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a556d63564e6..a25b7567b789 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4374,6 +4374,7 @@ enum {
+ 	ALC269_FIXUP_LIFEBOOK,
+ 	ALC269_FIXUP_LIFEBOOK_EXTMIC,
+ 	ALC269_FIXUP_LIFEBOOK_HP_PIN,
++	ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT,
+ 	ALC269_FIXUP_AMIC,
+ 	ALC269_FIXUP_DMIC,
+ 	ALC269VB_FIXUP_AMIC,
+@@ -4394,6 +4395,7 @@ enum {
+ 	ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
+ 	ALC269_FIXUP_HEADSET_MODE,
+ 	ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
++	ALC269_FIXUP_ASPIRE_HEADSET_MIC,
+ 	ALC269_FIXUP_ASUS_X101_FUNC,
+ 	ALC269_FIXUP_ASUS_X101_VERB,
+ 	ALC269_FIXUP_ASUS_X101,
+@@ -4421,6 +4423,7 @@ enum {
+ 	ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ 	ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC292_FIXUP_TPT440_DOCK,
++	ALC292_FIXUP_TPT440_DOCK2,
+ 	ALC283_FIXUP_BXBT2807_MIC,
+ 	ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
+ 	ALC282_FIXUP_ASPIRE_V5_PINS,
+@@ -4534,6 +4537,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		},
+ 	},
++	[ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
++	},
+ 	[ALC269_FIXUP_AMIC] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -4662,6 +4669,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode_no_hp_mic,
+ 	},
++	[ALC269_FIXUP_ASPIRE_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* headset mic w/o jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
+ 	[ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -4864,6 +4880,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE
+ 	},
+ 	[ALC292_FIXUP_TPT440_DOCK] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
++		.chained = true,
++		.chain_id = ALC292_FIXUP_TPT440_DOCK2
++	},
++	[ALC292_FIXUP_TPT440_DOCK2] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+ 			{ 0x16, 0x21211010 }, /* dock headphone */
+@@ -4930,6 +4952,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
++	SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+@@ -5027,6 +5051,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
++	SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 0db571340edb..3409a5376eb0 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -26,7 +26,7 @@ TARGETS_HOTPLUG += memory-hotplug
+ # Makefile to avoid test build failures when test
+ # Makefile doesn't have explicit build rules.
+ ifeq (1,$(MAKELEVEL))
+-undefine LDFLAGS
++override LDFLAGS =
+ override MAKEFLAGS =
+ endif
+ 


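The sysfs conversions above (cgroup, securityfs, selinuxfs, smackfs) all follow one pattern: a mount-point directory that used to be a plain kobject is replaced by the new sysfs_create_mount_point()/sysfs_remove_mount_point() helpers, which back the directory with an empty kernfs node (KERNFS_EMPTY_DIR) so nothing else can be created under it. A minimal sketch of that pattern for a hypothetical "examplefs" filesystem (the name and example_fs_type are illustrative, not from the patch):

	#include <linux/fs.h>
	#include <linux/sysfs.h>

	static struct file_system_type example_fs_type = {
		.name = "examplefs",
		/* .mount, .kill_sb, ... omitted */
	};

	static int __init examplefs_init(void)
	{
		int err;

		/* creates /sys/fs/examplefs as an empty, sealed directory */
		err = sysfs_create_mount_point(fs_kobj, "examplefs");
		if (err)
			return err;

		err = register_filesystem(&example_fs_type);
		if (err)
			sysfs_remove_mount_point(fs_kobj, "examplefs");
		return err;
	}

The payoff, visible in the cgroup hunk, is that the error path no longer needs kobject_put() and the mount-point directory can never accumulate stray attributes.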

* [gentoo-commits] proj/linux-patches:4.0 commit in: /
@ 2015-09-29  0:06 Mike Pagano
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2015-09-29  0:06 UTC (permalink / raw
  To: gentoo-commits

commit:     5f1fcf42d2b9edd5baca2940182eb948753faf2a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 29 00:05:46 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 29 00:05:46 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5f1fcf42

dm crypt: constrain crypt device's max_segment_size to PAGE_SIZE. See bug #561558. Thanks to kipplasterjoe for reporting.

 0000_README                                |  4 ++
 1600_dm-crypt-limit-max-segment-size.patch | 84 ++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/0000_README b/0000_README
index 3ff77bb..142ec40 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1600_dm-crypt-limit-max-segment-size.patch
+From:   https://bugzilla.kernel.org/show_bug.cgi?id=104421
+Desc:   dm crypt: constrain crypt device's max_segment_size to PAGE_SIZE.
+
 Patch:  2600_select-REGMAP_IRQ-for-rt5033.patch
 From:   http://git.kernel.org/
 Desc:   mfd: rt5033: MFD_RT5033 needs to select REGMAP_IRQ. See bug #546938.

diff --git a/1600_dm-crypt-limit-max-segment-size.patch b/1600_dm-crypt-limit-max-segment-size.patch
new file mode 100644
index 0000000..82aca44
--- /dev/null
+++ b/1600_dm-crypt-limit-max-segment-size.patch
@@ -0,0 +1,84 @@
+From 586b286b110e94eb31840ac5afc0c24e0881fe34 Mon Sep 17 00:00:00 2001
+From: Mike Snitzer <snitzer@redhat.com>
+Date: Wed, 9 Sep 2015 21:34:51 -0400
+Subject: dm crypt: constrain crypt device's max_segment_size to PAGE_SIZE
+
+Setting the dm-crypt device's max_segment_size to PAGE_SIZE is an
+unfortunate constraint that is required to avoid the potential for
+exceeding dm-crypt's underlying device's max_segments limits -- due to
+crypt_alloc_buffer() possibly allocating pages for the encryption bio
+that are not as physically contiguous as the original bio.
+
+It is interesting to note that this problem was already fixed back in
+2007 via commit 91e106259 ("dm crypt: use bio_add_page").  But Linux 4.0
+commit cf2f1abfb ("dm crypt: don't allocate pages for a partial
+request") regressed dm-crypt back to _not_ using bio_add_page().  But
+given dm-crypt's cpu parallelization changes all depend on commit
+cf2f1abfb's abandoning of the more complex io fragments processing that
+dm-crypt previously had we cannot easily go back to using
+bio_add_page().
+
+So all said the cleanest way to resolve this issue is to fix dm-crypt to
+properly constrain the original bios entering dm-crypt so the encryption
+bios that dm-crypt generates from the original bios are always
+compatible with the underlying device's max_segments queue limits.
+
+It should be noted that technically Linux 4.3 does _not_ need this fix
+because of the block core's new late bio-splitting capability.  But, it
+is reasoned, there is little to be gained by having the block core split
+the encrypted bio that is composed of PAGE_SIZE segments.  That said, in
+the future we may revert this change.
+
+Fixes: cf2f1abfb ("dm crypt: don't allocate pages for a partial request")
+Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=104421
+Suggested-by: Jeff Moyer <jmoyer@redhat.com>
+Signed-off-by: Mike Snitzer <snitzer@redhat.com>
+Cc: stable@vger.kernel.org # 4.0+
+
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index d60c88d..4b3b6f8 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -968,7 +968,8 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
+ 
+ /*
+  * Generate a new unfragmented bio with the given size
+- * This should never violate the device limitations
++ * This should never violate the device limitations (but only because
++ * max_segment_size is being constrained to PAGE_SIZE).
+  *
+  * This function may be called concurrently. If we allocate from the mempool
+  * concurrently, there is a possibility of deadlock. For example, if we have
+@@ -2045,9 +2046,20 @@ static int crypt_iterate_devices(struct dm_target *ti,
+ 	return fn(ti, cc->dev, cc->start, ti->len, data);
+ }
+ 
++static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
++{
++	/*
++	 * Unfortunate constraint that is required to avoid the potential
++	 * for exceeding underlying device's max_segments limits -- due to
++	 * crypt_alloc_buffer() possibly allocating pages for the encryption
++	 * bio that are not as physically contiguous as the original bio.
++	 */
++	limits->max_segment_size = PAGE_SIZE;
++}
++
+ static struct target_type crypt_target = {
+ 	.name   = "crypt",
+-	.version = {1, 14, 0},
++	.version = {1, 14, 1},
+ 	.module = THIS_MODULE,
+ 	.ctr    = crypt_ctr,
+ 	.dtr    = crypt_dtr,
+@@ -2058,6 +2070,7 @@ static struct target_type crypt_target = {
+ 	.resume = crypt_resume,
+ 	.message = crypt_message,
+ 	.iterate_devices = crypt_iterate_devices,
++	.io_hints = crypt_io_hints,
+ };
+ 
+ static int __init dm_crypt_init(void)
+-- 
+cgit v0.10.2
+

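The interesting part of the fix is its delivery point: rather than touching the bio-building code, the constraint is injected through device-mapper's .io_hints hook, which lets a target adjust the stacked queue_limits when a table is loaded. Any target could express a similar limit the same way; a minimal hypothetical sketch (example_io_hints and example_target are made-up names; only the PAGE_SIZE clamp mirrors crypt_io_hints() above):

	#include <linux/device-mapper.h>

	static void example_io_hints(struct dm_target *ti,
				     struct queue_limits *limits)
	{
		/* cap each segment at one page, as dm-crypt now does */
		limits->max_segment_size = PAGE_SIZE;
	}

	static struct target_type example_target = {
		.name     = "example",
		.version  = {1, 0, 0},
		.module   = THIS_MODULE,
		.io_hints = example_io_hints,
		/* .ctr, .dtr, .map, ... omitted for brevity */
	};

Once a table is loaded, the clamp is visible in /sys/block/dm-*/queue/max_segment_size.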

