[gentoo-commits] proj/linux-patches:4.3 commit in: /
From: Mike Pagano @ 2015-11-02 1:08 UTC
To: gentoo-commits
commit: 44cfd57da16fd6e1350f94e762924050206a12f0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 2 01:07:54 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Nov 2 01:07:54 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=44cfd57d
Linux patch 4.3.0. Support for namespace user.pax.* on tmpfs. Enable link security restrictions by default. ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads. Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs. Bootsplash ported by Marco (Bug #539616). Add Gentoo Linux support config settings and defaults. Kernel patch enables gcc < v4.9 optimizations for additional CPUs. Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
0000_README | 28 +
1500_XATTR_USER_PREFIX.patch | 54 +
...ble-link-security-restrictions-by-default.patch | 22 +
2700_ThinkPad-30-brightness-control-fix.patch | 67 +
2900_dev-root-proc-mount-fix.patch | 38 +
4200_fbcondecor-3.19.patch | 2119 ++++++++++++++++++++
...able-additional-cpu-optimizations-for-gcc.patch | 327 +++
...-additional-cpu-optimizations-for-gcc-4.9.patch | 402 ++++
8 files changed, 3057 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..8e70e78 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,34 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2700_ThinkPad-30-brightness-control-fix.patch
+From: Seth Forshee <seth.forshee@canonical.com>
+Desc: ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads.
+
+Patch: 2900_dev-root-proc-mount-fix.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch: 4200_fbcondecor-3.19.patch
+From: http://www.mepiscommunity.org/fbcondecor
+Desc: Bootsplash ported by Marco (Bug #539616).
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5000_enable-additional-cpu-optimizations-for-gcc.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
+
+Patch: 5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..cc15cd5
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,54 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is available to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index e4629b9..6958086 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -63,5 +63,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 1c44af7..f23bb1b 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2201,6 +2201,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ static int shmem_xattr_validate(const char *name)
+ {
+ struct { const char *prefix; size_t len; } arr[] = {
++ { XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN},
+ { XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN },
+ { XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN }
+ };
+@@ -2256,6 +2257,12 @@ static int shmem_setxattr(struct dentry *dentry, const char *name,
+ if (err)
+ return err;
+
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
+ return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+
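For illustration, a minimal userspace sketch of the behavior the hunk above enforces, using the standard setxattr(2)/getxattr(2) interface; the file path and the flag string below are assumptions (any file on a tmpfs mount will do):

/* Sketch: exercise the user.pax.flags xattr on tmpfs.
 * The patched shmem_setxattr() accepts only user.pax.flags in the
 * user.* namespace and rejects values larger than 8 bytes.
 * The path and flag string are illustrative assumptions.
 */
#include <stdio.h>
#include <sys/xattr.h>

int main(void)
{
	const char *path = "/tmp/somefile";	/* assumed to live on tmpfs */

	/* Accepted: user.pax.flags with a value of at most 8 bytes. */
	if (setxattr(path, "user.pax.flags", "pemrs", 5, 0) != 0)
		perror("setxattr user.pax.flags");

	/* Rejected with EOPNOTSUPP: any other user.* attribute. */
	if (setxattr(path, "user.comment", "x", 1, 0) != 0)
		perror("setxattr user.comment (expected failure)");

	char buf[9] = "";
	ssize_t len = getxattr(path, "user.pax.flags", buf, sizeof(buf) - 1);
	if (len >= 0)
		printf("user.pax.flags = %.*s\n", (int)len, buf);
	return 0;
}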
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ path_put(link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+
+ /**
+ * may_follow_link - Check symlink following for unsafe situations
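The effect of this default flip can be checked at runtime through the matching sysctls; a small sketch (illustrative, not part of the patch) reading the standard procfs paths:

/* Sketch: read the link-restriction sysctls that this patch
 * flips to 1 by default. The paths are the standard procfs
 * locations for fs.protected_symlinks / fs.protected_hardlinks.
 */
#include <stdio.h>

static int read_sysctl(const char *path)
{
	int val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%d", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	printf("fs.protected_symlinks  = %d\n",
	       read_sysctl("/proc/sys/fs/protected_symlinks"));
	printf("fs.protected_hardlinks = %d\n",
	       read_sysctl("/proc/sys/fs/protected_hardlinks"));
	return 0;	/* both print 1 with this patch applied */
}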
diff --git a/2700_ThinkPad-30-brightness-control-fix.patch b/2700_ThinkPad-30-brightness-control-fix.patch
new file mode 100644
index 0000000..b548c6d
--- /dev/null
+++ b/2700_ThinkPad-30-brightness-control-fix.patch
@@ -0,0 +1,67 @@
+diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
+index cb96296..6c242ed 100644
+--- a/drivers/acpi/blacklist.c
++++ b/drivers/acpi/blacklist.c
+@@ -269,6 +276,61 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ },
+
+ /*
++ * The following Lenovo models have a broken workaround in the
++ * acpi_video backlight implementation to meet the Windows 8
++ * requirement of 101 backlight levels. Reverting to pre-Win8
++ * behavior fixes the problem.
++ */
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad L430",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L430"),
++ },
++ },
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad T430s",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430s"),
++ },
++ },
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad T530",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T530"),
++ },
++ },
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad W530",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
++ },
++ },
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad X1 Carbon",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
++ },
++ },
++ {
++ .callback = dmi_disable_osi_win8,
++ .ident = "Lenovo ThinkPad X230",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
++ },
++ },
++
++ /*
+ * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
+ * Linux ignores it, except for the machines enumerated below.
+ */
+
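Whether a given machine would hit one of the DMI entries above can be checked from userspace against the same strings; an illustrative sketch (not part of the patch) using the standard sysfs DMI attributes:

/* Sketch: print the DMI strings the quirk table above matches
 * against. /sys/class/dmi/id/ is the standard sysfs location
 * for DMI data exported by the kernel.
 */
#include <stdio.h>
#include <string.h>

static void print_dmi(const char *name, const char *path)
{
	char buf[128] = "";
	FILE *f = fopen(path, "r");

	if (f) {
		if (fgets(buf, sizeof(buf), f))
			buf[strcspn(buf, "\n")] = '\0';	/* strip newline */
		fclose(f);
	}
	printf("%s: %s\n", name, buf);
}

int main(void)
{
	print_dmi("DMI_SYS_VENDOR",      "/sys/class/dmi/id/sys_vendor");
	print_dmi("DMI_PRODUCT_VERSION", "/sys/class/dmi/id/product_version");
	return 0;
}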
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..60af1eb
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,38 @@
+--- a/init/do_mounts.c 2015-08-19 10:27:16.753852576 -0400
++++ b/init/do_mounts.c 2015-08-19 10:34:25.473850353 -0400
+@@ -490,7 +490,11 @@ void __init change_floppy(char *fmt, ...
+ va_start(args, fmt);
+ vsprintf(buf, fmt, args);
+ va_end(args);
+- fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++ if (saved_root_name[0])
++ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++ else
++ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++
+ if (fd >= 0) {
+ sys_ioctl(fd, FDEJECT, 0);
+ sys_close(fd);
+@@ -534,11 +538,17 @@ void __init mount_root(void)
+ #endif
+ #ifdef CONFIG_BLOCK
+ {
+- int err = create_dev("/dev/root", ROOT_DEV);
+-
+- if (err < 0)
+- pr_emerg("Failed to create /dev/root: %d\n", err);
+- mount_block_root("/dev/root", root_mountflags);
++ if (saved_root_name[0] == '/') {
++ int err = create_dev(saved_root_name, ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
++ mount_block_root(saved_root_name, root_mountflags);
++ } else {
++ int err = create_dev("/dev/root", ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create /dev/root: %d\n", err);
++ mount_block_root("/dev/root", root_mountflags);
++ }
+ }
+ #endif
+ }
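With this change, a boot without an initramfs mounts the device named by root= directly, so /proc/mounts shows the real device node rather than /dev/root. An illustrative sketch (not part of the patch) of pulling the root= value from /proc/cmdline, the parameter that populates saved_root_name:

/* Sketch: extract the root= parameter from /proc/cmdline.
 * In the patched kernel, a root= value starting with '/'
 * (e.g. root=/dev/sda1) is used directly in place of the
 * synthetic /dev/root node.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char cmdline[4096] = "";
	char *tok;
	FILE *f = fopen("/proc/cmdline", "r");

	if (!f)
		return 1;
	if (!fgets(cmdline, sizeof(cmdline), f)) {
		fclose(f);
		return 1;
	}
	fclose(f);

	for (tok = strtok(cmdline, " \n"); tok; tok = strtok(NULL, " \n")) {
		if (strncmp(tok, "root=", 5) == 0) {
			printf("root device: %s\n", tok + 5);
			return 0;
		}
	}
	fprintf(stderr, "no root= parameter found\n");
	return 1;
}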
diff --git a/4200_fbcondecor-3.19.patch b/4200_fbcondecor-3.19.patch
new file mode 100644
index 0000000..29c379f
--- /dev/null
+++ b/4200_fbcondecor-3.19.patch
@@ -0,0 +1,2119 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c..2230930 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ - info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ - intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++ - info on the Framebuffer Console Decoration
+ framebuffer.txt
+ - introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 0000000..3388c61
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++ http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++ standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++ is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both ways:
++kernel->userspace and userspace->kernel.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in an 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When a FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++ Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++ Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++ Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++ Michal Januszewski <michalj+fbcondecor@gmail.com>
++
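To make the ioctl interface described above concrete, here is a hedged userspace sketch (not part of the patch) that queries the decor state of console 0. It assumes the patched linux/fb.h exposes FBIOCONDECOR_GETSTATE, FBCON_DECOR_IO_ORIG_USER and struct fbcon_decor_iowrapper as documented above, and that the misc device registers as /dev/fbcondecor:

/* Hedged sketch: query fbcondecor state for console 0.
 * Assumes the ioctl number, origin constant and wrapper struct
 * come from the linux/fb.h modified by this patch, and that the
 * misc device node is /dev/fbcondecor.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>	/* patched header */

int main(void)
{
	unsigned int state = 0;
	struct fbcon_decor_iowrapper w = {
		.vc = 0,				/* virtual console number */
		.origin = FBCON_DECOR_IO_ORIG_USER,	/* user-originated ioctl */
		.data = &state,
	};
	int fd = open("/dev/fbcondecor", O_RDWR);

	if (fd < 0) {
		perror("open /dev/fbcondecor");
		return 1;
	}
	if (ioctl(fd, FBIOCONDECOR_GETSTATE, &w) < 0)
		perror("FBIOCONDECOR_GETSTATE");
	else
		printf("console 0 decor state: %u\n", state);	/* 0=off, 1=on */
	close(fd);
	return 0;
}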
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 7183b6a..d576148 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -17,6 +17,10 @@ obj-y += pwm/
+ obj-$(CONFIG_PCI) += pci/
+ obj-$(CONFIG_PARISC) += parisc/
+ obj-$(CONFIG_RAPIDIO) += rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y += tty/
++obj-y += char/
+ obj-y += video/
+ obj-y += idle/
+
+@@ -42,11 +46,6 @@ obj-$(CONFIG_REGULATOR) += regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER) += reset/
+
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y += tty/
+-obj-y += char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index fe1cd01..6d2e87a 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -126,6 +126,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+ such that other users of the framebuffer will remain normally
+ oriented.
+
++config FB_CON_DECOR
++ bool "Support for the Framebuffer Console Decorations"
++ depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++ default n
++ ---help---
++ This option enables support for framebuffer console decorations which
++ makes it possible to display images in the background of the system
++ consoles. Note that userspace utilities are necessary in order to take
++ advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++ for more information.
++
++ If unsure, say N.
++
+ config STI_CONSOLE
+ bool "STI text console"
+ depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index 43bfa48..cc104b6f 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -16,4 +16,5 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE) += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
+ fbcon_ccw.o
+ endif
+
++obj-$(CONFIG_FB_CON_DECOR) += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI) += sticore.o
+diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
+index 61b182b..984384b 100644
+--- a/drivers/video/console/bitblit.c
++++ b/drivers/video/console/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "fbcondecor.h"
+
+ /*
+ * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ area.height = height * vc->vc_font.height;
+ area.width = width * vc->vc_font.width;
+
++ if (fbcon_decor_active(info, vc)) {
++ area.sx += vc->vc_decor.tx;
++ area.sy += vc->vc_decor.ty;
++ area.dx += vc->vc_decor.tx;
++ area.dy += vc->vc_decor.ty;
++ }
++
+ info->fbops->fb_copyarea(info, &area);
+ }
+
+@@ -380,11 +388,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ cursor.image.depth = 1;
+ cursor.rop = ROP_XOR;
+
+- if (info->fbops->fb_cursor)
+- err = info->fbops->fb_cursor(info, &cursor);
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_cursor(info, &cursor);
++ } else {
++ if (info->fbops->fb_cursor)
++ err = info->fbops->fb_cursor(info, &cursor);
+
+- if (err)
+- soft_cursor(info, &cursor);
++ if (err)
++ soft_cursor(info, &cursor);
++ }
+
+ ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 0000000..a2b4497
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,471 @@
++/*
++ * linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootdecor" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift,bpp,type) \
++ do { \
++ if (d & (0x80 >> (shift))) \
++ dd2[(shift)] = fgx; \
++ else \
++ dd2[(shift)] = transparent ? *(type *)decor_src : bgx; \
++ decor_src += (bpp); \
++ } while (0) \
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++ u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++ int i, j, k;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++ u32 col;
++
++ for (j = i = 0; i < 16; i++) {
++ k = color_table[i];
++
++ col = ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.red.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.green.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.blue.offset);
++ ((u32 *)info->pseudo_palette)[k] = col;
++ }
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++ int width, u8* src, u32 fgx, u32 bgx, u8 transparent)
++{
++ unsigned int x, y;
++ u32 dd;
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++ unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++ u16 dd2[4];
++
++ u8* decor_src = (u8 *)(info->bgdecor.data + ds);
++ u8* dst = (u8 *)(info->screen_base + d);
++
++ if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++ return;
++
++ for (y = 0; y < height; y++) {
++ switch (info->var.bits_per_pixel) {
++
++ case 32:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ *(u32 *)decor_src : bgx;
++
++ d <<= 1;
++ decor_src += 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++ case 24:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ (*(u32 *)decor_src & 0xffffff) : bgx;
++
++ d <<= 1;
++ decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++ fb_writew(dd & 0xffff, dst);
++ dst += 2;
++ fb_writeb((dd >> 16), dst);
++#else
++ fb_writew(dd >> 8, dst);
++ dst += 2;
++ fb_writeb(dd & 0xff, dst);
++#endif
++ dst++;
++ }
++ break;
++ case 16:
++ for (x = 0; x < width; x += 2) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 2, u16);
++ parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 16);
++#else
++ dd = dd2[1] | (dd2[0] << 16);
++#endif
++ d <<= 2;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++
++ case 8:
++ for (x = 0; x < width; x += 4) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 1, u8);
++ parse_pixel(1, 1, u8);
++ parse_pixel(2, 1, u8);
++ parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++ dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++ d <<= 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ }
++
++ dst += info->fix.line_length - width * bytespp;
++ decor_src += (info->var.xres - width) * bytespp;
++ }
++}
++
++#define cc2cx(a) \
++ ((info->fix.visual == FB_VISUAL_TRUECOLOR || \
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? \
++ ((u32*)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++ const unsigned short *s, int count, int yy, int xx)
++{
++ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++ struct fbcon_ops *ops = info->fbcon_par;
++ int fg_color, bg_color, transparent;
++ u8 *src;
++ u32 bgx, fgx;
++ u16 c = scr_readw(s);
++
++ fg_color = get_color(vc, info, c, 1);
++ bg_color = get_color(vc, info, c, 0);
++
++ /* Don't paint the background image if console is blanked */
++ transparent = ops->blank_state ? 0 :
++ (vc->vc_decor.bg_color == bg_color);
++
++ xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++ yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++ fgx = cc2cx(fg_color);
++ bgx = cc2cx(bg_color);
++
++ while (count--) {
++ c = scr_readw(s++);
++ src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++ ((vc->vc_font.width + 7) >> 3);
++
++ fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++ vc->vc_font.width, src, fgx, bgx, transparent);
++ xx += vc->vc_font.width;
++ }
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++ int i;
++ unsigned int dsize, s_pitch;
++ struct fbcon_ops *ops = info->fbcon_par;
++ struct vc_data* vc;
++ u8 *src;
++
++ /* we really don't need any cursors while the console is blanked */
++ if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++ return;
++
++ vc = vc_cons[ops->currcon].d;
++
++ src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++ if (!src)
++ return;
++
++ s_pitch = (cursor->image.width + 7) >> 3;
++ dsize = s_pitch * cursor->image.height;
++ if (cursor->enable) {
++ switch (cursor->rop) {
++ case ROP_XOR:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] ^ cursor->mask[i];
++ break;
++ case ROP_COPY:
++ default:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] & cursor->mask[i];
++ break;
++ }
++ } else
++ memcpy(src, cursor->image.data, dsize);
++
++ fbcon_decor_renderc(info,
++ cursor->image.dy + vc->vc_decor.ty,
++ cursor->image.dx + vc->vc_decor.tx,
++ cursor->image.height,
++ cursor->image.width,
++ (u8*)src,
++ cc2cx(cursor->image.fg_color),
++ cc2cx(cursor->image.bg_color),
++ cursor->image.bg_color == vc->vc_decor.bg_color);
++
++ kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++ u32 bgx, int bpp)
++{
++ int i;
++
++ if (bpp == 8)
++ bgx |= bgx << 8;
++ if (bpp == 16 || bpp == 8)
++ bgx |= bgx << 16;
++
++ while (height-- > 0) {
++ u8 *p = dst;
++
++ switch (bpp) {
++
++ case 32:
++ for (i=0; i < width; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++ break;
++ case 24:
++ for (i=0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++ fb_writew((bgx & 0xffff),(u16*)p); p += 2;
++ fb_writeb((bgx >> 16),p++);
++#else
++ fb_writew((bgx >> 8),(u16*)p); p += 2;
++ fb_writeb((bgx & 0xff),p++);
++#endif
++ }
++ case 16:
++ for (i=0; i < width/4; i++) {
++ fb_writel(bgx,p); p += 4;
++ fb_writel(bgx,p); p += 4;
++ }
++ if (width & 2) {
++ fb_writel(bgx,p); p += 4;
++ }
++ if (width & 1)
++ fb_writew(bgx,(u16*)p);
++ break;
++ case 8:
++ for (i=0; i < width/4; i++) {
++ fb_writel(bgx,p); p += 4;
++ }
++
++ if (width & 2) {
++ fb_writew(bgx,p); p += 2;
++ }
++ if (width & 1)
++ fb_writeb(bgx,(u8*)p);
++ break;
++
++ }
++ dst += dstbytes;
++ }
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++ int srclinebytes, int bpp)
++{
++ int i;
++
++ while (height-- > 0) {
++ u32 *p = (u32 *)dst;
++ u32 *q = (u32 *)src;
++
++ switch (bpp) {
++
++ case 32:
++ for (i=0; i < width; i++)
++ fb_writel(*q++, p++);
++ break;
++ case 24:
++ for (i=0; i < (width*3/4); i++)
++ fb_writel(*q++, p++);
++ if ((width*3) % 4) {
++ if (width & 2) {
++ fb_writeb(*(u8*)q, (u8*)p);
++ } else if (width & 1) {
++ fb_writew(*(u16*)q, (u16*)p);
++ fb_writeb(*(u8*)((u16*)q+1),(u8*)((u16*)p+2));
++ }
++ }
++ break;
++ case 16:
++ for (i=0; i < width/4; i++) {
++ fb_writel(*q++, p++);
++ fb_writel(*q++, p++);
++ }
++ if (width & 2)
++ fb_writel(*q++, p++);
++ if (width & 1)
++ fb_writew(*(u16*)q, (u16*)p);
++ break;
++ case 8:
++ for (i=0; i < width/4; i++)
++ fb_writel(*q++, p++);
++
++ if (width & 2) {
++ fb_writew(*(u16*)q, (u16*)p);
++ q = (u32*) ((u16*)q + 1);
++ p = (u32*) ((u16*)p + 1);
++ }
++ if (width & 1)
++ fb_writeb(*(u8*)q, (u8*)p);
++ break;
++ }
++
++ dst += linebytes;
++ src += srclinebytes;
++ }
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++ int width)
++{
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ int d = sy * info->fix.line_length + sx * bytespp;
++ int ds = (sy * info->var.xres + sx) * bytespp;
++
++ fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++ height, width, info->fix.line_length, info->var.xres * bytespp,
++ info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++ int height, int width)
++{
++ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++ struct fbcon_ops *ops = info->fbcon_par;
++ u8 *dst;
++ int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++ transparent = (vc->vc_decor.bg_color == bg_color);
++ sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++ sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++ height *= vc->vc_font.height;
++ width *= vc->vc_font.width;
++
++ /* Don't paint the background image if console is blanked */
++ if (transparent && !ops->blank_state) {
++ decorfill(info, sy, sx, height, width);
++ } else {
++ dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++ sx * ((info->var.bits_per_pixel + 7) >> 3));
++ decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++ info->var.bits_per_pixel);
++ }
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++ int bottom_only)
++{
++ unsigned int tw = vc->vc_cols*vc->vc_font.width;
++ unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++ if (!bottom_only) {
++ /* top margin */
++ decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++ /* left margin */
++ decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++ /* right margin */
++ decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++ info->var.xres - vc->vc_decor.tx - tw);
++ }
++ decorfill(info, vc->vc_decor.ty + th, 0,
++ info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++ int sx, int dx, int width)
++{
++ u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++ u16 *s = d + (dx - sx);
++ u16 *start = d;
++ u16 *ls = d;
++ u16 *le = d + width;
++ u16 c;
++ int x = dx;
++ u16 attr = 1;
++
++ do {
++ c = scr_readw(d);
++ if (attr != (c & 0xff00)) {
++ attr = c & 0xff00;
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start;
++ start = d;
++ }
++ }
++ if (s >= ls && s < le && c == scr_readw(s)) {
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start + 1;
++ start = d + 1;
++ } else {
++ x++;
++ start++;
++ }
++ }
++ s++;
++ d++;
++ } while (d < le);
++ if (d > start)
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++ if (blank) {
++ decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++ info->fix.line_length, 0, info->var.bits_per_pixel);
++ } else {
++ update_screen(vc);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++}
++
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index f447734..da50d61 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -79,6 +79,7 @@
+ #include <asm/irq.h>
+
+ #include "fbcon.h"
++#include "../console/fbcondecor.h"
+
+ #ifdef FBCONDEBUG
+ # define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -94,7 +95,7 @@ enum {
+
+ static struct display fb_display[MAX_NR_CONSOLES];
+
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+
+ static int logo_lines;
+@@ -286,7 +287,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ !vt_force_oops_output(vc);
+ }
+
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ u16 c, int is_fg)
+ {
+ int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ info_idx = -1;
+ } else {
+ fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++ fbcon_decor_init();
++#endif
+ }
+
+ return err;
+@@ -1007,6 +1011,12 @@ static const char *fbcon_startup(void)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
++
++ if (fbcon_decor_active(info, vc)) {
++ cols = vc->vc_decor.twidth / vc->vc_font.width;
++ rows = vc->vc_decor.theight / vc->vc_font.height;
++ }
++
+ vc_resize(vc, cols, rows);
+
+ DPRINTK("mode: %s\n", info->fix.id);
+@@ -1036,7 +1046,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ cap = info->flags;
+
+ if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+- (info->fix.type == FB_TYPE_TEXT))
++ (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ logo = 0;
+
+ if (var_to_display(p, &info->var, info))
+@@ -1260,6 +1270,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ fbcon_clear_margins(vc, 0);
+ }
+
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_clear(vc, info, sy, sx, height, width);
++ return;
++ }
++
+ /* Split blits that cross physical y_wrap boundary */
+
+ y_break = p->vrows - p->yscroll;
+@@ -1279,10 +1294,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ struct display *p = &fb_display[vc->vc_num];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+- get_color(vc, info, scr_readw(s), 1),
+- get_color(vc, info, scr_readw(s), 0));
++ if (!fbcon_is_inactive(vc, info)) {
++
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++ else
++ ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++ get_color(vc, info, scr_readw(s), 1),
++ get_color(vc, info, scr_readw(s), 0));
++ }
+ }
+
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1298,8 +1318,13 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->clear_margins(vc, info, bottom_only);
++ if (!fbcon_is_inactive(vc, info)) {
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_clear_margins(vc, info, bottom_only);
++ } else {
++ ops->clear_margins(vc, info, bottom_only);
++ }
++ }
+ }
+
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1819,7 +1844,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ count = vc->vc_rows;
+ if (softback_top)
+ fbcon_softback_note(vc, t, count);
+- if (logo_shown >= 0)
++ if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ goto redraw_up;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+@@ -1912,6 +1937,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ count = vc->vc_rows;
+ if (logo_shown >= 0)
+ goto redraw_down;
++ if (fbcon_decor_active(info, vc))
++ goto redraw_down;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2060,6 +2087,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ }
+ return;
+ }
++
++ if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ /* must use slower redraw bmove to keep background pic intact */
++ fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ return;
++ }
++
+ ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ height, width);
+ }
+@@ -2130,8 +2164,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ var.yres = virt_h * virt_fh;
+ x_diff = info->var.xres - var.xres;
+ y_diff = info->var.yres - var.yres;
+- if (x_diff < 0 || x_diff > virt_fw ||
+- y_diff < 0 || y_diff > virt_fh) {
++ if ((x_diff < 0 || x_diff > virt_fw ||
++ y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ const struct fb_videomode *mode;
+
+ DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2167,6 +2201,21 @@ static int fbcon_switch(struct vc_data *vc)
+
+ info = registered_fb[con2fb_map[vc->vc_num]];
+ ops = info->fbcon_par;
++ prev_console = ops->currcon;
++ if (prev_console != -1)
++ old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++ if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++ if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++ /* Clear the screen to avoid displaying funky colors during
++ * palette updates. */
++ memset((u8*)info->screen_base + info->fix.line_length * info->var.yoffset,
++ 0, info->var.yres * info->fix.line_length);
++ }
++ }
++#endif
+
+ if (softback_top) {
+ if (softback_lines)
+@@ -2185,9 +2234,6 @@ static int fbcon_switch(struct vc_data *vc)
+ logo_shown = FBCON_LOGO_CANSHOW;
+ }
+
+- prev_console = ops->currcon;
+- if (prev_console != -1)
+- old_info = registered_fb[con2fb_map[prev_console]];
+ /*
+ * FIXME: If we have multiple fbdev's loaded, we need to
+ * update all info->currcon. Perhaps, we can place this
+@@ -2231,6 +2277,18 @@ static int fbcon_switch(struct vc_data *vc)
+ fbcon_del_cursor_timer(old_info);
+ }
+
++ if (fbcon_decor_active_vc(vc)) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (!vc_curr->vc_decor.theme ||
++ strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++ (fbcon_decor_active_nores(info, vc_curr) &&
++ !fbcon_decor_active(info, vc_curr))) {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++ }
++
+ if (fbcon_is_inactive(vc, info) ||
+ ops->blank_state != FB_BLANK_UNBLANK)
+ fbcon_del_cursor_timer(info);
+@@ -2339,15 +2397,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ }
+ }
+
+- if (!fbcon_is_inactive(vc, info)) {
++ if (!fbcon_is_inactive(vc, info)) {
+ if (ops->blank_state != blank) {
+ ops->blank_state = blank;
+ fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ ops->cursor_flash = (!blank);
+
+- if (!(info->flags & FBINFO_MISC_USEREVENT))
+- if (fb_blank(info, blank))
+- fbcon_generic_blank(vc, info, blank);
++ if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++ if (fb_blank(info, blank)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_blank(vc, info, blank);
++ else
++ fbcon_generic_blank(vc, info, blank);
++ }
++ }
+ }
+
+ if (!blank)
+@@ -2522,13 +2585,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ }
+
+ if (resize) {
++ /* reset wrap/pan */
+ int cols, rows;
+
+ cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++ if (fbcon_decor_active(info, vc)) {
++ info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++ cols = vc->vc_decor.twidth;
++ rows = vc->vc_decor.theight;
++ }
+ cols /= w;
+ rows /= h;
++
+ vc_resize(vc, cols, rows);
++
+ if (CON_IS_VISIBLE(vc) && softback_buf)
+ fbcon_update_softback(vc);
+ } else if (CON_IS_VISIBLE(vc)
+@@ -2657,7 +2729,11 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ int i, j, k, depth;
+ u8 val;
+
+- if (fbcon_is_inactive(vc, info))
++ if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++ || vc->vc_num != fg_console
++#endif
++ )
+ return -EINVAL;
+
+ if (!CON_IS_VISIBLE(vc))
+@@ -2683,14 +2759,56 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
+ } else
+ fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+
+- return fb_set_cmap(&palette_cmap, info);
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++ u16 *red, *green, *blue;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++ int h;
++
++ struct fb_cmap cmap = {
++ .start = 0,
++ .len = (1 << minlen),
++ .red = NULL,
++ .green = NULL,
++ .blue = NULL,
++ .transp = NULL
++ };
++
++ red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++ if (!red)
++ goto out;
++
++ green = red + 256;
++ blue = green + 256;
++ cmap.red = red;
++ cmap.green = green;
++ cmap.blue = blue;
++
++ for (i = 0; i < cmap.len; i++) {
++ red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++ }
++
++ h = fb_set_cmap(&cmap, info);
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ kfree(red);
++
++ return h;
++
++ } else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++ fb_set_cmap(&info->bgdecor.cmap, info);
++
++out: return fb_set_cmap(&palette_cmap, info);
+ }
+
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+ {
+ unsigned long p;
+ int line;
+-
++
+ if (vc->vc_num != fg_console || !softback_lines)
+ return (u16 *) (vc->vc_origin + offset);
+ line = offset / vc->vc_size_row;
+@@ -2909,7 +3027,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++
++ if (!fbcon_decor_active_nores(info, vc)) {
++ vc_resize(vc, cols, rows);
++ } else {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++
+ updatescrollmode(p, info, vc);
+ scrollback_max = 0;
+ scrollback_current = 0;
+@@ -2954,7 +3079,9 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++ if (!fbcon_decor_active_nores(info, vc)) {
++ vc_resize(vc, cols, rows);
++ }
+ }
+
+ if (fg != -1)
+@@ -3596,6 +3723,7 @@ static void fbcon_exit(void)
+ }
+ }
+
++ fbcon_decor_exit();
+ fbcon_has_exited = 1;
+ }
+
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 0000000..babc8c5
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,555 @@
++/*
++ * linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ * Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootsplash" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++
++#include <asm/uaccess.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++static int initialized = 0;
++
++int fbcon_decor_call_helper(char* cmd, unsigned short vc)
++{
++ char *envp[] = {
++ "HOME=/",
++ "PATH=/sbin:/bin",
++ NULL
++ };
++
++ char tfb[5];
++ char tcons[5];
++ unsigned char fb = (int) con2fb_map[vc];
++
++ char *argv[] = {
++ fbcon_decor_path,
++ "2",
++ cmd,
++ tcons,
++ tfb,
++ vc_cons[vc].d->vc_decor.theme,
++ NULL
++ };
++
++ snprintf(tfb,5,"%d",fb);
++ snprintf(tcons,5,"%d",vc);
++
++ return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++ struct fb_info* info;
++
++ if (!vc->vc_decor.state)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ vc->vc_decor.state = 0;
++ vc_resize(vc, info->var.xres / vc->vc_font.width,
++ info->var.yres / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num && redraw) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++ struct fb_info* info;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++ info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++ vc->vc_num == fg_console))
++ return -EINVAL;
++
++ vc->vc_decor.state = 1;
++ vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++ vc->vc_decor.theight / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++ int ret;
++
++// if (origin == FBCON_DECOR_IO_ORIG_USER)
++ console_lock();
++ if (!state)
++ ret = fbcon_decor_disable(vc, 1);
++ else
++ ret = fbcon_decor_enable(vc);
++// if (origin == FBCON_DECOR_IO_ORIG_USER)
++ console_unlock();
++
++ return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++ *state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ char *tmp;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL || !cfg->twidth || !cfg->theight ||
++ cfg->tx + cfg->twidth > info->var.xres ||
++ cfg->ty + cfg->theight > info->var.yres)
++ return -EINVAL;
++
++ len = strlen_user(cfg->theme);
++ if (!len || len > FBCON_DECOR_THEME_LEN)
++ return -EINVAL;
++ tmp = kmalloc(len, GFP_KERNEL);
++ if (!tmp)
++ return -ENOMEM;
++ if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++ return -EFAULT;
++ cfg->theme = tmp;
++ cfg->state = 0;
++
++ /* If this ioctl is a response to a request from kernel, the console sem
++ * is already held; we also don't need to disable decor because either the
++ * new config and background picture will be successfully loaded, and the
++ * decor will stay on, or in case of a failure it'll be turned off in fbcon. */
++// if (origin == FBCON_DECOR_IO_ORIG_USER) {
++ console_lock();
++ if (vc->vc_decor.state)
++ fbcon_decor_disable(vc, 1);
++// }
++
++ if (vc->vc_decor.theme)
++ kfree(vc->vc_decor.theme);
++
++ vc->vc_decor = *cfg;
++
++// if (origin == FBCON_DECOR_IO_ORIG_USER)
++ console_unlock();
++
++ printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++ vc->vc_num, vc->vc_decor.theme);
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc, struct vc_decor *decor)
++{
++ char __user *tmp;
++
++ tmp = decor->theme;
++ *decor = vc->vc_decor;
++ decor->theme = tmp;
++
++ if (vc->vc_decor.theme) {
++ if (copy_to_user(tmp, vc->vc_decor.theme, strlen(vc->vc_decor.theme) + 1))
++ return -EFAULT;
++ } else
++ if (put_user(0, tmp))
++ return -EFAULT;
++
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img, unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ u8 *tmp;
++
++ if (vc->vc_num != fg_console)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ if (img->width != info->var.xres || img->height != info->var.yres) {
++ printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++ printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height, info->var.xres, info->var.yres);
++ return -EINVAL;
++ }
++
++ if (img->depth != info->var.bits_per_pixel) {
++ printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++ return -EINVAL;
++ }
++
++ if (img->depth == 8) {
++ if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++ !img->cmap.blue)
++ return -EINVAL;
++
++ tmp = vmalloc(img->cmap.len * 3 * 2);
++ if (!tmp)
++ return -ENOMEM;
++
++ if (copy_from_user(tmp,
++ (void __user*)img->cmap.red, (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 1),
++ (void __user*)img->cmap.green, (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 2),
++ (void __user*)img->cmap.blue, (img->cmap.len << 1))) {
++ vfree(tmp);
++ return -EFAULT;
++ }
++
++ img->cmap.transp = NULL;
++ img->cmap.red = (u16*)tmp;
++ img->cmap.green = img->cmap.red + img->cmap.len;
++ img->cmap.blue = img->cmap.green + img->cmap.len;
++ } else {
++ img->cmap.red = NULL;
++ }
++
++ len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++ /*
++ * Allocate an additional byte so that we never go outside of the
++ * buffer boundaries in the rendering functions in a 24 bpp mode.
++ */
++ tmp = vmalloc(len + 1);
++
++ if (!tmp)
++ goto out;
++
++ if (copy_from_user(tmp, (void __user*)img->data, len))
++ goto out;
++
++ img->data = tmp;
++
++ /* If this ioctl is a response to a request from kernel, the console sem
++ * is already held. */
++// if (origin == FBCON_DECOR_IO_ORIG_USER)
++ console_lock();
++
++ if (info->bgdecor.data)
++ vfree((u8*)info->bgdecor.data);
++ if (info->bgdecor.cmap.red)
++ vfree(info->bgdecor.cmap.red);
++
++ info->bgdecor = *img;
++
++ if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++// if (origin == FBCON_DECOR_IO_ORIG_USER)
++ console_unlock();
++
++ return 0;
++
++out: if (img->cmap.red)
++ vfree(img->cmap.red);
++
++ if (tmp)
++ vfree(tmp);
++ return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++ struct fbcon_decor_iowrapper __user *wrapper = (void __user*) arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data, &wrapper->data);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC:
++ {
++ struct fb_image img;
++ if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++ case FBIOCONDECOR_SETCFG:
++ {
++ struct vc_decor cfg;
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++ case FBIOCONDECOR_GETCFG:
++ {
++ int rval;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++ return -EFAULT;
++ return rval;
++ }
++ case FBIOCONDECOR_SETSTATE:
++ {
++ unsigned int state = 0;
++ if (get_user(state, (unsigned int __user *)data))
++ return -EFAULT;
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++ case FBIOCONDECOR_GETSTATE:
++ {
++ unsigned int state = 0;
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ return put_user(state, (unsigned int __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
++
++ struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ compat_uptr_t data_compat = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper32)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data_compat, &wrapper->data);
++ data = compat_ptr(data_compat);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC32:
++ {
++ struct fb_image32 img_compat;
++ struct fb_image img;
++
++ if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++ return -EFAULT;
++
++ fb_image_from_compat(img, img_compat);
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++
++ case FBIOCONDECOR_SETCFG32:
++ {
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++
++ vc_decor_from_compat(cfg, cfg_compat);
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++
++ case FBIOCONDECOR_GETCFG32:
++ {
++ int rval;
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ cfg.theme = compat_ptr(cfg_compat.theme);
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ vc_decor_to_compat(cfg_compat, cfg);
++
++ if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ return rval;
++ }
++
++ case FBIOCONDECOR_SETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ if (get_user(state_compat, (compat_uint_t __user *)data))
++ return -EFAULT;
++
++ state = (unsigned int)state_compat;
++
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++
++ case FBIOCONDECOR_GETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ state_compat = (compat_uint_t)state;
++
++ return put_user(state_compat, (compat_uint_t __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++#else
++ #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++ .owner = THIS_MODULE,
++ .unlocked_ioctl = fbcon_decor_ioctl,
++ .compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++ .minor = MISC_DYNAMIC_MINOR,
++ .name = "fbcondecor",
++ .fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++ int i;
++
++ for (i = 0; i < num_registered_fb; i++) {
++ registered_fb[i]->bgdecor.data = NULL;
++ registered_fb[i]->bgdecor.cmap.red = NULL;
++ }
++
++ for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++ vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++ vc_cons[i].d->vc_decor.theight = 0;
++ vc_cons[i].d->vc_decor.theme = NULL;
++ }
++
++ return;
++}
++
++int fbcon_decor_init(void)
++{
++ int i;
++
++ fbcon_decor_reset();
++
++ if (initialized)
++ return 0;
++
++ i = misc_register(&fbcon_decor_dev);
++ if (i) {
++ printk(KERN_ERR "fbcondecor: failed to register device\n");
++ return i;
++ }
++
++ fbcon_decor_call_helper("init", 0);
++ initialized = 1;
++ return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++ fbcon_decor_reset();
++ return 0;
++}
++
++EXPORT_SYMBOL(fbcon_decor_path);
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 0000000..3b3724b
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,78 @@
++/*
++ * linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char* cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x,y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x,y) (fbcon_decor_active_nores(x,y) && \
++ x->bgdecor.width == x->var.xres && \
++ x->bgdecor.height == x->var.yres && \
++ x->bgdecor.depth == x->var.bits_per_pixel)
++
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char* cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x,y) (0)
++#define fbcon_decor_active(x,y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index e1f4727..2952e33 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1204,7 +1204,6 @@ config FB_MATROX
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+- select FB_TILEBLITTING
+ select FB_MACMODES if PPC_PMAC
+ ---help---
+ Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index f89245b..05e036c 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+ 0x0000, 0xaaaa
+ };
+@@ -249,14 +251,17 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ if (transp)
+ htransp = *transp++;
+ if (info->fbops->fb_setcolreg(start++,
+- hred, hgreen, hblue,
++ hred, hgreen, hblue,
+ htransp, info))
+ break;
+ }
+ }
+- if (rc == 0)
++ if (rc == 0) {
+ fb_copy_cmap(cmap, &info->cmap);
+-
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ }
+ return rc;
+ }
+
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index b6d5008..d6703f2 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1250,15 +1250,6 @@ struct fb_fix_screeninfo32 {
+ u16 reserved[3];
+ };
+
+-struct fb_cmap32 {
+- u32 start;
+- u32 len;
+- compat_caddr_t red;
+- compat_caddr_t green;
+- compat_caddr_t blue;
+- compat_caddr_t transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 0000000..04b8d80
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ char* theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = ptr_to_compat((from).theme)
++
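++/*
++ * Editor's note, a hypothetical hardening rather than part of this patch:
++ * the two macros above expand to multiple statements with no do { } while (0)
++ * wrapper, so they are only safe when expanded as a full statement, as the
++ * compat ioctl handlers in fbcondecor.c do. A more defensive spelling:
++ *
++ *	#define vc_decor_from_compat(to, from) do {	\
++ *		(to).bg_color = (from).bg_color;	\
++ *		(to).state = (from).state;		\
++ *		(to).tx = (from).tx;			\
++ *		(to).ty = (from).ty;			\
++ *		(to).twidth = (from).twidth;		\
++ *		(to).theight = (from).theight;		\
++ *		(to).theme = compat_ptr((from).theme);	\
++ *	} while (0)
++ */
++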
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 7f0c329..98f5d60 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -19,6 +19,7 @@
+ struct vt_struct;
+
+ #define NPAR 16
++#include <linux/console_decor.h>
+
+ struct vc_data {
+ struct tty_port port; /* Upper level data */
+@@ -107,6 +108,8 @@ struct vc_data {
+ unsigned long vc_uni_pagedir;
+ unsigned long *vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++ struct vc_decor vc_decor;
+ /* additional information is in vt_kern.h */
+ };
+
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index fe6ac95..1e36b03 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -219,6 +219,34 @@ struct fb_deferred_io {
+ };
+ #endif
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++ __u32 dx; /* Where to place image */
++ __u32 dy;
++ __u32 width; /* Size of image */
++ __u32 height;
++ __u32 fg_color; /* Only used when a mono bitmap */
++ __u32 bg_color;
++ __u8 depth; /* Depth of the image */
++ const compat_uptr_t data; /* Pointer to image data */
++ struct fb_cmap32 cmap; /* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++ (to).dx = (from).dx; \
++ (to).dy = (from).dy; \
++ (to).width = (from).width; \
++ (to).height = (from).height; \
++ (to).fg_color = (from).fg_color; \
++ (to).bg_color = (from).bg_color; \
++ (to).depth = (from).depth; \
++ (to).data = compat_ptr((from).data); \
++ fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+ * Frame buffer operations
+ *
+@@ -489,6 +517,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED 1
+ u32 state; /* Hardware state i.e suspend */
+ void *fbcon_par; /* fbcon use-only private area */
++
++ struct fb_image bgdecor;
++
+ /* From here on everything is device dependent */
+ void *par;
+ /* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index fb795c3..dc77a03 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -8,6 +8,25 @@
+
+ #define FB_MAX 32 /* sufficient for now */
+
++struct fbcon_decor_iowrapper
++{
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32
++{
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+ 0x46 is 'F' */
+ #define FBIOGET_VSCREENINFO 0x4600
+@@ -35,6 +54,25 @@
+ #define FBIOGET_DISPINFO 0x4618
+ #define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
+
++#define FBIOCONDECOR_SETCFG _IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG _IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE _IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC _IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32 _IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32 _IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32 _IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32 _IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32 _IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN 128 /* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL 0 /* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER 1 /* User ioctl origin */
++
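++/*
++ * Editor's sketch, not part of this patch: a userspace caller wraps its
++ * arguments in the fbcon_decor_iowrapper defined above. The device node
++ * name /dev/fbcondecor is assumed from the misc device registration in
++ * fbcondecor.c.
++ *
++ *	struct fbcon_decor_iowrapper w;
++ *	unsigned int state = 0;
++ *	int fd = open("/dev/fbcondecor", O_RDWR);
++ *
++ *	w.vc = 0;                               // first virtual console
++ *	w.origin = FBCON_DECOR_IO_ORIG_USER;    // request from userspace
++ *	w.data = &state;
++ *	if (fd >= 0 && ioctl(fd, FBIOCONDECOR_GETSTATE, &w) == 0)
++ *		printf("decor state on vc%d: %u\n", w.vc, state);
++ *	if (fd >= 0)
++ *		close(fd);
++ */
++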
+ #define FB_TYPE_PACKED_PIXELS 0 /* Packed Pixels */
+ #define FB_TYPE_PLANES 1 /* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES 2 /* Interleaved planes */
+@@ -277,6 +315,29 @@ struct fb_var_screeninfo {
+ __u32 reserved[4]; /* Reserved for future compatibility */
+ };
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++ __u32 start;
++ __u32 len; /* Number of entries */
++ compat_uptr_t red; /* Red values */
++ compat_uptr_t green;
++ compat_uptr_t blue;
++ compat_uptr_t transp; /* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++ (to).start = (from).start; \
++ (to).len = (from).len; \
++ (to).red = compat_ptr((from).red); \
++ (to).green = compat_ptr((from).green); \
++ (to).blue = compat_ptr((from).blue); \
++ (to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ __u32 start; /* First entry */
+ __u32 len; /* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 74f5b58..6386ab0 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -146,6 +146,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -255,6 +259,15 @@ static struct ctl_table sysctl_base_table[] = {
+ .mode = 0555,
+ .child = dev_table,
+ },
++#ifdef CONFIG_FB_CON_DECOR
++ {
++ .procname = "fbcondecor",
++ .data = &fbcon_decor_path,
++ .maxlen = KMOD_PATH_LEN,
++ .mode = 0644,
++ .proc_handler = &proc_dostring,
++ },
++#endif
+ { }
+ };
+
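With the table entry above in place, the helper path should surface as
/proc/sys/fbcondecor (the path is inferred from the entry's placement in
sysctl_base_table). A minimal userspace sketch, not part of the patch:

	#include <stdio.h>

	int main(void)
	{
		char path[256];
		FILE *f = fopen("/proc/sys/fbcondecor", "r");

		if (f && fgets(path, sizeof(path), f))
			printf("fbcondecor helper: %s", path);
		if (f)
			fclose(f);
		return 0;
	}
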
diff --git a/5000_enable-additional-cpu-optimizations-for-gcc.patch b/5000_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..f7ab6f0
--- /dev/null
+++ b/5000_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,327 @@
+This patch has been tested on and known to work with kernel versions from 3.2
+up to the latest git version (pulled on 12/14/2013).
+
+This patch will expand the number of microarchitectures to include new
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 2nd Gen Core
+i3/i5/i7 (Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), and Intel 4th
+Gen Core i3/i5/i7 (Haswell). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable with a 'make'-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version <4.9
+
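+For reference, GCC's CPU-model builtins can report which of these targets
+matches the build host; a minimal sketch, not part of the patch (assumes
+gcc >= 4.8 for __builtin_cpu_is):
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		__builtin_cpu_init();
+		if (__builtin_cpu_is("corei7"))
+			printf("Nehalem class: CONFIG_MCOREI7 / -march=corei7\n");
+		else if (__builtin_cpu_is("bdver2"))
+			printf("Piledriver: CONFIG_MPILEDRIVER / -march=bdver2\n");
+		else
+			printf("no exact match listed here; -march=native applies\n");
+		return 0;
+	}
+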
+---
+diff -uprN a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
+--- a/arch/x86/include/asm/module.h 2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/include/asm/module.h 2013-12-15 06:21:24.351122516 -0500
+@@ -15,6 +15,16 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MCOREI7
++#define MODULE_PROC_FAMILY "COREI7 "
++#elif defined CONFIG_MCOREI7AVX
++#define MODULE_PROC_FAMILY "COREI7AVX "
++#elif defined CONFIG_MCOREAVXI
++#define MODULE_PROC_FAMILY "COREAVXI "
++#elif defined CONFIG_MCOREAVX2
++#define MODULE_PROC_FAMILY "COREAVX2 "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +43,18 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+diff -uprN a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+--- a/arch/x86/Kconfig.cpu 2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Kconfig.cpu 2013-12-15 06:21:24.351122516 -0500
+@@ -139,7 +139,7 @@ config MPENTIUM4
+
+
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -147,7 +147,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -155,12 +155,55 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Barcelona and newer processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Bobcat processors.
++
++ Enables -march=btver1
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Piledriver processors.
++
++ Enables -march=bdver2
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Jaguar processors.
++
++ Enables -march=btver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -251,8 +294,17 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +312,40 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MCOREI7
++ bool "Intel Core i7"
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for the Intel Nehalem platform. Intel Nehalem processors
++ include Core i3, i5, i7 and Xeon 34xx, 35xx, 55xx, 56xx, 75xx processors.
++
++ Enables -march=corei7
++
++config MCOREI7AVX
++ bool "Intel Core 2nd Gen AVX"
++ ---help---
++
++ Select this for 2nd Gen Core processors including Sandy Bridge.
++
++ Enables -march=corei7-avx
++
++config MCOREAVXI
++ bool "Intel Core 3rd Gen AVX"
++ ---help---
++
++ Select this for 3rd Gen Core processors including Ivy Bridge.
++
++ Enables -march=core-avx-i
++
++config MCOREAVX2
++ bool "Intel Core AVX2"
++ ---help---
++
++ Select this for AVX2 enabled processors including Haswell.
++
++ Enables -march=core-avx2
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -276,6 +354,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU (e.g. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -300,7 +391,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MVIAC7 || X86_GENERIC || MNATIVE || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -331,11 +422,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || MNATIVE || X86_GENERIC || MK8 || MK7 || MK10 || MBARCELONA || MEFFICEON || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+@@ -363,17 +454,17 @@ config X86_P6_NOP
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MCOREI7 || MCOREI7AVX || MATOM) || X86_64 || MNATIVE
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++ depends on X86_PAE || X86_64 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MCOREI7 || MCOREI7AVX || MCOREAVXI || MCOREAVX2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+diff -uprN a/arch/x86/Makefile b/arch/x86/Makefile
+--- a/arch/x86/Makefile 2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile 2013-12-15 06:21:24.354455723 -0500
+@@ -61,11 +61,26 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mno-sse -mpreferred-stack-boundary=3)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MCOREI7) += \
++ $(call cc-option,-march=corei7,$(call cc-option,-mtune=corei7))
++ cflags-$(CONFIG_MCOREI7AVX) += \
++ $(call cc-option,-march=corei7-avx,$(call cc-option,-mtune=corei7-avx))
++ cflags-$(CONFIG_MCOREAVXI) += \
++ $(call cc-option,-march=core-avx-i,$(call cc-option,-mtune=core-avx-i))
++ cflags-$(CONFIG_MCOREAVX2) += \
++ $(call cc-option,-march=core-avx2,$(call cc-option,-mtune=core-avx2))
+ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+ $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+diff -uprN a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu
+--- a/arch/x86/Makefile_32.cpu 2013-11-03 18:41:51.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu 2013-12-15 06:21:24.354455723 -0500
+@@ -23,7 +23,14 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,6 +39,10 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
++cflags-$(CONFIG_MCOREI7) += -march=i686 $(call tune,corei7)
++cflags-$(CONFIG_MCOREI7AVX) += -march=i686 $(call tune,corei7-avx)
++cflags-$(CONFIG_MCOREAVXI) += -march=i686 $(call tune,core-avx-i)
++cflags-$(CONFIG_MCOREAVX2) += -march=i686 $(call tune,core-avx2)
+ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+ $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
new file mode 100644
index 0000000..c4efd06
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
@@ -0,0 +1,402 @@
+WARNING - this version of the patch works with version 4.9+ of gcc and with
+kernel version 3.15.x+ and should NOT be applied when compiling on older
+versions due to name changes of the flags with the 4.9 release of gcc.
+Use the older version of this patch, hosted in the same GitHub repository, for
+older versions of gcc. For example:
+
+corei7 --> nehalem
+corei7-avx --> sandybridge
+core-avx-i --> ivybridge
+core-avx2 --> haswell
+
+For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
+
+It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
+Note that upstream is using the deprecated 'march=atom' flag when I believe it
+should use the newer 'march=bonnell' flag for Atom processors.
+
+I have made that change to this patch set as well. See the following kernel
+bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+
+This patch will expand the number of microarchitectures to include newer
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7 (Nehalem), Intel 1.5 Gen Core
+i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7 (Sandybridge), Intel 3rd Gen
+Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core i3/i5/i7 (Haswell), Intel 5th
+Gen Core i3/i5/i7 (Broadwell), and the low power Silvermont series of Atom
+processors (Silvermont). It also offers the compiler the 'native' flag.
+
+Small but real speed increases are measurable with a 'make'-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
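+The rename can also be handled from the compiler's version macros; a hedged
+sketch (the NEHALEM_MARCH name is illustrative, not from the patch):
+
+	/* gcc >= 4.9 renamed -march=corei7 to -march=nehalem */
+	#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 9)
+	#define NEHALEM_MARCH "nehalem"
+	#else
+	#define NEHALEM_MARCH "corei7"
+	#endif
+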
+--- a/arch/x86/include/asm/module.h 2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/include/asm/module.h 2015-03-07 03:27:32.556672424 -0500
+@@ -15,6 +15,22 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +49,20 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2015-03-07 03:32:14.337713226 -0500
+@@ -137,9 +137,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -147,7 +146,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -155,12 +154,62 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Barcelona and newer processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Bobcat processors.
++
++ Enables -march=btver1
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Piledriver processors.
++
++ Enables -march=bdver2
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Jaguar processors.
++
++ Enables -march=btver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -251,8 +300,17 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +318,63 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ ---help---
++
++ Select this for the Intel Westmere (formerly Nehalem-C) family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -276,6 +383,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU (e.g. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -300,7 +420,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || BROADWELL || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -331,11 +451,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+@@ -359,17 +479,17 @@ config X86_P6_NOP
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++ depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile 2015-03-07 03:33:27.650843211 -0500
+@@ -92,13 +92,35 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2014-06-16 16:44:27.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2015-03-07 03:34:15.203586024 -0500
+@@ -23,7 +23,15 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +40,15 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
+
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2015-11-06 0:24 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2015-11-06 0:24 UTC (permalink / raw
To: gentoo-commits
commit: 2f6cc0b28617fe4c94729933e6aef823e8cd9773
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 6 00:24:12 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 6 00:24:12 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2f6cc0b2
BFQ Patches v7r8.
0000_README | 13 +
...roups-kconfig-build-bits-for-BFQ-v7r8-4.3.patch | 104 +
...introduce-the-BFQ-v7r8-I-O-sched-for-4.3.patch1 | 6952 ++++++++++++++++++++
...Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.3.patch | 1220 ++++
4 files changed, 8289 insertions(+)
diff --git a/0000_README b/0000_README
index 8e70e78..4c2a487 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,19 @@ Patch: 5000_enable-additional-cpu-optimizations-for-gcc.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc < v4.9 optimizations for additional CPUs.
+Patch: 5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.3.patch
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r8 patch 1 for 4.3: Build, cgroups and kconfig bits
+
+Patch: 5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.3.patch1
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r8 patch 2 for 4.3: BFQ Scheduler
+
+Patch: 5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.3.patch
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r8 patch 3 for 4.3: Early Queue Merge (EQM)
+
Patch: 5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
+
diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.3.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.3.patch
new file mode 100644
index 0000000..76440b8
--- /dev/null
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r8-4.3.patch
@@ -0,0 +1,104 @@
+From 6a88d12f19b7c5578cf5d17a5e61fb0af75fa0d7 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Tue, 7 Apr 2015 13:39:12 +0200
+Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v7r8-4.3
+
+Update Kconfig.iosched and do the related Makefile changes to include
+kernel configuration options for BFQ. Also add the bfqio controller
+to the cgroups subsystem.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/Kconfig.iosched | 32 ++++++++++++++++++++++++++++++++
+ block/Makefile | 1 +
+ include/linux/cgroup_subsys.h | 4 ++++
+ 3 files changed, 37 insertions(+)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 421bef9..0ee5f0f 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
+ ---help---
+ Enable group IO scheduling in CFQ.
+
++config IOSCHED_BFQ
++ tristate "BFQ I/O scheduler"
++ default n
++ ---help---
++ The BFQ I/O scheduler tries to distribute bandwidth among
++ all processes according to their weights.
++ It aims at distributing the bandwidth as desired, independently of
++ the disk parameters and with any workload. It also tries to
++ guarantee low latency to interactive and soft real-time
++ applications. If compiled built-in (saying Y here), BFQ can
++ be configured to support hierarchical scheduling.
++
++config CGROUP_BFQIO
++ bool "BFQ hierarchical scheduling support"
++ depends on CGROUPS && IOSCHED_BFQ=y
++ default n
++ ---help---
++ Enable hierarchical scheduling in BFQ, using the cgroups
++ filesystem interface. The name of the subsystem will be
++ bfqio.
++
+ choice
+ prompt "Default I/O scheduler"
+ default DEFAULT_CFQ
+@@ -52,6 +73,16 @@ choice
+ config DEFAULT_CFQ
+ bool "CFQ" if IOSCHED_CFQ=y
+
++ config DEFAULT_BFQ
++ bool "BFQ" if IOSCHED_BFQ=y
++ help
++ Selects BFQ as the default I/O scheduler which will be
++ used by default for all block devices.
++ The BFQ I/O scheduler aims at distributing the bandwidth
++ as desired, independently of the disk parameters and with
++ any workload. It also tries to guarantee low latency to
++ interactive and soft real-time applications.
++
+ config DEFAULT_NOOP
+ bool "No-op"
+
+@@ -61,6 +92,7 @@ config DEFAULT_IOSCHED
+ string
+ default "deadline" if DEFAULT_DEADLINE
+ default "cfq" if DEFAULT_CFQ
++ default "bfq" if DEFAULT_BFQ
+ default "noop" if DEFAULT_NOOP
+
+ endmenu
+diff --git a/block/Makefile b/block/Makefile
+index 00ecc97..1ed86d5 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
+ obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
+ obj-$(CONFIG_IOSCHED_DEADLINE) += deadline-iosched.o
+ obj-$(CONFIG_IOSCHED_CFQ) += cfq-iosched.o
++obj-$(CONFIG_IOSCHED_BFQ) += bfq-iosched.o
+
+ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
+ obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
+diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
+index 1a96fda..81ad8a0 100644
+--- a/include/linux/cgroup_subsys.h
++++ b/include/linux/cgroup_subsys.h
+@@ -46,6 +46,10 @@ SUBSYS(freezer)
+ SUBSYS(net_cls)
+ #endif
+
++#if IS_ENABLED(CONFIG_CGROUP_BFQIO)
++SUBSYS(bfqio)
++#endif
++
+ #if IS_ENABLED(CONFIG_CGROUP_PERF)
+ SUBSYS(perf_event)
+ #endif
+--
+1.9.1
+
diff --git a/5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.3.patch1 b/5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.3.patch1
new file mode 100644
index 0000000..43196b2
--- /dev/null
+++ b/5002_block-introduce-the-BFQ-v7r8-I-O-sched-for-4.3.patch1
@@ -0,0 +1,6952 @@
+From ec474da4d0e3f9eb7860496802b6693333687bb5 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Thu, 9 May 2013 19:10:02 +0200
+Subject: [PATCH 2/3] block: introduce the BFQ-v7r8 I/O sched for 4.3
+
+Add the BFQ-v7r8 I/O scheduler to 4.3.
+The general structure is borrowed from CFQ, as is much of the code for
+handling I/O contexts. Over time, several useful features have been
+ported from CFQ as well (details in the changelog in README.BFQ). A
+(bfq_)queue is associated to each task doing I/O on a device, and each
+time a scheduling decision has to be made a queue is selected and served
+until it expires.
+
+ - Slices are given in the service domain: tasks are assigned
+ budgets, measured in number of sectors. Once granted the disk, a task
+ must however consume its assigned budget within a configurable
+ maximum time (by default, the maximum possible value of the
+ budgets is automatically computed to comply with this timeout).
+ This allows the desired latency vs "throughput boosting" tradeoff
+ to be set.
+
+ - Budgets are scheduled according to a variant of WF2Q+, implemented
+ using an augmented rb-tree to take eligibility into account while
+ preserving an O(log N) overall complexity.
+
+ - A low-latency tunable is provided; if enabled, both interactive
+ and soft real-time applications are guaranteed a very low latency.
+
+ - Latency guarantees are preserved also in the presence of NCQ.
+
+ - Also with flash-based devices, a high throughput is achieved
+ while still preserving latency guarantees.
+
+ - BFQ features Early Queue Merge (EQM), a sort of fusion of the
+ cooperating-queue-merging and the preemption mechanisms present
+ in CFQ. EQM is in fact a unified mechanism that tries to get a
+ sequential read pattern, and hence a high throughput, with any
+ set of processes performing interleaved I/O over a contiguous
+ sequence of sectors.
+
+ - BFQ supports full hierarchical scheduling, exporting a cgroups
+ interface. Since each node has a full scheduler, each group can
+ be assigned its own weight.
+
+ - If the cgroups interface is not used, only I/O priorities can be
+ assigned to processes, with ioprio values mapped to weights
+ with the relation weight = IOPRIO_BE_NR - ioprio (see the sketch below).
+
+ - ioprio classes are served in strict priority order, i.e., lower
+ priority queues are not served as long as there are higher
+ priority queues. Among queues in the same class the bandwidth is
+ distributed in proportion to the weight of each queue. A very
+ thin extra bandwidth is however guaranteed to the Idle class, to
+ prevent it from starving.
+
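+As an editor's aside, not part of the changelog: the ioprio-to-weight mapping
+above is easy to tabulate, assuming IOPRIO_BE_NR == 8 as in the mainline
+<linux/ioprio.h>:
+
+	#include <stdio.h>
+
+	#define IOPRIO_BE_NR 8
+
+	int main(void)
+	{
+		int p;
+
+		/* weight = IOPRIO_BE_NR - ioprio, per the changelog above */
+		for (p = 0; p < IOPRIO_BE_NR; p++)
+			printf("ioprio %d -> weight %d\n", p, IOPRIO_BE_NR - p);
+		return 0;
+	}
+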
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/bfq-cgroup.c | 936 +++++++++++++
+ block/bfq-ioc.c | 36 +
+ block/bfq-iosched.c | 3898 +++++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c | 1208 ++++++++++++++++
+ block/bfq.h | 771 ++++++++++
+ 5 files changed, 6849 insertions(+)
+ create mode 100644 block/bfq-cgroup.c
+ create mode 100644 block/bfq-ioc.c
+ create mode 100644 block/bfq-iosched.c
+ create mode 100644 block/bfq-sched.c
+ create mode 100644 block/bfq.h
+
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+new file mode 100644
+index 0000000..11e2f1d
+--- /dev/null
++++ b/block/bfq-cgroup.c
+@@ -0,0 +1,936 @@
++/*
++ * BFQ: CGROUPS support.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ */
++
++#ifdef CONFIG_CGROUP_BFQIO
++
++static DEFINE_MUTEX(bfqio_mutex);
++
++static bool bfqio_is_removed(struct bfqio_cgroup *bgrp)
++{
++ return bgrp ? !bgrp->online : false;
++}
++
++static struct bfqio_cgroup bfqio_root_cgroup = {
++ .weight = BFQ_DEFAULT_GRP_WEIGHT,
++ .ioprio = BFQ_DEFAULT_GRP_IOPRIO,
++ .ioprio_class = BFQ_DEFAULT_GRP_CLASS,
++};
++
++static inline void bfq_init_entity(struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++ entity->weight = entity->new_weight;
++ entity->orig_weight = entity->new_weight;
++ entity->ioprio = entity->new_ioprio;
++ entity->ioprio_class = entity->new_ioprio_class;
++ entity->parent = bfqg->my_entity;
++ entity->sched_data = &bfqg->sched_data;
++}
++
++static struct bfqio_cgroup *css_to_bfqio(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct bfqio_cgroup, css) : NULL;
++}
++
++/*
++ * Search for the bfq_group associated with bfqd in the hash table
++ * (for now just a list) of bgrp. Must be called under rcu_read_lock().
++ */
++static struct bfq_group *bfqio_lookup_group(struct bfqio_cgroup *bgrp,
++ struct bfq_data *bfqd)
++{
++ struct bfq_group *bfqg;
++ void *key;
++
++ hlist_for_each_entry_rcu(bfqg, &bgrp->group_data, group_node) {
++ key = rcu_dereference(bfqg->bfqd);
++ if (key == bfqd)
++ return bfqg;
++ }
++
++ return NULL;
++}
++
++static inline void bfq_group_init_entity(struct bfqio_cgroup *bgrp,
++ struct bfq_group *bfqg)
++{
++ struct bfq_entity *entity = &bfqg->entity;
++
++ /*
++ * If the weight of the entity has never been set via the sysfs
++ * interface, then bgrp->weight == 0. In this case we initialize
++ * the weight from the current ioprio value. Otherwise, the group
++ * weight, if set, has priority over the ioprio value.
++ */
++ if (bgrp->weight == 0) {
++ entity->new_weight = bfq_ioprio_to_weight(bgrp->ioprio);
++ entity->new_ioprio = bgrp->ioprio;
++ } else {
++ if (bgrp->weight < BFQ_MIN_WEIGHT ||
++ bgrp->weight > BFQ_MAX_WEIGHT) {
++ printk(KERN_CRIT "bfq_group_init_entity: "
++ "bgrp->weight %d\n", bgrp->weight);
++ BUG();
++ }
++ entity->new_weight = bgrp->weight;
++ entity->new_ioprio = bfq_weight_to_ioprio(bgrp->weight);
++ }
++ entity->orig_weight = entity->weight = entity->new_weight;
++ entity->ioprio = entity->new_ioprio;
++ entity->ioprio_class = entity->new_ioprio_class = bgrp->ioprio_class;
++ entity->my_sched_data = &bfqg->sched_data;
++ bfqg->active_entities = 0;
++}
++
++static inline void bfq_group_set_parent(struct bfq_group *bfqg,
++ struct bfq_group *parent)
++{
++ struct bfq_entity *entity;
++
++ BUG_ON(parent == NULL);
++ BUG_ON(bfqg == NULL);
++
++ entity = &bfqg->entity;
++ entity->parent = parent->my_entity;
++ entity->sched_data = &parent->sched_data;
++}
++
++/**
++ * bfq_group_chain_alloc - allocate a chain of groups.
++ * @bfqd: queue descriptor.
++ * @css: the leaf cgroup_subsys_state this chain starts from.
++ *
++ * Allocate a chain of groups starting from the one belonging to
++ * @css up to the root cgroup. Stop if a cgroup on the chain
++ * to the root already has an allocated group on @bfqd.
++ */
++static struct bfq_group *bfq_group_chain_alloc(struct bfq_data *bfqd,
++ struct cgroup_subsys_state *css)
++{
++ struct bfqio_cgroup *bgrp;
++ struct bfq_group *bfqg, *prev = NULL, *leaf = NULL;
++
++ for (; css != NULL; css = css->parent) {
++ bgrp = css_to_bfqio(css);
++
++ bfqg = bfqio_lookup_group(bgrp, bfqd);
++ if (bfqg != NULL) {
++ /*
++ * All the cgroups in the path from there to the
++ * root must have a bfq_group for bfqd, so we don't
++ * need any more allocations.
++ */
++ break;
++ }
++
++ bfqg = kzalloc(sizeof(*bfqg), GFP_ATOMIC);
++ if (bfqg == NULL)
++ goto cleanup;
++
++ bfq_group_init_entity(bgrp, bfqg);
++ bfqg->my_entity = &bfqg->entity;
++
++ if (leaf == NULL) {
++ leaf = bfqg;
++ prev = leaf;
++ } else {
++ bfq_group_set_parent(prev, bfqg);
++ /*
++ * Build a list of allocated nodes using the bfqd
++ * field, which is still unused and will be
++ * initialized only after the node has been
++ * connected.
++ */
++ prev->bfqd = bfqg;
++ prev = bfqg;
++ }
++ }
++
++ return leaf;
++
++cleanup:
++ while (leaf != NULL) {
++ prev = leaf;
++ leaf = leaf->bfqd;
++ kfree(prev);
++ }
++
++ return NULL;
++}
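++
++/*
++ * Illustration of the temporary list built above (not functional
++ * code): while the chain is being allocated, each node's bfqd field
++ * points to the next node up, and the topmost node's bfqd is NULL:
++ *
++ * leaf->bfqd --> mid->bfqd --> top->bfqd == NULL
++ *
++ * bfq_group_chain_link() below walks this list bottom-up and replaces
++ * each temporary pointer with the real device descriptor.
++ */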
++
++/**
++ * bfq_group_chain_link - link an allocated group chain to a cgroup
++ * hierarchy.
++ * @bfqd: the queue descriptor.
++ * @css: the leaf cgroup_subsys_state to start from.
++ * @leaf: the leaf group (to be associated with @css).
++ *
++ * Try to link a chain of groups to a cgroup hierarchy, connecting the
++ * nodes bottom-up, so we can be sure that when we find a cgroup in the
++ * hierarchy that already has a group associated with @bfqd, all the
++ * nodes in the path to the root cgroup have one too.
++ *
++ * On locking: the queue lock protects the hierarchy (there is a hierarchy
++ * per device) while the bfqio_cgroup lock protects the list of groups
++ * belonging to the same cgroup.
++ */
++static void bfq_group_chain_link(struct bfq_data *bfqd,
++ struct cgroup_subsys_state *css,
++ struct bfq_group *leaf)
++{
++ struct bfqio_cgroup *bgrp;
++ struct bfq_group *bfqg, *next, *prev = NULL;
++ unsigned long flags;
++
++ assert_spin_locked(bfqd->queue->queue_lock);
++
++ for (; css != NULL && leaf != NULL; css = css->parent) {
++ bgrp = css_to_bfqio(css);
++ next = leaf->bfqd;
++
++ bfqg = bfqio_lookup_group(bgrp, bfqd);
++ BUG_ON(bfqg != NULL);
++
++ spin_lock_irqsave(&bgrp->lock, flags);
++
++ rcu_assign_pointer(leaf->bfqd, bfqd);
++ hlist_add_head_rcu(&leaf->group_node, &bgrp->group_data);
++ hlist_add_head(&leaf->bfqd_node, &bfqd->group_list);
++
++ spin_unlock_irqrestore(&bgrp->lock, flags);
++
++ prev = leaf;
++ leaf = next;
++ }
++
++ BUG_ON(css == NULL && leaf != NULL);
++ if (css != NULL && prev != NULL) {
++ bgrp = css_to_bfqio(css);
++ bfqg = bfqio_lookup_group(bgrp, bfqd);
++ bfq_group_set_parent(prev, bfqg);
++ }
++}
++
++/**
++ * bfq_find_alloc_group - return the group associated with @bfqd in @css.
++ * @bfqd: queue descriptor.
++ * @css: cgroup_subsys_state being searched for.
++ *
++ * Return a group associated with @bfqd in @css, allocating one if
++ * necessary. When a group is returned, all the cgroups in the path
++ * to the root have a group associated with @bfqd.
++ *
++ * If the allocation fails, return the root group: this breaks guarantees
++ * but is a safe fallback. If this loss becomes a problem it can be
++ * mitigated using the equivalent weight (given by the product of the
++ * weights of the groups in the path from the group to the root) in the
++ * root scheduler.
++ *
++ * We allocate all the missing nodes in the path from the leaf cgroup
++ * to the root and we connect the nodes only after all the allocations
++ * have been successful.
++ */
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++ struct cgroup_subsys_state *css)
++{
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++ struct bfq_group *bfqg;
++
++ bfqg = bfqio_lookup_group(bgrp, bfqd);
++ if (bfqg != NULL)
++ return bfqg;
++
++ bfqg = bfq_group_chain_alloc(bfqd, css);
++ if (bfqg != NULL)
++ bfq_group_chain_link(bfqd, css, bfqg);
++ else
++ bfqg = bfqd->root_group;
++
++ return bfqg;
++}
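++
++/*
++ * Usage sketch (illustrative): callers only go through
++ * bfq_find_alloc_group(); a typical invocation, mirroring
++ * bfq_bic_update_cgroup() below, is
++ *
++ * rcu_read_lock();
++ * bfqg = bfq_find_alloc_group(bfqd,
++ * task_css(current, bfqio_cgrp_id));
++ * rcu_read_unlock();
++ *
++ * with the queue lock held. The allocate-then-link split guarantees
++ * that a partially built chain is never visible to lookups.
++ */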
++
++/**
++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
++ * @bfqd: queue descriptor.
++ * @bfqq: the queue to move.
++ * @entity: @bfqq's entity.
++ * @bfqg: the group to move to.
++ *
++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
++ * it on the new one. Avoid putting the entity on the old group idle tree.
++ *
++ * Must be called under the queue lock; the cgroup owning @bfqg must
++ * not disappear (for now this just means that we are called under
++ * rcu_read_lock()).
++ */
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct bfq_entity *entity, struct bfq_group *bfqg)
++{
++ int busy, resume;
++
++ busy = bfq_bfqq_busy(bfqq);
++ resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++
++ BUG_ON(resume && !entity->on_st);
++ BUG_ON(busy && !resume && entity->on_st &&
++ bfqq != bfqd->in_service_queue);
++
++ if (busy) {
++ BUG_ON(atomic_read(&bfqq->ref) < 2);
++
++ if (!resume)
++ bfq_del_bfqq_busy(bfqd, bfqq, 0);
++ else
++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
++ } else if (entity->on_st)
++ bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++
++ /*
++ * Here we use a reference to bfqg. We don't need a refcounter
++ * as the cgroup reference will not be dropped, so that its
++ * destroy() callback will not be invoked.
++ */
++ entity->parent = bfqg->my_entity;
++ entity->sched_data = &bfqg->sched_data;
++
++ if (busy && resume)
++ bfq_activate_bfqq(bfqd, bfqq);
++
++ if (bfqd->in_service_queue == NULL && !bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++}
++
++/**
++ * __bfq_bic_change_cgroup - move @bic to @css.
++ * @bfqd: the queue descriptor.
++ * @bic: the bic to move.
++ * @css: the cgroup_subsys_state to move to.
++ *
++ * Move bic to @css, assuming that bfqd->queue is locked; the caller
++ * has to make sure that the reference to @css is valid across the call.
++ *
++ * NOTE: an alternative approach might have been to store the current
++ * cgroup in bfqq and take a reference to it, reducing the lookup
++ * time here, at the price of slightly more complex code.
++ */
++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic,
++ struct cgroup_subsys_state *css)
++{
++ struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
++ struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++ struct bfq_entity *entity;
++ struct bfq_group *bfqg;
++ struct bfqio_cgroup *bgrp;
++
++ bgrp = css_to_bfqio(css);
++
++ bfqg = bfq_find_alloc_group(bfqd, css);
++ if (async_bfqq != NULL) {
++ entity = &async_bfqq->entity;
++
++ if (entity->sched_data != &bfqg->sched_data) {
++ bic_set_bfqq(bic, NULL, 0);
++ bfq_log_bfqq(bfqd, async_bfqq,
++ "bic_change_group: %p %d",
++ async_bfqq, atomic_read(&async_bfqq->ref));
++ bfq_put_queue(async_bfqq);
++ }
++ }
++
++ if (sync_bfqq != NULL) {
++ entity = &sync_bfqq->entity;
++ if (entity->sched_data != &bfqg->sched_data)
++ bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++ }
++
++ return bfqg;
++}
++
++/**
++ * bfq_bic_change_cgroup - move @bic to @css.
++ * @bic: the bic being migrated.
++ * @css: the destination cgroup_subsys_state.
++ *
++ * When the task owning @bic is moved to @css, @bic is immediately
++ * moved into its new parent group.
++ */
++static void bfq_bic_change_cgroup(struct bfq_io_cq *bic,
++ struct cgroup_subsys_state *css)
++{
++ struct bfq_data *bfqd;
++ unsigned long uninitialized_var(flags);
++
++ bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++ &flags);
++ if (bfqd != NULL) {
++ __bfq_bic_change_cgroup(bfqd, bic, css);
++ bfq_put_bfqd_unlock(bfqd, &flags);
++ }
++}
++
++/**
++ * bfq_bic_update_cgroup - update the cgroup of @bic.
++ * @bic: the @bic to update.
++ *
++ * Make sure that @bic is enqueued in the cgroup of the current task.
++ * We need this in addition to moving bics during the cgroup attach
++ * phase because the task owning @bic could be at its first disk
++ * access, or we may have ended up in the root cgroup as the result
++ * of a memory allocation failure; here we try to move to the right
++ * group.
++ *
++ * Must be called under the queue lock. It is safe to use the returned
++ * value even after the rcu_read_unlock() as the migration/destruction
++ * paths act under the queue lock too. IOW it is impossible to race with
++ * group migration/destruction and end up with an invalid group as:
++ * a) here cgroup has not yet been destroyed, nor its destroy callback
++ * has started execution, as current holds a reference to it,
++ * b) if it is destroyed after rcu_read_unlock() [after current is
++ * migrated to a different cgroup] its attach() callback will have
++ * taken care of removing all the references to the old cgroup data.
++ */
++static struct bfq_group *bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++ struct bfq_group *bfqg;
++ struct cgroup_subsys_state *css;
++
++ BUG_ON(bfqd == NULL);
++
++ rcu_read_lock();
++ css = task_css(current, bfqio_cgrp_id);
++ bfqg = __bfq_bic_change_cgroup(bfqd, bic, css);
++ rcu_read_unlock();
++
++ return bfqg;
++}
++
++/**
++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
++ * @st: the service tree being flushed.
++ */
++static inline void bfq_flush_idle_tree(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entity = st->first_idle;
++
++ for (; entity != NULL; entity = st->first_idle)
++ __bfq_deactivate_entity(entity, 0);
++}
++
++/**
++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
++ * @bfqd: the device data structure with the root group.
++ * @entity: the entity to move.
++ */
++static inline void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ BUG_ON(bfqq == NULL);
++ bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++ return;
++}
++
++/**
++ * bfq_reparent_active_entities - move to the root group all active
++ * entities.
++ * @bfqd: the device data structure with the root group.
++ * @bfqg: the group to move from.
++ * @st: the service tree with the entities.
++ *
++ * Needs queue_lock to be taken and reference to be valid over the call.
++ */
++static inline void bfq_reparent_active_entities(struct bfq_data *bfqd,
++ struct bfq_group *bfqg,
++ struct bfq_service_tree *st)
++{
++ struct rb_root *active = &st->active;
++ struct bfq_entity *entity = NULL;
++
++ if (!RB_EMPTY_ROOT(&st->active))
++ entity = bfq_entity_of(rb_first(active));
++
++ for (; entity != NULL; entity = bfq_entity_of(rb_first(active)))
++ bfq_reparent_leaf_entity(bfqd, entity);
++
++ if (bfqg->sched_data.in_service_entity != NULL)
++ bfq_reparent_leaf_entity(bfqd,
++ bfqg->sched_data.in_service_entity);
++
++ return;
++}
++
++/**
++ * bfq_destroy_group - destroy @bfqg.
++ * @bgrp: the bfqio_cgroup containing @bfqg.
++ * @bfqg: the group being destroyed.
++ *
++ * Destroy @bfqg, making sure that it is not referenced from its parent.
++ */
++static void bfq_destroy_group(struct bfqio_cgroup *bgrp, struct bfq_group *bfqg)
++{
++ struct bfq_data *bfqd;
++ struct bfq_service_tree *st;
++ struct bfq_entity *entity = bfqg->my_entity;
++ unsigned long uninitialized_var(flags);
++ int i;
++
++ hlist_del(&bfqg->group_node);
++
++ /*
++ * Empty all service_trees belonging to this group before
++ * deactivating the group itself.
++ */
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
++ st = bfqg->sched_data.service_tree + i;
++
++ /*
++ * The idle tree may still contain bfq_queues belonging
++ * to exited tasks because they never migrated to a different
++ * cgroup from the one being destroyed now. No one else
++ * can access them so it's safe to act without any lock.
++ */
++ bfq_flush_idle_tree(st);
++
++ /*
++ * It may happen that some queues are still active
++ * (busy) upon group destruction (if the corresponding
++ * processes have been forced to terminate). We move
++ * all the leaf entities corresponding to these queues
++ * to the root_group.
++ * Also, it may happen that the group has an entity
++ * in service, which is disconnected from the active
++ * tree: it must be moved, too.
++ * There is no need to put the sync queues, as the
++ * scheduler has taken no reference.
++ */
++ bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++ if (bfqd != NULL) {
++ bfq_reparent_active_entities(bfqd, bfqg, st);
++ bfq_put_bfqd_unlock(bfqd, &flags);
++ }
++ BUG_ON(!RB_EMPTY_ROOT(&st->active));
++ BUG_ON(!RB_EMPTY_ROOT(&st->idle));
++ }
++ BUG_ON(bfqg->sched_data.next_in_service != NULL);
++ BUG_ON(bfqg->sched_data.in_service_entity != NULL);
++
++ /*
++ * We may race with device destruction, take extra care when
++ * dereferencing bfqg->bfqd.
++ */
++ bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++ if (bfqd != NULL) {
++ hlist_del(&bfqg->bfqd_node);
++ __bfq_deactivate_entity(entity, 0);
++ bfq_put_async_queues(bfqd, bfqg);
++ bfq_put_bfqd_unlock(bfqd, &flags);
++ }
++ BUG_ON(entity->tree != NULL);
++
++ /*
++ * No need to defer the kfree() to the end of the RCU grace
++ * period: we are called from the destroy() callback of our
++ * cgroup, so we can be sure that no one is a) still using
++ * this cgroup or b) doing lookups in it.
++ */
++ kfree(bfqg);
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++ struct hlist_node *tmp;
++ struct bfq_group *bfqg;
++
++ hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node)
++ bfq_end_wr_async_queues(bfqd, bfqg);
++ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++/**
++ * bfq_disconnect_groups - disconnect @bfqd from all its groups.
++ * @bfqd: the device descriptor being exited.
++ *
++ * When the device exits we just make sure that no lookup can return
++ * the now unused group structures. They will be deallocated on cgroup
++ * destruction.
++ */
++static void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++ struct hlist_node *tmp;
++ struct bfq_group *bfqg;
++
++ bfq_log(bfqd, "disconnect_groups beginning");
++ hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node) {
++ hlist_del(&bfqg->bfqd_node);
++
++ __bfq_deactivate_entity(bfqg->my_entity, 0);
++
++ /*
++ * Don't remove from the group hash, just set an
++ * invalid key. No lookups can race with the
++ * assignment as bfqd is being destroyed; this
++ * implies also that new elements cannot be added
++ * to the list.
++ */
++ rcu_assign_pointer(bfqg->bfqd, NULL);
++
++ bfq_log(bfqd, "disconnect_groups: put async for group %p",
++ bfqg);
++ bfq_put_async_queues(bfqd, bfqg);
++ }
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++ struct bfqio_cgroup *bgrp = &bfqio_root_cgroup;
++ struct bfq_group *bfqg = bfqd->root_group;
++
++ bfq_put_async_queues(bfqd, bfqg);
++
++ spin_lock_irq(&bgrp->lock);
++ hlist_del_rcu(&bfqg->group_node);
++ spin_unlock_irq(&bgrp->lock);
++
++ /*
++ * No need to synchronize_rcu() here: since the device is gone
++ * there cannot be any read-side access to its root_group.
++ */
++ kfree(bfqg);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++ struct bfq_group *bfqg;
++ struct bfqio_cgroup *bgrp;
++ int i;
++
++ bfqg = kzalloc_node(sizeof(*bfqg), GFP_KERNEL, node);
++ if (bfqg == NULL)
++ return NULL;
++
++ bfqg->entity.parent = NULL;
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++ bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++ bgrp = &bfqio_root_cgroup;
++ spin_lock_irq(&bgrp->lock);
++ rcu_assign_pointer(bfqg->bfqd, bfqd);
++ hlist_add_head_rcu(&bfqg->group_node, &bgrp->group_data);
++ spin_unlock_irq(&bgrp->lock);
++
++ return bfqg;
++}
++
++#define SHOW_FUNCTION(__VAR) \
++static u64 bfqio_cgroup_##__VAR##_read(struct cgroup_subsys_state *css, \
++ struct cftype *cftype) \
++{ \
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css); \
++ u64 ret = -ENODEV; \
++ \
++ mutex_lock(&bfqio_mutex); \
++ if (bfqio_is_removed(bgrp)) \
++ goto out_unlock; \
++ \
++ spin_lock_irq(&bgrp->lock); \
++ ret = bgrp->__VAR; \
++ spin_unlock_irq(&bgrp->lock); \
++ \
++out_unlock: \
++ mutex_unlock(&bfqio_mutex); \
++ return ret; \
++}
++
++SHOW_FUNCTION(weight);
++SHOW_FUNCTION(ioprio);
++SHOW_FUNCTION(ioprio_class);
++#undef SHOW_FUNCTION
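++
++/*
++ * For reference, SHOW_FUNCTION(weight) above expands (roughly) to
++ *
++ * static u64 bfqio_cgroup_weight_read(struct cgroup_subsys_state *css,
++ * struct cftype *cftype)
++ * {
++ * struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++ * ...
++ * ret = bgrp->weight;
++ * ...
++ * }
++ *
++ * i.e., one mutex- and spinlock-protected read accessor per tunable.
++ */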
++
++#define STORE_FUNCTION(__VAR, __MIN, __MAX) \
++static int bfqio_cgroup_##__VAR##_write(struct cgroup_subsys_state *css,\
++ struct cftype *cftype, \
++ u64 val) \
++{ \
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css); \
++ struct bfq_group *bfqg; \
++ int ret = -EINVAL; \
++ \
++ if (val < (__MIN) || val > (__MAX)) \
++ return ret; \
++ \
++ ret = -ENODEV; \
++ mutex_lock(&bfqio_mutex); \
++ if (bfqio_is_removed(bgrp)) \
++ goto out_unlock; \
++ ret = 0; \
++ \
++ spin_lock_irq(&bgrp->lock); \
++ bgrp->__VAR = (unsigned short)val; \
++ hlist_for_each_entry(bfqg, &bgrp->group_data, group_node) { \
++ /* \
++ * Setting the ioprio_changed flag of the entity \
++ * to 1 with new_##__VAR == ##__VAR would re-set \
++ * the value of the weight to its ioprio mapping. \
++ * Set the flag only if necessary. \
++ */ \
++ if ((unsigned short)val != bfqg->entity.new_##__VAR) { \
++ bfqg->entity.new_##__VAR = (unsigned short)val; \
++ /* \
++ * Make sure that the above new value has been \
++ * stored in bfqg->entity.new_##__VAR before \
++ * setting the ioprio_changed flag. In fact, \
++ * this flag may be read asynchronously (in \
++ * critical sections protected by a different \
++ * lock than that held here), and finding this \
++ * flag set may cause the execution of the code \
++ * for updating parameters whose value may \
++ * depend also on bfqg->entity.new_##__VAR (in \
++ * __bfq_entity_update_weight_prio). \
++ * This barrier makes sure that the new value \
++ * of bfqg->entity.new_##__VAR is correctly \
++ * seen in that code. \
++ */ \
++ smp_wmb(); \
++ bfqg->entity.ioprio_changed = 1; \
++ } \
++ } \
++ spin_unlock_irq(&bgrp->lock); \
++ \
++out_unlock: \
++ mutex_unlock(&bfqio_mutex); \
++ return ret; \
++}
++
++STORE_FUNCTION(weight, BFQ_MIN_WEIGHT, BFQ_MAX_WEIGHT);
++STORE_FUNCTION(ioprio, 0, IOPRIO_BE_NR - 1);
++STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
++#undef STORE_FUNCTION
++
++static struct cftype bfqio_files[] = {
++ {
++ .name = "weight",
++ .read_u64 = bfqio_cgroup_weight_read,
++ .write_u64 = bfqio_cgroup_weight_write,
++ },
++ {
++ .name = "ioprio",
++ .read_u64 = bfqio_cgroup_ioprio_read,
++ .write_u64 = bfqio_cgroup_ioprio_write,
++ },
++ {
++ .name = "ioprio_class",
++ .read_u64 = bfqio_cgroup_ioprio_class_read,
++ .write_u64 = bfqio_cgroup_ioprio_class_write,
++ },
++ { }, /* terminate */
++};
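++
++/*
++ * From userspace, with the legacy hierarchy mounted, the files above
++ * appear as bfqio.weight, bfqio.ioprio and bfqio.ioprio_class (mount
++ * point and group name below are illustrative):
++ *
++ * echo 200 > /sys/fs/cgroup/bfqio/grp/bfqio.weight
++ * cat /sys/fs/cgroup/bfqio/grp/bfqio.ioprio_class
++ */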
++
++static struct cgroup_subsys_state *bfqio_create(struct cgroup_subsys_state
++ *parent_css)
++{
++ struct bfqio_cgroup *bgrp;
++
++ if (parent_css != NULL) {
++ bgrp = kzalloc(sizeof(*bgrp), GFP_KERNEL);
++ if (bgrp == NULL)
++ return ERR_PTR(-ENOMEM);
++ } else
++ bgrp = &bfqio_root_cgroup;
++
++ spin_lock_init(&bgrp->lock);
++ INIT_HLIST_HEAD(&bgrp->group_data);
++ bgrp->ioprio = BFQ_DEFAULT_GRP_IOPRIO;
++ bgrp->ioprio_class = BFQ_DEFAULT_GRP_CLASS;
++
++ return &bgrp->css;
++}
++
++/*
++ * We cannot support shared io contexts, as we have no means to support
++ * two tasks with the same ioc in two different groups without major rework
++ * of the main bic/bfqq data structures. For now we allow a task to change
++ * its cgroup only if it's the only owner of its ioc; the drawback of this
++ * behavior is that a group containing a task that forked using CLONE_IO
++ * will not be destroyed until the tasks sharing the ioc die.
++ */
++static int bfqio_can_attach(struct cgroup_subsys_state *css,
++ struct cgroup_taskset *tset)
++{
++ struct task_struct *task;
++ struct io_context *ioc;
++ int ret = 0;
++
++ cgroup_taskset_for_each(task, tset) {
++ /*
++ * task_lock() is needed to avoid races with
++ * exit_io_context()
++ */
++ task_lock(task);
++ ioc = task->io_context;
++ if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
++ /*
++ * ioc == NULL means that the task is either too
++ * young or exiting: if it still has no ioc, the
++ * ioc can't be shared; if the task is exiting,
++ * the attach will fail anyway, no matter what
++ * we return here.
++ */
++ ret = -EINVAL;
++ task_unlock(task);
++ if (ret)
++ break;
++ }
++
++ return ret;
++}
++
++static void bfqio_attach(struct cgroup_subsys_state *css,
++ struct cgroup_taskset *tset)
++{
++ struct task_struct *task;
++ struct io_context *ioc;
++ struct io_cq *icq;
++
++ /*
++ * IMPORTANT NOTE: The move of more than one process at a time to a
++ * new group has not yet been tested.
++ */
++ cgroup_taskset_for_each(task, tset) {
++ ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
++ if (ioc) {
++ /*
++ * Handle cgroup change here.
++ */
++ rcu_read_lock();
++ hlist_for_each_entry_rcu(icq, &ioc->icq_list, ioc_node)
++ if (!strncmp(
++ icq->q->elevator->type->elevator_name,
++ "bfq", ELV_NAME_MAX))
++ bfq_bic_change_cgroup(icq_to_bic(icq),
++ css);
++ rcu_read_unlock();
++ put_io_context(ioc);
++ }
++ }
++}
++
++static void bfqio_destroy(struct cgroup_subsys_state *css)
++{
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++ struct hlist_node *tmp;
++ struct bfq_group *bfqg;
++
++ /*
++ * Since we are destroying the cgroup, there are no more tasks
++ * referencing it, and all the RCU grace periods that may have
++ * referenced it are ended (as the destruction of the parent
++ * cgroup is RCU-safe); bgrp->group_data will not be accessed by
++ * anything else and we don't need any synchronization.
++ */
++ hlist_for_each_entry_safe(bfqg, tmp, &bgrp->group_data, group_node)
++ bfq_destroy_group(bgrp, bfqg);
++
++ BUG_ON(!hlist_empty(&bgrp->group_data));
++
++ kfree(bgrp);
++}
++
++static int bfqio_css_online(struct cgroup_subsys_state *css)
++{
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++ mutex_lock(&bfqio_mutex);
++ bgrp->online = true;
++ mutex_unlock(&bfqio_mutex);
++
++ return 0;
++}
++
++static void bfqio_css_offline(struct cgroup_subsys_state *css)
++{
++ struct bfqio_cgroup *bgrp = css_to_bfqio(css);
++
++ mutex_lock(&bfqio_mutex);
++ bgrp->online = false;
++ mutex_unlock(&bfqio_mutex);
++}
++
++struct cgroup_subsys bfqio_cgrp_subsys = {
++ .css_alloc = bfqio_create,
++ .css_online = bfqio_css_online,
++ .css_offline = bfqio_css_offline,
++ .can_attach = bfqio_can_attach,
++ .attach = bfqio_attach,
++ .css_free = bfqio_destroy,
++ .legacy_cftypes = bfqio_files,
++};
++#else
++static inline void bfq_init_entity(struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++ entity->weight = entity->new_weight;
++ entity->orig_weight = entity->new_weight;
++ entity->ioprio = entity->new_ioprio;
++ entity->ioprio_class = entity->new_ioprio_class;
++ entity->sched_data = &bfqg->sched_data;
++}
++
++static inline struct bfq_group *
++bfq_bic_update_cgroup(struct bfq_io_cq *bic)
++{
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++ return bfqd->root_group;
++}
++
++static inline void bfq_bfqq_move(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++ bfq_put_async_queues(bfqd, bfqd->root_group);
++}
++
++static inline void bfq_free_root_group(struct bfq_data *bfqd)
++{
++ kfree(bfqd->root_group);
++}
++
++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
++{
++ struct bfq_group *bfqg;
++ int i;
++
++ bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
++ if (bfqg == NULL)
++ return NULL;
++
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++ bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++ return bfqg;
++}
++#endif
+diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
+new file mode 100644
+index 0000000..7f6b000
+--- /dev/null
++++ b/block/bfq-ioc.c
+@@ -0,0 +1,36 @@
++/*
++ * BFQ: I/O context handling.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++/**
++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
++ * @icq: the iocontext queue.
++ */
++static inline struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
++{
++ /* bic->icq is the first member, %NULL will convert to %NULL */
++ return container_of(icq, struct bfq_io_cq, icq);
++}
++
++/**
++ * bfq_bic_lookup - search @ioc for a bic associated with @bfqd.
++ * @bfqd: the lookup key.
++ * @ioc: the io_context of the process doing I/O.
++ *
++ * Queue lock must be held.
++ */
++static inline struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
++ struct io_context *ioc)
++{
++ if (ioc)
++ return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
++ return NULL;
++}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+new file mode 100644
+index 0000000..773b2ee
+--- /dev/null
++++ b/block/bfq-iosched.c
+@@ -0,0 +1,3898 @@
++/*
++ * Budget Fair Queueing (BFQ) disk scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based on
++ * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
++ * measured in number of sectors, to processes instead of time slices. The
++ * device is not granted to the in-service process for a given time slice,
++ * but until it has exhausted its assigned budget. This change from the time
++ * to the service domain allows BFQ to distribute the device throughput
++ * among processes as desired, without any distortion due to ZBR, workload
++ * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
++ * called B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated to processes. Thanks to the
++ * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
++ * I/O-bound processes issuing sequential requests (to boost the
++ * throughput), and yet guarantee a low latency to interactive and soft
++ * real-time applications.
++ *
++ * BFQ is described in [1], which also contains a reference to the
++ * initial, more theoretical paper on BFQ. The interested reader can find
++ * in the latter paper full details on the main algorithm, as well as
++ * formulas of the guarantees and formal proofs of all the properties.
++ * With respect to the version of BFQ presented in these papers, this
++ * implementation adds a few more heuristics, such as the one that
++ * guarantees a low latency to soft real-time applications, and a
++ * hierarchical extension based on H-WF2Q+.
++ *
++ * B-WF2Q+ is based on WF2Q+, which is described in [2], together with
++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
++ * complexity derives from the one introduced with EEVDF in [3].
++ *
++ * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
++ * with the BFQ Disk I/O Scheduler'',
++ * Proceedings of the 5th Annual International Systems and Storage
++ * Conference (SYSTOR '12), June 2012.
++ *
++ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
++ *
++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
++ * Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
++ * Oct 1997.
++ *
++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
++ *
++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
++ * First: A Flexible and Accurate Mechanism for Proportional Share
++ * Resource Allocation,'' technical report.
++ *
++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
++ */
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/cgroup.h>
++#include <linux/elevator.h>
++#include <linux/jiffies.h>
++#include <linux/rbtree.h>
++#include <linux/ioprio.h>
++#include "bfq.h"
++#include "blk.h"
++
++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++
++/* Maximum backwards seek, in KiB. */
++static const int bfq_back_max = 16 * 1024;
++
++/* Penalty of a backwards seek, in number of sectors. */
++static const int bfq_back_penalty = 2;
++
++/* Idling period duration, in jiffies. */
++static int bfq_slice_idle = HZ / 125;
++
++/* Default maximum budget values, in sectors and number of requests. */
++static const int bfq_default_max_budget = 16 * 1024;
++static const int bfq_max_budget_async_rq = 4;
++
++/*
++ * Async to sync throughput distribution is controlled as follows:
++ * when an async request is served, the entity is charged the number
++ * of sectors of the request, multiplied by the factor below
++ */
++static const int bfq_async_charge_factor = 10;
++
++/* Default timeout values, in jiffies, approximating CFQ defaults. */
++static const int bfq_timeout_sync = HZ / 8;
++static int bfq_timeout_async = HZ / 25;
++
++struct kmem_cache *bfq_pool;
++
++/* Below this threshold (in ms), we consider thinktime immediate. */
++#define BFQ_MIN_TT 2
++
++/* hw_tag detection: parallel requests threshold and min samples needed. */
++#define BFQ_HW_QUEUE_THRESHOLD 4
++#define BFQ_HW_QUEUE_SAMPLES 32
++
++#define BFQQ_SEEK_THR (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++
++/* Min samples used for peak rate estimation (for autotuning). */
++#define BFQ_PEAK_RATE_SAMPLES 32
++
++/* Shift used for peak rate fixed precision calculations. */
++#define BFQ_RATE_SHIFT 16
++
++/*
++ * By default, BFQ computes the duration of the weight raising for
++ * interactive applications automatically, using the following formula:
++ * duration = (R / r) * T, where r is the peak rate of the device, and
++ * R and T are two reference parameters.
++ * In particular, R is the peak rate of the reference device (see below),
++ * and T is a reference time: given the systems that are likely to be
++ * installed on the reference device according to its speed class, T is
++ * about the maximum time needed, under BFQ and while reading two files in
++ * parallel, to load typical large applications on these systems.
++ * In practice, the slower/faster the device at hand is, the more/less it
++ * takes to load applications with respect to the reference device.
++ * Accordingly, the longer/shorter BFQ grants weight raising to interactive
++ * applications.
++ *
++ * BFQ uses four different reference pairs (R, T), depending on:
++ * . whether the device is rotational or non-rotational;
++ * . whether the device is slow, such as old or portable HDDs, as well as
++ * SD cards, or fast, such as newer HDDs and SSDs.
++ *
++ * The device's speed class is dynamically (re)detected in
++ * bfq_update_peak_rate() every time the estimated peak rate is updated.
++ *
++ * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
++ * are the reference values for a slow/fast rotational device, whereas
++ * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
++ * a slow/fast non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes.
++ * Both the reference peak rates and the thresholds are measured in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
++ */
++static int R_slow[2] = {1536, 10752};
++static int R_fast[2] = {17415, 34791};
++/*
++ * To improve readability, a conversion function is used to initialize the
++ * following arrays, which entails that they can be initialized only in a
++ * function.
++ */
++static int T_slow[2];
++static int T_fast[2];
++static int device_speed_thresh[2];
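++
++/*
++ * Example of the fixed-point convention above (numbers illustrative):
++ * a device sustaining ~100 MiB/s moves ~0.2 sectors/usec, which is
++ * stored left-shifted by BFQ_RATE_SHIFT, i.e., ~0.2 * 65536 ~= 13100.
++ */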
++
++#define BFQ_SERVICE_TREE_INIT ((struct bfq_service_tree) \
++ { RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
++
++#define RQ_BIC(rq) ((struct bfq_io_cq *) (rq)->elv.priv[0])
++#define RQ_BFQQ(rq) ((rq)->elv.priv[1])
++
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd);
++
++#include "bfq-ioc.c"
++#include "bfq-sched.c"
++#include "bfq-cgroup.c"
++
++#define bfq_class_idle(bfqq) ((bfqq)->entity.ioprio_class ==\
++ IOPRIO_CLASS_IDLE)
++#define bfq_class_rt(bfqq) ((bfqq)->entity.ioprio_class ==\
++ IOPRIO_CLASS_RT)
++
++#define bfq_sample_valid(samples) ((samples) > 80)
++
++/*
++ * The following macro groups conditions that need to be evaluated when
++ * checking if existing queues and groups form a symmetric scenario
++ * and therefore idling can be reduced or disabled for some of the
++ * queues. See the comment to the function bfq_bfqq_must_not_expire()
++ * for further details.
++ */
++#ifdef CONFIG_CGROUP_BFQIO
++#define symmetric_scenario (!bfqd->active_numerous_groups && \
++ !bfq_differentiated_weights(bfqd))
++#else
++#define symmetric_scenario (!bfq_differentiated_weights(bfqd))
++#endif
++
++/*
++ * We regard a request as SYNC if it is either a read or has the SYNC bit
++ * set (in which case it could also be a direct WRITE).
++ */
++static inline int bfq_bio_sync(struct bio *bio)
++{
++ if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
++ return 1;
++
++ return 0;
++}
++
++/*
++ * Scheduler run of queue, if there are requests pending and no one in the
++ * driver that will restart queueing.
++ */
++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd)
++{
++ if (bfqd->queued != 0) {
++ bfq_log(bfqd, "schedule dispatch");
++ kblockd_schedule_work(&bfqd->unplug_work);
++ }
++}
++
++/*
++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
++ * We choose the request that is closest to the head right now. Distance
++ * behind the head is penalized and only allowed to a certain extent.
++ */
++static struct request *bfq_choose_req(struct bfq_data *bfqd,
++ struct request *rq1,
++ struct request *rq2,
++ sector_t last)
++{
++ sector_t s1, s2, d1 = 0, d2 = 0;
++ unsigned long back_max;
++#define BFQ_RQ1_WRAP 0x01 /* request 1 wraps */
++#define BFQ_RQ2_WRAP 0x02 /* request 2 wraps */
++ unsigned wrap = 0; /* bit mask: requests behind the disk head? */
++
++ if (rq1 == NULL || rq1 == rq2)
++ return rq2;
++ if (rq2 == NULL)
++ return rq1;
++
++ if (rq_is_sync(rq1) && !rq_is_sync(rq2))
++ return rq1;
++ else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
++ return rq2;
++ if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
++ return rq1;
++ else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
++ return rq2;
++
++ s1 = blk_rq_pos(rq1);
++ s2 = blk_rq_pos(rq2);
++
++ /*
++ * By definition, 1KiB is 2 sectors.
++ */
++ back_max = bfqd->bfq_back_max * 2;
++
++ /*
++ * Strict one way elevator _except_ in the case where we allow
++ * short backward seeks which are biased as twice the cost of a
++ * similar forward seek.
++ */
++ if (s1 >= last)
++ d1 = s1 - last;
++ else if (s1 + back_max >= last)
++ d1 = (last - s1) * bfqd->bfq_back_penalty;
++ else
++ wrap |= BFQ_RQ1_WRAP;
++
++ if (s2 >= last)
++ d2 = s2 - last;
++ else if (s2 + back_max >= last)
++ d2 = (last - s2) * bfqd->bfq_back_penalty;
++ else
++ wrap |= BFQ_RQ2_WRAP;
++
++ /* Found required data */
++
++ /*
++ * By doing switch() on the bit mask "wrap" we avoid having to
++ * check two variables for all permutations: --> faster!
++ */
++ switch (wrap) {
++ case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
++ if (d1 < d2)
++ return rq1;
++ else if (d2 < d1)
++ return rq2;
++ else {
++ if (s1 >= s2)
++ return rq1;
++ else
++ return rq2;
++ }
++
++ case BFQ_RQ2_WRAP:
++ return rq1;
++ case BFQ_RQ1_WRAP:
++ return rq2;
++ case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
++ default:
++ /*
++ * Since both rqs are wrapped,
++ * start with the one that's further behind head
++ * (--> only *one* back seek required),
++ * since back seek takes more time than forward.
++ */
++ if (s1 <= s2)
++ return rq1;
++ else
++ return rq2;
++ }
++}
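++
++/*
++ * Worked example (illustrative numbers): with last = 1000,
++ * bfq_back_penalty = 2 and a sufficiently large back_max, requests at
++ * s1 = 1100 and s2 = 900 yield d1 = 100 and d2 = (1000 - 900) * 2 = 200,
++ * so rq1 wins even though rq2 is closer in absolute distance: seeks
++ * behind the head pay the penalty.
++ */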
++
++static struct bfq_queue *
++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
++ sector_t sector, struct rb_node **ret_parent,
++ struct rb_node ***rb_link)
++{
++ struct rb_node **p, *parent;
++ struct bfq_queue *bfqq = NULL;
++
++ parent = NULL;
++ p = &root->rb_node;
++ while (*p) {
++ struct rb_node **n;
++
++ parent = *p;
++ bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++
++ /*
++ * Sort strictly based on sector. Smallest to the left,
++ * largest to the right.
++ */
++ if (sector > blk_rq_pos(bfqq->next_rq))
++ n = &(*p)->rb_right;
++ else if (sector < blk_rq_pos(bfqq->next_rq))
++ n = &(*p)->rb_left;
++ else
++ break;
++ p = n;
++ bfqq = NULL;
++ }
++
++ *ret_parent = parent;
++ if (rb_link)
++ *rb_link = p;
++
++ bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
++ (long long unsigned)sector,
++ bfqq != NULL ? bfqq->pid : 0);
++
++ return bfqq;
++}
++
++static void bfq_rq_pos_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct rb_node **p, *parent;
++ struct bfq_queue *__bfqq;
++
++ if (bfqq->pos_root != NULL) {
++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
++ bfqq->pos_root = NULL;
++ }
++
++ if (bfq_class_idle(bfqq))
++ return;
++ if (!bfqq->next_rq)
++ return;
++
++ bfqq->pos_root = &bfqd->rq_pos_tree;
++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
++ blk_rq_pos(bfqq->next_rq), &parent, &p);
++ if (__bfqq == NULL) {
++ rb_link_node(&bfqq->pos_node, parent, p);
++ rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
++ } else
++ bfqq->pos_root = NULL;
++}
++
++/*
++ * Tell whether there are active queues or groups with differentiated weights.
++ */
++static inline bool bfq_differentiated_weights(struct bfq_data *bfqd)
++{
++ /*
++ * For weights to differ, at least one of the trees must contain
++ * at least two nodes.
++ */
++ return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
++ (bfqd->queue_weights_tree.rb_node->rb_left ||
++ bfqd->queue_weights_tree.rb_node->rb_right)
++#ifdef CONFIG_CGROUP_BFQIO
++ ) ||
++ (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) &&
++ (bfqd->group_weights_tree.rb_node->rb_left ||
++ bfqd->group_weights_tree.rb_node->rb_right)
++#endif
++ );
++}
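++
++/*
++ * Example (illustrative): if every active queue has weight 100, the
++ * queue weights tree holds a single counter node and the scenario is
++ * deemed symmetric; as soon as one queue switches to weight 200, a
++ * second node appears and this function returns true.
++ */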
++
++/*
++ * If the weight-counter tree passed as input contains no counter for
++ * the weight of the input entity, then add that counter; otherwise just
++ * increment the existing counter.
++ *
++ * Note that weight-counter trees contain few nodes in mostly symmetric
++ * scenarios. For example, if all queues have the same weight, then the
++ * weight-counter tree for the queues may contain at most one node.
++ * This holds even if low_latency is on, because weight-raised queues
++ * are not inserted in the tree.
++ * In most scenarios, the rate at which nodes are created/destroyed
++ * should be low too.
++ */
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root)
++{
++ struct rb_node **new = &(root->rb_node), *parent = NULL;
++
++ /*
++ * Do not insert if the entity is already associated with a
++ * counter, which happens if:
++ * 1) the entity is associated with a queue,
++ * 2) a request arrival has caused the queue to become both
++ * non-weight-raised, and hence change its weight, and
++ * backlogged; in this respect, each of the two events
++ * causes an invocation of this function,
++ * 3) this is the invocation of this function caused by the
++ * second event. This second invocation is actually useless,
++ * and we handle this fact by exiting immediately. More
++ * efficient or clearer solutions might possibly be adopted.
++ */
++ if (entity->weight_counter)
++ return;
++
++ while (*new) {
++ struct bfq_weight_counter *__counter = container_of(*new,
++ struct bfq_weight_counter,
++ weights_node);
++ parent = *new;
++
++ if (entity->weight == __counter->weight) {
++ entity->weight_counter = __counter;
++ goto inc_counter;
++ }
++ if (entity->weight < __counter->weight)
++ new = &((*new)->rb_left);
++ else
++ new = &((*new)->rb_right);
++ }
++
++ entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
++ GFP_ATOMIC);
++ if (!entity->weight_counter)
++ return; /* the atomic allocation failed: skip the counter */
++
++ entity->weight_counter->weight = entity->weight;
++ rb_link_node(&entity->weight_counter->weights_node, parent, new);
++ rb_insert_color(&entity->weight_counter->weights_node, root);
++
++inc_counter:
++ entity->weight_counter->num_active++;
++}
++
++/*
++ * Decrement the weight counter associated with the entity, and, if the
++ * counter reaches 0, remove the counter from the tree.
++ * See the comments to the function bfq_weights_tree_add() for considerations
++ * about overhead.
++ */
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root)
++{
++ if (!entity->weight_counter)
++ return;
++
++ BUG_ON(RB_EMPTY_ROOT(root));
++ BUG_ON(entity->weight_counter->weight != entity->weight);
++
++ BUG_ON(!entity->weight_counter->num_active);
++ entity->weight_counter->num_active--;
++ if (entity->weight_counter->num_active > 0)
++ goto reset_entity_pointer;
++
++ rb_erase(&entity->weight_counter->weights_node, root);
++ kfree(entity->weight_counter);
++
++reset_entity_pointer:
++ entity->weight_counter = NULL;
++}
++
++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct request *last)
++{
++ struct rb_node *rbnext = rb_next(&last->rb_node);
++ struct rb_node *rbprev = rb_prev(&last->rb_node);
++ struct request *next = NULL, *prev = NULL;
++
++ BUG_ON(RB_EMPTY_NODE(&last->rb_node));
++
++ if (rbprev != NULL)
++ prev = rb_entry_rq(rbprev);
++
++ if (rbnext != NULL)
++ next = rb_entry_rq(rbnext);
++ else {
++ rbnext = rb_first(&bfqq->sort_list);
++ if (rbnext && rbnext != &last->rb_node)
++ next = rb_entry_rq(rbnext);
++ }
++
++ return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
++}
++
++/* see the definition of bfq_async_charge_factor for details */
++static inline unsigned long bfq_serv_to_charge(struct request *rq,
++ struct bfq_queue *bfqq)
++{
++ return blk_rq_sectors(rq) *
++ (1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
++ bfq_async_charge_factor));
++}
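++
++/*
++ * Example (illustrative): a 64-sector async request from a
++ * non-weight-raised queue is charged 64 * (1 + 10) = 704 sectors,
++ * per the bfq_async_charge_factor comment above; the same request
++ * from a sync queue is charged just its 64 sectors.
++ */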
++
++/**
++ * bfq_updated_next_req - update the queue after a new next_rq selection.
++ * @bfqd: the device data the queue belongs to.
++ * @bfqq: the queue to update.
++ *
++ * If the first request of a queue changes we make sure that the queue
++ * has enough budget to serve at least its first request (if the
++ * request has grown). We do this because if the queue has not enough
++ * budget for its first request, it has to go through two dispatch
++ * rounds to actually get it dispatched.
++ */
++static void bfq_updated_next_req(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++ struct request *next_rq = bfqq->next_rq;
++ unsigned long new_budget;
++
++ if (next_rq == NULL)
++ return;
++
++ if (bfqq == bfqd->in_service_queue)
++ /*
++ * In order not to break guarantees, budgets cannot be
++ * changed after an entity has been selected.
++ */
++ return;
++
++ BUG_ON(entity->tree != &st->active);
++ BUG_ON(entity == entity->sched_data->in_service_entity);
++
++ new_budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++ if (entity->budget != new_budget) {
++ entity->budget = new_budget;
++ bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
++ new_budget);
++ bfq_activate_bfqq(bfqd, bfqq);
++ }
++}
++
++static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
++{
++ u64 dur;
++
++ if (bfqd->bfq_wr_max_time > 0)
++ return bfqd->bfq_wr_max_time;
++
++ dur = bfqd->RT_prod;
++ do_div(dur, bfqd->peak_rate);
++
++ return dur;
++}
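++
++/*
++ * Numeric example (illustrative): if the estimated peak rate of the
++ * device equals the reference rate R, the weight-raising duration is
++ * exactly the reference time T; a device twice as fast as the
++ * reference one gets half that duration.
++ */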
++
++/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
++static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ struct bfq_queue *item;
++ struct hlist_node *n;
++
++ hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
++ hlist_del_init(&item->burst_list_node);
++ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++ bfqd->burst_size = 1;
++}
++
++/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
++static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ /* Increment burst size to take into account also bfqq */
++ bfqd->burst_size++;
++
++ if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
++ struct bfq_queue *pos, *bfqq_item;
++ struct hlist_node *n;
++
++ /*
++ * Enough queues have been activated shortly after each
++ * other to consider this burst as large.
++ */
++ bfqd->large_burst = true;
++
++ /*
++ * We can now mark all queues in the burst list as
++ * belonging to a large burst.
++ */
++ hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
++ burst_list_node)
++ bfq_mark_bfqq_in_large_burst(bfqq_item);
++ bfq_mark_bfqq_in_large_burst(bfqq);
++
++ /*
++ * From now on, and until the current burst finishes, any
++ * new queue being activated shortly after the last queue
++ * was inserted in the burst can be immediately marked as
++ * belonging to a large burst. So the burst list is not
++ * needed any more. Remove it.
++ */
++ hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
++ burst_list_node)
++ hlist_del_init(&pos->burst_list_node);
++ } else /* burst not yet large: add bfqq to the burst list */
++ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++}
++
++/*
++ * If many queues happen to become active shortly after each other, then,
++ * to help the processes associated to these queues get their job done as
++ * soon as possible, it is usually better to not grant either weight-raising
++ * or device idling to these queues. In this comment we describe, firstly,
++ * the reasons why this fact holds, and, secondly, the next function, which
++ * implements the main steps needed to properly mark these queues so that
++ * they can then be treated in a different way.
++ *
++ * As for the terminology, we say that a queue becomes active, i.e.,
++ * switches from idle to backlogged, either when it is created (as a
++ * consequence of the arrival of an I/O request), or, if already existing,
++ * when a new request for the queue arrives while the queue is idle.
++ * Bursts of activations, i.e., activations of different queues occurring
++ * shortly after each other, are typically caused by services or applications
++ * that spawn or reactivate many parallel threads/processes. Examples are
++ * systemd during boot or git grep.
++ *
++ * These services or applications benefit mostly from a high throughput:
++ * the quicker the requests of the activated queues are cumulatively served,
++ * the sooner the target job of these queues gets completed. As a consequence,
++ * weight-raising any of these queues, which also implies idling the device
++ * for it, is almost always counterproductive: in most cases it just lowers
++ * throughput.
++ *
++ * On the other hand, a burst of activations may also be caused by the start
++ * of an application that does not consist of a lot of parallel I/O-bound
++ * threads. In fact, with a complex application, the burst may be just a
++ * consequence of the fact that several processes need to be executed to
++ * start-up the application. To start an application as quickly as possible,
++ * the best thing to do is to privilege the I/O related to the application
++ * with respect to all other I/O. Therefore, the best strategy to start as
++ * quickly as possible an application that causes a burst of activations is
++ * to weight-raise all the queues activated during the burst. This is the
++ * exact opposite of the best strategy for the other type of bursts.
++ *
++ * In the end, to take the best action for each of the two cases, the two
++ * types of bursts need to be distinguished. Fortunately, this seems
++ * relatively easy to do, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that bursts with a larger size
++ * than that threshold are apparently caused only by services or commands
++ * such as systemd or git grep. For brevity, hereafter we simply call
++ * these bursts 'large'. BFQ *does not* weight-raise queues whose
++ * activations occur in a large burst. In addition, for each of these
++ * queues BFQ performs or
++ * does not perform idling depending on which choice boosts the throughput
++ * most. The exact choice depends on the device and request pattern at
++ * hand.
++ *
++ * Turning back to the next function, it implements all the steps needed
++ * to detect the occurrence of a large burst and to properly mark all the
++ * queues belonging to it (so that they can then be treated in a different
++ * way). This goal is achieved by maintaining a special "burst list" that
++ * holds, temporarily, the queues that belong to the burst in progress. The
++ * list is then used to mark these queues as belonging to a large burst if
++ * the burst does become large. The main steps are the following.
++ *
++ * . when the very first queue is activated, the queue is inserted into the
++ * list (as it could be the first queue in a possible burst)
++ *
++ * . if the current burst has not yet become large, and a queue Q that does
++ * not yet belong to the burst is activated shortly after the last time
++ * at which a new queue entered the burst list, then the function appends
++ * Q to the burst list
++ *
++ * . if, as a consequence of the previous step, the burst size reaches
++ * the large-burst threshold, then
++ *
++ * . all the queues in the burst list are marked as belonging to a
++ * large burst
++ *
++ * . the burst list is deleted; in fact, the burst list already served
++ * its purpose (temporarily keeping track of the queues in a burst,
++ * so as to be able to mark them as belonging to a large burst in the
++ * previous sub-step), and now is not needed any more
++ *
++ * . the device enters a large-burst mode
++ *
++ * . if a queue Q that does not belong to the burst is activated while
++ * the device is in large-burst mode and shortly after the last time
++ * at which a queue either entered the burst list or was marked as
++ * belonging to the current large burst, then Q is immediately marked
++ * as belonging to a large burst.
++ *
++ * . if a queue Q that does not belong to the burst is activated a while
++ * later, i.e., not shortly after, the last time at which a queue
++ * either entered the burst list or was marked as belonging to the
++ * current large burst, then the current burst is deemed finished and:
++ *
++ * . the large-burst mode is reset if set
++ *
++ * . the burst list is emptied
++ *
++ * . Q is inserted in the burst list, as Q may be the first queue
++ * in a possible new burst (then the burst list contains just Q
++ * after this step).
++ */
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ bool idle_for_long_time)
++{
++ /*
++ * If bfqq happened to be activated in a burst, but has been idle
++ * for at least as long as an interactive queue, then we assume
++ * that, in the overall I/O initiated in the burst, the I/O
++ * associated to bfqq is finished. So bfqq does not need to be
++ * treated as a queue belonging to a burst anymore. Accordingly,
++ * we reset bfqq's in_large_burst flag if set, and remove bfqq
++ * from the burst list if it's there. We do not decrement instead
++ * burst_size, because the fact that bfqq does not need to belong
++ * to the burst list any more does not invalidate the fact that
++ * bfqq may have been activated during the current burst.
++ */
++ if (idle_for_long_time) {
++ hlist_del_init(&bfqq->burst_list_node);
++ bfq_clear_bfqq_in_large_burst(bfqq);
++ }
++
++ /*
++ * If bfqq is already in the burst list or is part of a large
++ * burst, then there is nothing else to do.
++ */
++ if (!hlist_unhashed(&bfqq->burst_list_node) ||
++ bfq_bfqq_in_large_burst(bfqq))
++ return;
++
++ /*
++ * If bfqq's activation happens late enough, then the current
++ * burst is finished, and related data structures must be reset.
++ *
++ * In this respect, consider the special case where bfqq is the very
++ * first queue being activated. In this case, last_ins_in_burst is
++ * not yet significant when we get here. But it is easy to verify
++ * that, whether or not the following condition is true, bfqq will
++ * end up being inserted into the burst list. In particular the
++ * list will happen to contain only bfqq. And this is exactly what
++ * has to happen, as bfqq may be the first queue in a possible
++ * burst.
++ */
++ if (time_is_before_jiffies(bfqd->last_ins_in_burst +
++ bfqd->bfq_burst_interval)) {
++ bfqd->large_burst = false;
++ bfq_reset_burst_list(bfqd, bfqq);
++ return;
++ }
++
++ /*
++ * If we get here, then bfqq is being activated shortly after the
++ * last queue. So, if the current burst is also large, we can mark
++ * bfqq as belonging to this large burst immediately.
++ */
++ if (bfqd->large_burst) {
++ bfq_mark_bfqq_in_large_burst(bfqq);
++ return;
++ }
++
++ /*
++ * If we get here, then a large-burst state has not yet been
++ * reached, but bfqq is being activated shortly after the last
++ * queue. Then we add bfqq to the burst.
++ */
++ bfq_add_to_burst(bfqd, bfqq);
++}
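++
++/*
++ * Timeline sketch of the logic above (illustrative): queues A, B and C
++ * activated within bfq_burst_interval of one another enter the burst
++ * list; if bfq_large_burst_thresh == 3, the activation of C marks A, B
++ * and C as in_large_burst and empties the list; a queue D activated
++ * shortly afterwards is marked immediately; a queue E activated much
++ * later resets large_burst and starts a new list containing only E.
++ */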
++
++static void bfq_add_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_data *bfqd = bfqq->bfqd;
++ struct request *next_rq, *prev;
++ unsigned long old_wr_coeff = bfqq->wr_coeff;
++ bool interactive = false;
++
++ bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++ bfqq->queued[rq_is_sync(rq)]++;
++ bfqd->queued++;
++
++ elv_rb_add(&bfqq->sort_list, rq);
++
++ /*
++ * Check if this request is a better next-serve candidate.
++ */
++ prev = bfqq->next_rq;
++ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
++ BUG_ON(next_rq == NULL);
++ bfqq->next_rq = next_rq;
++
++ /*
++ * Adjust priority tree position, if next_rq changes.
++ */
++ if (prev != bfqq->next_rq)
++ bfq_rq_pos_tree_add(bfqd, bfqq);
++
++ if (!bfq_bfqq_busy(bfqq)) {
++ bool soft_rt,
++ idle_for_long_time = time_is_before_jiffies(
++ bfqq->budget_timeout +
++ bfqd->bfq_wr_min_idle_time);
++
++ if (bfq_bfqq_sync(bfqq)) {
++ bool already_in_burst =
++ !hlist_unhashed(&bfqq->burst_list_node) ||
++ bfq_bfqq_in_large_burst(bfqq);
++ bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
++ /*
++ * If bfqq was not already in the current burst,
++ * then, at this point, bfqq either has been
++ * added to the current burst or has caused the
++ * current burst to terminate. In particular, in
++ * the second case, bfqq has become the first
++ * queue in a possible new burst.
++ * In both cases last_ins_in_burst needs to be
++ * moved forward.
++ */
++ if (!already_in_burst)
++ bfqd->last_ins_in_burst = jiffies;
++ }
++
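++		/*
++		 * bfqq is a soft real-time candidate if soft-rt detection
++		 * is enabled (bfq_wr_max_softrt_rate > 0), bfqq is not
++		 * part of a large burst, and its soft_rt_next_start
++		 * deadline has already passed. It is instead deemed
++		 * interactive if it is not part of a large burst and has
++		 * been idle for long enough.
++		 */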
++ soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++ !bfq_bfqq_in_large_burst(bfqq) &&
++ time_is_before_jiffies(bfqq->soft_rt_next_start);
++ interactive = !bfq_bfqq_in_large_burst(bfqq) &&
++ idle_for_long_time;
++ entity->budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++
++ if (!bfq_bfqq_IO_bound(bfqq)) {
++ if (time_before(jiffies,
++ RQ_BIC(rq)->ttime.last_end_request +
++ bfqd->bfq_slice_idle)) {
++ bfqq->requests_within_timer++;
++ if (bfqq->requests_within_timer >=
++ bfqd->bfq_requests_within_timer)
++ bfq_mark_bfqq_IO_bound(bfqq);
++ } else
++ bfqq->requests_within_timer = 0;
++ }
++
++ if (!bfqd->low_latency)
++ goto add_bfqq_busy;
++
++ /*
++ * If the queue is not being boosted and has been idle
++ * for enough time, start a weight-raising period
++ */
++ if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ if (interactive)
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ else
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais starting at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ } else if (old_wr_coeff > 1) {
++ if (interactive)
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ else if (bfq_bfqq_in_large_burst(bfqq) ||
++ (bfqq->wr_cur_max_time ==
++ bfqd->bfq_wr_rt_max_time &&
++ !soft_rt)) {
++ bfqq->wr_coeff = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais ending at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->
++ wr_cur_max_time));
++ } else if (time_before(
++ bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time,
++ jiffies +
++ bfqd->bfq_wr_rt_max_time) &&
++ soft_rt) {
++				/*
++				 * The remaining weight-raising time is lower
++				 * than bfqd->bfq_wr_rt_max_time, which means
++				 * that the application is enjoying weight
++				 * raising either because it was deemed soft-rt
++				 * in the near past, or because it was deemed
++				 * interactive long ago. In both cases,
++				 * resetting now the remaining weight-raising
++				 * time of the application to the weight-
++				 * raising duration for soft-rt applications
++				 * would not cause any latency increase for
++				 * the application (as the new duration would
++				 * be higher than the remaining time).
++ *
++ * In addition, the application is now meeting
++ * the requirements for being deemed soft rt.
++ * In the end we can correctly and safely
++ * (re)charge the weight-raising duration for
++ * the application with the weight-raising
++ * duration for soft rt applications.
++ *
++ * In particular, doing this recharge now, i.e.,
++ * before the weight-raising period for the
++ * application finishes, reduces the probability
++ * of the following negative scenario:
++ * 1) the weight of a soft rt application is
++ * raised at startup (as for any newly
++ * created application),
++ * 2) since the application is not interactive,
++ * at a certain time weight-raising is
++ * stopped for the application,
++ * 3) at that time the application happens to
++ * still have pending requests, and hence
++ * is destined to not have a chance to be
++ * deemed soft rt before these requests are
++ * completed (see the comments to the
++ * function bfq_bfqq_softrt_next_start()
++ * for details on soft rt detection),
++ * 4) these pending requests experience a high
++ * latency because the application is not
++ * weight-raised while they are pending.
++ */
++ bfqq->last_wr_start_finish = jiffies;
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ }
++ }
++ if (old_wr_coeff != bfqq->wr_coeff)
++ entity->ioprio_changed = 1;
++add_bfqq_busy:
++ bfqq->last_idle_bklogged = jiffies;
++ bfqq->service_from_backlogged = 0;
++ bfq_clear_bfqq_softrt_update(bfqq);
++ bfq_add_bfqq_busy(bfqd, bfqq);
++ } else {
++ if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
++ time_is_before_jiffies(
++ bfqq->last_wr_start_finish +
++ bfqd->bfq_wr_min_inter_arr_async)) {
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++
++ bfqd->wr_busy_queues++;
++ entity->ioprio_changed = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "non-idle wrais starting at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++ if (prev != bfqq->next_rq)
++ bfq_updated_next_req(bfqd, bfqq);
++ }
++
++ if (bfqd->low_latency &&
++ (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
++ bfqq->last_wr_start_finish = jiffies;
++}
++
++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
++ struct bio *bio)
++{
++ struct task_struct *tsk = current;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq;
++
++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
++ if (bic == NULL)
++ return NULL;
++
++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ if (bfqq != NULL)
++ return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
++
++ return NULL;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++
++ bfqd->rq_in_driver++;
++ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++ bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
++		(unsigned long long)bfqd->last_position);
++}
++
++static inline void bfq_deactivate_request(struct request_queue *q,
++ struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++
++ BUG_ON(bfqd->rq_in_driver == 0);
++ bfqd->rq_in_driver--;
++}
++
++static void bfq_remove_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ const int sync = rq_is_sync(rq);
++
++ if (bfqq->next_rq == rq) {
++ bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
++ bfq_updated_next_req(bfqd, bfqq);
++ }
++
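++	/*
++	 * Remove rq from the fifo, but only if rq is actually queued
++	 * there, i.e., only if its list node is not empty.
++	 */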
++ if (rq->queuelist.prev != &rq->queuelist)
++ list_del_init(&rq->queuelist);
++ BUG_ON(bfqq->queued[sync] == 0);
++ bfqq->queued[sync]--;
++ bfqd->queued--;
++ elv_rb_del(&bfqq->sort_list, rq);
++
++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ /*
++ * Remove queue from request-position tree as it is empty.
++ */
++ if (bfqq->pos_root != NULL) {
++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
++ bfqq->pos_root = NULL;
++ }
++ }
++
++ if (rq->cmd_flags & REQ_META) {
++ BUG_ON(bfqq->meta_pending == 0);
++ bfqq->meta_pending--;
++ }
++}
++
++static int bfq_merge(struct request_queue *q, struct request **req,
++ struct bio *bio)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct request *__rq;
++
++ __rq = bfq_find_rq_fmerge(bfqd, bio);
++ if (__rq != NULL && elv_rq_merge_ok(__rq, bio)) {
++ *req = __rq;
++ return ELEVATOR_FRONT_MERGE;
++ }
++
++ return ELEVATOR_NO_MERGE;
++}
++
++static void bfq_merged_request(struct request_queue *q, struct request *req,
++ int type)
++{
++ if (type == ELEVATOR_FRONT_MERGE &&
++ rb_prev(&req->rb_node) &&
++ blk_rq_pos(req) <
++ blk_rq_pos(container_of(rb_prev(&req->rb_node),
++ struct request, rb_node))) {
++ struct bfq_queue *bfqq = RQ_BFQQ(req);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ struct request *prev, *next_rq;
++
++ /* Reposition request in its sort_list */
++ elv_rb_del(&bfqq->sort_list, req);
++ elv_rb_add(&bfqq->sort_list, req);
++ /* Choose next request to be served for bfqq */
++ prev = bfqq->next_rq;
++ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req,
++ bfqd->last_position);
++ BUG_ON(next_rq == NULL);
++ bfqq->next_rq = next_rq;
++ /*
++ * If next_rq changes, update both the queue's budget to
++ * fit the new request and the queue's position in its
++ * rq_pos_tree.
++ */
++ if (prev != bfqq->next_rq) {
++ bfq_updated_next_req(bfqd, bfqq);
++ bfq_rq_pos_tree_add(bfqd, bfqq);
++ }
++ }
++}
++
++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
++ struct request *next)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
++
++ /*
++ * If next and rq belong to the same bfq_queue and next is older
++ * than rq, then reposition rq in the fifo (by substituting next
++ * with rq). Otherwise, if next and rq belong to different
++ * bfq_queues, never reposition rq: in fact, we would have to
++ * reposition it with respect to next's position in its own fifo,
++	 * which would most certainly cost more than it could possibly
++	 * benefit.
++ */
++ if (bfqq == next_bfqq &&
++ !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++ time_before(next->fifo_time, rq->fifo_time)) {
++ list_del_init(&rq->queuelist);
++ list_replace_init(&next->queuelist, &rq->queuelist);
++ rq->fifo_time = next->fifo_time;
++ }
++
++ if (bfqq->next_rq == next)
++ bfqq->next_rq = rq;
++
++ bfq_remove_request(next);
++}
++
++/* Must be called with bfqq != NULL */
++static inline void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
++{
++ BUG_ON(bfqq == NULL);
++ if (bfq_bfqq_busy(bfqq))
++ bfqq->bfqd->wr_busy_queues--;
++ bfqq->wr_coeff = 1;
++ bfqq->wr_cur_max_time = 0;
++ /* Trigger a weight change on the next activation of the queue */
++ bfqq->entity.ioprio_changed = 1;
++}
++
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++ struct bfq_group *bfqg)
++{
++ int i, j;
++
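++	/*
++	 * The first index of async_bfqq selects the ioprio class and
++	 * the second the ioprio level within the class; the idle-class
++	 * async queue is stored separately in async_idle_bfqq.
++	 */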
++ for (i = 0; i < 2; i++)
++ for (j = 0; j < IOPRIO_BE_NR; j++)
++ if (bfqg->async_bfqq[i][j] != NULL)
++ bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
++ if (bfqg->async_idle_bfqq != NULL)
++ bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
++}
++
++static void bfq_end_wr(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq;
++
++ spin_lock_irq(bfqd->queue->queue_lock);
++
++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
++ bfq_bfqq_end_wr(bfqq);
++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
++ bfq_bfqq_end_wr(bfqq);
++ bfq_end_wr_async(bfqd);
++
++ spin_unlock_irq(bfqd->queue->queue_lock);
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++ struct bio *bio)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq;
++
++ /*
++ * Disallow merge of a sync bio into an async request.
++ */
++ if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++ return 0;
++
++ /*
++	 * Look up the bfqq that this bio will be queued with. Allow
++ * merge only if rq is queued there.
++ * Queue lock is held here.
++ */
++ bic = bfq_bic_lookup(bfqd, current->io_context);
++ if (bic == NULL)
++ return 0;
++
++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ if (bfqq != NULL) {
++ bfq_mark_bfqq_must_alloc(bfqq);
++ bfq_mark_bfqq_budget_new(bfqq);
++ bfq_clear_bfqq_fifo_expire(bfqq);
++
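++		/*
++		 * budgets_assigned is a soft counter of the budgets
++		 * assigned so far: each step keeps 7/8 of the old value
++		 * and adds 256/8 = 32, so the counter saturates at 256.
++		 * Estimated budgets are trusted only after enough
++		 * assignments (see the threshold in bfq_max_budget()).
++		 */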
++ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_in_service_queue, cur-budget = %lu",
++ bfqq->entity.budget);
++ }
++
++ bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ if (!bfqq)
++ bfqq = bfq_get_next_queue(bfqd);
++ else
++ bfq_get_next_queue_forced(bfqd, bfqq);
++
++ __bfq_set_in_service_queue(bfqd, bfqq);
++ return bfqq;
++}
++
++static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
++ struct request *rq)
++{
++ if (blk_rq_pos(rq) >= bfqd->last_position)
++ return blk_rq_pos(rq) - bfqd->last_position;
++ else
++ return bfqd->last_position - blk_rq_pos(rq);
++}
++
++/*
++ * Return true if rq is close enough to bfqd->last_position, i.e., if
++ * their distance is within the seek-distance threshold BFQQ_SEEK_THR.
++ */
++static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++{
++ return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++}
++
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++{
++ struct rb_root *root = &bfqd->rq_pos_tree;
++ struct rb_node *parent, *node;
++ struct bfq_queue *__bfqq;
++ sector_t sector = bfqd->last_position;
++
++ if (RB_EMPTY_ROOT(root))
++ return NULL;
++
++ /*
++	 * First, if we find a queue with a request starting right at the
++	 * end of the last dispatched request, choose it.
++ */
++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
++ if (__bfqq != NULL)
++ return __bfqq;
++
++ /*
++ * If the exact sector wasn't found, the parent of the NULL leaf
++ * will contain the closest sector (rq_pos_tree sorted by
++ * next_request position).
++ */
++ __bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++ if (bfq_rq_close(bfqd, __bfqq->next_rq))
++ return __bfqq;
++
++ if (blk_rq_pos(__bfqq->next_rq) < sector)
++ node = rb_next(&__bfqq->pos_node);
++ else
++ node = rb_prev(&__bfqq->pos_node);
++ if (node == NULL)
++ return NULL;
++
++ __bfqq = rb_entry(node, struct bfq_queue, pos_node);
++ if (bfq_rq_close(bfqd, __bfqq->next_rq))
++ return __bfqq;
++
++ return NULL;
++}
++
++/*
++ * bfqd - obvious
++ * cur_bfqq - passed in so that we don't decide that the current queue
++ * is closely cooperating with itself.
++ *
++ * We are assuming that cur_bfqq has dispatched at least one request,
++ * and that bfqd->last_position reflects a position on the disk associated
++ * with the I/O issued by cur_bfqq.
++ */
++static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
++ struct bfq_queue *cur_bfqq)
++{
++ struct bfq_queue *bfqq;
++
++ if (bfq_class_idle(cur_bfqq))
++ return NULL;
++ if (!bfq_bfqq_sync(cur_bfqq))
++ return NULL;
++ if (BFQQ_SEEKY(cur_bfqq))
++ return NULL;
++
++ /* If device has only one backlogged bfq_queue, don't search. */
++ if (bfqd->busy_queues == 1)
++ return NULL;
++
++ /*
++ * We should notice if some of the queues are cooperating, e.g.
++ * working closely on the same area of the disk. In that case,
++	 * we can group them together and avoid wasting time idling.
++ */
++ bfqq = bfqq_close(bfqd);
++ if (bfqq == NULL || bfqq == cur_bfqq)
++ return NULL;
++
++ /*
++ * Do not merge queues from different bfq_groups.
++ */
++ if (bfqq->entity.parent != cur_bfqq->entity.parent)
++ return NULL;
++
++ /*
++ * It only makes sense to merge sync queues.
++ */
++ if (!bfq_bfqq_sync(bfqq))
++ return NULL;
++ if (BFQQ_SEEKY(bfqq))
++ return NULL;
++
++ /*
++ * Do not merge queues of different priority classes.
++ */
++ if (bfq_class_rt(bfqq) != bfq_class_rt(cur_bfqq))
++ return NULL;
++
++ return bfqq;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static inline unsigned long bfq_max_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < 194)
++ return bfq_default_max_budget;
++ else
++ return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static inline unsigned long bfq_min_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < 194)
++ return bfq_default_max_budget / 32;
++ else
++ return bfqd->bfq_max_budget / 32;
++}
++
++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfqd->in_service_queue;
++ struct bfq_io_cq *bic;
++ unsigned long sl;
++
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ /* Processes have exited, don't wait. */
++ bic = bfqd->in_service_bic;
++ if (bic == NULL || atomic_read(&bic->icq.ioc->active_ref) == 0)
++ return;
++
++ bfq_mark_bfqq_wait_request(bfqq);
++
++ /*
++ * We don't want to idle for seeks, but we do want to allow
++ * fair distribution of slice time for a process doing back-to-back
++	 * seeks. So allow a little bit of time for it to submit a new rq.
++ *
++ * To prevent processes with (partly) seeky workloads from
++ * being too ill-treated, grant them a small fraction of the
++ * assigned budget before reducing the waiting time to
++	 * BFQ_MIN_TT. In practice, this helps reduce latency.
++ */
++ sl = bfqd->bfq_slice_idle;
++ /*
++ * Unless the queue is being weight-raised or the scenario is
++ * asymmetric, grant only minimum idle time if the queue either
++ * has been seeky for long enough or has already proved to be
++ * constantly seeky.
++ */
++ if (bfq_sample_valid(bfqq->seek_samples) &&
++ ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
++ bfq_max_budget(bfqq->bfqd) / 8) ||
++ bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++ symmetric_scenario)
++ sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
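++	/*
++	 * By contrast, grant a weight-raised queue a longer idle window
++	 * (three times bfq_slice_idle), which makes it more likely that
++	 * its next request arrives before the timer expires.
++	 */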
++ else if (bfqq->wr_coeff > 1)
++ sl = sl * 3;
++ bfqd->last_idling_start = ktime_get();
++ mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
++ bfq_log(bfqd, "arm idle: %u/%u ms",
++ jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the disk
++ * throughput (always guaranteed with a time slice scheme as in CFQ).
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfqd->in_service_queue;
++ unsigned int timeout_coeff;
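++	/*
++	 * While a queue is weight-raised, entity->weight equals
++	 * entity->orig_weight * wr_coeff, so the ratio computed below
++	 * is just the weight-raising coefficient. Queues in the soft-rt
++	 * weight-raising period keep instead a coefficient of one.
++	 */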
++ if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++ timeout_coeff = 1;
++ else
++ timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++
++ bfqd->last_budget_start = ktime_get();
++
++ bfq_clear_bfqq_budget_new(bfqq);
++ bfqq->budget_timeout = jiffies +
++ bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++
++ bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++ jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
++ timeout_coeff));
++}
++
++/*
++ * Move request from internal lists to the request queue dispatch list.
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ /*
++ * For consistency, the next instruction should have been executed
++ * after removing the request from the queue and dispatching it.
++	 * We execute this instruction before bfq_remove_request() instead
++	 * (and hence introduce a temporary inconsistency), for efficiency.
++	 * In fact, in a forced_dispatch, this prevents two counters related
++	 * to bfqq->dispatched from being uselessly decremented if bfqq
++	 * is not in service, and then incremented again after
++	 * incrementing bfqq->dispatched.
++ */
++ bfqq->dispatched++;
++ bfq_remove_request(rq);
++ elv_dispatch_sort(q, rq);
++
++ if (bfq_bfqq_sync(bfqq))
++ bfqd->sync_flight++;
++}
++
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++{
++ struct request *rq = NULL;
++
++ if (bfq_bfqq_fifo_expire(bfqq))
++ return NULL;
++
++ bfq_mark_bfqq_fifo_expire(bfqq);
++
++ if (list_empty(&bfqq->fifo))
++ return NULL;
++
++ rq = rq_entry_fifo(bfqq->fifo.next);
++
++ if (time_before(jiffies, rq->fifo_time))
++ return NULL;
++
++ return rq;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++ int process_refs, io_refs;
++
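++	/*
++	 * The total refcount of bfqq includes one reference for each
++	 * allocated request and one for being on a service tree
++	 * (entity.on_st); what is left counts the processes still
++	 * referencing the queue.
++	 */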
++ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++ process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++ BUG_ON(process_refs < 0);
++ return process_refs;
++}
++
++static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++ int process_refs, new_process_refs;
++ struct bfq_queue *__bfqq;
++
++ /*
++ * If there are no process references on the new_bfqq, then it is
++ * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++ * may have dropped their last reference (not just their last process
++ * reference).
++ */
++ if (!bfqq_process_refs(new_bfqq))
++ return;
++
++ /* Avoid a circular list and skip interim queue merges. */
++ while ((__bfqq = new_bfqq->new_bfqq)) {
++ if (__bfqq == bfqq)
++ return;
++ new_bfqq = __bfqq;
++ }
++
++ process_refs = bfqq_process_refs(bfqq);
++ new_process_refs = bfqq_process_refs(new_bfqq);
++ /*
++ * If the process for the bfqq has gone away, there is no
++ * sense in merging the queues.
++ */
++ if (process_refs == 0 || new_process_refs == 0)
++ return;
++
++ /*
++ * Merge in the direction of the lesser amount of work.
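++	 * The queue with fewer process references is redirected to the
++	 * other one, and the latter is given as many extra references
++	 * as there are processes on the redirected queue, so that it
++	 * cannot disappear before the merge is completed.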
++ */
++ if (new_process_refs >= process_refs) {
++ bfqq->new_bfqq = new_bfqq;
++ atomic_add(process_refs, &new_bfqq->ref);
++ } else {
++ new_bfqq->new_bfqq = bfqq;
++ atomic_add(new_process_refs, &bfqq->ref);
++ }
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++ new_bfqq->pid);
++}
++
++static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++ return entity->budget - entity->service;
++}
++
++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ __bfq_bfqd_reset_in_service(bfqd);
++
++ /*
++ * If this bfqq is shared between multiple processes, check
++ * to make sure that those processes are still issuing I/Os
++ * within the mean seek distance. If not, it may be time to
++ * break the queues apart again.
++ */
++ if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
++ bfq_mark_bfqq_split_coop(bfqq);
++
++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ /*
++ * Overloading budget_timeout field to store the time
++ * at which the queue remains with no backlog; used by
++ * the weight-raising mechanism.
++ */
++ bfqq->budget_timeout = jiffies;
++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ } else {
++ bfq_activate_bfqq(bfqd, bfqq);
++ /*
++ * Resort priority tree of potential close cooperators.
++ */
++ bfq_rq_pos_tree_add(bfqd, bfqq);
++ }
++}
++
++/**
++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
++ * @bfqd: device data.
++ * @bfqq: queue to update.
++ * @reason: reason for expiration.
++ *
++ * Handle the feedback on @bfqq budget. See the body for detailed
++ * comments.
++ */
++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ enum bfqq_expiration reason)
++{
++ struct request *next_rq;
++ unsigned long budget, min_budget;
++
++ budget = bfqq->max_budget;
++ min_budget = bfq_min_budget(bfqd);
++
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %lu, budg left %lu",
++ bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %lu, min budg %lu",
++ budget, bfq_min_budget(bfqd));
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
++ bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
++
++ if (bfq_bfqq_sync(bfqq)) {
++ switch (reason) {
++ /*
++ * Caveat: in all the following cases we trade latency
++ * for throughput.
++ */
++ case BFQ_BFQQ_TOO_IDLE:
++ /*
++ * This is the only case where we may reduce
++ * the budget: if there is no request of the
++ * process still waiting for completion, then
++ * we assume (tentatively) that the timer has
++ * expired because the batch of requests of
++ * the process could have been served with a
++ * smaller budget. Hence, betting that
++			 * the process will behave in the same way when it
++ * becomes backlogged again, we reduce its
++ * next budget. As long as we guess right,
++ * this budget cut reduces the latency
++ * experienced by the process.
++ *
++ * However, if there are still outstanding
++ * requests, then the process may have not yet
++ * issued its next request just because it is
++ * still waiting for the completion of some of
++ * the still outstanding ones. So in this
++ * subcase we do not reduce its budget, on the
++ * contrary we increase it to possibly boost
++ * the throughput, as discussed in the
++ * comments to the BUDGET_TIMEOUT case.
++ */
++ if (bfqq->dispatched > 0) /* still outstanding reqs */
++ budget = min(budget * 2, bfqd->bfq_max_budget);
++ else {
++ if (budget > 5 * min_budget)
++ budget -= 4 * min_budget;
++ else
++ budget = min_budget;
++ }
++ break;
++ case BFQ_BFQQ_BUDGET_TIMEOUT:
++ /*
++ * We double the budget here because: 1) it
++ * gives the chance to boost the throughput if
++ * this is not a seeky process (which may have
++ * bumped into this timeout because of, e.g.,
++ * ZBR), 2) together with charge_full_budget
++ * it helps give seeky processes higher
++			 * timestamps, and hence to be served less
++ * frequently.
++ */
++ budget = min(budget * 2, bfqd->bfq_max_budget);
++ break;
++ case BFQ_BFQQ_BUDGET_EXHAUSTED:
++ /*
++ * The process still has backlog, and did not
++ * let either the budget timeout or the disk
++ * idling timeout expire. Hence it is not
++ * seeky, has a short thinktime and may be
++ * happy with a higher budget too. So
++ * definitely increase the budget of this good
++ * candidate to boost the disk throughput.
++ */
++ budget = min(budget * 4, bfqd->bfq_max_budget);
++ break;
++ case BFQ_BFQQ_NO_MORE_REQUESTS:
++ /*
++ * Leave the budget unchanged.
++ */
++ default:
++ return;
++ }
++ } else /* async queue */
++		/* async queues always get the maximum possible budget
++ * (their ability to dispatch is limited by
++ * @bfqd->bfq_max_budget_async_rq).
++ */
++ budget = bfqd->bfq_max_budget;
++
++ bfqq->max_budget = budget;
++
++ if (bfqd->budgets_assigned >= 194 && bfqd->bfq_user_max_budget == 0 &&
++ bfqq->max_budget > bfqd->bfq_max_budget)
++ bfqq->max_budget = bfqd->bfq_max_budget;
++
++ /*
++ * Make sure that we have enough budget for the next request.
++ * Since the finish time of the bfqq must be kept in sync with
++ * the budget, be sure to call __bfq_bfqq_expire() after the
++ * update.
++ */
++ next_rq = bfqq->next_rq;
++ if (next_rq != NULL)
++ bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++ else
++ bfqq->entity.budget = bfqq->max_budget;
++
++ bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %lu",
++ next_rq != NULL ? blk_rq_sectors(next_rq) : 0,
++ bfqq->entity.budget);
++}
++
++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
++{
++ unsigned long max_budget;
++
++ /*
++ * The max_budget calculated when autotuning is equal to the
++	 * amount of sectors transferred in timeout_sync at the
++ * estimated peak rate.
++ */
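++	/*
++	 * Units: peak_rate is in sectors/usec with BFQ_RATE_SHIFT bits
++	 * of fixed-point precision, and timeout is in ms, so the factor
++	 * 1000 converts ms to usec and the final shift removes the
++	 * fixed point, leaving a budget expressed in sectors.
++	 */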
++ max_budget = (unsigned long)(peak_rate * 1000 *
++ timeout >> BFQ_RATE_SHIFT);
++
++ return max_budget;
++}
++
++/*
++ * In addition to updating the peak rate, checks whether the process
++ * is "slow", and returns 1 if so. This slow flag is used, in addition
++ * to the budget timeout, to reduce the amount of service provided to
++ * seeky processes, and hence reduce their chances to lower the
++ * throughput. See the code for more details.
++ */
++static int bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ int compensate, enum bfqq_expiration reason)
++{
++ u64 bw, usecs, expected, timeout;
++ ktime_t delta;
++ int update = 0;
++
++ if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++ return 0;
++
++ if (compensate)
++ delta = bfqd->last_idling_start;
++ else
++ delta = ktime_get();
++ delta = ktime_sub(delta, bfqd->last_budget_start);
++ usecs = ktime_to_us(delta);
++
++ /* Don't trust short/unrealistic values. */
++ if (usecs < 100 || usecs >= LONG_MAX)
++ return 0;
++
++ /*
++ * Calculate the bandwidth for the last slice. We use a 64 bit
++ * value to store the peak rate, in sectors per usec in fixed
++ * point math. We do so to have enough precision in the estimate
++ * and to avoid overflows.
++ */
++ bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
++ do_div(bw, (unsigned long)usecs);
++
++ timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++ /*
++ * Use only long (> 20ms) intervals to filter out spikes for
++ * the peak rate estimation.
++ */
++ if (usecs > 20000) {
++ if (bw > bfqd->peak_rate ||
++ (!BFQQ_SEEKY(bfqq) &&
++ reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
++ bfq_log(bfqd, "measured bw =%llu", bw);
++ /*
++ * To smooth oscillations use a low-pass filter with
++ * alpha=7/8, i.e.,
++ * new_rate = (7/8) * old_rate + (1/8) * bw
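++				 * For instance, with old_rate = 800 and a
++				 * measured bw = 1600, the new rate becomes
++				 * 700 + 200 = 900.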
++ */
++ do_div(bw, 8);
++ if (bw == 0)
++ return 0;
++ bfqd->peak_rate *= 7;
++ do_div(bfqd->peak_rate, 8);
++ bfqd->peak_rate += bw;
++ update = 1;
++ bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
++ }
++
++ update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
++
++ if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
++ bfqd->peak_rate_samples++;
++
++ if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
++ update) {
++ int dev_type = blk_queue_nonrot(bfqd->queue);
++ if (bfqd->bfq_user_max_budget == 0) {
++ bfqd->bfq_max_budget =
++ bfq_calc_max_budget(bfqd->peak_rate,
++ timeout);
++ bfq_log(bfqd, "new max_budget=%lu",
++ bfqd->bfq_max_budget);
++ }
++ if (bfqd->device_speed == BFQ_BFQD_FAST &&
++ bfqd->peak_rate < device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_SLOW;
++ bfqd->RT_prod = R_slow[dev_type] *
++ T_slow[dev_type];
++ } else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++ bfqd->peak_rate > device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_FAST;
++ bfqd->RT_prod = R_fast[dev_type] *
++ T_fast[dev_type];
++ }
++ }
++ }
++
++ /*
++	 * If the process has been served for too short a time
++	 * interval to let its possible sequential accesses prevail over
++	 * the initial seek time needed to move the disk head onto the
++	 * first sector it requested, then give the process a chance
++	 * and, for the moment, return false.
++ */
++ if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
++ return 0;
++
++ /*
++ * A process is considered ``slow'' (i.e., seeky, so that we
++ * cannot treat it fairly in the service domain, as it would
++	 * slow down the other processes too much) if, when a slice
++ * ends for whatever reason, it has received service at a
++ * rate that would not be high enough to complete the budget
++ * before the budget timeout expiration.
++ */
++ expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++
++ /*
++ * Caveat: processes doing IO in the slower disk zones will
++ * tend to be slow(er) even if not seeky. And the estimated
++ * peak rate will actually be an average over the disk
++ * surface. Hence, to not be too harsh with unlucky processes,
++ * we keep a budget/3 margin of safety before declaring a
++ * process slow.
++ */
++ return expected > (4 * bfqq->entity.budget) / 3;
++}
++
++/*
++ * To be deemed as soft real-time, an application must meet two
++ * requirements. First, the application must not require an average
++ * bandwidth higher than the approximate bandwidth required to play back or
++ * record a compressed high-definition video.
++ * The next function is invoked on the completion of the last request of a
++ * batch, to compute the next-start time instant, soft_rt_next_start, such
++ * that, if the next request of the application does not arrive before
++ * soft_rt_next_start, then the above requirement on the bandwidth is met.
++ *
++ * The second requirement is that the request pattern of the application is
++ * isochronous, i.e., that, after issuing a request or a batch of requests,
++ * the application stops issuing new requests until all its pending requests
++ * have been completed. After that, the application may issue a new batch,
++ * and so on.
++ * For this reason the next function is invoked to compute
++ * soft_rt_next_start only for applications that meet this requirement,
++ * whereas soft_rt_next_start is set to infinity for applications that do
++ * not.
++ *
++ * Unfortunately, even a greedy application may happen to behave in an
++ * isochronous way if the CPU load is high. In fact, the application may
++ * stop issuing requests while the CPUs are busy serving other processes,
++ * then restart, then stop again for a while, and so on. In addition, if
++ * the disk achieves a low enough throughput with the request pattern
++ * issued by the application (e.g., because the request pattern is random
++ * and/or the device is slow), then the application may meet the above
++ * bandwidth requirement too. To prevent such a greedy application from
++ * being deemed soft real-time, a further rule is used in the computation of
++ * soft_rt_next_start: soft_rt_next_start must be higher than the current
++ * time plus the maximum time for which the arrival of a request is waited
++ * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle.
++ * This filters out greedy applications, as the latter issue instead their
++ * next request as soon as possible after the last one has been completed
++ * (in contrast, when a batch of requests is completed, a soft real-time
++ * application spends some time processing data).
++ *
++ * Unfortunately, the last filter may easily generate false positives if
++ * only bfqd->bfq_slice_idle is used as a reference time interval and one
++ * or both the following cases occur:
++ * 1) HZ is so low that the duration of a jiffy is comparable to or higher
++ * than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with
++ * HZ=100.
++ * 2) jiffies, instead of increasing at a constant rate, may stop increasing
++ * for a while, then suddenly 'jump' by several units to recover the lost
++ * increments. This seems to happen, e.g., inside virtual machines.
++ * To address this issue, we do not use as a reference time interval just
++ * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In
++ * particular we add the minimum number of jiffies for which the filter
++ * seems to be quite precise even in embedded systems and KVM/QEMU virtual
++ * machines.
++ */
++static inline unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
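++	/*
++	 * service_from_backlogged is in sectors and
++	 * bfq_wr_max_softrt_rate in sectors/sec, so the first argument
++	 * of max() is the time instant (in jiffies) by which the whole
++	 * backlogged service would complete at the maximum soft-rt rate.
++	 */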
++ return max(bfqq->last_idle_bklogged +
++ HZ * bfqq->service_from_backlogged /
++ bfqd->bfq_wr_max_softrt_rate,
++ jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++}
++
++/*
++ * Return the largest-possible time instant such that, for as long as possible,
++ * the current time will be lower than this time instant according to the macro
++ * time_is_before_jiffies().
++ */
++static inline unsigned long bfq_infinity_from_now(unsigned long now)
++{
++ return now + ULONG_MAX / 2;
++}
++
++/**
++ * bfq_bfqq_expire - expire a queue.
++ * @bfqd: device owning the queue.
++ * @bfqq: the queue to expire.
++ * @compensate: if true, compensate for the time spent idling.
++ * @reason: the reason causing the expiration.
++ *
++ * If the process associated with the queue is slow (i.e., seeky), or in
++ * case of budget timeout, or, finally, if it is async, we
++ * artificially charge it an entire budget (independently of the
++ * actual service it received). As a consequence, the queue will get
++ * higher timestamps than the correct ones upon reactivation, and
++ * hence it will be rescheduled as if it had received more service
++ * than what it actually received. In the end, this class of processes
++ * will receive less service in proportion to how slowly they consume
++ * their budgets (and hence how seriously they tend to lower the
++ * throughput).
++ *
++ * In contrast, when a queue expires because it has been idling for
++ * too long or because it exhausted its budget, we do not touch the
++ * amount of service it has received. Hence when the queue will be
++ * reactivated and its timestamps updated, the latter will be in sync
++ * with the actual service received by the queue until expiration.
++ *
++ * Charging a full budget to the first type of queues and the exact
++ * service to the others has the effect of using the WF2Q+ policy to
++ * schedule the former on a timeslice basis, without violating the
++ * service domain guarantees of the latter.
++ */
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ int compensate,
++ enum bfqq_expiration reason)
++{
++ int slow;
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ /* Update disk peak rate for autotuning and check whether the
++ * process is slow (see bfq_update_peak_rate).
++ */
++ slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++
++ /*
++	 * As explained above, 'punish' slow (i.e., seeky), timed-out
++ * and async queues, to favor sequential sync workloads.
++ *
++ * Processes doing I/O in the slower disk zones will tend to be
++ * slow(er) even if not seeky. Hence, since the estimated peak
++ * rate is actually an average over the disk surface, these
++ * processes may timeout just for bad luck. To avoid punishing
++ * them we do not charge a full budget to a process that
++ * succeeded in consuming at least 2/3 of its budget.
++ */
++ if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3))
++ bfq_bfqq_charge_full_budget(bfqq);
++
++ bfqq->service_from_backlogged += bfqq->entity.service;
++
++ if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++ !bfq_bfqq_constantly_seeky(bfqq)) {
++ bfq_mark_bfqq_constantly_seeky(bfqq);
++ if (!blk_queue_nonrot(bfqd->queue))
++ bfqd->const_seeky_busy_in_flight_queues++;
++ }
++
++ if (reason == BFQ_BFQQ_TOO_IDLE &&
++	    bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++ bfq_clear_bfqq_IO_bound(bfqq);
++
++ if (bfqd->low_latency && bfqq->wr_coeff == 1)
++ bfqq->last_wr_start_finish = jiffies;
++
++ if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
++ RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ /*
++ * If we get here, and there are no outstanding requests,
++ * then the request pattern is isochronous (see the comments
++ * to the function bfq_bfqq_softrt_next_start()). Hence we
++ * can compute soft_rt_next_start. If, instead, the queue
++ * still has outstanding requests, then we have to wait
++ * for the completion of all the outstanding requests to
++ * discover whether the request pattern is actually
++ * isochronous.
++ */
++ if (bfqq->dispatched == 0)
++ bfqq->soft_rt_next_start =
++ bfq_bfqq_softrt_next_start(bfqd, bfqq);
++ else {
++ /*
++ * The application is still waiting for the
++ * completion of one or more requests:
++ * prevent it from possibly being incorrectly
++ * deemed as soft real-time by setting its
++ * soft_rt_next_start to infinity. In fact,
++ * without this assignment, the application
++ * would be incorrectly deemed as soft
++ * real-time if:
++ * 1) it issued a new request before the
++ * completion of all its in-flight
++ * requests, and
++ * 2) at that time, its soft_rt_next_start
++ * happened to be in the past.
++ */
++ bfqq->soft_rt_next_start =
++ bfq_infinity_from_now(jiffies);
++ /*
++ * Schedule an update of soft_rt_next_start to when
++ * the task may be discovered to be isochronous.
++ */
++ bfq_mark_bfqq_softrt_update(bfqq);
++ }
++ }
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
++ slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++
++ /*
++ * Increase, decrease or leave budget unchanged according to
++ * reason.
++ */
++ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++ __bfq_bfqq_expire(bfqd, bfqq);
++}
++
++/*
++ * Budget timeout is not implemented through a dedicated timer, but
++ * just checked on request arrivals and completions, as well as on
++ * idle timer expirations.
++ */
++static int bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
++{
++ if (bfq_bfqq_budget_new(bfqq) ||
++ time_before(jiffies, bfqq->budget_timeout))
++ return 0;
++ return 1;
++}
++
++/*
++ * If we expire a queue that is waiting for the arrival of a new
++ * request, we may prevent the fictitious timestamp back-shifting that
++ * allows the guarantees of the queue to be preserved (see [1] for
++ * this tricky aspect). Hence we return true only if this condition
++ * does not hold, or if the queue is slow enough to deserve only to be
++ * kicked off for preserving a high throughput.
++ */
++static inline int bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
++{
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "may_budget_timeout: wait_request %d left %d timeout %d",
++ bfq_bfqq_wait_request(bfqq),
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3,
++ bfq_bfqq_budget_timeout(bfqq));
++
++ return (!bfq_bfqq_wait_request(bfqq) ||
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3)
++ &&
++ bfq_bfqq_budget_timeout(bfqq);
++}
++
++/*
++ * Device idling is allowed only for the queues for which this function
++ * returns true. For this reason, the return value of this function plays a
++ * critical role for both throughput boosting and service guarantees. The
++ * return value is computed through a logical expression. In this rather
++ * long comment, we try to briefly describe all the details and motivations
++ * behind the components of this logical expression.
++ *
++ * First, the expression is false if bfqq is not sync, or if both of the
++ * following conditions hold: bfqq happened to become active during a large
++ * burst of queue activations, and the pattern of requests bfqq contains
++ * boosts the throughput if bfqq is
++ * expired. In fact, queues that became active during a large burst benefit
++ * only from throughput, as discussed in the comments to bfq_handle_burst.
++ * In this respect, expiring bfqq certainly boosts the throughput on NCQ-
++ * capable flash-based devices, whereas, on rotational devices, it boosts
++ * the throughput only if bfqq contains random requests.
++ *
++ * On the opposite end, if (a) bfqq is sync, (b) the above burst-related
++ * condition does not hold, and (c) bfqq is being weight-raised, then the
++ * expression always evaluates to true, as device idling is instrumental
++ * for preserving low-latency guarantees (see [1]). If, instead, conditions
++ * (a) and (b) do hold, but (c) does not, then the expression evaluates to
++ * true only if: (1) bfqq is I/O-bound and has a non-null idle window, and
++ * (2) at least one of the following two conditions holds.
++ * The first condition is that the device is not performing NCQ, because
++ * idling the device most certainly boosts the throughput if this condition
++ * holds and bfqq is I/O-bound and has been granted a non-null idle window.
++ * The second compound condition is made of the logical AND of two components.
++ *
++ * The first component is true only if there is no weight-raised busy
++ * queue. This guarantees that the device is not idled for a sync non-
++ * weight-raised queue when there are busy weight-raised queues. The former
++ * is then expired immediately if empty. Combined with the timestamping
++ * rules of BFQ (see [1] for details), this causes sync non-weight-raised
++ * queues to get a lower number of requests served, and hence to ask for a
++ * lower number of requests from the request pool, before the busy weight-
++ * raised queues get served again.
++ *
++ * This is beneficial for the processes associated with weight-raised
++ * queues, when the request pool is saturated (e.g., in the presence of
++ * write hogs). In fact, if the processes associated with the other queues
++ * ask for requests at a lower rate, then weight-raised processes have a
++ * higher probability to get a request from the pool immediately (or at
++ * least soon) when they need one. Hence they have a higher probability to
++ * actually get a fraction of the disk throughput proportional to their
++ * high weight. This is especially true with NCQ-capable drives, which
++ * enqueue several requests in advance and further reorder internally-
++ * queued requests.
++ *
++ * In the end, mistreating non-weight-raised queues when there are busy
++ * weight-raised queues seems to mitigate starvation problems in the
++ * presence of heavy write workloads and NCQ, and hence to guarantee a
++ * higher application and system responsiveness in these hostile scenarios.
++ *
++ * If the first component of the compound condition is instead true, i.e.,
++ * there is no weight-raised busy queue, then the second component of the
++ * compound condition takes into account service-guarantee and throughput
++ * issues related to NCQ (recall that the compound condition is evaluated
++ * only if the device is detected as supporting NCQ).
++ *
++ * As for service guarantees, allowing the drive to enqueue more than one
++ * request at a time, and hence delegating de facto final scheduling
++ * decisions to the drive's internal scheduler, causes loss of control on
++ * the actual request service order. In this respect, when the drive is
++ * allowed to enqueue more than one request at a time, the service
++ * distribution enforced by the drive's internal scheduler is likely to
++ * coincide with the desired device-throughput distribution only in the
++ * following, perfectly symmetric, scenario:
++ * 1) all active queues have the same weight,
++ * 2) all active groups at the same level in the groups tree have the same
++ * weight,
++ * 3) all active groups at the same level in the groups tree have the same
++ * number of children.
++ *
++ * Even in such a scenario, sequential I/O may still receive a preferential
++ * treatment, but this is not likely to be a big issue with flash-based
++ * devices, because of their non-dramatic loss of throughput with random
++ * I/O. Things do differ with HDDs, for which additional care is taken, as
++ * explained after completing the discussion for flash-based devices.
++ *
++ * Unfortunately, keeping the necessary state for evaluating exactly the
++ * above symmetry conditions would be quite complex and time-consuming.
++ * Therefore BFQ evaluates instead the following stronger sub-conditions,
++ * for which it is much easier to maintain the needed state:
++ * 1) all active queues have the same weight,
++ * 2) all active groups have the same weight,
++ * 3) all active groups have at most one active child each.
++ * In particular, the last two conditions are always true if hierarchical
++ * support and the cgroups interface are not enabled, hence no state needs
++ * to be maintained in this case.
++ *
++ * According to the above considerations, the second component of the
++ * compound condition evaluates to true if any of the above symmetry
++ * sub-conditions does not hold, or the device is not flash-based. Therefore,
++ * if also the first component is true, then idling is allowed for a sync
++ * queue. These are the only sub-conditions considered if the device is
++ * flash-based, as, for such a device, it is sensible to force idling only
++ * for service-guarantee issues. In fact, as for throughput, idling
++ * NCQ-capable flash-based devices would not boost the throughput even
++ * with sequential I/O; rather it would lower the throughput in proportion
++ * to how fast the device is. In the end, (only) if all the three
++ * sub-conditions hold and the device is flash-based, the compound
++ * condition evaluates to false and therefore no idling is performed.
++ *
++ * As already said, things change with a rotational device, where idling
++ * boosts the throughput with sequential I/O (even with NCQ). Hence, for
++ * such a device the second component of the compound condition evaluates
++ * to true also if the following additional sub-condition does not hold:
++ * the queue is constantly seeky. Unfortunately, this different behavior
++ * with respect to flash-based devices causes an additional asymmetry: if
++ * some sync queues enjoy idling and some other sync queues do not, then
++ * the latter get a low share of the device throughput, simply because the
++ * former get many requests served after being set as in service, whereas
++ * the latter do not. As a consequence, to guarantee the desired throughput
++ * distribution, on HDDs the compound expression evaluates to true (and
++ * hence device idling is performed) also if the following last symmetry
++ * condition does not hold: no other queue is benefiting from idling. Also
++ * this last condition is actually replaced with a simpler-to-maintain and
++ * stronger condition: there is no busy queue which is not constantly seeky
++ * (and hence may also benefit from idling).
++ *
++ * To sum up, when all the required symmetry and throughput-boosting
++ * sub-conditions hold, the second component of the compound condition
++ * evaluates to false, and hence no idling is performed. This helps to
++ * keep the drives' internal queues full on NCQ-capable devices, and hence
++ * to boost the throughput, without causing 'almost' any loss of service
++ * guarantees. The 'almost' follows from the fact that, if the internal
++ * queue of one such device is filled while all the sub-conditions hold,
++ * but at some point in time some sub-condition ceases to hold, then it may
++ * become impossible to let requests be served in the new desired order
++ * until all the requests already queued in the device have been served.
++ */
++static inline bool bfq_bfqq_must_not_expire(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++#define cond_for_seeky_on_ncq_hdd (bfq_bfqq_constantly_seeky(bfqq) && \
++ bfqd->busy_in_flight_queues == \
++ bfqd->const_seeky_busy_in_flight_queues)
++
++#define cond_for_expiring_in_burst (bfq_bfqq_in_large_burst(bfqq) && \
++ bfqd->hw_tag && \
++ (blk_queue_nonrot(bfqd->queue) || \
++ bfq_bfqq_constantly_seeky(bfqq)))
++
++/*
++ * Condition for expiring a non-weight-raised queue (and hence not idling
++ * the device).
++ */
++#define cond_for_expiring_non_wr (bfqd->hw_tag && \
++ (bfqd->wr_busy_queues > 0 || \
++ (blk_queue_nonrot(bfqd->queue) || \
++ cond_for_seeky_on_ncq_hdd)))
++
++ return bfq_bfqq_sync(bfqq) &&
++ !cond_for_expiring_in_burst &&
++ (bfqq->wr_coeff > 1 || !symmetric_scenario ||
++ (bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_idle_window(bfqq) &&
++ !cond_for_expiring_non_wr)
++ );
++}
++
++/*
++ * If the in-service queue is empty but sync, and the function
++ * bfq_bfqq_must_not_expire returns true, then:
++ * 1) the queue must remain in service and cannot be expired, and
++ * 2) the disk must be idled to wait for the possible arrival of a new
++ * request for the queue.
++ * See the comments to the function bfq_bfqq_must_not_expire for the reasons
++ * why performing device idling is the best choice to boost the throughput
++ * and preserve service guarantees when bfq_bfqq_must_not_expire itself
++ * returns true.
++ */
++static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++
++ return RB_EMPTY_ROOT(&bfqq->sort_list) && bfqd->bfq_slice_idle != 0 &&
++ bfq_bfqq_must_not_expire(bfqq);
++}
++
++/*
++ * Select a queue for service. If we have a current queue in service,
++ * check whether to continue servicing it, or retrieve and set a new one.
++ */
++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq, *new_bfqq = NULL;
++ struct request *next_rq;
++ enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++
++ bfqq = bfqd->in_service_queue;
++ if (bfqq == NULL)
++ goto new_queue;
++
++ bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
++
++ /*
++ * If another queue has a request waiting within our mean seek
++ * distance, let it run. The expire code will check for close
++ * cooperators and put the close queue at the front of the
++ * service tree. If possible, merge the expiring queue with the
++ * new bfqq.
++ */
++ new_bfqq = bfq_close_cooperator(bfqd, bfqq);
++ if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
++ bfq_setup_merge(bfqq, new_bfqq);
++
++ if (bfq_may_expire_for_budg_timeout(bfqq) &&
++ !timer_pending(&bfqd->idle_slice_timer) &&
++ !bfq_bfqq_must_idle(bfqq))
++ goto expire;
++
++ next_rq = bfqq->next_rq;
++ /*
++ * If bfqq has requests queued and it has enough budget left to
++ * serve them, keep the queue, otherwise expire it.
++ */
++ if (next_rq != NULL) {
++ if (bfq_serv_to_charge(next_rq, bfqq) >
++ bfq_bfqq_budget_left(bfqq)) {
++ reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
++ goto expire;
++ } else {
++ /*
++ * The idle timer may be pending because we may
++ * not disable disk idling even when a new request
++ * arrives.
++ */
++ if (timer_pending(&bfqd->idle_slice_timer)) {
++ /*
++				 * If we get here: 1) at least one new
++				 * request has arrived but we have not
++				 * disabled the timer because the request
++				 * was too small, and 2) the block layer
++				 * has then unplugged the device, causing
++				 * this dispatch to be invoked.
++ *
++ * Since the device is unplugged, now the
++ * requests are probably large enough to
++ * provide a reasonable throughput.
++ * So we disable idling.
++ */
++ bfq_clear_bfqq_wait_request(bfqq);
++ del_timer(&bfqd->idle_slice_timer);
++ }
++ if (new_bfqq == NULL)
++ goto keep_queue;
++ else
++ goto expire;
++ }
++ }
++
++ /*
++ * No requests pending. However, if the in-service queue is idling
++ * for a new request, or has requests waiting for a completion and
++ * may idle after their completion, then keep it anyway.
++ */
++ if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
++ (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++ bfqq = NULL;
++ goto keep_queue;
++ } else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
++ /*
++ * Expiring the queue because there is a close cooperator,
++ * cancel timer.
++ */
++ bfq_clear_bfqq_wait_request(bfqq);
++ del_timer(&bfqd->idle_slice_timer);
++ }
++
++ reason = BFQ_BFQQ_NO_MORE_REQUESTS;
++expire:
++ bfq_bfqq_expire(bfqd, bfqq, 0, reason);
++new_queue:
++ bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++ bfq_log(bfqd, "select_queue: new queue %d returned",
++ bfqq != NULL ? bfqq->pid : 0);
++keep_queue:
++ return bfqq;
++}
++
++static void bfq_update_wr_data(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ if (bfqq->wr_coeff > 1) { /* queue is being boosted */
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++ jiffies_to_msecs(jiffies -
++ bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time),
++ bfqq->wr_coeff,
++ bfqq->entity.weight, bfqq->entity.orig_weight);
++
++ BUG_ON(bfqq != bfqd->in_service_queue && entity->weight !=
++ entity->orig_weight * bfqq->wr_coeff);
++ if (entity->ioprio_changed)
++ bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++ /*
++ * If the queue was activated in a burst, or
++		 * too much time has elapsed since the beginning
++ * of this weight-raising, then end weight raising.
++ */
++ if (bfq_bfqq_in_large_burst(bfqq) ||
++ time_is_before_jiffies(bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time)) {
++ bfqq->last_wr_start_finish = jiffies;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais ending at %lu, rais_max_time %u",
++ bfqq->last_wr_start_finish,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ bfq_bfqq_end_wr(bfqq);
++ __bfq_entity_update_weight_prio(
++ bfq_entity_service_tree(entity),
++ entity);
++ }
++ }
++}
++
++/*
++ * Dispatch one request from bfqq, moving it to the request queue
++ * dispatch list.
++ */
++static int bfq_dispatch_request(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ int dispatched = 0;
++ struct request *rq;
++ unsigned long service_to_charge;
++
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ /* Follow expired path, else get first next available. */
++ rq = bfq_check_fifo(bfqq);
++ if (rq == NULL)
++ rq = bfqq->next_rq;
++ service_to_charge = bfq_serv_to_charge(rq, bfqq);
++
++ if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
++ /*
++ * This may happen if the next rq is chosen in fifo order
++ * instead of sector order. The budget is properly
++ * dimensioned to be always sufficient to serve the next
++ * request only if it is chosen in sector order. The reason
++		 * is that it would be quite inefficient and of little use
++ * to always make sure that the budget is large enough to
++ * serve even the possible next rq in fifo order.
++ * In fact, requests are seldom served in fifo order.
++ *
++ * Expire the queue for budget exhaustion, and make sure
++ * that the next act_budget is enough to serve the next
++ * request, even if it comes from the fifo expired path.
++ */
++ bfqq->next_rq = rq;
++ /*
++		 * Since this dispatch failed, make sure that
++		 * a new one will be performed.
++ */
++ if (!bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++ goto expire;
++ }
++
++ /* Finally, insert request into driver dispatch list. */
++ bfq_bfqq_served(bfqq, service_to_charge);
++ bfq_dispatch_insert(bfqd->queue, rq);
++
++ bfq_update_wr_data(bfqd, bfqq);
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "dispatched %u sec req (%llu), budg left %lu",
++ blk_rq_sectors(rq),
++ (long long unsigned)blk_rq_pos(rq),
++ bfq_bfqq_budget_left(bfqq));
++
++ dispatched++;
++
++ if (bfqd->in_service_bic == NULL) {
++ atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
++ bfqd->in_service_bic = RQ_BIC(rq);
++ }
++
++ if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
++ dispatched >= bfqd->bfq_max_budget_async_rq) ||
++ bfq_class_idle(bfqq)))
++ goto expire;
++
++ return dispatched;
++
++expire:
++ bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_EXHAUSTED);
++ return dispatched;
++}
++
++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
++{
++ int dispatched = 0;
++
++ while (bfqq->next_rq != NULL) {
++ bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
++ dispatched++;
++ }
++
++ BUG_ON(!list_empty(&bfqq->fifo));
++ return dispatched;
++}
++
++/*
++ * Drain our current requests.
++ * Used for barriers and when switching io schedulers on-the-fly.
++ */
++static int bfq_forced_dispatch(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq, *n;
++ struct bfq_service_tree *st;
++ int dispatched = 0;
++
++ bfqq = bfqd->in_service_queue;
++ if (bfqq != NULL)
++ __bfq_bfqq_expire(bfqd, bfqq);
++
++ /*
++ * Loop through classes, and be careful to leave the scheduler
++ * in a consistent state, as feedback mechanisms and vtime
++ * updates cannot be disabled during the process.
++ */
++ list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
++ st = bfq_entity_service_tree(&bfqq->entity);
++
++ dispatched += __bfq_forced_dispatch_bfqq(bfqq);
++ bfqq->max_budget = bfq_max_budget(bfqd);
++
++ bfq_forget_idle(st);
++ }
++
++ BUG_ON(bfqd->busy_queues != 0);
++
++ return dispatched;
++}
++
++static int bfq_dispatch_requests(struct request_queue *q, int force)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq;
++ int max_dispatch;
++
++ bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++ if (bfqd->busy_queues == 0)
++ return 0;
++
++ if (unlikely(force))
++ return bfq_forced_dispatch(bfqd);
++
++ bfqq = bfq_select_queue(bfqd);
++ if (bfqq == NULL)
++ return 0;
++
++ if (bfq_class_idle(bfqq))
++ max_dispatch = 1;
++
++ if (!bfq_bfqq_sync(bfqq))
++ max_dispatch = bfqd->bfq_max_budget_async_rq;
++
++ if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
++ if (bfqd->busy_queues > 1)
++ return 0;
++ if (bfqq->dispatched >= 4 * max_dispatch)
++ return 0;
++ }
++
++ if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
++ return 0;
++
++ bfq_clear_bfqq_wait_request(bfqq);
++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++ if (!bfq_dispatch_request(bfqd, bfqq))
++ return 0;
++
++ bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
++ bfq_bfqq_sync(bfqq) ? "sync" : "async");
++
++ return 1;
++}
++
++/*
++ * Task holds one reference to the queue, dropped when task exits. Each rq
++ * in-flight on this queue also holds a reference, dropped when rq is freed.
++ *
++ * Queue lock must be held here.
++ */
++static void bfq_put_queue(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++
++ BUG_ON(atomic_read(&bfqq->ref) <= 0);
++
++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
++ atomic_read(&bfqq->ref));
++ if (!atomic_dec_and_test(&bfqq->ref))
++ return;
++
++ BUG_ON(rb_first(&bfqq->sort_list) != NULL);
++ BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
++ BUG_ON(bfqq->entity.tree != NULL);
++ BUG_ON(bfq_bfqq_busy(bfqq));
++ BUG_ON(bfqd->in_service_queue == bfqq);
++
++ if (bfq_bfqq_sync(bfqq))
++ /*
++		 * The fact that this queue is being destroyed does not
++		 * invalidate the fact that it may have been activated
++		 * during the current burst. As a consequence, although
++		 * the queue does not exist anymore, and hence must be
++		 * removed from the burst list if present, the burst
++		 * size must not be decremented.
++ */
++ hlist_del_init(&bfqq->burst_list_node);
++
++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++
++ kmem_cache_free(bfq_pool, bfqq);
++}
++
++static void bfq_put_cooperator(struct bfq_queue *bfqq)
++{
++ struct bfq_queue *__bfqq, *next;
++
++ /*
++ * If this queue was scheduled to merge with another queue, be
++ * sure to drop the reference taken on that queue (and others in
++ * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
++ */
++ __bfqq = bfqq->new_bfqq;
++ while (__bfqq) {
++ if (__bfqq == bfqq)
++ break;
++ next = __bfqq->new_bfqq;
++ bfq_put_queue(__bfqq);
++ __bfqq = next;
++ }
++}
++
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ if (bfqq == bfqd->in_service_queue) {
++ __bfq_bfqq_expire(bfqd, bfqq);
++ bfq_schedule_dispatch(bfqd);
++ }
++
++ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++
++ bfq_put_cooperator(bfqq);
++
++ bfq_put_queue(bfqq);
++}
++
++static inline void bfq_init_icq(struct io_cq *icq)
++{
++ struct bfq_io_cq *bic = icq_to_bic(icq);
++
++ bic->ttime.last_end_request = jiffies;
++}
++
++static void bfq_exit_icq(struct io_cq *icq)
++{
++ struct bfq_io_cq *bic = icq_to_bic(icq);
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++ if (bic->bfqq[BLK_RW_ASYNC]) {
++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
++ bic->bfqq[BLK_RW_ASYNC] = NULL;
++ }
++
++ if (bic->bfqq[BLK_RW_SYNC]) {
++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
++ bic->bfqq[BLK_RW_SYNC] = NULL;
++ }
++}
++
++/*
++ * Update the entity prio values; note that the new values will not
++ * be used until the next (re)activation.
++ */
++static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++ struct task_struct *tsk = current;
++ int ioprio_class;
++
++ ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++ switch (ioprio_class) {
++ default:
++ dev_err(bfqq->bfqd->queue->backing_dev_info.dev,
++ "bfq: bad prio class %d\n", ioprio_class);
++ case IOPRIO_CLASS_NONE:
++ /*
++ * No prio set, inherit CPU scheduling settings.
++ */
++ bfqq->entity.new_ioprio = task_nice_ioprio(tsk);
++ bfqq->entity.new_ioprio_class = task_nice_ioclass(tsk);
++ break;
++ case IOPRIO_CLASS_RT:
++ bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_RT;
++ break;
++ case IOPRIO_CLASS_BE:
++ bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_BE;
++ break;
++ case IOPRIO_CLASS_IDLE:
++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_IDLE;
++ bfqq->entity.new_ioprio = 7;
++ bfq_clear_bfqq_idle_window(bfqq);
++ break;
++ }
++
++ if (bfqq->entity.new_ioprio < 0 ||
++ bfqq->entity.new_ioprio >= IOPRIO_BE_NR) {
++ printk(KERN_CRIT "bfq_set_next_ioprio_data: new_ioprio %d\n",
++ bfqq->entity.new_ioprio);
++ BUG();
++ }
++
++ bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->entity.new_ioprio);
++ bfqq->entity.ioprio_changed = 1;
++}
++
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic)
++{
++ struct bfq_data *bfqd;
++ struct bfq_queue *bfqq, *new_bfqq;
++ struct bfq_group *bfqg;
++ unsigned long uninitialized_var(flags);
++ int ioprio = bic->icq.ioc->ioprio;
++
++ bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++ &flags);
++ /*
++	 * This condition may trigger on a newly created bic; be sure to
++ * drop the lock before returning.
++ */
++ if (unlikely(bfqd == NULL) || likely(bic->ioprio == ioprio))
++ goto out;
++
++ bic->ioprio = ioprio;
++
++ bfqq = bic->bfqq[BLK_RW_ASYNC];
++ if (bfqq != NULL) {
++ bfqg = container_of(bfqq->entity.sched_data, struct bfq_group,
++ sched_data);
++ new_bfqq = bfq_get_queue(bfqd, bfqg, BLK_RW_ASYNC, bic,
++ GFP_ATOMIC);
++ if (new_bfqq != NULL) {
++ bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
++ bfq_log_bfqq(bfqd, bfqq,
++ "check_ioprio_change: bfqq %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++ }
++
++ bfqq = bic->bfqq[BLK_RW_SYNC];
++ if (bfqq != NULL)
++ bfq_set_next_ioprio_data(bfqq, bic);
++
++out:
++ bfq_put_bfqd_unlock(bfqd, &flags);
++}
++
++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct bfq_io_cq *bic, pid_t pid, int is_sync)
++{
++ RB_CLEAR_NODE(&bfqq->entity.rb_node);
++ INIT_LIST_HEAD(&bfqq->fifo);
++ INIT_HLIST_NODE(&bfqq->burst_list_node);
++
++ atomic_set(&bfqq->ref, 0);
++ bfqq->bfqd = bfqd;
++
++ if (bic)
++ bfq_set_next_ioprio_data(bfqq, bic);
++
++ if (is_sync) {
++ if (!bfq_class_idle(bfqq))
++ bfq_mark_bfqq_idle_window(bfqq);
++ bfq_mark_bfqq_sync(bfqq);
++ }
++ bfq_mark_bfqq_IO_bound(bfqq);
++
++	/* Tentative initial value to trade off between throughput and latency */
++ bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
++ bfqq->pid = pid;
++
++ bfqq->wr_coeff = 1;
++ bfqq->last_wr_start_finish = 0;
++ /*
++	 * Set to the value for which bfqq will not be deemed
++	 * soft real-time when it becomes backlogged.
++ */
++ bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
++}
++
++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
++ struct bfq_group *bfqg,
++ int is_sync,
++ struct bfq_io_cq *bic,
++ gfp_t gfp_mask)
++{
++ struct bfq_queue *bfqq, *new_bfqq = NULL;
++
++retry:
++ /* bic always exists here */
++ bfqq = bic_to_bfqq(bic, is_sync);
++
++ /*
++	 * If we originally fell back to the OOM bfqq, always try a new
++	 * allocation, since that should just be a temporary situation.
++ */
++ if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++ bfqq = NULL;
++ if (new_bfqq != NULL) {
++ bfqq = new_bfqq;
++ new_bfqq = NULL;
++ } else if (gfp_mask & __GFP_WAIT) {
++ spin_unlock_irq(bfqd->queue->queue_lock);
++ new_bfqq = kmem_cache_alloc_node(bfq_pool,
++ gfp_mask | __GFP_ZERO,
++ bfqd->queue->node);
++ spin_lock_irq(bfqd->queue->queue_lock);
++ if (new_bfqq != NULL)
++ goto retry;
++ } else {
++ bfqq = kmem_cache_alloc_node(bfq_pool,
++ gfp_mask | __GFP_ZERO,
++ bfqd->queue->node);
++ }
++
++ if (bfqq != NULL) {
++ bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++ is_sync);
++ bfq_init_entity(&bfqq->entity, bfqg);
++ bfq_log_bfqq(bfqd, bfqq, "allocated");
++ } else {
++ bfqq = &bfqd->oom_bfqq;
++ bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++ }
++ }
++
++ if (new_bfqq != NULL)
++ kmem_cache_free(bfq_pool, new_bfqq);
++
++ return bfqq;
++}
++
++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
++ struct bfq_group *bfqg,
++ int ioprio_class, int ioprio)
++{
++ switch (ioprio_class) {
++ case IOPRIO_CLASS_RT:
++ return &bfqg->async_bfqq[0][ioprio];
++ case IOPRIO_CLASS_NONE:
++ ioprio = IOPRIO_NORM;
++ /* fall through */
++ case IOPRIO_CLASS_BE:
++ return &bfqg->async_bfqq[1][ioprio];
++ case IOPRIO_CLASS_IDLE:
++ return &bfqg->async_idle_bfqq;
++ default:
++ BUG();
++ }
++}
++
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++ struct bfq_group *bfqg, int is_sync,
++ struct bfq_io_cq *bic, gfp_t gfp_mask)
++{
++ const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++ struct bfq_queue **async_bfqq = NULL;
++ struct bfq_queue *bfqq = NULL;
++
++ if (!is_sync) {
++ async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
++ ioprio);
++ bfqq = *async_bfqq;
++ }
++
++ if (bfqq == NULL)
++ bfqq = bfq_find_alloc_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++
++ /*
++	 * Pin the queue now that it's allocated; scheduler exit will
++ * prune it.
++ */
++ if (!is_sync && *async_bfqq == NULL) {
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ *async_bfqq = bfqq;
++ }
++
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++ return bfqq;
++}
++
++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic)
++{
++ unsigned long elapsed = jiffies - bic->ttime.last_end_request;
++ unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++
++ bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++ bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
++ bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
++ bic->ttime.ttime_samples;
++}
++
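The think-time statistics above are fixed-point exponentially weighted moving averages: samples and totals are kept scaled by 256 so the mean survives integer division, and each new observation replaces one eighth of the old state. A minimal user-space sketch of the same arithmetic (the names and demo values are illustrative, not part of the patch):

#include <stdio.h>

struct ewma { unsigned long samples, total; };

/* Same fixed-point update as bfq_update_io_thinktime():
 * state scaled by 256, decay factor 7/8 per observation. */
static void ewma_add(struct ewma *e, unsigned long val)
{
	e->samples = (7 * e->samples + 256) / 8;
	e->total   = (7 * e->total + 256 * val) / 8;
}

static unsigned long ewma_mean(const struct ewma *e)
{
	return (e->total + e->samples / 2) / e->samples;
}

int main(void)
{
	struct ewma e = { 0, 0 };
	unsigned long v[] = { 4, 4, 4, 40 };	/* one outlier */

	for (int i = 0; i < 4; i++) {
		ewma_add(&e, v[i]);
		printf("after %2lu: mean %lu\n", v[i], ewma_mean(&e));
	}
	return 0;
}

Because the decay is 7/8 per sample, the single outlier shifts the mean only gradually, which is what keeps the idling decision stable.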
++static void bfq_update_io_seektime(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct request *rq)
++{
++ sector_t sdist;
++ u64 total;
++
++ if (bfqq->last_request_pos < blk_rq_pos(rq))
++ sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
++ else
++ sdist = bfqq->last_request_pos - blk_rq_pos(rq);
++
++ /*
++ * Don't allow the seek distance to get too large from the
++ * odd fragment, pagein, etc.
++ */
++ if (bfqq->seek_samples == 0) /* first request, not really a seek */
++ sdist = 0;
++ else if (bfqq->seek_samples <= 60) /* second & third seek */
++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
++ else
++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
++
++ bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
++ bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
++ total = bfqq->seek_total + (bfqq->seek_samples/2);
++ do_div(total, bfqq->seek_samples);
++ bfqq->seek_mean = (sector_t)total;
++
++ bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
++ (u64)bfqq->seek_mean);
++}
++
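The seek-distance average uses the same 7/8 fixed-point decay, but each raw sample is first clamped to roughly four times the current mean, so one pathological seek (an odd pagein, for example) cannot swamp the history. A user-space sketch of that clamping, with the constants copied from the code above and units in sectors (the helper name and demo values are illustrative):

#include <stdio.h>

/* Cap a new seek sample as in bfq_update_io_seektime(): a looser
 * bound for the first few samples, a tighter one once the mean
 * is trustworthy. */
static unsigned long long clamp_seek(unsigned long long sdist,
				     unsigned long long mean,
				     unsigned int samples)
{
	unsigned long long cap;

	if (samples == 0)
		return 0;	/* first request: not really a seek */
	cap = samples <= 60 ? mean * 4 + 2ULL * 1024 * 1024
			    : mean * 4 + 2ULL * 1024 * 64;
	return sdist < cap ? sdist : cap;
}

int main(void)
{
	/* A one-off multi-gigabyte jump barely moves a small mean. */
	printf("%llu\n", clamp_seek(1ULL << 32, 512, 100));
	return 0;
}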
++/*
++ * Disable idle window if the process thinks too long or seeks so much that
++ * it doesn't matter.
++ */
++static void bfq_update_idle_window(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct bfq_io_cq *bic)
++{
++ int enable_idle;
++
++ /* Don't idle for async or idle io prio class. */
++ if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
++ return;
++
++ enable_idle = bfq_bfqq_idle_window(bfqq);
++
++ if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
++ bfqd->bfq_slice_idle == 0 ||
++ (bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
++ bfqq->wr_coeff == 1))
++ enable_idle = 0;
++ else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
++ if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
++ bfqq->wr_coeff == 1)
++ enable_idle = 0;
++ else
++ enable_idle = 1;
++ }
++ bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
++ enable_idle);
++
++ if (enable_idle)
++ bfq_mark_bfqq_idle_window(bfqq);
++ else
++ bfq_clear_bfqq_idle_window(bfqq);
++}
++
++/*
++ * Called when a new fs request (rq) is added to bfqq. Check if there's
++ * something we should do about it.
++ */
++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct request *rq)
++{
++ struct bfq_io_cq *bic = RQ_BIC(rq);
++
++ if (rq->cmd_flags & REQ_META)
++ bfqq->meta_pending++;
++
++ bfq_update_io_thinktime(bfqd, bic);
++ bfq_update_io_seektime(bfqd, bfqq, rq);
++ if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
++ bfq_clear_bfqq_constantly_seeky(bfqq);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
++ !BFQQ_SEEKY(bfqq))
++ bfq_update_idle_window(bfqd, bfqq, bic);
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
++ bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
++ (long long unsigned)bfqq->seek_mean);
++
++ bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
++
++ if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) {
++ int small_req = bfqq->queued[rq_is_sync(rq)] == 1 &&
++ blk_rq_sectors(rq) < 32;
++ int budget_timeout = bfq_bfqq_budget_timeout(bfqq);
++
++ /*
++ * There is just this request queued: if the request
++ * is small and the queue is not to be expired, then
++ * just exit.
++ *
++ * In this way, if the disk is being idled to wait for
++ * a new request from the in-service queue, we avoid
++ * unplugging the device and committing the disk to serve
++ * just a small request. On the contrary, we wait for
++ * the block layer to decide when to unplug the device:
++ * hopefully, new requests will be merged to this one
++ * quickly, then the device will be unplugged and
++ * larger requests will be dispatched.
++ */
++ if (small_req && !budget_timeout)
++ return;
++
++ /*
++ * A large enough request arrived, or the queue is to
++ * be expired: in both cases disk idling is to be
++ * stopped, so clear wait_request flag and reset
++ * timer.
++ */
++ bfq_clear_bfqq_wait_request(bfqq);
++ del_timer(&bfqd->idle_slice_timer);
++
++ /*
++ * The queue is not empty, because a new request just
++ * arrived. Hence we can safely expire the queue, in
++ * case of budget timeout, without risking that the
++ * timestamps of the queue are not updated correctly.
++ * See [1] for more details.
++ */
++ if (budget_timeout)
++ bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++
++ /*
++ * Let the request rip immediately, or let a new queue be
++ * selected if bfqq has just been expired.
++ */
++ __blk_run_queue(bfqd->queue);
++ }
++}
++
++static void bfq_insert_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ assert_spin_locked(bfqd->queue->queue_lock);
++
++ bfq_add_request(rq);
++
++ rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++ list_add_tail(&rq->queuelist, &bfqq->fifo);
++
++ bfq_rq_enqueued(bfqd, bfqq, rq);
++}
++
++static void bfq_update_hw_tag(struct bfq_data *bfqd)
++{
++ bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
++ bfqd->rq_in_driver);
++
++ if (bfqd->hw_tag == 1)
++ return;
++
++ /*
++ * This sample is valid if the number of outstanding requests
++ * is large enough to allow a queueing behavior. Note that the
++ * sum is not exact, as it's not taking into account deactivated
++ * requests.
++ */
++ if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
++ return;
++
++ if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
++ return;
++
++ bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
++ bfqd->max_rq_in_driver = 0;
++ bfqd->hw_tag_samples = 0;
++}
++
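bfq_update_hw_tag() implements a simple sampling heuristic: observations count only while the device is loaded enough for queueing to be visible, and after enough valid samples hw_tag is set iff the peak number of in-driver requests ever exceeded the threshold. A stand-alone sketch of the same state machine (the constants are assumed to match the BFQ_HW_QUEUE_* values used elsewhere in the patch; the struct is hypothetical):

#include <stdio.h>

#define HW_QUEUE_THRESHOLD 4	/* assumed BFQ_HW_QUEUE_THRESHOLD */
#define HW_QUEUE_SAMPLES   32	/* assumed BFQ_HW_QUEUE_SAMPLES */

struct hw_tag { int decided, tag, samples, max_in_driver; };

/* Feed one snapshot of (in-driver, queued) request counts. */
static void hw_tag_sample(struct hw_tag *h, int in_driver, int queued)
{
	if (in_driver > h->max_in_driver)
		h->max_in_driver = in_driver;
	if (h->decided)
		return;
	if (in_driver + queued < HW_QUEUE_THRESHOLD)
		return;			/* too little load: invalid sample */
	if (++h->samples < HW_QUEUE_SAMPLES)
		return;
	h->tag = h->max_in_driver > HW_QUEUE_THRESHOLD;
	h->decided = h->tag;		/* a negative verdict stays open */
	h->samples = h->max_in_driver = 0;
}

int main(void)
{
	struct hw_tag h = { 0, 0, 0, 0 };

	for (int i = 0; i < 40; i++)
		hw_tag_sample(&h, 8, 2);	/* deep queue: NCQ-like */
	printf("hw_tag = %d\n", h.tag);		/* prints 1 */
	return 0;
}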
++static void bfq_completed_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ bool sync = bfq_bfqq_sync(bfqq);
++
++ bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
++ blk_rq_sectors(rq), sync);
++
++ bfq_update_hw_tag(bfqd);
++
++ BUG_ON(!bfqd->rq_in_driver);
++ BUG_ON(!bfqq->dispatched);
++ bfqd->rq_in_driver--;
++ bfqq->dispatched--;
++
++ if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++ bfq_weights_tree_remove(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->busy_in_flight_queues);
++ bfqd->busy_in_flight_queues--;
++ if (bfq_bfqq_constantly_seeky(bfqq)) {
++ BUG_ON(!bfqd->
++ const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ }
++
++ if (sync) {
++ bfqd->sync_flight--;
++ RQ_BIC(rq)->ttime.last_end_request = jiffies;
++ }
++
++ /*
++ * If we are waiting to discover whether the request pattern of the
++ * task associated with the queue is actually isochronous, and
++ * both requisites for this condition to hold are satisfied, then
++ * compute soft_rt_next_start (see the comments to the function
++ * bfq_bfqq_softrt_next_start()).
++ */
++ if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
++ RB_EMPTY_ROOT(&bfqq->sort_list))
++ bfqq->soft_rt_next_start =
++ bfq_bfqq_softrt_next_start(bfqd, bfqq);
++
++ /*
++ * If this is the in-service queue, check if it needs to be expired,
++ * or if we want to idle in case it has no pending requests.
++ */
++ if (bfqd->in_service_queue == bfqq) {
++ if (bfq_bfqq_budget_new(bfqq))
++ bfq_set_budget_timeout(bfqd);
++
++ if (bfq_bfqq_must_idle(bfqq)) {
++ bfq_arm_slice_timer(bfqd);
++ goto out;
++ } else if (bfq_may_expire_for_budg_timeout(bfqq))
++ bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
++ else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
++ (bfqq->dispatched == 0 ||
++ !bfq_bfqq_must_not_expire(bfqq)))
++ bfq_bfqq_expire(bfqd, bfqq, 0,
++ BFQ_BFQQ_NO_MORE_REQUESTS);
++ }
++
++ if (!bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++
++out:
++ return;
++}
++
++static inline int __bfq_may_queue(struct bfq_queue *bfqq)
++{
++ if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
++ bfq_clear_bfqq_must_alloc(bfqq);
++ return ELV_MQUEUE_MUST;
++ }
++
++ return ELV_MQUEUE_MAY;
++}
++
++static int bfq_may_queue(struct request_queue *q, int rw)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct task_struct *tsk = current;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq;
++
++ /*
++ * Don't force setup of a queue from here, as a call to may_queue
++ * does not necessarily imply that a request actually will be
++	 * queued. So just look up a possibly existing queue, or return
++ * 'may queue' if that fails.
++ */
++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
++ if (bic == NULL)
++ return ELV_MQUEUE_MAY;
++
++ bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++ if (bfqq != NULL)
++ return __bfq_may_queue(bfqq);
++
++ return ELV_MQUEUE_MAY;
++}
++
++/*
++ * Queue lock held here.
++ */
++static void bfq_put_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ if (bfqq != NULL) {
++ const int rw = rq_data_dir(rq);
++
++ BUG_ON(!bfqq->allocated[rw]);
++ bfqq->allocated[rw]--;
++
++ rq->elv.priv[0] = NULL;
++ rq->elv.priv[1] = NULL;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++}
++
++static struct bfq_queue *
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++ struct bfq_queue *bfqq)
++{
++ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++ (long unsigned)bfqq->new_bfqq->pid);
++ bic_set_bfqq(bic, bfqq->new_bfqq, 1);
++ bfq_mark_bfqq_coop(bfqq->new_bfqq);
++ bfq_put_queue(bfqq);
++ return bic_to_bfqq(bic, 1);
++}
++
++/*
++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
++ * was the last process referring to said bfqq.
++ */
++static struct bfq_queue *
++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
++{
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++ if (bfqq_process_refs(bfqq) == 1) {
++ bfqq->pid = current->pid;
++ bfq_clear_bfqq_coop(bfqq);
++ bfq_clear_bfqq_split_coop(bfqq);
++ return bfqq;
++ }
++
++ bic_set_bfqq(bic, NULL, 1);
++
++ bfq_put_cooperator(bfqq);
++
++ bfq_put_queue(bfqq);
++ return NULL;
++}
++
++/*
++ * Allocate bfq data structures associated with this request.
++ */
++static int bfq_set_request(struct request_queue *q, struct request *rq,
++ struct bio *bio, gfp_t gfp_mask)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
++ const int rw = rq_data_dir(rq);
++ const int is_sync = rq_is_sync(rq);
++ struct bfq_queue *bfqq;
++ struct bfq_group *bfqg;
++ unsigned long flags;
++
++ might_sleep_if(gfp_mask & __GFP_WAIT);
++
++ bfq_check_ioprio_change(bic);
++
++ spin_lock_irqsave(q->queue_lock, flags);
++
++ if (bic == NULL)
++ goto queue_fail;
++
++ bfqg = bfq_bic_update_cgroup(bic);
++
++new_queue:
++ bfqq = bic_to_bfqq(bic, is_sync);
++ if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
++ bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
++ bic_set_bfqq(bic, bfqq, is_sync);
++ } else {
++ /*
++ * If the queue was seeky for too long, break it apart.
++ */
++ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
++ bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++ bfqq = bfq_split_bfqq(bic, bfqq);
++ if (!bfqq)
++ goto new_queue;
++ }
++
++ /*
++ * Check to see if this queue is scheduled to merge with
++ * another closely cooperating queue. The merging of queues
++ * happens here as it must be done in process context.
++ * The reference on new_bfqq was taken in merge_bfqqs.
++ */
++ if (bfqq->new_bfqq != NULL)
++ bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
++ }
++
++ bfqq->allocated[rw]++;
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++
++ rq->elv.priv[0] = bic;
++ rq->elv.priv[1] = bfqq;
++
++ spin_unlock_irqrestore(q->queue_lock, flags);
++
++ return 0;
++
++queue_fail:
++ bfq_schedule_dispatch(bfqd);
++ spin_unlock_irqrestore(q->queue_lock, flags);
++
++ return 1;
++}
++
++static void bfq_kick_queue(struct work_struct *work)
++{
++ struct bfq_data *bfqd =
++ container_of(work, struct bfq_data, unplug_work);
++ struct request_queue *q = bfqd->queue;
++
++ spin_lock_irq(q->queue_lock);
++ __blk_run_queue(q);
++ spin_unlock_irq(q->queue_lock);
++}
++
++/*
++ * Handler of the expiration of the timer running if the in-service queue
++ * is idling inside its time slice.
++ */
++static void bfq_idle_slice_timer(unsigned long data)
++{
++ struct bfq_data *bfqd = (struct bfq_data *)data;
++ struct bfq_queue *bfqq;
++ unsigned long flags;
++ enum bfqq_expiration reason;
++
++ spin_lock_irqsave(bfqd->queue->queue_lock, flags);
++
++ bfqq = bfqd->in_service_queue;
++ /*
++ * Theoretical race here: the in-service queue can be NULL or
++ * different from the queue that was idling if the timer handler
++ * spins on the queue_lock and a new request arrives for the
++ * current queue and there is a full dispatch cycle that changes
++	 * the in-service queue. This hardly ever happens, but in the worst
++ * case we just expire a queue too early.
++ */
++ if (bfqq != NULL) {
++ bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++ if (bfq_bfqq_budget_timeout(bfqq))
++ /*
++ * Also here the queue can be safely expired
++ * for budget timeout without wasting
++			 * guarantees.
++ */
++ reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++ else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
++ /*
++ * The queue may not be empty upon timer expiration,
++ * because we may not disable the timer when the
++ * first request of the in-service queue arrives
++ * during disk idling.
++ */
++ reason = BFQ_BFQQ_TOO_IDLE;
++ else
++ goto schedule_dispatch;
++
++ bfq_bfqq_expire(bfqd, bfqq, 1, reason);
++ }
++
++schedule_dispatch:
++ bfq_schedule_dispatch(bfqd);
++
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++}
++
++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
++{
++ del_timer_sync(&bfqd->idle_slice_timer);
++ cancel_work_sync(&bfqd->unplug_work);
++}
++
++static inline void __bfq_put_async_bfqq(struct bfq_data *bfqd,
++ struct bfq_queue **bfqq_ptr)
++{
++ struct bfq_group *root_group = bfqd->root_group;
++ struct bfq_queue *bfqq = *bfqq_ptr;
++
++ bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
++ if (bfqq != NULL) {
++ bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++ bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ *bfqq_ptr = NULL;
++ }
++}
++
++/*
++ * Release all the bfqg references to its async queues. If we are
++ * deallocating the group these queues may still contain requests, so
++ * we reparent them to the root cgroup (i.e., the only one that will
++ * exist for sure until all the requests on a device are gone).
++ */
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
++{
++ int i, j;
++
++ for (i = 0; i < 2; i++)
++ for (j = 0; j < IOPRIO_BE_NR; j++)
++ __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
++
++ __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
++}
++
++static void bfq_exit_queue(struct elevator_queue *e)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ struct request_queue *q = bfqd->queue;
++ struct bfq_queue *bfqq, *n;
++
++ bfq_shutdown_timer_wq(bfqd);
++
++ spin_lock_irq(q->queue_lock);
++
++ BUG_ON(bfqd->in_service_queue != NULL);
++ list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
++
++ bfq_disconnect_groups(bfqd);
++ spin_unlock_irq(q->queue_lock);
++
++ bfq_shutdown_timer_wq(bfqd);
++
++ synchronize_rcu();
++
++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++ bfq_free_root_group(bfqd);
++ kfree(bfqd);
++}
++
++static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++{
++ struct bfq_group *bfqg;
++ struct bfq_data *bfqd;
++ struct elevator_queue *eq;
++
++ eq = elevator_alloc(q, e);
++ if (eq == NULL)
++ return -ENOMEM;
++
++ bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
++ if (bfqd == NULL) {
++ kobject_put(&eq->kobj);
++ return -ENOMEM;
++ }
++ eq->elevator_data = bfqd;
++
++ /*
++ * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
++ * Grab a permanent reference to it, so that the normal code flow
++ * will not attempt to free it.
++ */
++ bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
++ atomic_inc(&bfqd->oom_bfqq.ref);
++ bfqd->oom_bfqq.entity.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
++ bfqd->oom_bfqq.entity.new_ioprio_class = IOPRIO_CLASS_BE;
++ bfqd->oom_bfqq.entity.new_weight =
++ bfq_ioprio_to_weight(bfqd->oom_bfqq.entity.new_ioprio);
++ /*
++ * Trigger weight initialization, according to ioprio, at the
++ * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
++ * class won't be changed any more.
++ */
++ bfqd->oom_bfqq.entity.ioprio_changed = 1;
++
++ bfqd->queue = q;
++
++ spin_lock_irq(q->queue_lock);
++ q->elevator = eq;
++ spin_unlock_irq(q->queue_lock);
++
++ bfqg = bfq_alloc_root_group(bfqd, q->node);
++ if (bfqg == NULL) {
++ kfree(bfqd);
++ kobject_put(&eq->kobj);
++ return -ENOMEM;
++ }
++
++ bfqd->root_group = bfqg;
++ bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
++#ifdef CONFIG_CGROUP_BFQIO
++ bfqd->active_numerous_groups = 0;
++#endif
++
++ init_timer(&bfqd->idle_slice_timer);
++ bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
++ bfqd->idle_slice_timer.data = (unsigned long)bfqd;
++
++ bfqd->rq_pos_tree = RB_ROOT;
++ bfqd->queue_weights_tree = RB_ROOT;
++ bfqd->group_weights_tree = RB_ROOT;
++
++ INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
++
++ INIT_LIST_HEAD(&bfqd->active_list);
++ INIT_LIST_HEAD(&bfqd->idle_list);
++ INIT_HLIST_HEAD(&bfqd->burst_list);
++
++ bfqd->hw_tag = -1;
++
++ bfqd->bfq_max_budget = bfq_default_max_budget;
++
++ bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
++ bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
++ bfqd->bfq_back_max = bfq_back_max;
++ bfqd->bfq_back_penalty = bfq_back_penalty;
++ bfqd->bfq_slice_idle = bfq_slice_idle;
++ bfqd->bfq_class_idle_last_service = 0;
++ bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
++ bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
++ bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++
++ bfqd->bfq_coop_thresh = 2;
++ bfqd->bfq_failed_cooperations = 7000;
++ bfqd->bfq_requests_within_timer = 120;
++
++ bfqd->bfq_large_burst_thresh = 11;
++ bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++
++ bfqd->low_latency = true;
++
++ bfqd->bfq_wr_coeff = 20;
++ bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
++ bfqd->bfq_wr_max_time = 0;
++ bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
++ bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500);
++ bfqd->bfq_wr_max_softrt_rate = 7000; /*
++ * Approximate rate required
++					      * to play back or record a
++ * high-definition compressed
++ * video.
++ */
++ bfqd->wr_busy_queues = 0;
++ bfqd->busy_in_flight_queues = 0;
++ bfqd->const_seeky_busy_in_flight_queues = 0;
++
++ /*
++ * Begin by assuming, optimistically, that the device peak rate is
++ * equal to the highest reference rate.
++ */
++ bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
++ T_fast[blk_queue_nonrot(bfqd->queue)];
++ bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++ bfqd->device_speed = BFQ_BFQD_FAST;
++
++ return 0;
++}
++
++static void bfq_slab_kill(void)
++{
++ if (bfq_pool != NULL)
++ kmem_cache_destroy(bfq_pool);
++}
++
++static int __init bfq_slab_setup(void)
++{
++ bfq_pool = KMEM_CACHE(bfq_queue, 0);
++ if (bfq_pool == NULL)
++ return -ENOMEM;
++ return 0;
++}
++
++static ssize_t bfq_var_show(unsigned int var, char *page)
++{
++ return sprintf(page, "%d\n", var);
++}
++
++static ssize_t bfq_var_store(unsigned long *var, const char *page,
++ size_t count)
++{
++ unsigned long new_val;
++ int ret = kstrtoul(page, 10, &new_val);
++
++ if (ret == 0)
++ *var = new_val;
++
++ return count;
++}
++
++static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ?
++ jiffies_to_msecs(bfqd->bfq_wr_max_time) :
++ jiffies_to_msecs(bfq_wr_duration(bfqd)));
++}
++
++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
++{
++ struct bfq_queue *bfqq;
++ struct bfq_data *bfqd = e->elevator_data;
++ ssize_t num_char = 0;
++
++ num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
++ bfqd->queued);
++
++ spin_lock_irq(bfqd->queue->queue_lock);
++
++ num_char += sprintf(page + num_char, "Active:\n");
++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
++ num_char += sprintf(page + num_char,
++ "pid%d: weight %hu, nr_queued %d %d, dur %d/%u\n",
++ bfqq->pid,
++ bfqq->entity.weight,
++ bfqq->queued[0],
++ bfqq->queued[1],
++ jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++
++ num_char += sprintf(page + num_char, "Idle:\n");
++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
++ num_char += sprintf(page + num_char,
++ "pid%d: weight %hu, dur %d/%u\n",
++ bfqq->pid,
++ bfqq->entity.weight,
++ jiffies_to_msecs(jiffies -
++ bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++
++ spin_unlock_irq(bfqd->queue->queue_lock);
++
++ return num_char;
++}
++
++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \
++static ssize_t __FUNC(struct elevator_queue *e, char *page) \
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ unsigned int __data = __VAR; \
++ if (__CONV) \
++ __data = jiffies_to_msecs(__data); \
++ return bfq_var_show(__data, (page)); \
++}
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
++SHOW_FUNCTION(bfq_max_budget_async_rq_show,
++ bfqd->bfq_max_budget_async_rq, 0);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
++SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
++SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
++SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1);
++SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
++ 1);
++SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
++static ssize_t \
++__FUNC(struct elevator_queue *e, const char *page, size_t count) \
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ unsigned long uninitialized_var(__data); \
++ int ret = bfq_var_store(&__data, (page), count); \
++ if (__data < (MIN)) \
++ __data = (MIN); \
++ else if (__data > (MAX)) \
++ __data = (MAX); \
++ if (__CONV) \
++ *(__PTR) = msecs_to_jiffies(__data); \
++ else \
++ *(__PTR) = __data; \
++ return ret; \
++}
++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
++ INT_MAX, 0);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
++ 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
++ 1);
++STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_min_inter_arr_async_store,
++ &bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
++ INT_MAX, 0);
++#undef STORE_FUNCTION
++
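The SHOW_FUNCTION/STORE_FUNCTION macros above stamp out one sysfs handler per tunable, and BFQ_ATTR further down pastes each name into its matching _show/_store pair; the generated pairs then surface as files under the device's iosched sysfs directory. A self-contained sketch of the same token-pasting pattern (the struct and names here are hypothetical, not the elevator API):

#include <stdio.h>

struct attr { const char *name; void (*show)(void); };

/* Generate a family of handlers differing only in the pasted name. */
#define DEF_SHOW(__name)				\
static void __name##_show(void)				\
{							\
	printf("%s\n", #__name);			\
}

DEF_SHOW(slice_idle)
DEF_SHOW(low_latency)

#define ATTR(name) { #name, name##_show }

int main(void)
{
	struct attr attrs[] = { ATTR(slice_idle), ATTR(low_latency) };

	for (unsigned i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		attrs[i].show();
	return 0;
}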
++/* do nothing for the moment */
++static ssize_t bfq_weights_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ return count;
++}
++
++static inline unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
++{
++ u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++ if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
++ return bfq_calc_max_budget(bfqd->peak_rate, timeout);
++ else
++ return bfq_default_max_budget;
++}
++
++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data == 0)
++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++ else {
++ if (__data > INT_MAX)
++ __data = INT_MAX;
++ bfqd->bfq_max_budget = __data;
++ }
++
++ bfqd->bfq_user_max_budget = __data;
++
++ return ret;
++}
++
++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data < 1)
++ __data = 1;
++ else if (__data > INT_MAX)
++ __data = INT_MAX;
++
++ bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++ if (bfqd->bfq_user_max_budget == 0)
++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++
++ return ret;
++}
++
++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data > 1)
++ __data = 1;
++ if (__data == 0 && bfqd->low_latency != 0)
++ bfq_end_wr(bfqd);
++ bfqd->low_latency = __data;
++
++ return ret;
++}
++
++#define BFQ_ATTR(name) \
++ __ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
++
++static struct elv_fs_entry bfq_attrs[] = {
++ BFQ_ATTR(fifo_expire_sync),
++ BFQ_ATTR(fifo_expire_async),
++ BFQ_ATTR(back_seek_max),
++ BFQ_ATTR(back_seek_penalty),
++ BFQ_ATTR(slice_idle),
++ BFQ_ATTR(max_budget),
++ BFQ_ATTR(max_budget_async_rq),
++ BFQ_ATTR(timeout_sync),
++ BFQ_ATTR(timeout_async),
++ BFQ_ATTR(low_latency),
++ BFQ_ATTR(wr_coeff),
++ BFQ_ATTR(wr_max_time),
++ BFQ_ATTR(wr_rt_max_time),
++ BFQ_ATTR(wr_min_idle_time),
++ BFQ_ATTR(wr_min_inter_arr_async),
++ BFQ_ATTR(wr_max_softrt_rate),
++ BFQ_ATTR(weights),
++ __ATTR_NULL
++};
++
++static struct elevator_type iosched_bfq = {
++ .ops = {
++ .elevator_merge_fn = bfq_merge,
++ .elevator_merged_fn = bfq_merged_request,
++ .elevator_merge_req_fn = bfq_merged_requests,
++ .elevator_allow_merge_fn = bfq_allow_merge,
++ .elevator_dispatch_fn = bfq_dispatch_requests,
++ .elevator_add_req_fn = bfq_insert_request,
++ .elevator_activate_req_fn = bfq_activate_request,
++ .elevator_deactivate_req_fn = bfq_deactivate_request,
++ .elevator_completed_req_fn = bfq_completed_request,
++ .elevator_former_req_fn = elv_rb_former_request,
++ .elevator_latter_req_fn = elv_rb_latter_request,
++ .elevator_init_icq_fn = bfq_init_icq,
++ .elevator_exit_icq_fn = bfq_exit_icq,
++ .elevator_set_req_fn = bfq_set_request,
++ .elevator_put_req_fn = bfq_put_request,
++ .elevator_may_queue_fn = bfq_may_queue,
++ .elevator_init_fn = bfq_init_queue,
++ .elevator_exit_fn = bfq_exit_queue,
++ },
++ .icq_size = sizeof(struct bfq_io_cq),
++ .icq_align = __alignof__(struct bfq_io_cq),
++ .elevator_attrs = bfq_attrs,
++ .elevator_name = "bfq",
++ .elevator_owner = THIS_MODULE,
++};
++
++static int __init bfq_init(void)
++{
++ /*
++ * Can be 0 on HZ < 1000 setups.
++ */
++ if (bfq_slice_idle == 0)
++ bfq_slice_idle = 1;
++
++ if (bfq_timeout_async == 0)
++ bfq_timeout_async = 1;
++
++ if (bfq_slab_setup())
++ return -ENOMEM;
++
++ /*
++ * Times to load large popular applications for the typical systems
++ * installed on the reference devices (see the comments before the
++ * definitions of the two arrays).
++ */
++ T_slow[0] = msecs_to_jiffies(2600);
++ T_slow[1] = msecs_to_jiffies(1000);
++ T_fast[0] = msecs_to_jiffies(5500);
++ T_fast[1] = msecs_to_jiffies(2000);
++
++ /*
++ * Thresholds that determine the switch between speed classes (see
++ * the comments before the definition of the array).
++ */
++ device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
++ device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++
++ elv_register(&iosched_bfq);
++ pr_info("BFQ I/O-scheduler: v7r8");
++
++ return 0;
++}
++
++static void __exit bfq_exit(void)
++{
++ elv_unregister(&iosched_bfq);
++ bfq_slab_kill();
++}
++
++module_init(bfq_init);
++module_exit(bfq_exit);
++
++MODULE_AUTHOR("Fabio Checconi, Paolo Valente");
++MODULE_LICENSE("GPL");
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+new file mode 100644
+index 0000000..c343099
+--- /dev/null
++++ b/block/bfq-sched.c
+@@ -0,0 +1,1208 @@
++/*
++ * BFQ: Hierarchical B-WF2Q+ scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifdef CONFIG_CGROUP_BFQIO
++#define for_each_entity(entity) \
++ for (; entity != NULL; entity = entity->parent)
++
++#define for_each_entity_safe(entity, parent) \
++ for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++ int extract,
++ struct bfq_data *bfqd);
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++ struct bfq_entity *bfqg_entity;
++ struct bfq_group *bfqg;
++ struct bfq_sched_data *group_sd;
++
++ BUG_ON(next_in_service == NULL);
++
++ group_sd = next_in_service->sched_data;
++
++ bfqg = container_of(group_sd, struct bfq_group, sched_data);
++ /*
++ * bfq_group's my_entity field is not NULL only if the group
++ * is not the root group. We must not touch the root entity
++ * as it must never become an in-service entity.
++ */
++ bfqg_entity = bfqg->my_entity;
++ if (bfqg_entity != NULL)
++ bfqg_entity->budget = next_in_service->budget;
++}
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++ struct bfq_entity *next_in_service;
++
++ if (sd->in_service_entity != NULL)
++ /* will update/requeue at the end of service */
++ return 0;
++
++ /*
++ * NOTE: this can be improved in many ways, such as returning
++ * 1 (and thus propagating upwards the update) only when the
++ * budget changes, or caching the bfqq that will be scheduled
++	 * next from this subtree. For now we worry more about
++ * correctness than about performance...
++ */
++ next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
++ sd->next_in_service = next_in_service;
++
++ if (next_in_service != NULL)
++ bfq_update_budget(next_in_service);
++
++ return 1;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++ struct bfq_entity *entity)
++{
++ BUG_ON(sd->next_in_service != entity);
++}
++#else
++#define for_each_entity(entity) \
++ for (; entity != NULL; entity = NULL)
++
++#define for_each_entity_safe(entity, parent) \
++ for (parent = NULL; entity != NULL; entity = parent)
++
++static inline int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++ return 0;
++}
++
++static inline void bfq_check_next_in_service(struct bfq_sched_data *sd,
++ struct bfq_entity *entity)
++{
++}
++
++static inline void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++}
++#endif
++
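Note how the non-cgroup stubs collapse the hierarchy at compile time: for_each_entity() visits exactly one entity, because the loop step nulls the cursor after the first pass, so the same walking code serves both the flat and the cgroup builds. A compilable illustration of that degenerate loop (the entity type here is hypothetical):

#include <stdio.h>

struct entity { const char *name; struct entity *parent; };

/* Flat-build variant: visit the entity itself, then stop. */
#define for_each_entity(entity) \
	for (; entity != NULL; entity = NULL)

int main(void)
{
	struct entity leaf = { "leaf", NULL };
	struct entity *e = &leaf;
	int visits = 0;

	for_each_entity(e)
		visits++;
	printf("visited %d entity\n", visits);	/* prints 1 */
	return 0;
}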
++/*
++ * Shift for timestamp calculations. This actually limits the maximum
++ * service allowed in one timestamp delta (small shift values increase it),
++ * the maximum total weight that can be used for the queues in the system
++ * (big shift values increase it), and the period of virtual time
++ * wraparounds.
++ */
++#define WFQ_SERVICE_SHIFT 22
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
++ */
++static inline int bfq_gt(u64 a, u64 b)
++{
++ return (s64)(a - b) > 0;
++}
++
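The signed-difference trick in bfq_gt() keeps timestamp comparisons correct across 64-bit wraparound: @a is considered later than @b whenever it is at most 2^63 - 1 ahead, even if the raw counter has wrapped past zero. A quick user-space check (the helper name is illustrative):

#include <stdint.h>
#include <stdio.h>

/* Wraparound-safe "a > b", as in bfq_gt() above. */
static int ts_gt(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) > 0;
}

int main(void)
{
	uint64_t near_wrap = UINT64_MAX - 5;

	/* 10 ticks later, even though the raw value wrapped to 4. */
	printf("%d\n", ts_gt(near_wrap + 10, near_wrap));	/* 1 */
	printf("%d\n", ts_gt(near_wrap, near_wrap + 10));	/* 0 */
	return 0;
}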
++static inline struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = NULL;
++
++ BUG_ON(entity == NULL);
++
++ if (entity->my_sched_data == NULL)
++ bfqq = container_of(entity, struct bfq_queue, entity);
++
++ return bfqq;
++}
++
++
++/**
++ * bfq_delta - map service into the virtual time domain.
++ * @service: amount of service.
++ * @weight: scale factor (weight of an entity or weight sum).
++ */
++static inline u64 bfq_delta(unsigned long service,
++ unsigned long weight)
++{
++ u64 d = (u64)service << WFQ_SERVICE_SHIFT;
++
++ do_div(d, weight);
++ return d;
++}
++
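So the virtual-time cost of a service amount is service * 2^WFQ_SERVICE_SHIFT / weight: doubling an entity's weight halves how fast it consumes virtual time, which is exactly what gives it twice the bandwidth share. A hypothetical user-space check of the arithmetic:

#include <stdint.h>
#include <stdio.h>

#define WFQ_SERVICE_SHIFT 22

/* Same mapping as bfq_delta(), without the do_div() helper. */
static uint64_t vdelta(unsigned long service, unsigned long weight)
{
	return ((uint64_t)service << WFQ_SERVICE_SHIFT) / weight;
}

int main(void)
{
	/* Equal service at double the weight: half the vtime advance. */
	printf("%llu\n", (unsigned long long)vdelta(4096, 10));
	printf("%llu\n", (unsigned long long)vdelta(4096, 20));
	return 0;
}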
++/**
++ * bfq_calc_finish - assign the finish time to an entity.
++ * @entity: the entity to act upon.
++ * @service: the service to be charged to the entity.
++ */
++static inline void bfq_calc_finish(struct bfq_entity *entity,
++ unsigned long service)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ BUG_ON(entity->weight == 0);
++
++ entity->finish = entity->start +
++ bfq_delta(service, entity->weight);
++
++ if (bfqq != NULL) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "calc_finish: serv %lu, w %d",
++ service, entity->weight);
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "calc_finish: start %llu, finish %llu, delta %llu",
++ entity->start, entity->finish,
++ bfq_delta(service, entity->weight));
++ }
++}
++
++/**
++ * bfq_entity_of - get an entity from a node.
++ * @node: the node field of the entity.
++ *
++ * Convert a node pointer to the corresponding entity. This is used only
++ * to simplify the logic of some functions and not as the generic
++ * conversion mechanism because, e.g., in the tree walking functions,
++ * the check for a %NULL value would be redundant.
++ */
++static inline struct bfq_entity *bfq_entity_of(struct rb_node *node)
++{
++ struct bfq_entity *entity = NULL;
++
++ if (node != NULL)
++ entity = rb_entry(node, struct bfq_entity, rb_node);
++
++ return entity;
++}
++
++/**
++ * bfq_extract - remove an entity from a tree.
++ * @root: the tree root.
++ * @entity: the entity to remove.
++ */
++static inline void bfq_extract(struct rb_root *root,
++ struct bfq_entity *entity)
++{
++ BUG_ON(entity->tree != root);
++
++ entity->tree = NULL;
++ rb_erase(&entity->rb_node, root);
++}
++
++/**
++ * bfq_idle_extract - extract an entity from the idle tree.
++ * @st: the service tree of the owning @entity.
++ * @entity: the entity being removed.
++ */
++static void bfq_idle_extract(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *next;
++
++ BUG_ON(entity->tree != &st->idle);
++
++ if (entity == st->first_idle) {
++ next = rb_next(&entity->rb_node);
++ st->first_idle = bfq_entity_of(next);
++ }
++
++ if (entity == st->last_idle) {
++ next = rb_prev(&entity->rb_node);
++ st->last_idle = bfq_entity_of(next);
++ }
++
++ bfq_extract(&st->idle, entity);
++
++ if (bfqq != NULL)
++ list_del(&bfqq->bfqq_list);
++}
++
++/**
++ * bfq_insert - generic tree insertion.
++ * @root: tree root.
++ * @entity: entity to insert.
++ *
++ * This is used for the idle and the active tree, since they are both
++ * ordered by finish time.
++ */
++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
++{
++ struct bfq_entity *entry;
++ struct rb_node **node = &root->rb_node;
++ struct rb_node *parent = NULL;
++
++ BUG_ON(entity->tree != NULL);
++
++ while (*node != NULL) {
++ parent = *node;
++ entry = rb_entry(parent, struct bfq_entity, rb_node);
++
++ if (bfq_gt(entry->finish, entity->finish))
++ node = &parent->rb_left;
++ else
++ node = &parent->rb_right;
++ }
++
++ rb_link_node(&entity->rb_node, parent, node);
++ rb_insert_color(&entity->rb_node, root);
++
++ entity->tree = root;
++}
++
++/**
++ * bfq_update_min - update the min_start field of an entity.
++ * @entity: the entity to update.
++ * @node: one of its children.
++ *
++ * This function is called when @entity may store an invalid value for
++ * min_start due to updates to the active tree. The function assumes
++ * that the subtree rooted at @node (which may be its left or its right
++ * child) has a valid min_start value.
++ */
++static inline void bfq_update_min(struct bfq_entity *entity,
++ struct rb_node *node)
++{
++ struct bfq_entity *child;
++
++ if (node != NULL) {
++ child = rb_entry(node, struct bfq_entity, rb_node);
++ if (bfq_gt(entity->min_start, child->min_start))
++ entity->min_start = child->min_start;
++ }
++}
++
++/**
++ * bfq_update_active_node - recalculate min_start.
++ * @node: the node to update.
++ *
++ * @node may have changed position or one of its children may have moved,
++ * this function updates its min_start value. The left and right subtrees
++ * are assumed to hold a correct min_start value.
++ */
++static inline void bfq_update_active_node(struct rb_node *node)
++{
++ struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++
++ entity->min_start = entity->start;
++ bfq_update_min(entity, node->rb_right);
++ bfq_update_min(entity, node->rb_left);
++}
++
++/**
++ * bfq_update_active_tree - update min_start for the whole active tree.
++ * @node: the starting node.
++ *
++ * @node must be the deepest modified node after an update. This function
++ * updates its min_start using the values held by its children, assuming
++ * that they did not change, and then updates all the nodes that may have
++ * changed in the path to the root. The only nodes that may have changed
++ * are the ones in the path or their siblings.
++ */
++static void bfq_update_active_tree(struct rb_node *node)
++{
++ struct rb_node *parent;
++
++up:
++ bfq_update_active_node(node);
++
++ parent = rb_parent(node);
++ if (parent == NULL)
++ return;
++
++ if (node == parent->rb_left && parent->rb_right != NULL)
++ bfq_update_active_node(parent->rb_right);
++ else if (parent->rb_left != NULL)
++ bfq_update_active_node(parent->rb_left);
++
++ node = parent;
++ goto up;
++}
++
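The active tree is therefore an augmented rbtree: each node caches the minimum start time found in its subtree, and after any structural change the cached values are repaired bottom-up along the path to the root, which is what makes the eligible-entity search logarithmic. A toy version of the per-node invariant on a plain binary tree (hypothetical types, and plain < instead of the wraparound-safe bfq_gt()):

#include <stdio.h>

struct node {
	unsigned long long start, min_start;
	struct node *left, *right;
};

/* Recompute one node's cached subtree minimum from its children:
 * the invariant bfq_update_active_node() maintains above. */
static void update_node(struct node *n)
{
	n->min_start = n->start;
	if (n->left && n->left->min_start < n->min_start)
		n->min_start = n->left->min_start;
	if (n->right && n->right->min_start < n->min_start)
		n->min_start = n->right->min_start;
}

int main(void)
{
	struct node l = { 7, 7, NULL, NULL };
	struct node r = { 3, 3, NULL, NULL };
	struct node root = { 5, 5, &l, &r };

	update_node(&root);
	printf("min_start at root: %llu\n", root.min_start);	/* 3 */
	return 0;
}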
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root);
++
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root);
++
++
++/**
++ * bfq_active_insert - insert an entity in the active tree of its
++ * group/device.
++ * @st: the service tree of the entity.
++ * @entity: the entity being inserted.
++ *
++ * The active tree is ordered by finish time, but an extra key is kept
++ * per each node, containing the minimum value for the start times of
++ * its children (and the node itself), so it's possible to search for
++ * the eligible node with the lowest finish time in logarithmic time.
++ */
++static void bfq_active_insert(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *node = &entity->rb_node;
++#ifdef CONFIG_CGROUP_BFQIO
++ struct bfq_sched_data *sd = NULL;
++ struct bfq_group *bfqg = NULL;
++ struct bfq_data *bfqd = NULL;
++#endif
++
++ bfq_insert(&st->active, entity);
++
++ if (node->rb_left != NULL)
++ node = node->rb_left;
++ else if (node->rb_right != NULL)
++ node = node->rb_right;
++
++ bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++ sd = entity->sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++ if (bfqq != NULL)
++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
++#ifdef CONFIG_CGROUP_BFQIO
++ else { /* bfq_group */
++ BUG_ON(!bfqd);
++ bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree);
++ }
++ if (bfqg != bfqd->root_group) {
++ BUG_ON(!bfqg);
++ BUG_ON(!bfqd);
++ bfqg->active_entities++;
++ if (bfqg->active_entities == 2)
++ bfqd->active_numerous_groups++;
++ }
++#endif
++}
++
++/**
++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
++ * @ioprio: the ioprio value to convert.
++ */
++static inline unsigned short bfq_ioprio_to_weight(int ioprio)
++{
++ BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
++ return IOPRIO_BE_NR - ioprio;
++}
++
++/**
++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
++ * @weight: the weight value to convert.
++ *
++ * To preserve as much as possible the old ioprio-only user interface,
++ * 0 is used as an escape ioprio value for weights (numerically) equal
++ * to or larger than IOPRIO_BE_NR.
++ */
++static inline unsigned short bfq_weight_to_ioprio(int weight)
++{
++ BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
++ return IOPRIO_BE_NR - weight < 0 ? 0 : IOPRIO_BE_NR - weight;
++}
++
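The two conversions are a simple reflection around IOPRIO_BE_NR: best-effort priority 0, the highest, maps to the largest weight, and the lowest priority to weight 1, while the inverse clamps weights of IOPRIO_BE_NR or more to the escape value 0. A table of the round trip (IOPRIO_BE_NR assumed to be 8, as in the mainline headers):

#include <stdio.h>

#define IOPRIO_BE_NR 8	/* assumed value */

static int ioprio_to_weight(int ioprio) { return IOPRIO_BE_NR - ioprio; }

static int weight_to_ioprio(int weight)
{
	return IOPRIO_BE_NR - weight < 0 ? 0 : IOPRIO_BE_NR - weight;
}

int main(void)
{
	for (int ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++) {
		int w = ioprio_to_weight(ioprio);

		printf("ioprio %d <-> weight %d (back: %d)\n",
		       ioprio, w, weight_to_ioprio(w));
	}
	return 0;
}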
++static inline void bfq_get_entity(struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ if (bfqq != NULL) {
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ }
++}
++
++/**
++ * bfq_find_deepest - find the deepest node that an extraction can modify.
++ * @node: the node being removed.
++ *
++ * Do the first step of an extraction in an rb tree, looking for the
++ * node that will replace @node, and returning the deepest node that
++ * the following modifications to the tree can touch. If @node is the
++ * last node in the tree return %NULL.
++ */
++static struct rb_node *bfq_find_deepest(struct rb_node *node)
++{
++ struct rb_node *deepest;
++
++ if (node->rb_right == NULL && node->rb_left == NULL)
++ deepest = rb_parent(node);
++ else if (node->rb_right == NULL)
++ deepest = node->rb_left;
++ else if (node->rb_left == NULL)
++ deepest = node->rb_right;
++ else {
++ deepest = rb_next(node);
++ if (deepest->rb_right != NULL)
++ deepest = deepest->rb_right;
++ else if (rb_parent(deepest) != node)
++ deepest = rb_parent(deepest);
++ }
++
++ return deepest;
++}
++
++/**
++ * bfq_active_extract - remove an entity from the active tree.
++ * @st: the service_tree containing the tree.
++ * @entity: the entity being removed.
++ */
++static void bfq_active_extract(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *node;
++#ifdef CONFIG_CGROUP_BFQIO
++ struct bfq_sched_data *sd = NULL;
++ struct bfq_group *bfqg = NULL;
++ struct bfq_data *bfqd = NULL;
++#endif
++
++ node = bfq_find_deepest(&entity->rb_node);
++ bfq_extract(&st->active, entity);
++
++ if (node != NULL)
++ bfq_update_active_tree(node);
++
++#ifdef CONFIG_CGROUP_BFQIO
++ sd = entity->sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++ if (bfqq != NULL)
++ list_del(&bfqq->bfqq_list);
++#ifdef CONFIG_CGROUP_BFQIO
++ else { /* bfq_group */
++ BUG_ON(!bfqd);
++ bfq_weights_tree_remove(bfqd, entity,
++ &bfqd->group_weights_tree);
++ }
++ if (bfqg != bfqd->root_group) {
++ BUG_ON(!bfqg);
++ BUG_ON(!bfqd);
++ BUG_ON(!bfqg->active_entities);
++ bfqg->active_entities--;
++ if (bfqg->active_entities == 1) {
++ BUG_ON(!bfqd->active_numerous_groups);
++ bfqd->active_numerous_groups--;
++ }
++ }
++#endif
++}
++
++/**
++ * bfq_idle_insert - insert an entity into the idle tree.
++ * @st: the service tree containing the tree.
++ * @entity: the entity to insert.
++ */
++static void bfq_idle_insert(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct bfq_entity *first_idle = st->first_idle;
++ struct bfq_entity *last_idle = st->last_idle;
++
++ if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
++ st->first_idle = entity;
++ if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
++ st->last_idle = entity;
++
++ bfq_insert(&st->idle, entity);
++
++ if (bfqq != NULL)
++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
++}
++
++/**
++ * bfq_forget_entity - remove an entity from the wfq trees.
++ * @st: the service tree.
++ * @entity: the entity being removed.
++ *
++ * Update the device status and forget everything about @entity, putting
++ * the device reference to it, if it is a queue. Entities belonging to
++ * groups are not refcounted.
++ */
++static void bfq_forget_entity(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct bfq_sched_data *sd;
++
++ BUG_ON(!entity->on_st);
++
++ entity->on_st = 0;
++ st->wsum -= entity->weight;
++ if (bfqq != NULL) {
++ sd = entity->sched_data;
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++}
++
++/**
++ * bfq_put_idle_entity - release the idle tree ref of an entity.
++ * @st: service tree for the entity.
++ * @entity: the entity being released.
++ */
++static void bfq_put_idle_entity(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ bfq_idle_extract(st, entity);
++ bfq_forget_entity(st, entity);
++}
++
++/**
++ * bfq_forget_idle - update the idle tree if necessary.
++ * @st: the service tree to act upon.
++ *
++ * To preserve the global O(log N) complexity we only remove one entry here;
++ * as the idle tree will not grow indefinitely this can be done safely.
++ */
++static void bfq_forget_idle(struct bfq_service_tree *st)
++{
++ struct bfq_entity *first_idle = st->first_idle;
++ struct bfq_entity *last_idle = st->last_idle;
++
++ if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
++ !bfq_gt(last_idle->finish, st->vtime)) {
++ /*
++ * Forget the whole idle tree, increasing the vtime past
++ * the last finish time of idle entities.
++ */
++ st->vtime = last_idle->finish;
++ }
++
++ if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
++ bfq_put_idle_entity(st, first_idle);
++}
++
++static struct bfq_service_tree *
++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
++ struct bfq_entity *entity)
++{
++ struct bfq_service_tree *new_st = old_st;
++
++ if (entity->ioprio_changed) {
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ unsigned short prev_weight, new_weight;
++ struct bfq_data *bfqd = NULL;
++ struct rb_root *root;
++#ifdef CONFIG_CGROUP_BFQIO
++ struct bfq_sched_data *sd;
++ struct bfq_group *bfqg;
++#endif
++
++ if (bfqq != NULL)
++ bfqd = bfqq->bfqd;
++#ifdef CONFIG_CGROUP_BFQIO
++ else {
++ sd = entity->my_sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++ BUG_ON(!bfqd);
++ }
++#endif
++
++ BUG_ON(old_st->wsum < entity->weight);
++ old_st->wsum -= entity->weight;
++
++ if (entity->new_weight != entity->orig_weight) {
++ if (entity->new_weight < BFQ_MIN_WEIGHT ||
++ entity->new_weight > BFQ_MAX_WEIGHT) {
++ printk(KERN_CRIT "update_weight_prio: "
++ "new_weight %d\n",
++ entity->new_weight);
++ BUG();
++ }
++ entity->orig_weight = entity->new_weight;
++ entity->ioprio =
++ bfq_weight_to_ioprio(entity->orig_weight);
++ }
++
++ entity->ioprio_class = entity->new_ioprio_class;
++ entity->ioprio_changed = 0;
++
++ /*
++ * NOTE: here we may be changing the weight too early;
++ * this will cause unfairness. The correct approach
++ * would have required additional complexity to defer
++ * weight changes to the proper time instants (i.e.,
++ * when entity->finish <= old_st->vtime).
++ */
++ new_st = bfq_entity_service_tree(entity);
++
++ prev_weight = entity->weight;
++ new_weight = entity->orig_weight *
++ (bfqq != NULL ? bfqq->wr_coeff : 1);
++ /*
++ * If the weight of the entity changes, remove the entity
++ * from its old weight counter (if there is a counter
++ * associated with the entity), and add it to the counter
++ * associated with its new weight.
++ */
++ if (prev_weight != new_weight) {
++ root = bfqq ? &bfqd->queue_weights_tree :
++ &bfqd->group_weights_tree;
++ bfq_weights_tree_remove(bfqd, entity, root);
++ }
++ entity->weight = new_weight;
++ /*
++ * Add the entity to its weights tree only if it is
++ * not associated with a weight-raised queue.
++ */
++ if (prev_weight != new_weight &&
++ (bfqq ? bfqq->wr_coeff == 1 : 1))
++ /* If we get here, root has been initialized. */
++ bfq_weights_tree_add(bfqd, entity, root);
++
++ new_st->wsum += entity->weight;
++
++ if (new_st != old_st)
++ entity->start = new_st->vtime;
++ }
++
++ return new_st;
++}
++
++/**
++ * bfq_bfqq_served - update the scheduler status after selection for
++ * service.
++ * @bfqq: the queue being served.
++ * @served: bytes to transfer.
++ *
++ * NOTE: this can be optimized, as the timestamps of upper-level entities
++ * are synchronized every time a new bfqq is selected for service. For now,
++ * we keep it this way to better check consistency.
++ */
++static void bfq_bfqq_served(struct bfq_queue *bfqq, unsigned long served)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_service_tree *st;
++
++ for_each_entity(entity) {
++ st = bfq_entity_service_tree(entity);
++
++ entity->service += served;
++ BUG_ON(entity->service > entity->budget);
++ BUG_ON(st->wsum == 0);
++
++ st->vtime += bfq_delta(served, st->wsum);
++ bfq_forget_idle(st);
++ }
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %lu secs", served);
++}
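++
++/*
++ * Numerical sketch (illustrative only; bfq_delta() is assumed to
++ * compute served/wsum in the scheduler's fixed-point units): if two
++ * queues with weights 100 and 200 are active, then st->wsum == 300,
++ * and charging served == 600 sectors advances st->vtime by about
++ * 600/300 == 2 units of virtual time, independently of which queue
++ * was served.
++ */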
++
++/**
++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * @bfqq: the queue that needs a service update.
++ *
++ * When it's not possible to be fair in the service domain, because
++ * a queue is not consuming its budget fast enough (the meaning of
++ * fast depends on the timeout parameter), we charge it a full
++ * budget. In this way we should obtain a sort of time-domain
++ * fairness among all the seeky/slow queues.
++ */
++static inline void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++
++ bfq_bfqq_served(bfqq, entity->budget - entity->service);
++}
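++
++/*
++ * Example (illustrative only): if a queue expires with
++ * entity->budget == 16384 and entity->service == 4096, the call above
++ * charges it the missing 12288 sectors, so in the virtual-time domain
++ * the queue is accounted as if it had consumed its whole budget.
++ */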
++
++/**
++ * __bfq_activate_entity - activate an entity.
++ * @entity: the entity being activated.
++ *
++ * Called whenever an entity is activated, i.e., it is not active and one
++ * of its children receives a new request, or has to be reactivated due to
++ * budget exhaustion. It uses the entity's current budget (and the
++ * service it has already received, if @entity is active) to calculate
++ * its timestamps.
++ */
++static void __bfq_activate_entity(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sd = entity->sched_data;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++ if (entity == sd->in_service_entity) {
++ BUG_ON(entity->tree != NULL);
++ /*
++ * If we are requeueing the current entity we have
++ * to take care not to charge it for service it has
++ * not received.
++ */
++ bfq_calc_finish(entity, entity->service);
++ entity->start = entity->finish;
++ sd->in_service_entity = NULL;
++ } else if (entity->tree == &st->active) {
++ /*
++ * Requeueing an entity due to a change of some
++ * next_in_service entity below it. We reuse the
++ * old start time.
++ */
++ bfq_active_extract(st, entity);
++ } else if (entity->tree == &st->idle) {
++ /*
++ * Must be on the idle tree, bfq_idle_extract() will
++ * check for that.
++ */
++ bfq_idle_extract(st, entity);
++ entity->start = bfq_gt(st->vtime, entity->finish) ?
++ st->vtime : entity->finish;
++ } else {
++ /*
++ * The finish time of the entity may be invalid, and
++ * it is in the past for sure, otherwise the queue
++ * would have been on the idle tree.
++ */
++ entity->start = st->vtime;
++ st->wsum += entity->weight;
++ bfq_get_entity(entity);
++
++ BUG_ON(entity->on_st);
++ entity->on_st = 1;
++ }
++
++ st = __bfq_entity_update_weight_prio(st, entity);
++ bfq_calc_finish(entity, entity->budget);
++ bfq_active_insert(st, entity);
++}
++
++/**
++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
++ * @entity: the entity to activate.
++ *
++ * Activate @entity and all the entities on the path from it to the root.
++ */
++static void bfq_activate_entity(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sd;
++
++ for_each_entity(entity) {
++ __bfq_activate_entity(entity);
++
++ sd = entity->sched_data;
++ if (!bfq_update_next_in_service(sd))
++ /*
++ * No need to propagate the activation to the
++ * upper entities, as they will be updated when
++ * the in-service entity is rescheduled.
++ */
++ break;
++ }
++}
++
++/**
++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
++ * @entity: the entity to deactivate.
++ * @requeue: if false, the entity will not be put into the idle tree.
++ *
++ * Deactivate an entity, independently of its previous state. If the
++ * entity was not on a service tree just return, otherwise if it is on
++ * any scheduler tree, extract it from that tree, and if necessary
++ * and if the caller asked for @requeue, put it on the idle tree.
++ *
++ * Return %1 if the caller should update the entity hierarchy, i.e.,
++ * if the entity was in service or if it was the next_in_service for
++ * its sched_data; return %0 otherwise.
++ */
++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++ struct bfq_sched_data *sd = entity->sched_data;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++ int was_in_service = entity == sd->in_service_entity;
++ int ret = 0;
++
++ if (!entity->on_st)
++ return 0;
++
++ BUG_ON(was_in_service && entity->tree != NULL);
++
++ if (was_in_service) {
++ bfq_calc_finish(entity, entity->service);
++ sd->in_service_entity = NULL;
++ } else if (entity->tree == &st->active)
++ bfq_active_extract(st, entity);
++ else if (entity->tree == &st->idle)
++ bfq_idle_extract(st, entity);
++ else if (entity->tree != NULL)
++ BUG();
++
++ if (was_in_service || sd->next_in_service == entity)
++ ret = bfq_update_next_in_service(sd);
++
++ if (!requeue || !bfq_gt(entity->finish, st->vtime))
++ bfq_forget_entity(st, entity);
++ else
++ bfq_idle_insert(st, entity);
++
++ BUG_ON(sd->in_service_entity == entity);
++ BUG_ON(sd->next_in_service == entity);
++
++ return ret;
++}
++
++/**
++ * bfq_deactivate_entity - deactivate an entity.
++ * @entity: the entity to deactivate.
++ * @requeue: true if the entity can be put on the idle tree
++ */
++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++ struct bfq_sched_data *sd;
++ struct bfq_entity *parent;
++
++ for_each_entity_safe(entity, parent) {
++ sd = entity->sched_data;
++
++ if (!__bfq_deactivate_entity(entity, requeue))
++ /*
++ * The parent entity is still backlogged, and
++ * we don't need to update it as it is still
++ * in service.
++ */
++ break;
++
++ if (sd->next_in_service != NULL)
++ /*
++ * The parent entity is still backlogged and
++ * the budgets on the path towards the root
++ * need to be updated.
++ */
++ goto update;
++
++ /*
++ * If we reach this point the parent is no longer backlogged and
++ * we want to propagate the dequeue upwards.
++ */
++ requeue = 1;
++ }
++
++ return;
++
++update:
++ entity = parent;
++ for_each_entity(entity) {
++ __bfq_activate_entity(entity);
++
++ sd = entity->sched_data;
++ if (!bfq_update_next_in_service(sd))
++ break;
++ }
++}
++
++/**
++ * bfq_update_vtime - update vtime if necessary.
++ * @st: the service tree to act upon.
++ *
++ * If necessary update the service tree vtime to have at least one
++ * eligible entity, skipping to its start time. Assumes that the
++ * active tree of the device is not empty.
++ *
++ * NOTE: this hierarchical implementation updates vtimes quite often;
++ * we may end up with reactivated processes getting timestamps after a
++ * vtime skip done because we needed a ->first_active entity on some
++ * intermediate node.
++ */
++static void bfq_update_vtime(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entry;
++ struct rb_node *node = st->active.rb_node;
++
++ entry = rb_entry(node, struct bfq_entity, rb_node);
++ if (bfq_gt(entry->min_start, st->vtime)) {
++ st->vtime = entry->min_start;
++ bfq_forget_idle(st);
++ }
++}
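++
++/*
++ * Example (illustrative only): if st->vtime == 100 while the smallest
++ * start time among the active entities (the root's min_start) is 150,
++ * then no entity is eligible and the assignment above skips the
++ * virtual time forward to 150, making at least one entity eligible
++ * again.
++ */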
++
++/**
++ * bfq_first_active_entity - find the eligible entity with
++ * the smallest finish time
++ * @st: the service tree to select from.
++ *
++ * This function searches the first schedulable entity, starting from the
++ * root of the tree and going left every time the left subtree contains
++ * at least one eligible (start <= vtime) entity. The path on
++ * the right is followed only if a) the left subtree contains no eligible
++ * entities and b) no eligible entity has been found yet.
++ */
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entry, *first = NULL;
++ struct rb_node *node = st->active.rb_node;
++
++ while (node != NULL) {
++ entry = rb_entry(node, struct bfq_entity, rb_node);
++left:
++ if (!bfq_gt(entry->start, st->vtime))
++ first = entry;
++
++ BUG_ON(bfq_gt(entry->min_start, st->vtime));
++
++ if (node->rb_left != NULL) {
++ entry = rb_entry(node->rb_left,
++ struct bfq_entity, rb_node);
++ if (!bfq_gt(entry->min_start, st->vtime)) {
++ node = node->rb_left;
++ goto left;
++ }
++ }
++ if (first != NULL)
++ break;
++ node = node->rb_right;
++ }
++
++ BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
++ return first;
++}
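++
++/*
++ * Sketch of the pruning rule (illustrative only): the active tree is
++ * keyed by finish time, and min_start summarizes each subtree. With
++ * st->vtime == 100 and the tree
++ *
++ *          (F=30, S=110)
++ *          /           \
++ *   (F=20, S=90)   (F=50, S=80)
++ *
++ * the root is not eligible (110 > 100), but the left subtree has
++ * min_start 90 <= 100, so the search descends left and returns the
++ * entity with F=20: the smallest finish time among the eligible
++ * (S <= vtime) entities.
++ */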
++
++/**
++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
++ * @st: the service tree.
++ *
++ * Update the virtual time in @st and return the first eligible entity
++ * it contains.
++ */
++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
++ bool force)
++{
++ struct bfq_entity *entity, *new_next_in_service = NULL;
++
++ if (RB_EMPTY_ROOT(&st->active))
++ return NULL;
++
++ bfq_update_vtime(st);
++ entity = bfq_first_active_entity(st);
++ BUG_ON(bfq_gt(entity->start, st->vtime));
++
++ /*
++ * If the chosen entity does not match the sched_data's
++ * next_in_service and we are forcibly serving the IDLE priority
++ * class tree, bubble up budget update.
++ */
++ if (unlikely(force && entity != entity->sched_data->next_in_service)) {
++ new_next_in_service = entity;
++ for_each_entity(new_next_in_service)
++ bfq_update_budget(new_next_in_service);
++ }
++
++ return entity;
++}
++
++/**
++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
++ * @sd: the sched_data.
++ * @extract: if true the returned entity will be also extracted from @sd.
++ *
++ * NOTE: since we cache the next_in_service entity at each level of the
++ * hierarchy, the complexity of the lookup can be decreased with
++ * absolutely no effort by just returning the cached next_in_service
++ * value; we prefer to do full lookups to test the consistency of the
++ * data structures.
++ */
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++ int extract,
++ struct bfq_data *bfqd)
++{
++ struct bfq_service_tree *st = sd->service_tree;
++ struct bfq_entity *entity;
++ int i = 0;
++
++ BUG_ON(sd->in_service_entity != NULL);
++
++ if (bfqd != NULL &&
++ jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++ entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
++ true);
++ if (entity != NULL) {
++ i = BFQ_IOPRIO_CLASSES - 1;
++ bfqd->bfq_class_idle_last_service = jiffies;
++ sd->next_in_service = entity;
++ }
++ }
++ for (; i < BFQ_IOPRIO_CLASSES; i++) {
++ entity = __bfq_lookup_next_entity(st + i, false);
++ if (entity != NULL) {
++ if (extract) {
++ bfq_check_next_in_service(sd, entity);
++ bfq_active_extract(st + i, entity);
++ sd->in_service_entity = entity;
++ sd->next_in_service = NULL;
++ }
++ break;
++ }
++ }
++
++ return entity;
++}
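++
++/*
++ * Note on the lookup order (illustrative only): service_tree[] is
++ * indexed by ioprio_class - 1, so the loop above scans RT first, then
++ * BE, then IDLE. IDLE is normally served only when the higher classes
++ * are empty, except that the jiffies check grants it one forced
++ * dispatch every BFQ_CL_IDLE_TIMEOUT (HZ/5) to avoid complete
++ * starvation.
++ */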
++
++/*
++ * Get next queue for service.
++ */
++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
++{
++ struct bfq_entity *entity = NULL;
++ struct bfq_sched_data *sd;
++ struct bfq_queue *bfqq;
++
++ BUG_ON(bfqd->in_service_queue != NULL);
++
++ if (bfqd->busy_queues == 0)
++ return NULL;
++
++ sd = &bfqd->root_group->sched_data;
++ for (; sd != NULL; sd = entity->my_sched_data) {
++ entity = bfq_lookup_next_entity(sd, 1, bfqd);
++ BUG_ON(entity == NULL);
++ entity->service = 0;
++ }
++
++ bfqq = bfq_entity_to_bfqq(entity);
++ BUG_ON(bfqq == NULL);
++
++ return bfqq;
++}
++
++/*
++ * Forced extraction of the given queue.
++ */
++static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity;
++ struct bfq_sched_data *sd;
++
++ BUG_ON(bfqd->in_service_queue != NULL);
++
++ entity = &bfqq->entity;
++ /*
++ * Bubble up extraction/update from the leaf to the root.
++ */
++ for_each_entity(entity) {
++ sd = entity->sched_data;
++ bfq_update_budget(entity);
++ bfq_update_vtime(bfq_entity_service_tree(entity));
++ bfq_active_extract(bfq_entity_service_tree(entity), entity);
++ sd->in_service_entity = entity;
++ sd->next_in_service = NULL;
++ entity->service = 0;
++ }
++
++ return;
++}
++
++static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++{
++ if (bfqd->in_service_bic != NULL) {
++ put_io_context(bfqd->in_service_bic->icq.ioc);
++ bfqd->in_service_bic = NULL;
++ }
++
++ bfqd->in_service_queue = NULL;
++ del_timer(&bfqd->idle_slice_timer);
++}
++
++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ int requeue)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ if (bfqq == bfqd->in_service_queue)
++ __bfq_bfqd_reset_in_service(bfqd);
++
++ bfq_deactivate_entity(entity, requeue);
++}
++
++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_activate_entity(entity);
++}
++
++/*
++ * Called when the bfqq no longer has requests pending, remove it from
++ * the service tree.
++ */
++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ int requeue)
++{
++ BUG_ON(!bfq_bfqq_busy(bfqq));
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ bfq_log_bfqq(bfqd, bfqq, "del from busy");
++
++ bfq_clear_bfqq_busy(bfqq);
++
++ BUG_ON(bfqd->busy_queues == 0);
++ bfqd->busy_queues--;
++
++ if (!bfqq->dispatched) {
++ bfq_weights_tree_remove(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->busy_in_flight_queues);
++ bfqd->busy_in_flight_queues--;
++ if (bfq_bfqq_constantly_seeky(bfqq)) {
++ BUG_ON(!bfqd->
++ const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ }
++ if (bfqq->wr_coeff > 1)
++ bfqd->wr_busy_queues--;
++
++ bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++}
++
++/*
++ * Called when an inactive queue receives a new request.
++ */
++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ BUG_ON(bfq_bfqq_busy(bfqq));
++ BUG_ON(bfqq == bfqd->in_service_queue);
++
++ bfq_log_bfqq(bfqd, bfqq, "add to busy");
++
++ bfq_activate_bfqq(bfqd, bfqq);
++
++ bfq_mark_bfqq_busy(bfqq);
++ bfqd->busy_queues++;
++
++ if (!bfqq->dispatched) {
++ if (bfqq->wr_coeff == 1)
++ bfq_weights_tree_add(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ bfqd->busy_in_flight_queues++;
++ if (bfq_bfqq_constantly_seeky(bfqq))
++ bfqd->const_seeky_busy_in_flight_queues++;
++ }
++ }
++ if (bfqq->wr_coeff > 1)
++ bfqd->wr_busy_queues++;
++}
+diff --git a/block/bfq.h b/block/bfq.h
+new file mode 100644
+index 0000000..e350b5f
+--- /dev/null
++++ b/block/bfq.h
+@@ -0,0 +1,771 @@
++/*
++ * BFQ-v7r8 for 4.3.0: data structures and common function prototypes.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifndef _BFQ_H
++#define _BFQ_H
++
++#include <linux/blktrace_api.h>
++#include <linux/hrtimer.h>
++#include <linux/ioprio.h>
++#include <linux/rbtree.h>
++
++#define BFQ_IOPRIO_CLASSES 3
++#define BFQ_CL_IDLE_TIMEOUT (HZ/5)
++
++#define BFQ_MIN_WEIGHT 1
++#define BFQ_MAX_WEIGHT 1000
++
++#define BFQ_DEFAULT_QUEUE_IOPRIO 4
++
++#define BFQ_DEFAULT_GRP_WEIGHT 10
++#define BFQ_DEFAULT_GRP_IOPRIO 0
++#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE
++
++struct bfq_entity;
++
++/**
++ * struct bfq_service_tree - per ioprio_class service tree.
++ * @active: tree for active entities (i.e., those backlogged).
++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
++ * @first_idle: idle entity with minimum F_i.
++ * @last_idle: idle entity with maximum F_i.
++ * @vtime: scheduler virtual time.
++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
++ *
++ * Each service tree represents a B-WF2Q+ scheduler on its own. Each
++ * ioprio_class has its own independent scheduler, and so its own
++ * bfq_service_tree. All the fields are protected by the queue lock
++ * of the containing bfqd.
++ */
++struct bfq_service_tree {
++ struct rb_root active;
++ struct rb_root idle;
++
++ struct bfq_entity *first_idle;
++ struct bfq_entity *last_idle;
++
++ u64 vtime;
++ unsigned long wsum;
++};
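++
++/*
++ * Recap of the B-WF2Q+ rule implemented on this tree (illustrative
++ * only, in the F_i/S_i notation used throughout this file): an entity
++ * is eligible iff S_i <= V, and among the eligible entities the one
++ * with the smallest finish time F_i = S_i + budget_i / weight_i is
++ * served next. E.g., with V == 0 and budget 8192, an entity of weight
++ * 200 gets F = 8192/200 ~= 41 service units versus ~82 for weight
++ * 100, and is therefore selected first.
++ */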
++
++/**
++ * struct bfq_sched_data - multi-class scheduler.
++ * @in_service_entity: entity in service.
++ * @next_in_service: head-of-the-line entity in the scheduler.
++ * @service_tree: array of service trees, one per ioprio_class.
++ *
++ * bfq_sched_data is the basic scheduler queue. It supports three
++ * ioprio_classes, and can be used either as a toplevel queue or as
++ * an intermediate queue in a hierarchical setup.
++ * @next_in_service points to the active entity of the sched_data
++ * service trees that will be scheduled next.
++ *
++ * The supported ioprio_classes are the same as in CFQ, in descending
++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
++ * Requests from higher priority queues are served before all the
++ * requests from lower priority queues; among requests of the same
++ * class, requests are served according to B-WF2Q+.
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_sched_data {
++ struct bfq_entity *in_service_entity;
++ struct bfq_entity *next_in_service;
++ struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++};
++
++/**
++ * struct bfq_weight_counter - counter of the number of all active entities
++ * with a given weight.
++ * @weight: weight of the entities that this counter refers to.
++ * @num_active: number of active entities with this weight.
++ * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
++ * and @group_weights_tree).
++ */
++struct bfq_weight_counter {
++ short int weight;
++ unsigned int num_active;
++ struct rb_node weights_node;
++};
++
++/**
++ * struct bfq_entity - schedulable entity.
++ * @rb_node: service_tree member.
++ * @weight_counter: pointer to the weight counter associated with this entity.
++ * @on_st: flag, true if the entity is on a tree (either the active or
++ * the idle one of its service_tree).
++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
++ * @start: B-WF2Q+ start timestamp (aka S_i).
++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
++ * @min_start: minimum start time of the (active) subtree rooted at
++ * this entity; used for O(log N) lookups into active trees.
++ * @service: service received during the last round of service.
++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
++ * @weight: weight of the queue
++ * @parent: parent entity, for hierarchical scheduling.
++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
++ * associated scheduler queue, %NULL on leaf nodes.
++ * @sched_data: the scheduler queue this entity belongs to.
++ * @ioprio: the ioprio in use.
++ * @new_weight: when a weight change is requested, the new weight value.
++ * @orig_weight: original weight, used to implement weight boosting
++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
++ * @ioprio_class: the ioprio_class in use.
++ * @new_ioprio_class: when an ioprio_class change is requested, the new
++ * ioprio_class value.
++ * @ioprio_changed: flag, true when the user requested a weight, ioprio or
++ * ioprio_class change.
++ *
++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
++ * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each
++ * entity belongs to the sched_data of the parent group in the cgroup
++ * hierarchy. Non-leaf entities have also their own sched_data, stored
++ * in @my_sched_data.
++ *
++ * Each entity stores independently its priority values; this would
++ * allow different weights on different devices, but this
++ * functionality is not exported to userspace for now. Priorities and
++ * weights are updated lazily, first storing the new values into the
++ * new_* fields, then setting the @ioprio_changed flag. As soon as
++ * there is a transition in the entity state that allows the priority
++ * update to take place the effective and the requested priority
++ * values are synchronized.
++ *
++ * Unless cgroups are used, the weight value is calculated from the
++ * ioprio to export the same interface as CFQ. When dealing with
++ * ``well-behaved'' queues (i.e., queues that do not spend too much
++ * time consuming their budget and have true sequential behavior, and
++ * when there are no external factors breaking anticipation) the
++ * relative weights at each level of the cgroups hierarchy should be
++ * guaranteed. All the fields are protected by the queue lock of the
++ * containing bfqd.
++ */
++struct bfq_entity {
++ struct rb_node rb_node;
++ struct bfq_weight_counter *weight_counter;
++
++ int on_st;
++
++ u64 finish;
++ u64 start;
++
++ struct rb_root *tree;
++
++ u64 min_start;
++
++ unsigned long service, budget;
++ unsigned short weight, new_weight;
++ unsigned short orig_weight;
++
++ struct bfq_entity *parent;
++
++ struct bfq_sched_data *my_sched_data;
++ struct bfq_sched_data *sched_data;
++
++ unsigned short ioprio, new_ioprio;
++ unsigned short ioprio_class, new_ioprio_class;
++
++ int ioprio_changed;
++};
++
++struct bfq_group;
++
++/**
++ * struct bfq_queue - leaf schedulable entity.
++ * @ref: reference counter.
++ * @bfqd: parent bfq_data.
++ * @new_bfqq: shared bfq_queue if queue is cooperating with
++ * one or more other queues.
++ * @pos_node: request-position tree member (see bfq_data's @rq_pos_tree).
++ * @pos_root: request-position tree root (see bfq_data's @rq_pos_tree).
++ * @sort_list: sorted list of pending requests.
++ * @next_rq: if fifo isn't expired, next request to serve.
++ * @queued: nr of requests queued in @sort_list.
++ * @allocated: currently allocated requests.
++ * @meta_pending: pending metadata requests.
++ * @fifo: fifo list of requests in sort_list.
++ * @entity: entity representing this queue in the scheduler.
++ * @max_budget: maximum budget allowed from the feedback mechanism.
++ * @budget_timeout: budget expiration (in jiffies).
++ * @dispatched: number of requests on the dispatch list or inside driver.
++ * @flags: status flags.
++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
++ * @burst_list_node: node for the device's burst list.
++ * @seek_samples: number of seeks sampled
++ * @seek_total: sum of the distances of the seeks sampled
++ * @seek_mean: mean seek distance
++ * @last_request_pos: position of the last request enqueued
++ * @requests_within_timer: number of consecutive pairs of request completion
++ * and arrival, such that the queue becomes idle
++ * after the completion, but the next request arrives
++ * within an idle time slice; used only if the queue's
++ * IO_bound has been cleared.
++ * @pid: pid of the process owning the queue, used for logging purposes.
++ * @last_wr_start_finish: start time of the current weight-raising period if
++ * the @bfq-queue is being weight-raised, otherwise
++ * finish time of the last weight-raising period
++ * @wr_cur_max_time: current max raising time for this queue
++ * @soft_rt_next_start: minimum time instant such that, only if a new
++ * request is enqueued after this time instant in an
++ * idle @bfq_queue with no outstanding requests, then
++ * the task associated with the queue is deemed as
++ * soft real-time (see the comments to the function
++ * bfq_bfqq_softrt_next_start()).
++ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
++ * idle to backlogged
++ * @service_from_backlogged: cumulative service received from the @bfq_queue
++ * since the last transition from idle to
++ * backlogged
++ *
++ * A bfq_queue is a leaf request queue; it can be associated with an io_context
++ * or more, if it is async or shared between cooperating processes. @cgroup
++ * holds a reference to the cgroup, to be sure that it does not disappear while
++ * a bfqq still references it (mostly to avoid races between request issuing and
++ * task migration followed by cgroup destruction).
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_queue {
++ atomic_t ref;
++ struct bfq_data *bfqd;
++
++ /* fields for cooperating queues handling */
++ struct bfq_queue *new_bfqq;
++ struct rb_node pos_node;
++ struct rb_root *pos_root;
++
++ struct rb_root sort_list;
++ struct request *next_rq;
++ int queued[2];
++ int allocated[2];
++ int meta_pending;
++ struct list_head fifo;
++
++ struct bfq_entity entity;
++
++ unsigned long max_budget;
++ unsigned long budget_timeout;
++
++ int dispatched;
++
++ unsigned int flags;
++
++ struct list_head bfqq_list;
++
++ struct hlist_node burst_list_node;
++
++ unsigned int seek_samples;
++ u64 seek_total;
++ sector_t seek_mean;
++ sector_t last_request_pos;
++
++ unsigned int requests_within_timer;
++
++ pid_t pid;
++
++ /* weight-raising fields */
++ unsigned long wr_cur_max_time;
++ unsigned long soft_rt_next_start;
++ unsigned long last_wr_start_finish;
++ unsigned int wr_coeff;
++ unsigned long last_idle_bklogged;
++ unsigned long service_from_backlogged;
++};
++
++/**
++ * struct bfq_ttime - per process thinktime stats.
++ * @ttime_total: total process thinktime
++ * @ttime_samples: number of thinktime samples
++ * @ttime_mean: average process thinktime
++ */
++struct bfq_ttime {
++ unsigned long last_end_request;
++
++ unsigned long ttime_total;
++ unsigned long ttime_samples;
++ unsigned long ttime_mean;
++};
++
++/**
++ * struct bfq_io_cq - per (request_queue, io_context) structure.
++ * @icq: associated io_cq structure
++ * @bfqq: array of two process queues, the sync and the async
++ * @ttime: associated @bfq_ttime struct
++ */
++struct bfq_io_cq {
++ struct io_cq icq; /* must be the first member */
++ struct bfq_queue *bfqq[2];
++ struct bfq_ttime ttime;
++ int ioprio;
++};
++
++enum bfq_device_speed {
++ BFQ_BFQD_FAST,
++ BFQ_BFQD_SLOW,
++};
++
++/**
++ * struct bfq_data - per device data structure.
++ * @queue: request queue for the managed device.
++ * @root_group: root bfq_group for the device.
++ * @rq_pos_tree: rbtree sorted by next_request position, used when
++ * determining if two or more queues have interleaving
++ * requests (see bfq_close_cooperator()).
++ * @active_numerous_groups: number of bfq_groups containing more than one
++ * active @bfq_entity.
++ * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
++ * weight. Used to keep track of whether all @bfq_queues
++ * have the same weight. The tree contains one counter
++ * for each distinct weight associated to some active
++ * and not weight-raised @bfq_queue (see the comments to
++ * the functions bfq_weights_tree_[add|remove] for
++ * further details).
++ * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
++ * by weight. Used to keep track of whether all
++ * @bfq_groups have the same weight. The tree contains
++ * one counter for each distinct weight associated to
++ * some active @bfq_group (see the comments to the
++ * functions bfq_weights_tree_[add|remove] for further
++ * details).
++ * @busy_queues: number of bfq_queues containing requests (including the
++ * queue in service, even if it is idling).
++ * @busy_in_flight_queues: number of @bfq_queues containing pending or
++ * in-flight requests, plus the @bfq_queue in
++ * service, even if idle but waiting for the
++ * possible arrival of its next sync request. This
++ * field is updated only if the device is rotational,
++ * but used only if the device is also NCQ-capable.
++ * The reason why the field is updated also for non-
++ * NCQ-capable rotational devices is related to the
++ * fact that the value of @hw_tag may be set also
++ * later than when busy_in_flight_queues may need to
++ * be incremented for the first time(s). Taking also
++ * this possibility into account, to avoid unbalanced
++ * increments/decrements, would imply more overhead
++ * than just updating busy_in_flight_queues
++ * regardless of the value of @hw_tag.
++ * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
++ * (that is, seeky queues that expired
++ * for budget timeout at least once)
++ * containing pending or in-flight
++ * requests, including the in-service
++ * @bfq_queue if constantly seeky. This
++ * field is updated only if the device
++ * is rotational, but used only if the
++ * device is also NCQ-capable (see the
++ * comments to @busy_in_flight_queues).
++ * @wr_busy_queues: number of weight-raised busy @bfq_queues.
++ * @queued: number of queued requests.
++ * @rq_in_driver: number of requests dispatched and waiting for completion.
++ * @sync_flight: number of sync requests in the driver.
++ * @max_rq_in_driver: max number of reqs in driver in the last
++ * @hw_tag_samples completed requests.
++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
++ * @budgets_assigned: number of budgets assigned.
++ * @idle_slice_timer: timer set when idling for the next sequential request
++ * from the queue in service.
++ * @unplug_work: delayed work to restart dispatching on the request queue.
++ * @in_service_queue: bfq_queue in service.
++ * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
++ * @last_position: on-disk position of the last served request.
++ * @last_budget_start: beginning of the last budget.
++ * @last_idling_start: beginning of the last idle slice.
++ * @peak_rate: peak transfer rate observed for a budget.
++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before
++ * rescheduling.
++ * @group_list: list of all the bfq_groups active on the device.
++ * @active_list: list of all the bfq_queues active on the device.
++ * @idle_list: list of all the bfq_queues idle on the device.
++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
++ * requests are served in fifo order.
++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
++ * @bfq_back_max: maximum allowed backward seek.
++ * @bfq_slice_idle: maximum idling time.
++ * @bfq_user_max_budget: user-configured max budget value
++ * (0 for auto-tuning).
++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
++ * async queues.
++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
++ * prevent seeky queues from imposing long latencies on
++ * well-behaved ones (this also implies that seeky queues cannot
++ * receive guarantees in the service domain; after a timeout
++ * they are charged for the whole allocated budget, to try
++ * to preserve a behavior reasonably fair among them, but
++ * without service-domain guarantees).
++ * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
++ * no longer granted any weight-raising.
++ * @bfq_failed_cooperations: number of consecutive failed cooperation
++ * chances after which weight-raising is restored
++ * to a queue subject to more than bfq_coop_thresh
++ * queue merges.
++ * @bfq_requests_within_timer: number of consecutive requests that must be
++ * issued within the idle time slice to re-enable
++ * idling for a queue which was marked as
++ * non-I/O-bound (see the definition of the
++ * IO_bound flag for further details).
++ * @last_ins_in_burst: last time at which a queue entered the current
++ * burst of queues being activated shortly after
++ * each other; for more details about this and the
++ * following parameters related to a burst of
++ * activations, see the comments to the function
++ * @bfq_handle_burst.
++ * @bfq_burst_interval: reference time interval used to decide whether a
++ * queue has been activated shortly after
++ * @last_ins_in_burst.
++ * @burst_size: number of queues in the current burst of queue activations.
++ * @bfq_large_burst_thresh: maximum burst size above which the current
++ * queue-activation burst is deemed as 'large'.
++ * @large_burst: true if a large queue-activation burst is in progress.
++ * @burst_list: head of the burst list (as for the above fields, more details
++ * in the comments to the function bfq_handle_burst).
++ * @low_latency: if set to true, low-latency heuristics are enabled.
++ * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
++ * queue is multiplied.
++ * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
++ * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
++ * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
++ * may be reactivated for a queue (in jiffies).
++ * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
++ * after which weight-raising may be
++ * reactivated for an already busy queue
++ * (in jiffies).
++ * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
++ * in sectors per second.
++ * @RT_prod: cached value of the product R*T used for computing the maximum
++ * duration of the weight raising automatically.
++ * @device_speed: device-speed class for the low-latency heuristic.
++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ *
++ * All the fields are protected by the @queue lock.
++ */
++struct bfq_data {
++ struct request_queue *queue;
++
++ struct bfq_group *root_group;
++ struct rb_root rq_pos_tree;
++
++#ifdef CONFIG_CGROUP_BFQIO
++ int active_numerous_groups;
++#endif
++
++ struct rb_root queue_weights_tree;
++ struct rb_root group_weights_tree;
++
++ int busy_queues;
++ int busy_in_flight_queues;
++ int const_seeky_busy_in_flight_queues;
++ int wr_busy_queues;
++ int queued;
++ int rq_in_driver;
++ int sync_flight;
++
++ int max_rq_in_driver;
++ int hw_tag_samples;
++ int hw_tag;
++
++ int budgets_assigned;
++
++ struct timer_list idle_slice_timer;
++ struct work_struct unplug_work;
++
++ struct bfq_queue *in_service_queue;
++ struct bfq_io_cq *in_service_bic;
++
++ sector_t last_position;
++
++ ktime_t last_budget_start;
++ ktime_t last_idling_start;
++ int peak_rate_samples;
++ u64 peak_rate;
++ unsigned long bfq_max_budget;
++
++ struct hlist_head group_list;
++ struct list_head active_list;
++ struct list_head idle_list;
++
++ unsigned int bfq_fifo_expire[2];
++ unsigned int bfq_back_penalty;
++ unsigned int bfq_back_max;
++ unsigned int bfq_slice_idle;
++ u64 bfq_class_idle_last_service;
++
++ unsigned int bfq_user_max_budget;
++ unsigned int bfq_max_budget_async_rq;
++ unsigned int bfq_timeout[2];
++
++ unsigned int bfq_coop_thresh;
++ unsigned int bfq_failed_cooperations;
++ unsigned int bfq_requests_within_timer;
++
++ unsigned long last_ins_in_burst;
++ unsigned long bfq_burst_interval;
++ int burst_size;
++ unsigned long bfq_large_burst_thresh;
++ bool large_burst;
++ struct hlist_head burst_list;
++
++ bool low_latency;
++
++ /* parameters of the low_latency heuristics */
++ unsigned int bfq_wr_coeff;
++ unsigned int bfq_wr_max_time;
++ unsigned int bfq_wr_rt_max_time;
++ unsigned int bfq_wr_min_idle_time;
++ unsigned long bfq_wr_min_inter_arr_async;
++ unsigned int bfq_wr_max_softrt_rate;
++ u64 RT_prod;
++ enum bfq_device_speed device_speed;
++
++ struct bfq_queue oom_bfqq;
++};
++
++enum bfqq_state_flags {
++ BFQ_BFQQ_FLAG_busy = 0, /* has requests or is in service */
++ BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */
++ BFQ_BFQQ_FLAG_must_alloc, /* must be allowed rq alloc */
++ BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */
++ BFQ_BFQQ_FLAG_idle_window, /* slice idling enabled */
++ BFQ_BFQQ_FLAG_sync, /* synchronous queue */
++ BFQ_BFQQ_FLAG_budget_new, /* no completion with this budget */
++ BFQ_BFQQ_FLAG_IO_bound, /*
++ * bfqq has timed out at least once
++ * having consumed at most 2/10 of
++ * its budget
++ */
++ BFQ_BFQQ_FLAG_in_large_burst, /*
++ * bfqq activated in a large burst,
++ * see comments to bfq_handle_burst.
++ */
++ BFQ_BFQQ_FLAG_constantly_seeky, /*
++ * bfqq has proved to be slow and
++ * seeky until budget timeout
++ */
++ BFQ_BFQQ_FLAG_softrt_update, /*
++ * may need softrt-next-start
++ * update
++ */
++ BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
++ BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be split */
++};
++
++#define BFQ_BFQQ_FNS(name) \
++static inline void bfq_mark_bfqq_##name(struct bfq_queue *bfqq) \
++{ \
++ (bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name); \
++} \
++static inline void bfq_clear_bfqq_##name(struct bfq_queue *bfqq) \
++{ \
++ (bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name); \
++} \
++static inline int bfq_bfqq_##name(const struct bfq_queue *bfqq) \
++{ \
++ return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \
++}
++
++BFQ_BFQQ_FNS(busy);
++BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(must_alloc);
++BFQ_BFQQ_FNS(fifo_expire);
++BFQ_BFQQ_FNS(idle_window);
++BFQ_BFQQ_FNS(sync);
++BFQ_BFQQ_FNS(budget_new);
++BFQ_BFQQ_FNS(IO_bound);
++BFQ_BFQQ_FNS(in_large_burst);
++BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(coop);
++BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(softrt_update);
++#undef BFQ_BFQQ_FNS
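++
++/*
++ * For reference, BFQ_BFQQ_FNS(busy) above expands to (whitespace
++ * rearranged):
++ *
++ * static inline void bfq_mark_bfqq_busy(struct bfq_queue *bfqq)
++ * {
++ *     bfqq->flags |= (1 << BFQ_BFQQ_FLAG_busy);
++ * }
++ * static inline void bfq_clear_bfqq_busy(struct bfq_queue *bfqq)
++ * {
++ *     bfqq->flags &= ~(1 << BFQ_BFQQ_FLAG_busy);
++ * }
++ * static inline int bfq_bfqq_busy(const struct bfq_queue *bfqq)
++ * {
++ *     return (bfqq->flags & (1 << BFQ_BFQQ_FLAG_busy)) != 0;
++ * }
++ */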
++
++/* Logging facilities. */
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++ blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++
++#define bfq_log(bfqd, fmt, args...) \
++ blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
++
++/* Expiration reasons. */
++enum bfqq_expiration {
++ BFQ_BFQQ_TOO_IDLE = 0, /*
++ * queue has been idling for
++ * too long
++ */
++ BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */
++ BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */
++ BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */
++};
++
++#ifdef CONFIG_CGROUP_BFQIO
++/**
++ * struct bfq_group - per (device, cgroup) data structure.
++ * @entity: schedulable entity to insert into the parent group sched_data.
++ * @sched_data: own sched_data, to contain child entities (they may be
++ * both bfq_queues and bfq_groups).
++ * @group_node: node to be inserted into the bfqio_cgroup->group_data
++ * list of the containing cgroup's bfqio_cgroup.
++ * @bfqd_node: node to be inserted into the @bfqd->group_list list
++ * of the groups active on the same device; used for cleanup.
++ * @bfqd: the bfq_data for the device this group acts upon.
++ * @async_bfqq: array of async queues for all the tasks belonging to
++ * the group, one queue per ioprio value per ioprio_class,
++ * except for the idle class that has only one queue.
++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
++ * to avoid too many special cases during group creation/
++ * migration.
++ * @active_entities: number of active entities belonging to the group;
++ * unused for the root group. Used to know whether there
++ * are groups with more than one active @bfq_entity
++ * (see the comments to the function
++ * bfq_bfqq_must_not_expire()).
++ *
++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
++ * there is a set of bfq_groups, each one collecting the lower-level
++ * entities belonging to the group that are acting on the same device.
++ *
++ * Locking works as follows:
++ * o @group_node is protected by the bfqio_cgroup lock, and is accessed
++ * via RCU from its readers.
++ * o @bfqd is protected by the queue lock, RCU is used to access it
++ * from the readers.
++ * o All the other fields are protected by the @bfqd queue lock.
++ */
++struct bfq_group {
++ struct bfq_entity entity;
++ struct bfq_sched_data sched_data;
++
++ struct hlist_node group_node;
++ struct hlist_node bfqd_node;
++
++ void *bfqd;
++
++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++ struct bfq_queue *async_idle_bfqq;
++
++ struct bfq_entity *my_entity;
++
++ int active_entities;
++};
++
++/**
++ * struct bfqio_cgroup - bfq cgroup data structure.
++ * @css: subsystem state for bfq in the containing cgroup.
++ * @online: flag marked when the subsystem is inserted.
++ * @weight: cgroup weight.
++ * @ioprio: cgroup ioprio.
++ * @ioprio_class: cgroup ioprio_class.
++ * @lock: spinlock that protects @ioprio, @ioprio_class and @group_data.
++ * @group_data: list containing the bfq_group belonging to this cgroup.
++ *
++ * @group_data is accessed using RCU, with @lock protecting the updates,
++ * @ioprio and @ioprio_class are protected by @lock.
++ */
++struct bfqio_cgroup {
++ struct cgroup_subsys_state css;
++ bool online;
++
++ unsigned short weight, ioprio, ioprio_class;
++
++ spinlock_t lock;
++ struct hlist_head group_data;
++};
++#else
++struct bfq_group {
++ struct bfq_sched_data sched_data;
++
++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++ struct bfq_queue *async_idle_bfqq;
++};
++#endif
++
++static inline struct bfq_service_tree *
++bfq_entity_service_tree(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sched_data = entity->sched_data;
++ unsigned int idx = entity->ioprio_class - 1;
++
++ BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
++ BUG_ON(sched_data == NULL);
++
++ return sched_data->service_tree + idx;
++}
++
++static inline struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
++ bool is_sync)
++{
++ return bic->bfqq[is_sync];
++}
++
++static inline void bic_set_bfqq(struct bfq_io_cq *bic,
++ struct bfq_queue *bfqq, bool is_sync)
++{
++ bic->bfqq[is_sync] = bfqq;
++}
++
++static inline struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
++{
++ return bic->icq.q->elevator->elevator_data;
++}
++
++/**
++ * bfq_get_bfqd_locked - get a lock on a bfqd using an RCU-protected pointer.
++ * @ptr: a pointer to a bfqd.
++ * @flags: storage for the flags to be saved.
++ *
++ * This function allows bfqg->bfqd to be protected by the
++ * queue lock of the bfqd they reference; the pointer is dereferenced
++ * under RCU, so the storage for bfqd is assured to be safe as long
++ * as the RCU read side critical section does not end. After the
++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
++ * sure that no other writer accessed it. If we raced with a writer,
++ * the function returns NULL, with the queue unlocked, otherwise it
++ * returns the dereferenced pointer, with the queue locked.
++ */
++static inline struct bfq_data *bfq_get_bfqd_locked(void **ptr,
++ unsigned long *flags)
++{
++ struct bfq_data *bfqd;
++
++ rcu_read_lock();
++ bfqd = rcu_dereference(*(struct bfq_data **)ptr);
++
++ if (bfqd != NULL) {
++ spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
++ if (*ptr == bfqd)
++ goto out;
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++ }
++
++ bfqd = NULL;
++out:
++ rcu_read_unlock();
++ return bfqd;
++}
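++
++/*
++ * Hypothetical usage sketch (not part of the scheduler): callers are
++ * expected to pair this helper with bfq_put_bfqd_unlock() below, e.g.:
++ *
++ * unsigned long flags;
++ * struct bfq_data *bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
++ *
++ * if (bfqd != NULL) {
++ *     ... work under bfqd->queue->queue_lock ...
++ *     bfq_put_bfqd_unlock(bfqd, &flags);
++ * }
++ */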
++
++static inline void bfq_put_bfqd_unlock(struct bfq_data *bfqd,
++ unsigned long *flags)
++{
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++}
++
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic);
++static void bfq_put_queue(struct bfq_queue *bfqq);
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++ struct bfq_group *bfqg, int is_sync,
++ struct bfq_io_cq *bic, gfp_t gfp_mask);
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++ struct bfq_group *bfqg);
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
++
++#endif /* _BFQ_H */
+--
+1.9.1
+
diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.3.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.3.patch
new file mode 100644
index 0000000..305a5b0
--- /dev/null
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r8-for-4.3.patch
@@ -0,0 +1,1220 @@
+From 44efc3f611c09e049fe840e640c2bd2ccfde2148 Mon Sep 17 00:00:00 2001
+From: Mauro Andreolini <mauro.andreolini@unimore.it>
+Date: Fri, 5 Jun 2015 17:45:40 +0200
+Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r8 for
+ 4.3.0
+
+A set of processes may happen to perform interleaved reads, i.e., requests
+whose union would give rise to a sequential read pattern. There are two
+typical cases: in the first case, processes read fixed-size chunks of
+data at a fixed distance from each other, while in the second case processes
+may read variable-size chunks at variable distances. The latter case occurs
+for example with QEMU, which splits the I/O generated by the guest into
+multiple chunks, and lets these chunks be served by a pool of cooperating
+processes, iteratively assigning the next chunk of I/O to the first
+available process. CFQ uses actual queue merging for the first type of
+processes, whereas it uses preemption to get a sequential read pattern out
+of the read requests performed by the second type of processes. In the end
+it uses two different mechanisms to achieve the same goal: boosting the
+throughput with interleaved I/O.
+
+This patch introduces Early Queue Merge (EQM), a unified mechanism to get a
+sequential read pattern with both types of processes. The main idea is
+checking newly arrived requests against the next request of the active queue
+both in case of actual request insert and in case of request merge. By doing
+so, both types of processes can be handled by just merging their queues.
+EQM is then simpler and more compact than the pair of mechanisms used in
+CFQ.
+
+Finally, EQM also preserves the typical low-latency properties of BFQ, by
+properly restoring the weight-raising state of a queue when it gets back to
+a non-merged state.
+
+Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+---
+ block/bfq-iosched.c | 750 +++++++++++++++++++++++++++++++++++++---------------
+ block/bfq-sched.c | 28 --
+ block/bfq.h | 54 +++-
+ 3 files changed, 580 insertions(+), 252 deletions(-)
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 773b2ee..71b51c1 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -573,6 +573,57 @@ static inline unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ return dur;
+ }
+
++static inline unsigned
++bfq_bfqq_cooperations(struct bfq_queue *bfqq)
++{
++ return bfqq->bic ? bfqq->bic->cooperations : 0;
++}
++
++static inline void
++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++ if (bic->saved_idle_window)
++ bfq_mark_bfqq_idle_window(bfqq);
++ else
++ bfq_clear_bfqq_idle_window(bfqq);
++ if (bic->saved_IO_bound)
++ bfq_mark_bfqq_IO_bound(bfqq);
++ else
++ bfq_clear_bfqq_IO_bound(bfqq);
++ /* Assuming that the flag in_large_burst is already correctly set */
++ if (bic->wr_time_left && bfqq->bfqd->low_latency &&
++ !bfq_bfqq_in_large_burst(bfqq) &&
++ bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
++ /*
++ * Start a weight raising period with the duration given by
++ * the raising_time_left snapshot.
++ */
++ if (bfq_bfqq_busy(bfqq))
++ bfqq->bfqd->wr_busy_queues++;
++ bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bic->wr_time_left;
++ bfqq->last_wr_start_finish = jiffies;
++ bfqq->entity.ioprio_changed = 1;
++ }
++ /*
++ * Clear wr_time_left to prevent bfq_bfqq_save_state() from
++ * getting confused about the queue's need of a weight-raising
++ * period.
++ */
++ bic->wr_time_left = 0;
++}
++
++/* Must be called with the queue_lock held. */
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++ int process_refs, io_refs;
++
++ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++ process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++ BUG_ON(process_refs < 0);
++ return process_refs;
++}
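++
++/*
++ * Example (illustrative only): a queue whose atomic ref is 5, with one
++ * allocated READ and one allocated WRITE request (io_refs == 2) and
++ * on_st == 1, has 5 - 2 - 1 == 2 process references, i.e., two
++ * processes still reference it through their bfq_io_cq.
++ */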
++
+ /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
+ static inline void bfq_reset_burst_list(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq)
+@@ -817,7 +868,7 @@ static void bfq_add_request(struct request *rq)
+ bfq_rq_pos_tree_add(bfqd, bfqq);
+
+ if (!bfq_bfqq_busy(bfqq)) {
+- bool soft_rt,
++ bool soft_rt, coop_or_in_burst,
+ idle_for_long_time = time_is_before_jiffies(
+ bfqq->budget_timeout +
+ bfqd->bfq_wr_min_idle_time);
+@@ -841,11 +892,12 @@ static void bfq_add_request(struct request *rq)
+ bfqd->last_ins_in_burst = jiffies;
+ }
+
++ coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
++ bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+ soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+- !bfq_bfqq_in_large_burst(bfqq) &&
++ !coop_or_in_burst &&
+ time_is_before_jiffies(bfqq->soft_rt_next_start);
+- interactive = !bfq_bfqq_in_large_burst(bfqq) &&
+- idle_for_long_time;
++ interactive = !coop_or_in_burst && idle_for_long_time;
+ entity->budget = max_t(unsigned long, bfqq->max_budget,
+ bfq_serv_to_charge(next_rq, bfqq));
+
+@@ -864,11 +916,20 @@ static void bfq_add_request(struct request *rq)
+ if (!bfqd->low_latency)
+ goto add_bfqq_busy;
+
++ if (bfq_bfqq_just_split(bfqq))
++ goto set_ioprio_changed;
++
+ /*
+- * If the queue is not being boosted and has been idle
+- * for enough time, start a weight-raising period
++ * If the queue:
++ * - is not being boosted,
++ * - has been idle for enough time,
++ * - is not a sync queue or is linked to a bfq_io_cq (it is
++ * shared "for its nature" or it is not shared and its
++ * requests have not been redirected to a shared queue)
++ * start a weight-raising period.
+ */
+- if (old_wr_coeff == 1 && (interactive || soft_rt)) {
++ if (old_wr_coeff == 1 && (interactive || soft_rt) &&
++ (!bfq_bfqq_sync(bfqq) || bfqq->bic != NULL)) {
+ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
+ if (interactive)
+ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+@@ -882,7 +943,7 @@ static void bfq_add_request(struct request *rq)
+ } else if (old_wr_coeff > 1) {
+ if (interactive)
+ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+- else if (bfq_bfqq_in_large_burst(bfqq) ||
++ else if (coop_or_in_burst ||
+ (bfqq->wr_cur_max_time ==
+ bfqd->bfq_wr_rt_max_time &&
+ !soft_rt)) {
+@@ -901,18 +962,18 @@ static void bfq_add_request(struct request *rq)
+ /*
+ *
+ * The remaining weight-raising time is lower
+- * than bfqd->bfq_wr_rt_max_time, which
+- * means that the application is enjoying
+- * weight raising either because deemed soft-
+- * rt in the near past, or because deemed
+- * interactive a long ago. In both cases,
+- * resetting now the current remaining weight-
+- * raising time for the application to the
+- * weight-raising duration for soft rt
+- * applications would not cause any latency
+- * increase for the application (as the new
+- * duration would be higher than the remaining
+- * time).
++ * than bfqd->bfq_wr_rt_max_time, which means
++ * that the application is enjoying weight
++ * raising either because deemed soft-rt in
++ * the near past, or because deemed interactive
++ * long ago.
++ * In both cases, resetting now the current
++ * remaining weight-raising time for the
++ * application to the weight-raising duration
++ * for soft rt applications would not cause any
++ * latency increase for the application (as the
++ * new duration would be higher than the
++ * remaining time).
+ *
+ * In addition, the application is now meeting
+ * the requirements for being deemed soft rt.
+@@ -947,6 +1008,7 @@ static void bfq_add_request(struct request *rq)
+ bfqd->bfq_wr_rt_max_time;
+ }
+ }
++set_ioprio_changed:
+ if (old_wr_coeff != bfqq->wr_coeff)
+ entity->ioprio_changed = 1;
+ add_bfqq_busy:
+@@ -1167,90 +1229,35 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+ spin_unlock_irq(bfqd->queue->queue_lock);
+ }
+
+-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+- struct bio *bio)
++static inline sector_t bfq_io_struct_pos(void *io_struct, bool request)
+ {
+- struct bfq_data *bfqd = q->elevator->elevator_data;
+- struct bfq_io_cq *bic;
+- struct bfq_queue *bfqq;
+-
+- /*
+- * Disallow merge of a sync bio into an async request.
+- */
+- if (bfq_bio_sync(bio) && !rq_is_sync(rq))
+- return 0;
+-
+- /*
+- * Lookup the bfqq that this bio will be queued with. Allow
+- * merge only if rq is queued there.
+- * Queue lock is held here.
+- */
+- bic = bfq_bic_lookup(bfqd, current->io_context);
+- if (bic == NULL)
+- return 0;
+-
+- bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
+- return bfqq == RQ_BFQQ(rq);
+-}
+-
+-static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq)
+-{
+- if (bfqq != NULL) {
+- bfq_mark_bfqq_must_alloc(bfqq);
+- bfq_mark_bfqq_budget_new(bfqq);
+- bfq_clear_bfqq_fifo_expire(bfqq);
+-
+- bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
+-
+- bfq_log_bfqq(bfqd, bfqq,
+- "set_in_service_queue, cur-budget = %lu",
+- bfqq->entity.budget);
+- }
+-
+- bfqd->in_service_queue = bfqq;
+-}
+-
+-/*
+- * Get and set a new queue for service.
+- */
+-static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq)
+-{
+- if (!bfqq)
+- bfqq = bfq_get_next_queue(bfqd);
++ if (request)
++ return blk_rq_pos(io_struct);
+ else
+- bfq_get_next_queue_forced(bfqd, bfqq);
+-
+- __bfq_set_in_service_queue(bfqd, bfqq);
+- return bfqq;
++ return ((struct bio *)io_struct)->bi_iter.bi_sector;
+ }
+
+-static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
+- struct request *rq)
++static inline sector_t bfq_dist_from(sector_t pos1,
++ sector_t pos2)
+ {
+- if (blk_rq_pos(rq) >= bfqd->last_position)
+- return blk_rq_pos(rq) - bfqd->last_position;
++ if (pos1 >= pos2)
++ return pos1 - pos2;
+ else
+- return bfqd->last_position - blk_rq_pos(rq);
++ return pos2 - pos1;
+ }
+
+-/*
+- * Return true if bfqq has no request pending and rq is close enough to
+- * bfqd->last_position, or if rq is closer to bfqd->last_position than
+- * bfqq->next_rq
+- */
+-static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
++static inline int bfq_rq_close_to_sector(void *io_struct, bool request,
++ sector_t sector)
+ {
+- return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
++ return bfq_dist_from(bfq_io_struct_pos(io_struct, request), sector) <=
++ BFQQ_SEEK_THR;
+ }
+
+-static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd, sector_t sector)
+ {
+ struct rb_root *root = &bfqd->rq_pos_tree;
+ struct rb_node *parent, *node;
+ struct bfq_queue *__bfqq;
+- sector_t sector = bfqd->last_position;
+
+ if (RB_EMPTY_ROOT(root))
+ return NULL;
+@@ -1269,7 +1276,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ * next_request position).
+ */
+ __bfqq = rb_entry(parent, struct bfq_queue, pos_node);
+- if (bfq_rq_close(bfqd, __bfqq->next_rq))
++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ return __bfqq;
+
+ if (blk_rq_pos(__bfqq->next_rq) < sector)
+@@ -1280,7 +1287,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ return NULL;
+
+ __bfqq = rb_entry(node, struct bfq_queue, pos_node);
+- if (bfq_rq_close(bfqd, __bfqq->next_rq))
++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
+ return __bfqq;
+
+ return NULL;
+@@ -1289,14 +1296,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
+ /*
+ * bfqd - obvious
+ * cur_bfqq - passed in so that we don't decide that the current queue
+- * is closely cooperating with itself.
+- *
+- * We are assuming that cur_bfqq has dispatched at least one request,
+- * and that bfqd->last_position reflects a position on the disk associated
+- * with the I/O issued by cur_bfqq.
++ * is closely cooperating with itself
++ * sector - used as a reference point to search for a close queue
+ */
+ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+- struct bfq_queue *cur_bfqq)
++ struct bfq_queue *cur_bfqq,
++ sector_t sector)
+ {
+ struct bfq_queue *bfqq;
+
+@@ -1316,7 +1321,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ * working closely on the same area of the disk. In that case,
+ * we can group them together and don't waste time idling.
+ */
+- bfqq = bfqq_close(bfqd);
++ bfqq = bfqq_close(bfqd, sector);
+ if (bfqq == NULL || bfqq == cur_bfqq)
+ return NULL;
+
+@@ -1343,6 +1348,315 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
+ return bfqq;
+ }
+
++static struct bfq_queue *
++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++ int process_refs, new_process_refs;
++ struct bfq_queue *__bfqq;
++
++ /*
++ * If there are no process references on the new_bfqq, then it is
++ * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++ * may have dropped their last reference (not just their last process
++ * reference).
++ */
++ if (!bfqq_process_refs(new_bfqq))
++ return NULL;
++
++ /* Avoid a circular list and skip interim queue merges. */
++ while ((__bfqq = new_bfqq->new_bfqq)) {
++ if (__bfqq == bfqq)
++ return NULL;
++ new_bfqq = __bfqq;
++ }
++
++ process_refs = bfqq_process_refs(bfqq);
++ new_process_refs = bfqq_process_refs(new_bfqq);
++ /*
++ * If the process for the bfqq has gone away, there is no
++ * sense in merging the queues.
++ */
++ if (process_refs == 0 || new_process_refs == 0)
++ return NULL;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++ new_bfqq->pid);
++
++ /*
++ * Merging is just a redirection: the requests of the process
++ * owning one of the two queues are redirected to the other queue.
++ * The latter queue, in its turn, is set as shared if this is the
++ * first time that the requests of some process are redirected to
++ * it.
++ *
++ * We redirect bfqq to new_bfqq and not the opposite, because we
++ * are in the context of the process owning bfqq, hence we have
++ * the io_cq of this process. So we can immediately configure this
++ * io_cq to redirect the requests of the process to new_bfqq.
++ *
++ * NOTE, even if new_bfqq coincides with the in-service queue, the
++ * io_cq of new_bfqq is not available, because, if the in-service
++ * queue is shared, bfqd->in_service_bic may not point to the
++ * io_cq of the in-service queue.
++ * Redirecting the requests of the process owning bfqq to the
++ * currently in-service queue is in any case the best option, as
++ * we feed the in-service queue with new requests close to the
++ * last request served and, by doing so, hopefully increase the
++ * throughput.
++ */
++ bfqq->new_bfqq = new_bfqq;
++ atomic_add(process_refs, &new_bfqq->ref);
++ return new_bfqq;
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service queue
++ * or with a close queue among the scheduled queues.
++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * structure otherwise.
++ *
++ * The OOM queue is not allowed to participate to cooperation: in fact, since
++ * the requests temporarily redirected to the OOM queue could be redirected
++ * again to dedicated queues at any time, the state needed to correctly
++ * handle merging with the OOM queue would be quite complex and expensive
++ * to maintain. Besides, in such a critical condition as an out of memory,
++ * the benefits of queue merging may be little relevant, or even negligible.
++ */
++static struct bfq_queue *
++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ void *io_struct, bool request)
++{
++ struct bfq_queue *in_service_bfqq, *new_bfqq;
++
++ if (bfqq->new_bfqq)
++ return bfqq->new_bfqq;
++
++ if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++ return NULL;
++
++ in_service_bfqq = bfqd->in_service_queue;
++
++ if (in_service_bfqq == NULL || in_service_bfqq == bfqq ||
++ !bfqd->in_service_bic ||
++ unlikely(in_service_bfqq == &bfqd->oom_bfqq))
++ goto check_scheduled;
++
++ if (bfq_class_idle(in_service_bfqq) || bfq_class_idle(bfqq))
++ goto check_scheduled;
++
++ if (bfq_class_rt(in_service_bfqq) != bfq_class_rt(bfqq))
++ goto check_scheduled;
++
++ if (in_service_bfqq->entity.parent != bfqq->entity.parent)
++ goto check_scheduled;
++
++ if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++ bfq_bfqq_sync(in_service_bfqq) && bfq_bfqq_sync(bfqq)) {
++ new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
++ if (new_bfqq != NULL)
++ return new_bfqq; /* Merge with in-service queue */
++ }
++
++ /*
++ * Check whether there is a cooperator among currently scheduled
++ * queues. The only thing we need is that the bio/request is not
++ * NULL, as we need it to establish whether a cooperator exists.
++ */
++check_scheduled:
++ new_bfqq = bfq_close_cooperator(bfqd, bfqq,
++ bfq_io_struct_pos(io_struct, request));
++ if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq))
++ return bfq_setup_merge(bfqq, new_bfqq);
++
++ return NULL;
++}
++
++static inline void
++bfq_bfqq_save_state(struct bfq_queue *bfqq)
++{
++ /*
++ * If bfqq->bic == NULL, the queue is already shared or its requests
++ * have already been redirected to a shared queue; both idle window
++ * and weight raising state have already been saved. Do nothing.
++ */
++ if (bfqq->bic == NULL)
++ return;
++ if (bfqq->bic->wr_time_left)
++ /*
++ * This is the queue of a just-started process, and would
++ * deserve weight raising: we set wr_time_left to the full
++ * weight-raising duration to trigger weight-raising when
++ * and if the queue is split and the first request of the
++ * queue is enqueued.
++ */
++ bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
++ else if (bfqq->wr_coeff > 1) {
++ unsigned long wr_duration =
++ jiffies - bfqq->last_wr_start_finish;
++ /*
++ * It may happen that a queue's weight raising period lasts
++ * longer than its wr_cur_max_time, as weight raising is
++ * handled only when a request is enqueued or dispatched (it
++ * does not use any timer). If the weight raising period is
++ * about to end, don't save it.
++ */
++ if (bfqq->wr_cur_max_time <= wr_duration)
++ bfqq->bic->wr_time_left = 0;
++ else
++ bfqq->bic->wr_time_left =
++ bfqq->wr_cur_max_time - wr_duration;
++ /*
++ * The bfq_queue is becoming shared or the requests of the
++ * process owning the queue are being redirected to a shared
++ * queue. Stop the weight raising period of the queue, as in
++ * both cases it should not be owned by an interactive or
++ * soft real-time application.
++ */
++ bfq_bfqq_end_wr(bfqq);
++ } else
++ bfqq->bic->wr_time_left = 0;
++ bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++ bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++ bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++ bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++ bfqq->bic->cooperations++;
++ bfqq->bic->failed_cooperations = 0;
++}
++
++static inline void
++bfq_get_bic_reference(struct bfq_queue *bfqq)
++{
++ /*
++ * If bfqq->bic has a non-NULL value, the bic to which it belongs
++ * is about to begin using a shared bfq_queue.
++ */
++ if (bfqq->bic)
++ atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
++}
++
++static void
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++ struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++ (long unsigned)new_bfqq->pid);
++ /* Save weight raising and idle window of the merged queues */
++ bfq_bfqq_save_state(bfqq);
++ bfq_bfqq_save_state(new_bfqq);
++ if (bfq_bfqq_IO_bound(bfqq))
++ bfq_mark_bfqq_IO_bound(new_bfqq);
++ bfq_clear_bfqq_IO_bound(bfqq);
++ /*
++ * Grab a reference to the bic, to prevent it from being destroyed
++ * before being possibly touched by a bfq_split_bfqq().
++ */
++ bfq_get_bic_reference(bfqq);
++ bfq_get_bic_reference(new_bfqq);
++ /*
++ * Merge queues (that is, let bic redirect its requests to new_bfqq)
++ */
++ bic_set_bfqq(bic, new_bfqq, 1);
++ bfq_mark_bfqq_coop(new_bfqq);
++ /*
++ * new_bfqq now belongs to at least two bics (it is a shared queue):
++ * set new_bfqq->bic to NULL. bfqq either:
++ * - does not belong to any bic any more, and hence bfqq->bic must
++ * be set to NULL, or
++ * - is a queue whose owning bics have already been redirected to a
++ * different queue, hence the queue is destined to not belong to
++ * any bic soon and bfqq->bic is already NULL (therefore the next
++ * assignment causes no harm).
++ */
++ new_bfqq->bic = NULL;
++ bfqq->bic = NULL;
++ bfq_put_queue(bfqq);
++}
++
++static inline void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
++{
++ struct bfq_io_cq *bic = bfqq->bic;
++ struct bfq_data *bfqd = bfqq->bfqd;
++
++ if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
++ bic->failed_cooperations++;
++ if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
++ bic->cooperations = 0;
++ }
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++ struct bio *bio)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq, *new_bfqq;
++
++ /*
++ * Disallow merge of a sync bio into an async request.
++ */
++ if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++ return 0;
++
++ /*
++ * Lookup the bfqq that this bio will be queued with. Allow
++ * merge only if rq is queued there.
++ * Queue lock is held here.
++ */
++ bic = bfq_bic_lookup(bfqd, current->io_context);
++ if (bic == NULL)
++ return 0;
++
++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ /*
++ * We take advantage of this function to perform an early merge
++ * of the queues of possible cooperating processes.
++ */
++ if (bfqq != NULL) {
++ new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false);
++ if (new_bfqq != NULL) {
++ bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
++ /*
++ * If we get here, the bio will be queued in the
++ * shared queue, i.e., new_bfqq, so use new_bfqq
++ * to decide whether bio and rq can be merged.
++ */
++ bfqq = new_bfqq;
++ } else
++ bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
++
++ return bfqq == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ if (bfqq != NULL) {
++ bfq_mark_bfqq_must_alloc(bfqq);
++ bfq_mark_bfqq_budget_new(bfqq);
++ bfq_clear_bfqq_fifo_expire(bfqq);
++
++ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_in_service_queue, cur-budget = %lu",
++ bfqq->entity.budget);
++ }
++
++ bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
++
++ __bfq_set_in_service_queue(bfqd, bfqq);
++ return bfqq;
++}
++
+ /*
+ * If enough samples have been computed, return the current max budget
+ * stored in bfqd, which is dynamically updated according to the
+@@ -1488,61 +1802,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
+ return rq;
+ }
+
+-/* Must be called with the queue_lock held. */
+-static int bfqq_process_refs(struct bfq_queue *bfqq)
+-{
+- int process_refs, io_refs;
+-
+- io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
+- process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
+- BUG_ON(process_refs < 0);
+- return process_refs;
+-}
+-
+-static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+-{
+- int process_refs, new_process_refs;
+- struct bfq_queue *__bfqq;
+-
+- /*
+- * If there are no process references on the new_bfqq, then it is
+- * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
+- * may have dropped their last reference (not just their last process
+- * reference).
+- */
+- if (!bfqq_process_refs(new_bfqq))
+- return;
+-
+- /* Avoid a circular list and skip interim queue merges. */
+- while ((__bfqq = new_bfqq->new_bfqq)) {
+- if (__bfqq == bfqq)
+- return;
+- new_bfqq = __bfqq;
+- }
+-
+- process_refs = bfqq_process_refs(bfqq);
+- new_process_refs = bfqq_process_refs(new_bfqq);
+- /*
+- * If the process for the bfqq has gone away, there is no
+- * sense in merging the queues.
+- */
+- if (process_refs == 0 || new_process_refs == 0)
+- return;
+-
+- /*
+- * Merge in the direction of the lesser amount of work.
+- */
+- if (new_process_refs >= process_refs) {
+- bfqq->new_bfqq = new_bfqq;
+- atomic_add(process_refs, &new_bfqq->ref);
+- } else {
+- new_bfqq->new_bfqq = bfqq;
+- atomic_add(new_process_refs, &bfqq->ref);
+- }
+- bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+- new_bfqq->pid);
+-}
+-
+ static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
+ {
+ struct bfq_entity *entity = &bfqq->entity;
+@@ -2269,7 +2528,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
+ */
+ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ {
+- struct bfq_queue *bfqq, *new_bfqq = NULL;
++ struct bfq_queue *bfqq;
+ struct request *next_rq;
+ enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
+
+@@ -2279,17 +2538,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+
+ bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
+
+- /*
+- * If another queue has a request waiting within our mean seek
+- * distance, let it run. The expire code will check for close
+- * cooperators and put the close queue at the front of the
+- * service tree. If possible, merge the expiring queue with the
+- * new bfqq.
+- */
+- new_bfqq = bfq_close_cooperator(bfqd, bfqq);
+- if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
+- bfq_setup_merge(bfqq, new_bfqq);
+-
+ if (bfq_may_expire_for_budg_timeout(bfqq) &&
+ !timer_pending(&bfqd->idle_slice_timer) &&
+ !bfq_bfqq_must_idle(bfqq))
+@@ -2328,10 +2576,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ bfq_clear_bfqq_wait_request(bfqq);
+ del_timer(&bfqd->idle_slice_timer);
+ }
+- if (new_bfqq == NULL)
+- goto keep_queue;
+- else
+- goto expire;
++ goto keep_queue;
+ }
+ }
+
+@@ -2340,40 +2585,30 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ * for a new request, or has requests waiting for a completion and
+ * may idle after their completion, then keep it anyway.
+ */
+- if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
+- (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq)))) {
++ if (timer_pending(&bfqd->idle_slice_timer) ||
++ (bfqq->dispatched != 0 && bfq_bfqq_must_not_expire(bfqq))) {
+ bfqq = NULL;
+ goto keep_queue;
+- } else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
+- /*
+- * Expiring the queue because there is a close cooperator,
+- * cancel timer.
+- */
+- bfq_clear_bfqq_wait_request(bfqq);
+- del_timer(&bfqd->idle_slice_timer);
+ }
+
+ reason = BFQ_BFQQ_NO_MORE_REQUESTS;
+ expire:
+ bfq_bfqq_expire(bfqd, bfqq, 0, reason);
+ new_queue:
+- bfqq = bfq_set_in_service_queue(bfqd, new_bfqq);
++ bfqq = bfq_set_in_service_queue(bfqd);
+ bfq_log(bfqd, "select_queue: new queue %d returned",
+ bfqq != NULL ? bfqq->pid : 0);
+ keep_queue:
+ return bfqq;
+ }
+
+-static void bfq_update_wr_data(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq)
++static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+- if (bfqq->wr_coeff > 1) { /* queue is being boosted */
+- struct bfq_entity *entity = &bfqq->entity;
+-
++ struct bfq_entity *entity = &bfqq->entity;
++ if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
+ bfq_log_bfqq(bfqd, bfqq,
+ "raising period dur %u/%u msec, old coeff %u, w %d(%d)",
+- jiffies_to_msecs(jiffies -
+- bfqq->last_wr_start_finish),
++ jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
+ jiffies_to_msecs(bfqq->wr_cur_max_time),
+ bfqq->wr_coeff,
+ bfqq->entity.weight, bfqq->entity.orig_weight);
+@@ -2382,12 +2617,16 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ entity->orig_weight * bfqq->wr_coeff);
+ if (entity->ioprio_changed)
+ bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++
+ /*
+ * If the queue was activated in a burst, or
+ * too much time has elapsed from the beginning
+- * of this weight-raising, then end weight raising.
++ * of this weight-raising period, or the queue has
++ * exceeded the acceptable number of cooperations,
++ * then end weight raising.
+ */
+ if (bfq_bfqq_in_large_burst(bfqq) ||
++ bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+ time_is_before_jiffies(bfqq->last_wr_start_finish +
+ bfqq->wr_cur_max_time)) {
+ bfqq->last_wr_start_finish = jiffies;
+@@ -2396,11 +2635,13 @@ static void bfq_update_wr_data(struct bfq_data *bfqd,
+ bfqq->last_wr_start_finish,
+ jiffies_to_msecs(bfqq->wr_cur_max_time));
+ bfq_bfqq_end_wr(bfqq);
+- __bfq_entity_update_weight_prio(
+- bfq_entity_service_tree(entity),
+- entity);
+ }
+ }
++ /* Update weight both if it must be raised and if it must be lowered */
++ if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1))
++ __bfq_entity_update_weight_prio(
++ bfq_entity_service_tree(entity),
++ entity);
+ }
+
+ /*
+@@ -2647,6 +2888,25 @@ static inline void bfq_init_icq(struct io_cq *icq)
+ struct bfq_io_cq *bic = icq_to_bic(icq);
+
+ bic->ttime.last_end_request = jiffies;
++ /*
++ * A newly created bic indicates that the process has just
++ * started doing I/O, and is probably mapping into memory its
++ * executable and libraries: it definitely needs weight raising.
++ * There is however the possibility that the process performs,
++ * for a while, I/O close to some other process. EQM intercepts
++ * this behavior and may merge the queue corresponding to the
++ * process with some other queue, BEFORE the weight of the queue
++ * is raised. Merged queues are not weight-raised (they are assumed
++ * to belong to processes that benefit only from high throughput).
++ * If the merge is basically the consequence of an accident, then
++ * the queue will be split soon and will get back its old weight.
++ * It is then important to write down somewhere that this queue
++ * does need weight raising, even if it did not make it to get its
++ * weight raised before being merged. To this purpose, we overload
++ * the field raising_time_left and assign 1 to it, to mark the queue
++ * as needing weight raising.
++ */
++ bic->wr_time_left = 1;
+ }
+
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2660,6 +2920,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+ }
+
+ if (bic->bfqq[BLK_RW_SYNC]) {
++ /*
++ * If the bic is using a shared queue, put the reference
++ * taken on the io_context when the bic started using a
++ * shared bfq_queue.
++ */
++ if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++ put_io_context(icq->ioc);
+ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+ bic->bfqq[BLK_RW_SYNC] = NULL;
+ }
+@@ -2952,6 +3219,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
+ return;
+
++ /* Idle window just restored, statistics are meaningless. */
++ if (bfq_bfqq_just_split(bfqq))
++ return;
++
+ enable_idle = bfq_bfqq_idle_window(bfqq);
+
+ if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
+@@ -2999,6 +3270,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ !BFQQ_SEEKY(bfqq))
+ bfq_update_idle_window(bfqd, bfqq, bic);
++ bfq_clear_bfqq_just_split(bfqq);
+
+ bfq_log_bfqq(bfqd, bfqq,
+ "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+@@ -3059,12 +3331,47 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+- struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
+
+ assert_spin_locked(bfqd->queue->queue_lock);
+
++ /*
++ * An unplug may trigger a requeue of a request from the device
++ * driver: make sure we are in process context while trying to
++ * merge two bfq_queues.
++ */
++ if (!in_interrupt()) {
++ new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
++ if (new_bfqq != NULL) {
++ if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
++ new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
++ /*
++ * Release the request's reference to the old bfqq
++ * and make sure one is taken to the shared queue.
++ */
++ new_bfqq->allocated[rq_data_dir(rq)]++;
++ bfqq->allocated[rq_data_dir(rq)]--;
++ atomic_inc(&new_bfqq->ref);
++ bfq_put_queue(bfqq);
++ if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
++ bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
++ bfqq, new_bfqq);
++ rq->elv.priv[1] = new_bfqq;
++ bfqq = new_bfqq;
++ } else
++ bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
++
+ bfq_add_request(rq);
+
++ /*
++ * Here a newly-created bfq_queue has already started a weight-raising
++ * period: clear raising_time_left to prevent bfq_bfqq_save_state()
++ * from assigning it a full weight-raising period. See the detailed
++ * comments about this field in bfq_init_icq().
++ */
++ if (bfqq->bic != NULL)
++ bfqq->bic->wr_time_left = 0;
+ rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ list_add_tail(&rq->queuelist, &bfqq->fifo);
+
+@@ -3226,18 +3533,6 @@ static void bfq_put_request(struct request *rq)
+ }
+ }
+
+-static struct bfq_queue *
+-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+- struct bfq_queue *bfqq)
+-{
+- bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
+- (long unsigned)bfqq->new_bfqq->pid);
+- bic_set_bfqq(bic, bfqq->new_bfqq, 1);
+- bfq_mark_bfqq_coop(bfqq->new_bfqq);
+- bfq_put_queue(bfqq);
+- return bic_to_bfqq(bic, 1);
+-}
+-
+ /*
+ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
+ * was the last process referring to said bfqq.
+@@ -3246,6 +3541,9 @@ static struct bfq_queue *
+ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ {
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++
++ put_io_context(bic->icq.ioc);
++
+ if (bfqq_process_refs(bfqq) == 1) {
+ bfqq->pid = current->pid;
+ bfq_clear_bfqq_coop(bfqq);
+@@ -3274,6 +3572,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ struct bfq_queue *bfqq;
+ struct bfq_group *bfqg;
+ unsigned long flags;
++ bool split = false;
+
+ might_sleep_if(gfp_mask & __GFP_WAIT);
+
+@@ -3291,25 +3590,26 @@ new_queue:
+ if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
+ bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
+ bic_set_bfqq(bic, bfqq, is_sync);
++ if (split && is_sync) {
++ if ((bic->was_in_burst_list && bfqd->large_burst) ||
++ bic->saved_in_large_burst)
++ bfq_mark_bfqq_in_large_burst(bfqq);
++ else {
++ bfq_clear_bfqq_in_large_burst(bfqq);
++ if (bic->was_in_burst_list)
++ hlist_add_head(&bfqq->burst_list_node,
++ &bfqd->burst_list);
++ }
++ }
+ } else {
+- /*
+- * If the queue was seeky for too long, break it apart.
+- */
++ /* If the queue was seeky for too long, break it apart. */
+ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
+ bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
+ bfqq = bfq_split_bfqq(bic, bfqq);
++ split = true;
+ if (!bfqq)
+ goto new_queue;
+ }
+-
+- /*
+- * Check to see if this queue is scheduled to merge with
+- * another closely cooperating queue. The merging of queues
+- * happens here as it must be done in process context.
+- * The reference on new_bfqq was taken in merge_bfqqs.
+- */
+- if (bfqq->new_bfqq != NULL)
+- bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
+ }
+
+ bfqq->allocated[rw]++;
+@@ -3320,6 +3620,26 @@ new_queue:
+ rq->elv.priv[0] = bic;
+ rq->elv.priv[1] = bfqq;
+
++ /*
++ * If a bfq_queue has only one process reference, it is owned
++ * by only one bfq_io_cq: we can set the bic field of the
++ * bfq_queue to the address of that structure. Also, if the
++ * queue has just been split, mark a flag so that the
++ * information is available to the other scheduler hooks.
++ */
++ if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++ bfqq->bic = bic;
++ if (split) {
++ bfq_mark_bfqq_just_split(bfqq);
++ /*
++ * If the queue has just been split from a shared
++ * queue, restore the idle window and the possible
++ * weight raising period.
++ */
++ bfq_bfqq_resume_state(bfqq, bic);
++ }
++ }
++
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ return 0;
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+index c343099..d0890c6 100644
+--- a/block/bfq-sched.c
++++ b/block/bfq-sched.c
+@@ -1085,34 +1085,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ return bfqq;
+ }
+
+-/*
+- * Forced extraction of the given queue.
+- */
+-static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq)
+-{
+- struct bfq_entity *entity;
+- struct bfq_sched_data *sd;
+-
+- BUG_ON(bfqd->in_service_queue != NULL);
+-
+- entity = &bfqq->entity;
+- /*
+- * Bubble up extraction/update from the leaf to the root.
+- */
+- for_each_entity(entity) {
+- sd = entity->sched_data;
+- bfq_update_budget(entity);
+- bfq_update_vtime(bfq_entity_service_tree(entity));
+- bfq_active_extract(bfq_entity_service_tree(entity), entity);
+- sd->in_service_entity = entity;
+- sd->next_in_service = NULL;
+- entity->service = 0;
+- }
+-
+- return;
+-}
+-
+ static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ {
+ if (bfqd->in_service_bic != NULL) {
+diff --git a/block/bfq.h b/block/bfq.h
+index e350b5f..93d3f6e 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -218,18 +218,21 @@ struct bfq_group;
+ * idle @bfq_queue with no outstanding requests, then
+ * the task associated with the queue it is deemed as
+ * soft real-time (see the comments to the function
+- * bfq_bfqq_softrt_next_start()).
++ * bfq_bfqq_softrt_next_start())
+ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
+ * idle to backlogged
+ * @service_from_backlogged: cumulative service received from the @bfq_queue
+ * since the last transition from idle to
+ * backlogged
++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
++ * queue is shared
+ *
+- * A bfq_queue is a leaf request queue; it can be associated with an io_context
+- * or more, if it is async or shared between cooperating processes. @cgroup
+- * holds a reference to the cgroup, to be sure that it does not disappear while
+- * a bfqq still references it (mostly to avoid races between request issuing and
+- * task migration followed by cgroup destruction).
++ * A bfq_queue is a leaf request queue; it can be associated with an
++ * io_context or more, if it is async or shared between cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while a bfqq still references it (mostly to avoid
++ * races between request issuing and task migration followed by cgroup
++ * destruction).
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+ struct bfq_queue {
+@@ -269,6 +272,7 @@ struct bfq_queue {
+ unsigned int requests_within_timer;
+
+ pid_t pid;
++ struct bfq_io_cq *bic;
+
+ /* weight-raising fields */
+ unsigned long wr_cur_max_time;
+@@ -298,12 +302,42 @@ struct bfq_ttime {
+ * @icq: associated io_cq structure
+ * @bfqq: array of two process queues, the sync and the async
+ * @ttime: associated @bfq_ttime struct
++ * @wr_time_left: snapshot of the time left before weight raising ends
++ * for the sync queue associated to this process; this
++ * snapshot is taken to remember this value while the weight
++ * raising is suspended because the queue is merged with a
++ * shared queue, and is used to set @raising_cur_max_time
++ * when the queue is split from the shared queue and its
++ * weight is raised again
++ * @saved_idle_window: same purpose as the previous field for the idle
++ * window
++ * @saved_IO_bound: same purpose as the previous two fields for the I/O
++ * bound classification of a queue
++ * @saved_in_large_burst: same purpose as the previous fields for the
++ * value of the field keeping the queue's belonging
++ * to a large burst
++ * @was_in_burst_list: true if the queue belonged to a burst list
++ * before its merge with another cooperating queue
++ * @cooperations: counter of consecutive successful queue merges underwent
++ * by any of the process' @bfq_queues
++ * @failed_cooperations: counter of consecutive failed queue merges of any
++ * of the process' @bfq_queues
+ */
+ struct bfq_io_cq {
+ struct io_cq icq; /* must be the first member */
+ struct bfq_queue *bfqq[2];
+ struct bfq_ttime ttime;
+ int ioprio;
++
++ unsigned int wr_time_left;
++ bool saved_idle_window;
++ bool saved_IO_bound;
++
++ bool saved_in_large_burst;
++ bool was_in_burst_list;
++
++ unsigned int cooperations;
++ unsigned int failed_cooperations;
+ };
+
+ enum bfq_device_speed {
+@@ -536,7 +570,7 @@ enum bfqq_state_flags {
+ BFQ_BFQQ_FLAG_idle_window, /* slice idling enabled */
+ BFQ_BFQQ_FLAG_sync, /* synchronous queue */
+ BFQ_BFQQ_FLAG_budget_new, /* no completion with this budget */
+- BFQ_BFQQ_FLAG_IO_bound, /*
++ BFQ_BFQQ_FLAG_IO_bound, /*
+ * bfqq has timed-out at least once
+ * having consumed at most 2/10 of
+ * its budget
+@@ -549,12 +583,13 @@ enum bfqq_state_flags {
+ * bfqq has proved to be slow and
+ * seeky until budget timeout
+ */
+- BFQ_BFQQ_FLAG_softrt_update, /*
++ BFQ_BFQQ_FLAG_softrt_update, /*
+ * may need softrt-next-start
+ * update
+ */
+ BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
+- BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be splitted */
++ BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be split */
++ BFQ_BFQQ_FLAG_just_split, /* queue has just been split */
+ };
+
+ #define BFQ_BFQQ_FNS(name) \
+@@ -583,6 +618,7 @@ BFQ_BFQQ_FNS(in_large_burst);
+ BFQ_BFQQ_FNS(constantly_seeky);
+ BFQ_BFQQ_FNS(coop);
+ BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+
+--
+1.9.1
+
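The heart of the EQM (Early Queue Merge) hunks above is a plain sector-distance test, generalized through bfq_io_struct_pos() so that cooperator detection can start either from a bio (in the allow_merge hook) or from a request (at insert time). A minimal userspace sketch of that test follows; the stub struct fields, the BFQQ_SEEK_THR value and the main() driver are illustrative stand-ins, not the kernel's definitions.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

/* Stand-in for the kernel's seek threshold (illustrative value only). */
#define BFQQ_SEEK_THR 1024ULL

/* Simplified stand-ins for the struct request / struct bio positions. */
struct request { sector_t pos; };
struct bio { sector_t bi_sector; };

/* Mirrors bfq_io_struct_pos(): dispatch on whether the I/O unit
 * handed to the scheduler hook is a request or a bio. */
static sector_t io_struct_pos(void *io_struct, bool request)
{
	if (request)
		return ((struct request *)io_struct)->pos;
	return ((struct bio *)io_struct)->bi_sector;
}

/* Mirrors bfq_dist_from(): absolute distance between two sectors. */
static sector_t dist_from(sector_t pos1, sector_t pos2)
{
	return pos1 >= pos2 ? pos1 - pos2 : pos2 - pos1;
}

/* Mirrors bfq_rq_close_to_sector(): "close" means within the seek
 * threshold of a reference sector such as bfqd->last_position. */
static bool close_to_sector(void *io_struct, bool request, sector_t sector)
{
	return dist_from(io_struct_pos(io_struct, request), sector) <=
		BFQQ_SEEK_THR;
}

int main(void)
{
	struct bio b = { .bi_sector = 2048 };
	struct request r = { .pos = 5000 };
	sector_t last_position = 2500;

	printf("bio close: %d\n", close_to_sector(&b, false, last_position));
	printf("request close: %d\n", close_to_sector(&r, true, last_position));
	return 0;
}

When both the in-service queue and a candidate queue issue I/O within this window, bfq_setup_cooperator() treats their owners as cooperating processes and schedules a merge through bfq_setup_merge().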
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2015-12-10 20:16 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2015-12-10 20:16 UTC (permalink / raw
To: gentoo-commits
commit: ebb5271c0befa2aab1a56db3201c5d7832dec8f9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 10 20:16:49 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 10 20:16:49 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ebb5271c
Linux patch 4.3.1
0000_README | 2 +
1000_linux-4.3.1.patch | 4386 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4388 insertions(+)
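One of the fixes bundled below, in arch/arm/vdso/vdsomunge.c, corrects the host-side 32-bit byte-swap macro: the top byte was shifted further left instead of right, so 32-bit arithmetic discarded it. A standalone sketch of the before/after behaviour (the function names here are invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Broken variant, as before the fix: the top byte is shifted out of
 * the 32-bit result instead of down into the low byte. */
static uint32_t swab32_broken(uint32_t x)
{
	return ((x & 0x000000ff) << 24) |
	       ((x & 0x0000ff00) << 8) |
	       ((x & 0x00ff0000) >> 8) |
	       ((x & 0xff000000) << 24); /* bug: must shift right */
}

/* Fixed variant, as after the patch: all four bytes are reversed. */
static uint32_t swab32_fixed(uint32_t x)
{
	return ((x & 0x000000ff) << 24) |
	       ((x & 0x0000ff00) << 8) |
	       ((x & 0x00ff0000) >> 8) |
	       ((x & 0xff000000) >> 24);
}

int main(void)
{
	uint32_t v = 0x11223344;

	printf("broken: 0x%08x\n", swab32_broken(v)); /* 0x44332200 */
	printf("fixed: 0x%08x\n", swab32_fixed(v)); /* 0x44332211 */
	return 0;
}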
diff --git a/0000_README b/0000_README
index 4c2a487..eb12e02 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,8 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-4.3.1.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-4.3.1.patch b/1000_linux-4.3.1.patch
new file mode 100644
index 0000000..a9daca9
--- /dev/null
+++ b/1000_linux-4.3.1.patch
@@ -0,0 +1,4386 @@
+diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
+index 0815eac5b185..e12f3448846a 100644
+--- a/Documentation/devicetree/bindings/usb/dwc3.txt
++++ b/Documentation/devicetree/bindings/usb/dwc3.txt
+@@ -35,6 +35,8 @@ Optional properties:
+ LTSSM during USB3 Compliance mode.
+ - snps,dis_u3_susphy_quirk: when set core will disable USB3 suspend phy.
+ - snps,dis_u2_susphy_quirk: when set core will disable USB2 suspend phy.
++ - snps,dis_enblslpm_quirk: when set clears the enblslpm in GUSB2PHYCFG,
++ disabling the suspend signal to the PHY.
+ - snps,is-utmi-l1-suspend: true when DWC3 asserts output signal
+ utmi_l1_suspend_n, false when asserts utmi_sleep_n
+ - snps,hird-threshold: HIRD threshold
+diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
+index d411ca63c8b6..3a9d65c912e7 100644
+--- a/Documentation/filesystems/proc.txt
++++ b/Documentation/filesystems/proc.txt
+@@ -140,7 +140,8 @@ Table 1-1: Process specific entries in /proc
+ stat Process status
+ statm Process memory status information
+ status Process status in human readable form
+- wchan If CONFIG_KALLSYMS is set, a pre-decoded wchan
++ wchan Present with CONFIG_KALLSYMS=y: it shows the kernel function
++ symbol the task is blocked in - or "0" if not blocked.
+ pagemap Page table
+ stack Report full stack trace, enable via CONFIG_STACKTRACE
+ smaps a extension based on maps, showing the memory consumption of
+@@ -310,7 +311,7 @@ Table 1-4: Contents of the stat files (as of 2.6.30-rc7)
+ blocked bitmap of blocked signals
+ sigign bitmap of ignored signals
+ sigcatch bitmap of caught signals
+- wchan address where process went to sleep
++ 0 (place holder, used to be the wchan address, use /proc/PID/wchan instead)
+ 0 (place holder)
+ 0 (place holder)
+ exit_signal signal to send to parent thread on exit
+diff --git a/Makefile b/Makefile
+index d5b37391195f..266eeacc1490 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/arch/arm/boot/dts/exynos3250-monk.dts b/arch/arm/boot/dts/exynos3250-monk.dts
+index 540a0adf2be6..35b39d2255d3 100644
+--- a/arch/arm/boot/dts/exynos3250-monk.dts
++++ b/arch/arm/boot/dts/exynos3250-monk.dts
+@@ -161,6 +161,7 @@
+ };
+
+ &exynos_usbphy {
++ vbus-supply = <&safeout_reg>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
+index 41a5fafb9aa9..23623cd3ebd9 100644
+--- a/arch/arm/boot/dts/exynos3250-rinato.dts
++++ b/arch/arm/boot/dts/exynos3250-rinato.dts
+@@ -153,6 +153,7 @@
+
+ &exynos_usbphy {
+ status = "okay";
++ vbus-supply = <&safeout_reg>;
+ };
+
+ &hsotg {
+diff --git a/arch/arm/boot/dts/exynos4210-trats.dts b/arch/arm/boot/dts/exynos4210-trats.dts
+index ba34886f8b65..01d38f2145b9 100644
+--- a/arch/arm/boot/dts/exynos4210-trats.dts
++++ b/arch/arm/boot/dts/exynos4210-trats.dts
+@@ -251,6 +251,7 @@
+
+ &exynos_usbphy {
+ status = "okay";
++ vbus-supply = <&safe1_sreg>;
+ };
+
+ &fimd {
+@@ -448,7 +449,6 @@
+
+ safe1_sreg: ESAFEOUT1 {
+ regulator-name = "SAFEOUT1";
+- regulator-always-on;
+ };
+
+ safe2_sreg: ESAFEOUT2 {
+diff --git a/arch/arm/boot/dts/exynos4210-universal_c210.dts b/arch/arm/boot/dts/exynos4210-universal_c210.dts
+index eb379526e234..2c04297825fe 100644
+--- a/arch/arm/boot/dts/exynos4210-universal_c210.dts
++++ b/arch/arm/boot/dts/exynos4210-universal_c210.dts
+@@ -248,6 +248,7 @@
+
+ &exynos_usbphy {
+ status = "okay";
++ vbus-supply = <&safeout1_reg>;
+ };
+
+ &fimd {
+@@ -486,7 +487,6 @@
+
+ safeout1_reg: ESAFEOUT1 {
+ regulator-name = "SAFEOUT1";
+- regulator-always-on;
+ };
+
+ safeout2_reg: ESAFEOUT2 {
+diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
+index 2a1ebb76ebe0..50a5e8a85283 100644
+--- a/arch/arm/boot/dts/exynos4412-trats2.dts
++++ b/arch/arm/boot/dts/exynos4412-trats2.dts
+@@ -391,6 +391,7 @@
+ };
+
+ &exynos_usbphy {
++ vbus-supply = <&esafeout1_reg>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi
+index feb9d34b239c..f818ea483aeb 100644
+--- a/arch/arm/boot/dts/imx27.dtsi
++++ b/arch/arm/boot/dts/imx27.dtsi
+@@ -486,7 +486,10 @@
+ compatible = "fsl,imx27-usb";
+ reg = <0x10024000 0x200>;
+ interrupts = <56>;
+- clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
++ clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
++ <&clks IMX27_CLK_USB_AHB_GATE>,
++ <&clks IMX27_CLK_USB_DIV>;
++ clock-names = "ipg", "ahb", "per";
+ fsl,usbmisc = <&usbmisc 0>;
+ status = "disabled";
+ };
+@@ -495,7 +498,10 @@
+ compatible = "fsl,imx27-usb";
+ reg = <0x10024200 0x200>;
+ interrupts = <54>;
+- clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
++ clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
++ <&clks IMX27_CLK_USB_AHB_GATE>,
++ <&clks IMX27_CLK_USB_DIV>;
++ clock-names = "ipg", "ahb", "per";
+ fsl,usbmisc = <&usbmisc 1>;
+ dr_mode = "host";
+ status = "disabled";
+@@ -505,7 +511,10 @@
+ compatible = "fsl,imx27-usb";
+ reg = <0x10024400 0x200>;
+ interrupts = <55>;
+- clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
++ clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
++ <&clks IMX27_CLK_USB_AHB_GATE>,
++ <&clks IMX27_CLK_USB_DIV>;
++ clock-names = "ipg", "ahb", "per";
+ fsl,usbmisc = <&usbmisc 2>;
+ dr_mode = "host";
+ status = "disabled";
+@@ -515,7 +524,6 @@
+ #index-cells = <1>;
+ compatible = "fsl,imx27-usbmisc";
+ reg = <0x10024600 0x200>;
+- clocks = <&clks IMX27_CLK_USB_AHB_GATE>;
+ };
+
+ sahara2: sahara@10025000 {
+diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts
+index 3cb030f9d2c4..c85761212d90 100644
+--- a/arch/arm/boot/dts/omap5-uevm.dts
++++ b/arch/arm/boot/dts/omap5-uevm.dts
+@@ -31,6 +31,24 @@
+ regulator-max-microvolt = <3000000>;
+ };
+
++ mmc3_pwrseq: sdhci0_pwrseq {
++ compatible = "mmc-pwrseq-simple";
++ clocks = <&clk32kgaudio>;
++ clock-names = "ext_clock";
++ };
++
++ vmmcsdio_fixed: fixedregulator-mmcsdio {
++ compatible = "regulator-fixed";
++ regulator-name = "vmmcsdio_fixed";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ gpio = <&gpio5 12 GPIO_ACTIVE_HIGH>; /* gpio140 WLAN_EN */
++ enable-active-high;
++ startup-delay-us = <70000>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&wlan_pins>;
++ };
++
+ /* HS USB Host PHY on PORT 2 */
+ hsusb2_phy: hsusb2_phy {
+ compatible = "usb-nop-xceiv";
+@@ -197,12 +215,20 @@
+ >;
+ };
+
+- mcspi4_pins: pinmux_mcspi4_pins {
++ mmc3_pins: pinmux_mmc3_pins {
++ pinctrl-single,pins = <
++ OMAP5_IOPAD(0x01a4, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_clk */
++ OMAP5_IOPAD(0x01a6, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_cmd */
++ OMAP5_IOPAD(0x01a8, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_data0 */
++ OMAP5_IOPAD(0x01aa, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_data1 */
++ OMAP5_IOPAD(0x01ac, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_data2 */
++ OMAP5_IOPAD(0x01ae, PIN_INPUT_PULLUP | MUX_MODE0) /* wlsdio_data3 */
++ >;
++ };
++
++ wlan_pins: pinmux_wlan_pins {
+ pinctrl-single,pins = <
+- 0x164 (PIN_INPUT | MUX_MODE1) /* mcspi4_clk */
+- 0x168 (PIN_INPUT | MUX_MODE1) /* mcspi4_simo */
+- 0x16a (PIN_INPUT | MUX_MODE1) /* mcspi4_somi */
+- 0x16c (PIN_INPUT | MUX_MODE1) /* mcspi4_cs0 */
++ OMAP5_IOPAD(0x1bc, PIN_OUTPUT | MUX_MODE6) /* mcspi1_clk.gpio5_140 */
+ >;
+ };
+
+@@ -276,6 +302,12 @@
+ 0x1A (PIN_OUTPUT | MUX_MODE0) /* fref_clk1_out, USB hub clk */
+ >;
+ };
++
++ wlcore_irq_pin: pinmux_wlcore_irq_pin {
++ pinctrl-single,pins = <
++ OMAP5_IOPAD(0x040, WAKEUP_EN | PIN_INPUT_PULLUP | MUX_MODE6) /* llia_wakereqin.gpio1_wk14 */
++ >;
++ };
+ };
+
+ &mmc1 {
+@@ -290,8 +322,25 @@
+ };
+
+ &mmc3 {
++ vmmc-supply = <&vmmcsdio_fixed>;
++ mmc-pwrseq = <&mmc3_pwrseq>;
+ bus-width = <4>;
+- ti,non-removable;
++ non-removable;
++ cap-power-off-card;
++ pinctrl-names = "default";
++ pinctrl-0 = <&mmc3_pins &wlcore_irq_pin>;
++ interrupts-extended = <&gic GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH
++ &omap5_pmx_core 0x168>;
++
++ #address-cells = <1>;
++ #size-cells = <0>;
++ wlcore: wlcore@2 {
++ compatible = "ti,wl1271";
++ reg = <2>;
++ interrupt-parent = <&gpio1>;
++ interrupts = <14 IRQ_TYPE_LEVEL_HIGH>; /* gpio 14 */
++ ref-clock-frequency = <26000000>;
++ };
+ };
+
+ &mmc4 {
+@@ -598,11 +647,6 @@
+ pinctrl-0 = <&mcspi3_pins>;
+ };
+
+-&mcspi4 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&mcspi4_pins>;
+-};
+-
+ &uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_pins>;
+diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
+index 8d1de29e8da1..fb92481e60d4 100644
+--- a/arch/arm/boot/dts/sama5d4.dtsi
++++ b/arch/arm/boot/dts/sama5d4.dtsi
+@@ -939,11 +939,11 @@
+ reg = <0xf8018000 0x4000>;
+ interrupts = <33 IRQ_TYPE_LEVEL_HIGH 6>;
+ dmas = <&dma1
+- (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+- AT91_XDMAC_DT_PERID(4)>,
++ (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++ | AT91_XDMAC_DT_PERID(4))>,
+ <&dma1
+- (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+- AT91_XDMAC_DT_PERID(5)>;
++ (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++ | AT91_XDMAC_DT_PERID(5))>;
+ dma-names = "tx", "rx";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+diff --git a/arch/arm/boot/dts/sun6i-a31-hummingbird.dts b/arch/arm/boot/dts/sun6i-a31-hummingbird.dts
+index d0cfadac0691..18f26ca4e375 100644
+--- a/arch/arm/boot/dts/sun6i-a31-hummingbird.dts
++++ b/arch/arm/boot/dts/sun6i-a31-hummingbird.dts
+@@ -184,18 +184,18 @@
+ regulator-name = "vcc-3v0";
+ };
+
+- vdd_cpu: dcdc2 {
++ vdd_gpu: dcdc2 {
+ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+- regulator-name = "vdd-cpu";
++ regulator-name = "vdd-gpu";
+ };
+
+- vdd_gpu: dcdc3 {
++ vdd_cpu: dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+- regulator-name = "vdd-gpu";
++ regulator-name = "vdd-cpu";
+ };
+
+ vdd_sys_dll: dcdc4 {
+diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
+index 873dbfcc7dc9..56fc339571f9 100644
+--- a/arch/arm/common/edma.c
++++ b/arch/arm/common/edma.c
+@@ -406,7 +406,8 @@ static irqreturn_t dma_irq_handler(int irq, void *data)
+ BIT(slot));
+ if (edma_cc[ctlr]->intr_data[channel].callback)
+ edma_cc[ctlr]->intr_data[channel].callback(
+- channel, EDMA_DMA_COMPLETE,
++ EDMA_CTLR_CHAN(ctlr, channel),
++ EDMA_DMA_COMPLETE,
+ edma_cc[ctlr]->intr_data[channel].data);
+ }
+ } while (sh_ipr);
+@@ -460,7 +461,8 @@ static irqreturn_t dma_ccerr_handler(int irq, void *data)
+ if (edma_cc[ctlr]->intr_data[k].
+ callback) {
+ edma_cc[ctlr]->intr_data[k].
+- callback(k,
++ callback(
++ EDMA_CTLR_CHAN(ctlr, k),
+ EDMA_DMA_CC_ERROR,
+ edma_cc[ctlr]->intr_data
+ [k].data);
+diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
+index be1d07d59ee9..1bd9510de1b9 100644
+--- a/arch/arm/include/asm/irq.h
++++ b/arch/arm/include/asm/irq.h
+@@ -40,6 +40,11 @@ extern void arch_trigger_all_cpu_backtrace(bool);
+ #define arch_trigger_all_cpu_backtrace(x) arch_trigger_all_cpu_backtrace(x)
+ #endif
+
++static inline int nr_legacy_irqs(void)
++{
++ return NR_IRQS_LEGACY;
++}
++
+ #endif
+
+ #endif
+diff --git a/arch/arm/mach-at91/pm_suspend.S b/arch/arm/mach-at91/pm_suspend.S
+index 0d95f488b47a..a25defda3d22 100644
+--- a/arch/arm/mach-at91/pm_suspend.S
++++ b/arch/arm/mach-at91/pm_suspend.S
+@@ -80,6 +80,8 @@ tmp2 .req r5
+ * @r2: base address of second SDRAM Controller or 0 if not present
+ * @r3: pm information
+ */
++/* at91_pm_suspend_in_sram must be 8-byte aligned per the requirements of fncpy() */
++ .align 3
+ ENTRY(at91_pm_suspend_in_sram)
+ /* Save registers on stack */
+ stmfd sp!, {r4 - r12, lr}
+diff --git a/arch/arm/mach-pxa/include/mach/pxa27x.h b/arch/arm/mach-pxa/include/mach/pxa27x.h
+index 599b925a657c..1a4291936c58 100644
+--- a/arch/arm/mach-pxa/include/mach/pxa27x.h
++++ b/arch/arm/mach-pxa/include/mach/pxa27x.h
+@@ -19,7 +19,7 @@
+ #define ARB_CORE_PARK (1<<24) /* Be parked with core when idle */
+ #define ARB_LOCK_FLAG (1<<23) /* Only Locking masters gain access to the bus */
+
+-extern int __init pxa27x_set_pwrmode(unsigned int mode);
++extern int pxa27x_set_pwrmode(unsigned int mode);
+ extern void pxa27x_cpu_pm_enter(suspend_state_t state);
+
+ #endif /* __MACH_PXA27x_H */
+diff --git a/arch/arm/mach-pxa/pxa27x.c b/arch/arm/mach-pxa/pxa27x.c
+index 221260d5d109..ffc424028557 100644
+--- a/arch/arm/mach-pxa/pxa27x.c
++++ b/arch/arm/mach-pxa/pxa27x.c
+@@ -84,7 +84,7 @@ EXPORT_SYMBOL_GPL(pxa27x_configure_ac97reset);
+ */
+ static unsigned int pwrmode = PWRMODE_SLEEP;
+
+-int __init pxa27x_set_pwrmode(unsigned int mode)
++int pxa27x_set_pwrmode(unsigned int mode)
+ {
+ switch (mode) {
+ case PWRMODE_SLEEP:
+diff --git a/arch/arm/mach-tegra/board-paz00.c b/arch/arm/mach-tegra/board-paz00.c
+index fbe74c6806f3..49d1110cff53 100644
+--- a/arch/arm/mach-tegra/board-paz00.c
++++ b/arch/arm/mach-tegra/board-paz00.c
+@@ -39,8 +39,8 @@ static struct platform_device wifi_rfkill_device = {
+ static struct gpiod_lookup_table wifi_gpio_lookup = {
+ .dev_id = "rfkill_gpio",
+ .table = {
+- GPIO_LOOKUP_IDX("tegra-gpio", 25, NULL, 0, 0),
+- GPIO_LOOKUP_IDX("tegra-gpio", 85, NULL, 1, 0),
++ GPIO_LOOKUP("tegra-gpio", 25, "reset", 0),
++ GPIO_LOOKUP("tegra-gpio", 85, "shutdown", 0),
+ { },
+ },
+ };
+diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
+index 1a7815e5421b..ad4eb2d26e16 100644
+--- a/arch/arm/mm/dma-mapping.c
++++ b/arch/arm/mm/dma-mapping.c
+@@ -1407,12 +1407,19 @@ static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ unsigned long uaddr = vma->vm_start;
+ unsigned long usize = vma->vm_end - vma->vm_start;
+ struct page **pages = __iommu_get_pages(cpu_addr, attrs);
++ unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
++ unsigned long off = vma->vm_pgoff;
+
+ vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
+
+ if (!pages)
+ return -ENXIO;
+
++ if (off >= nr_pages || (usize >> PAGE_SHIFT) > nr_pages - off)
++ return -ENXIO;
++
++ pages += off;
++
+ do {
+ int ret = vm_insert_page(vma, uaddr, *pages++);
+ if (ret) {
+diff --git a/arch/arm/vdso/vdsomunge.c b/arch/arm/vdso/vdsomunge.c
+index 0cebd98cd88c..f6455273b2f8 100644
+--- a/arch/arm/vdso/vdsomunge.c
++++ b/arch/arm/vdso/vdsomunge.c
+@@ -66,7 +66,7 @@
+ ((((x) & 0x000000ff) << 24) | \
+ (((x) & 0x0000ff00) << 8) | \
+ (((x) & 0x00ff0000) >> 8) | \
+- (((x) & 0xff000000) << 24))
++ (((x) & 0xff000000) >> 24))
+
+ #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ #define HOST_ORDER ELFDATA2LSB
+diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
+index bbb251b14746..8b9bf54105b3 100644
+--- a/arch/arm64/include/asm/irq.h
++++ b/arch/arm64/include/asm/irq.h
+@@ -21,4 +21,9 @@ static inline void acpi_irq_init(void)
+ }
+ #define acpi_irq_init acpi_irq_init
+
++static inline int nr_legacy_irqs(void)
++{
++ return 0;
++}
++
+ #endif
+diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
+index 536274ed292e..e9e5467e0bf4 100644
+--- a/arch/arm64/include/asm/ptrace.h
++++ b/arch/arm64/include/asm/ptrace.h
+@@ -83,14 +83,14 @@
+ #define compat_sp regs[13]
+ #define compat_lr regs[14]
+ #define compat_sp_hyp regs[15]
+-#define compat_sp_irq regs[16]
+-#define compat_lr_irq regs[17]
+-#define compat_sp_svc regs[18]
+-#define compat_lr_svc regs[19]
+-#define compat_sp_abt regs[20]
+-#define compat_lr_abt regs[21]
+-#define compat_sp_und regs[22]
+-#define compat_lr_und regs[23]
++#define compat_lr_irq regs[16]
++#define compat_sp_irq regs[17]
++#define compat_lr_svc regs[18]
++#define compat_sp_svc regs[19]
++#define compat_lr_abt regs[20]
++#define compat_sp_abt regs[21]
++#define compat_lr_und regs[22]
++#define compat_sp_und regs[23]
+ #define compat_r8_fiq regs[24]
+ #define compat_r9_fiq regs[25]
+ #define compat_r10_fiq regs[26]
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 98073332e2d0..4d77757b5894 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -60,9 +60,12 @@ PECOFF_FILE_ALIGNMENT = 0x200;
+ #define PECOFF_EDATA_PADDING
+ #endif
+
+-#ifdef CONFIG_DEBUG_ALIGN_RODATA
++#if defined(CONFIG_DEBUG_ALIGN_RODATA)
+ #define ALIGN_DEBUG_RO . = ALIGN(1<<SECTION_SHIFT);
+ #define ALIGN_DEBUG_RO_MIN(min) ALIGN_DEBUG_RO
++#elif defined(CONFIG_DEBUG_RODATA)
++#define ALIGN_DEBUG_RO . = ALIGN(1<<PAGE_SHIFT);
++#define ALIGN_DEBUG_RO_MIN(min) ALIGN_DEBUG_RO
+ #else
+ #define ALIGN_DEBUG_RO
+ #define ALIGN_DEBUG_RO_MIN(min) . = ALIGN(min);
+diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
+index 1ba21204ebe0..9a0013703579 100644
+--- a/arch/mips/ath79/setup.c
++++ b/arch/mips/ath79/setup.c
+@@ -216,9 +216,9 @@ void __init plat_mem_setup(void)
+ AR71XX_RESET_SIZE);
+ ath79_pll_base = ioremap_nocache(AR71XX_PLL_BASE,
+ AR71XX_PLL_SIZE);
++ ath79_detect_sys_type();
+ ath79_ddr_ctrl_init();
+
+- ath79_detect_sys_type();
+ if (mips_machtype != ATH79_MACH_GENERIC_OF)
+ detect_memory_region(0, ATH79_MEM_SIZE_MIN, ATH79_MEM_SIZE_MAX);
+
+diff --git a/arch/mips/include/asm/cdmm.h b/arch/mips/include/asm/cdmm.h
+index bece2064cc8c..c06dbf8ba937 100644
+--- a/arch/mips/include/asm/cdmm.h
++++ b/arch/mips/include/asm/cdmm.h
+@@ -84,6 +84,17 @@ void mips_cdmm_driver_unregister(struct mips_cdmm_driver *);
+ module_driver(__mips_cdmm_driver, mips_cdmm_driver_register, \
+ mips_cdmm_driver_unregister)
+
++/*
++ * builtin_mips_cdmm_driver() - Helper macro for drivers that don't do anything
++ * special in init and have no exit. This eliminates some boilerplate. Each
++ * driver may only use this macro once, and calling it replaces device_initcall
++ * (or in some cases, the legacy __initcall). This is meant to be a direct
++ * parallel of module_mips_cdmm_driver() above but without the __exit stuff that
++ * is not used for builtin cases.
++ */
++#define builtin_mips_cdmm_driver(__mips_cdmm_driver) \
++ builtin_driver(__mips_cdmm_driver, mips_cdmm_driver_register)
++
+ /* drivers/tty/mips_ejtag_fdc.c */
+
+ #ifdef CONFIG_MIPS_EJTAG_FDC_EARLYCON
+diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
+index d5fa3eaf39a1..41b1b090f56f 100644
+--- a/arch/mips/kvm/emulate.c
++++ b/arch/mips/kvm/emulate.c
+@@ -1581,7 +1581,7 @@ enum emulation_result kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc,
+
+ base = (inst >> 21) & 0x1f;
+ op_inst = (inst >> 16) & 0x1f;
+- offset = inst & 0xffff;
++ offset = (int16_t)inst;
+ cache = (inst >> 16) & 0x3;
+ op = (inst >> 18) & 0x7;
+
+diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
+index c567240386a0..d1ee95a7f7dd 100644
+--- a/arch/mips/kvm/locore.S
++++ b/arch/mips/kvm/locore.S
+@@ -165,9 +165,11 @@ FEXPORT(__kvm_mips_vcpu_run)
+
+ FEXPORT(__kvm_mips_load_asid)
+ /* Set the ASID for the Guest Kernel */
+- INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
+- /* addresses shift to 0x80000000 */
+- bltz t0, 1f /* If kernel */
++ PTR_L t0, VCPU_COP0(k1)
++ LONG_L t0, COP0_STATUS(t0)
++ andi t0, KSU_USER | ST0_ERL | ST0_EXL
++ xori t0, KSU_USER
++ bnez t0, 1f /* If kernel */
+ INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
+ INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */
+ 1:
+@@ -482,9 +484,11 @@ __kvm_mips_return_to_guest:
+ mtc0 t0, CP0_EPC
+
+ /* Set the ASID for the Guest Kernel */
+- INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
+- /* addresses shift to 0x80000000 */
+- bltz t0, 1f /* If kernel */
++ PTR_L t0, VCPU_COP0(k1)
++ LONG_L t0, COP0_STATUS(t0)
++ andi t0, KSU_USER | ST0_ERL | ST0_EXL
++ xori t0, KSU_USER
++ bnez t0, 1f /* If kernel */
+ INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
+ INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */
+ 1:
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index 49ff3bfc007e..b9b803facdbf 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -279,7 +279,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
+
+ if (!gebase) {
+ err = -ENOMEM;
+- goto out_free_cpu;
++ goto out_uninit_cpu;
+ }
+ kvm_debug("Allocated %d bytes for KVM Exception Handlers @ %p\n",
+ ALIGN(size, PAGE_SIZE), gebase);
+@@ -343,6 +343,9 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
+ out_free_gebase:
+ kfree(gebase);
+
++out_uninit_cpu:
++ kvm_vcpu_uninit(vcpu);
++
+ out_free_cpu:
+ kfree(vcpu);
+
+diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
+index 3fc2e6d70c77..a0706fd4ce0a 100644
+--- a/arch/mips/lantiq/clk.c
++++ b/arch/mips/lantiq/clk.c
+@@ -99,6 +99,23 @@ int clk_set_rate(struct clk *clk, unsigned long rate)
+ }
+ EXPORT_SYMBOL(clk_set_rate);
+
++long clk_round_rate(struct clk *clk, unsigned long rate)
++{
++ if (unlikely(!clk_good(clk)))
++ return 0;
++ if (clk->rates && *clk->rates) {
++ unsigned long *r = clk->rates;
++
++ while (*r && (*r != rate))
++ r++;
++ if (!*r) {
++ return clk->rate;
++ }
++ }
++ return rate;
++}
++EXPORT_SYMBOL(clk_round_rate);
++
+ int clk_enable(struct clk *clk)
+ {
+ if (unlikely(!clk_good(clk)))
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 8b1c8e33f184..d319f36f7d1d 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -244,7 +244,7 @@ static unsigned long __peek_user(struct task_struct *child, addr_t addr)
+ ((addr_t) child->thread.fpu.vxrs + 2*offset);
+ else
+ tmp = *(addr_t *)
+- ((addr_t) &child->thread.fpu.fprs + offset);
++ ((addr_t) child->thread.fpu.fprs + offset);
+
+ } else if (addr < (addr_t) (&dummy->regs.per_info + 1)) {
+ /*
+@@ -388,7 +388,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
+ child->thread.fpu.vxrs + 2*offset) = data;
+ else
+ *(addr_t *)((addr_t)
+- &child->thread.fpu.fprs + offset) = data;
++ child->thread.fpu.fprs + offset) = data;
+
+ } else if (addr < (addr_t) (&dummy->regs.per_info + 1)) {
+ /*
+@@ -622,7 +622,7 @@ static u32 __peek_user_compat(struct task_struct *child, addr_t addr)
+ ((addr_t) child->thread.fpu.vxrs + 2*offset);
+ else
+ tmp = *(__u32 *)
+- ((addr_t) &child->thread.fpu.fprs + offset);
++ ((addr_t) child->thread.fpu.fprs + offset);
+
+ } else if (addr < (addr_t) (&dummy32->regs.per_info + 1)) {
+ /*
+@@ -747,7 +747,7 @@ static int __poke_user_compat(struct task_struct *child,
+ child->thread.fpu.vxrs + 2*offset) = tmp;
+ else
+ *(__u32 *)((addr_t)
+- &child->thread.fpu.fprs + offset) = tmp;
++ child->thread.fpu.fprs + offset) = tmp;
+
+ } else if (addr < (addr_t) (&dummy32->regs.per_info + 1)) {
+ /*
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 5c2c169395c3..08de785c9e63 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -1057,8 +1057,7 @@ static int __inject_extcall(struct kvm_vcpu *vcpu, struct kvm_s390_irq *irq)
+ src_id, 0);
+
+ /* sending vcpu invalid */
+- if (src_id >= KVM_MAX_VCPUS ||
+- kvm_get_vcpu(vcpu->kvm, src_id) == NULL)
++ if (kvm_get_vcpu_by_id(vcpu->kvm, src_id) == NULL)
+ return -EINVAL;
+
+ if (sclp.has_sigpif)
+@@ -1137,6 +1136,10 @@ static int __inject_sigp_emergency(struct kvm_vcpu *vcpu,
+ trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY,
+ irq->u.emerg.code, 0);
+
++ /* sending vcpu invalid */
++ if (kvm_get_vcpu_by_id(vcpu->kvm, irq->u.emerg.code) == NULL)
++ return -EINVAL;
++
+ set_bit(irq->u.emerg.code, li->sigp_emerg_pending);
+ set_bit(IRQ_PEND_EXT_EMERGENCY, &li->pending_irqs);
+ atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 0a67c40eece9..1290af83417a 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -342,12 +342,16 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
+ r = 0;
+ break;
+ case KVM_CAP_S390_VECTOR_REGISTERS:
+- if (MACHINE_HAS_VX) {
++ mutex_lock(&kvm->lock);
++ if (atomic_read(&kvm->online_vcpus)) {
++ r = -EBUSY;
++ } else if (MACHINE_HAS_VX) {
+ set_kvm_facility(kvm->arch.model.fac->mask, 129);
+ set_kvm_facility(kvm->arch.model.fac->list, 129);
+ r = 0;
+ } else
+ r = -EINVAL;
++ mutex_unlock(&kvm->lock);
+ VM_EVENT(kvm, 3, "ENABLE: CAP_S390_VECTOR_REGISTERS %s",
+ r ? "(not available)" : "(success)");
+ break;
+@@ -1120,7 +1124,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ if (!kvm->arch.sca)
+ goto out_err;
+ spin_lock(&kvm_lock);
+- sca_offset = (sca_offset + 16) & 0x7f0;
++ sca_offset += 16;
++ if (sca_offset + sizeof(struct sca_block) > PAGE_SIZE)
++ sca_offset = 0;
+ kvm->arch.sca = (struct sca_block *) ((char *) kvm->arch.sca + sca_offset);
+ spin_unlock(&kvm_lock);
+
+diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
+index da690b69f9fe..77c22d685c7a 100644
+--- a/arch/s390/kvm/sigp.c
++++ b/arch/s390/kvm/sigp.c
+@@ -291,12 +291,8 @@ static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code,
+ u16 cpu_addr, u32 parameter, u64 *status_reg)
+ {
+ int rc;
+- struct kvm_vcpu *dst_vcpu;
++ struct kvm_vcpu *dst_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
+
+- if (cpu_addr >= KVM_MAX_VCPUS)
+- return SIGP_CC_NOT_OPERATIONAL;
+-
+- dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
+ if (!dst_vcpu)
+ return SIGP_CC_NOT_OPERATIONAL;
+
+@@ -478,7 +474,7 @@ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu)
+ trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);
+
+ if (order_code == SIGP_EXTERNAL_CALL) {
+- dest_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
++ dest_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
+ BUG_ON(dest_vcpu == NULL);
+
+ kvm_s390_vcpu_wakeup(dest_vcpu);
+diff --git a/arch/s390/pci/pci_insn.c b/arch/s390/pci/pci_insn.c
+index dcc2634ccbe2..10ca15dcab11 100644
+--- a/arch/s390/pci/pci_insn.c
++++ b/arch/s390/pci/pci_insn.c
+@@ -16,11 +16,11 @@
+ static inline void zpci_err_insn(u8 cc, u8 status, u64 req, u64 offset)
+ {
+ struct {
+- u8 cc;
+- u8 status;
+ u64 req;
+ u64 offset;
+- } data = {cc, status, req, offset};
++ u8 cc;
++ u8 status;
++ } __packed data = {req, offset, cc, status};
+
+ zpci_err_hex(&data, sizeof(data));
+ }
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index ccffa53750a8..39bcefc20de7 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -60,6 +60,7 @@ struct legacy_pic {
+ void (*mask_all)(void);
+ void (*restore_mask)(void);
+ void (*init)(int auto_eoi);
++ int (*probe)(void);
+ int (*irq_pending)(unsigned int irq);
+ void (*make_irq)(unsigned int irq);
+ };
+diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
+index e16466ec473c..e9cd7befcb76 100644
+--- a/arch/x86/include/asm/kvm_emulate.h
++++ b/arch/x86/include/asm/kvm_emulate.h
+@@ -112,6 +112,16 @@ struct x86_emulate_ops {
+ struct x86_exception *fault);
+
+ /*
++ * read_phys: Read bytes of standard (non-emulated/special) memory.
++ * Used for descriptor reading.
++ * @addr: [IN ] Physical address from which to read.
++ * @val: [OUT] Value read from memory.
++ * @bytes: [IN ] Number of bytes to read from memory.
++ */
++ int (*read_phys)(struct x86_emulate_ctxt *ctxt, unsigned long addr,
++ void *val, unsigned int bytes);
++
++ /*
+ * write_std: Write bytes of standard (non-emulated/special) memory.
+ * Used for descriptor writing.
+ * @addr: [IN ] Linear address to which to write.
+diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
+index b5d7640abc5d..8a4add8e4639 100644
+--- a/arch/x86/include/uapi/asm/svm.h
++++ b/arch/x86/include/uapi/asm/svm.h
+@@ -100,6 +100,7 @@
+ { SVM_EXIT_EXCP_BASE + UD_VECTOR, "UD excp" }, \
+ { SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
+ { SVM_EXIT_EXCP_BASE + NM_VECTOR, "NM excp" }, \
++ { SVM_EXIT_EXCP_BASE + AC_VECTOR, "AC excp" }, \
+ { SVM_EXIT_EXCP_BASE + MC_VECTOR, "MC excp" }, \
+ { SVM_EXIT_INTR, "interrupt" }, \
+ { SVM_EXIT_NMI, "nmi" }, \
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 836d11b92811..861bc59c8f25 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -361,7 +361,11 @@ int __init arch_probe_nr_irqs(void)
+ if (nr < nr_irqs)
+ nr_irqs = nr;
+
+- return nr_legacy_irqs();
++ /*
++ * We don't know whether the PIC is present at this point, so we need
++ * to call probe() to get the right number of legacy IRQs.
++ */
++ return legacy_pic->probe();
+ }
+
+ #ifdef CONFIG_X86_IO_APIC
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index de22ea7ff82f..1a292573ddf7 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -273,10 +273,9 @@ __setup("nosmap", setup_disable_smap);
+
+ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
+ {
+- unsigned long eflags;
++ unsigned long eflags = native_save_fl();
+
+ /* This should have been cleared long ago */
+- raw_local_save_flags(eflags);
+ BUG_ON(eflags & X86_EFLAGS_AC);
+
+ if (cpu_has(c, X86_FEATURE_SMAP)) {
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 50ec9af1bd51..6545e6ddbfb1 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -385,20 +385,19 @@ fpu__alloc_mathframe(unsigned long sp, int ia32_frame,
+ */
+ void fpu__init_prepare_fx_sw_frame(void)
+ {
+- int fsave_header_size = sizeof(struct fregs_state);
+ int size = xstate_size + FP_XSTATE_MAGIC2_SIZE;
+
+- if (config_enabled(CONFIG_X86_32))
+- size += fsave_header_size;
+-
+ fx_sw_reserved.magic1 = FP_XSTATE_MAGIC1;
+ fx_sw_reserved.extended_size = size;
+ fx_sw_reserved.xfeatures = xfeatures_mask;
+ fx_sw_reserved.xstate_size = xstate_size;
+
+- if (config_enabled(CONFIG_IA32_EMULATION)) {
++ if (config_enabled(CONFIG_IA32_EMULATION) ||
++ config_enabled(CONFIG_X86_32)) {
++ int fsave_header_size = sizeof(struct fregs_state);
++
+ fx_sw_reserved_ia32 = fx_sw_reserved;
+- fx_sw_reserved_ia32.extended_size += fsave_header_size;
++ fx_sw_reserved_ia32.extended_size = size + fsave_header_size;
+ }
+ }
+
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 62fc001c7846..2c4ac072a702 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -402,7 +402,6 @@ void *get_xsave_addr(struct xregs_state *xsave, int xstate_feature)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE))
+ return NULL;
+
+- xsave = &current->thread.fpu.state.xsave;
+ /*
+ * We should not ever be requesting features that we
+ * have not enabled. Remember that pcntxt_mask is
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 1d40ca8a73f2..ffdc0e860390 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -65,6 +65,9 @@ startup_64:
+ * tables and then reload them.
+ */
+
++ /* Sanitize CPU configuration */
++ call verify_cpu
++
+ /*
+ * Compute the delta between the address I am compiled to run at and the
+ * address I am actually running at.
+@@ -174,6 +177,9 @@ ENTRY(secondary_startup_64)
+ * after the boot processor executes this code.
+ */
+
++ /* Sanitize CPU configuration */
++ call verify_cpu
++
+ movq $(init_level4_pgt - __START_KERNEL_map), %rax
+ 1:
+
+@@ -288,6 +294,8 @@ ENTRY(secondary_startup_64)
+ pushq %rax # target address in negative space
+ lretq
+
++#include "verify_cpu.S"
++
+ #ifdef CONFIG_HOTPLUG_CPU
+ /*
+ * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 16cb827a5b27..be22f5a2192e 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -295,16 +295,11 @@ static void unmask_8259A(void)
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
+ }
+
+-static void init_8259A(int auto_eoi)
++static int probe_8259A(void)
+ {
+ unsigned long flags;
+ unsigned char probe_val = ~(1 << PIC_CASCADE_IR);
+ unsigned char new_val;
+-
+- i8259A_auto_eoi = auto_eoi;
+-
+- raw_spin_lock_irqsave(&i8259A_lock, flags);
+-
+ /*
+ * Check to see if we have a PIC.
+ * Mask all except the cascade and read
+@@ -312,16 +307,28 @@ static void init_8259A(int auto_eoi)
+ * have a PIC, we will read 0xff as opposed to the
+ * value we wrote.
+ */
++ raw_spin_lock_irqsave(&i8259A_lock, flags);
++
+ outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-2 */
+ outb(probe_val, PIC_MASTER_IMR);
+ new_val = inb(PIC_MASTER_IMR);
+ if (new_val != probe_val) {
+ printk(KERN_INFO "Using NULL legacy PIC\n");
+ legacy_pic = &null_legacy_pic;
+- raw_spin_unlock_irqrestore(&i8259A_lock, flags);
+- return;
+ }
+
++ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
++ return nr_legacy_irqs();
++}
++
++static void init_8259A(int auto_eoi)
++{
++ unsigned long flags;
++
++ i8259A_auto_eoi = auto_eoi;
++
++ raw_spin_lock_irqsave(&i8259A_lock, flags);
++
+ outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */
+
+ /*
+@@ -379,6 +386,10 @@ static int legacy_pic_irq_pending_noop(unsigned int irq)
+ {
+ return 0;
+ }
++static int legacy_pic_probe(void)
++{
++ return 0;
++}
+
+ struct legacy_pic null_legacy_pic = {
+ .nr_legacy_irqs = 0,
+@@ -388,6 +399,7 @@ struct legacy_pic null_legacy_pic = {
+ .mask_all = legacy_pic_noop,
+ .restore_mask = legacy_pic_noop,
+ .init = legacy_pic_int_noop,
++ .probe = legacy_pic_probe,
+ .irq_pending = legacy_pic_irq_pending_noop,
+ .make_irq = legacy_pic_uint_noop,
+ };
+@@ -400,6 +412,7 @@ struct legacy_pic default_legacy_pic = {
+ .mask_all = mask_8259A,
+ .restore_mask = unmask_8259A,
+ .init = init_8259A,
++ .probe = probe_8259A,
+ .irq_pending = i8259A_irq_pending,
+ .make_irq = make_8259A_irq,
+ };
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index a3cccbfc5f77..37c8ea8e75ff 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1180,7 +1180,7 @@ void __init setup_arch(char **cmdline_p)
+ */
+ clone_pgd_range(initial_page_table,
+ swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+- KERNEL_PGD_PTRS);
++ min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
+ #endif
+
+ tboot_probe();
+diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S
+index b9242bacbe59..4cf401f581e7 100644
+--- a/arch/x86/kernel/verify_cpu.S
++++ b/arch/x86/kernel/verify_cpu.S
+@@ -34,10 +34,11 @@
+ #include <asm/msr-index.h>
+
+ verify_cpu:
+- pushfl # Save caller passed flags
+- pushl $0 # Kill any dangerous flags
+- popfl
++ pushf # Save caller passed flags
++ push $0 # Kill any dangerous flags
++ popf
+
++#ifndef __x86_64__
+ pushfl # standard way to check for cpuid
+ popl %eax
+ movl %eax,%ebx
+@@ -48,6 +49,7 @@ verify_cpu:
+ popl %eax
+ cmpl %eax,%ebx
+ jz verify_cpu_no_longmode # cpu has no cpuid
++#endif
+
+ movl $0x0,%eax # See if cpuid 1 is implemented
+ cpuid
+@@ -130,10 +132,10 @@ verify_cpu_sse_test:
+ jmp verify_cpu_sse_test # try again
+
+ verify_cpu_no_longmode:
+- popfl # Restore caller passed flags
++ popf # Restore caller passed flags
+ movl $1,%eax
+ ret
+ verify_cpu_sse_ok:
+- popfl # Restore caller passed flags
++ popf # Restore caller passed flags
+ xorl %eax, %eax
+ ret
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 9da95b9daf8d..1505587d06e9 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2272,8 +2272,8 @@ static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
+ #define GET_SMSTATE(type, smbase, offset) \
+ ({ \
+ type __val; \
+- int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val, \
+- sizeof(__val), NULL); \
++ int r = ctxt->ops->read_phys(ctxt, smbase + offset, &__val, \
++ sizeof(__val)); \
+ if (r != X86EMUL_CONTINUE) \
+ return X86EMUL_UNHANDLEABLE; \
+ __val; \
+@@ -2484,17 +2484,36 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
+
+ /*
+ * Get back to real mode, to prepare a safe state in which to load
+- * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
+- * to read_std/write_std are not virtual.
+- *
+- * CR4.PCIDE must be zero, because it is a 64-bit mode only feature.
++ * CR0/CR3/CR4/EFER. It's all a bit more complicated if the vCPU
++ * supports long mode.
+ */
++ cr4 = ctxt->ops->get_cr(ctxt, 4);
++ if (emulator_has_longmode(ctxt)) {
++ struct desc_struct cs_desc;
++
++ /* Zero CR4.PCIDE before CR0.PG. */
++ if (cr4 & X86_CR4_PCIDE) {
++ ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
++ cr4 &= ~X86_CR4_PCIDE;
++ }
++
++ /* A 32-bit code segment is required to clear EFER.LMA. */
++ memset(&cs_desc, 0, sizeof(cs_desc));
++ cs_desc.type = 0xb;
++ cs_desc.s = cs_desc.g = cs_desc.p = 1;
++ ctxt->ops->set_segment(ctxt, 0, &cs_desc, 0, VCPU_SREG_CS);
++ }
++
++ /* For the 64-bit case, this will clear EFER.LMA. */
+ cr0 = ctxt->ops->get_cr(ctxt, 0);
+ if (cr0 & X86_CR0_PE)
+ ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
+- cr4 = ctxt->ops->get_cr(ctxt, 4);
++
++ /* Now clear CR4.PAE (which must be done before clearing EFER.LME). */
+ if (cr4 & X86_CR4_PAE)
+ ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
++
++ /* And finally go back to 32-bit mode. */
+ efer = 0;
+ ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
+
+@@ -4455,7 +4474,7 @@ static const struct opcode twobyte_table[256] = {
+ F(DstMem | SrcReg | Src2CL | ModRM, em_shld), N, N,
+ /* 0xA8 - 0xAF */
+ I(Stack | Src2GS, em_push_sreg), I(Stack | Src2GS, em_pop_sreg),
+- II(No64 | EmulateOnUD | ImplicitOps, em_rsm, rsm),
++ II(EmulateOnUD | ImplicitOps, em_rsm, rsm),
+ F(DstMem | SrcReg | ModRM | BitOp | Lock | PageTable, em_bts),
+ F(DstMem | SrcReg | Src2ImmByte | ModRM, em_shrd),
+ F(DstMem | SrcReg | Src2CL | ModRM, em_shrd),
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 8d9013c5e1ee..ae4483a3e1a7 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -348,6 +348,8 @@ void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir)
+ struct kvm_lapic *apic = vcpu->arch.apic;
+
+ __kvm_apic_update_irr(pir, apic->regs);
++
++ kvm_make_request(KVM_REQ_EVENT, vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
+
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 2f9ed1ff0632..d7f89387ba0c 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1086,7 +1086,7 @@ static u64 svm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
+ return target_tsc - tsc;
+ }
+
+-static void init_vmcb(struct vcpu_svm *svm, bool init_event)
++static void init_vmcb(struct vcpu_svm *svm)
+ {
+ struct vmcb_control_area *control = &svm->vmcb->control;
+ struct vmcb_save_area *save = &svm->vmcb->save;
+@@ -1107,6 +1107,7 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event)
+ set_exception_intercept(svm, PF_VECTOR);
+ set_exception_intercept(svm, UD_VECTOR);
+ set_exception_intercept(svm, MC_VECTOR);
++ set_exception_intercept(svm, AC_VECTOR);
+
+ set_intercept(svm, INTERCEPT_INTR);
+ set_intercept(svm, INTERCEPT_NMI);
+@@ -1157,8 +1158,7 @@ static void init_vmcb(struct vcpu_svm *svm, bool init_event)
+ init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
+ init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
+
+- if (!init_event)
+- svm_set_efer(&svm->vcpu, 0);
++ svm_set_efer(&svm->vcpu, 0);
+ save->dr6 = 0xffff0ff0;
+ kvm_set_rflags(&svm->vcpu, 2);
+ save->rip = 0x0000fff0;
+@@ -1212,7 +1212,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ if (kvm_vcpu_is_reset_bsp(&svm->vcpu))
+ svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
+ }
+- init_vmcb(svm, init_event);
++ init_vmcb(svm);
+
+ kvm_cpuid(vcpu, &eax, &dummy, &dummy, &dummy);
+ kvm_register_write(vcpu, VCPU_REGS_RDX, eax);
+@@ -1268,7 +1268,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
+ clear_page(svm->vmcb);
+ svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
+ svm->asid_generation = 0;
+- init_vmcb(svm, false);
++ init_vmcb(svm);
+
+ svm_init_osvw(&svm->vcpu);
+
+@@ -1796,6 +1796,12 @@ static int ud_interception(struct vcpu_svm *svm)
+ return 1;
+ }
+
++static int ac_interception(struct vcpu_svm *svm)
++{
++ kvm_queue_exception_e(&svm->vcpu, AC_VECTOR, 0);
++ return 1;
++}
++
+ static void svm_fpu_activate(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+@@ -1890,7 +1896,7 @@ static int shutdown_interception(struct vcpu_svm *svm)
+ * so reinitialize it.
+ */
+ clear_page(svm->vmcb);
+- init_vmcb(svm, false);
++ init_vmcb(svm);
+
+ kvm_run->exit_reason = KVM_EXIT_SHUTDOWN;
+ return 0;
+@@ -3371,6 +3377,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
+ [SVM_EXIT_EXCP_BASE + PF_VECTOR] = pf_interception,
+ [SVM_EXIT_EXCP_BASE + NM_VECTOR] = nm_interception,
+ [SVM_EXIT_EXCP_BASE + MC_VECTOR] = mc_interception,
++ [SVM_EXIT_EXCP_BASE + AC_VECTOR] = ac_interception,
+ [SVM_EXIT_INTR] = intr_interception,
+ [SVM_EXIT_NMI] = nmi_interception,
+ [SVM_EXIT_SMI] = nop_on_interception,
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 6a8bc64566ab..343d3692dd65 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -1567,7 +1567,7 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
+ u32 eb;
+
+ eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR) |
+- (1u << NM_VECTOR) | (1u << DB_VECTOR);
++ (1u << NM_VECTOR) | (1u << DB_VECTOR) | (1u << AC_VECTOR);
+ if ((vcpu->guest_debug &
+ (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)) ==
+ (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP))
+@@ -4771,8 +4771,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ vmx_set_cr0(vcpu, cr0); /* enter rmode */
+ vmx->vcpu.arch.cr0 = cr0;
+ vmx_set_cr4(vcpu, 0);
+- if (!init_event)
+- vmx_set_efer(vcpu, 0);
++ vmx_set_efer(vcpu, 0);
+ vmx_fpu_activate(vcpu);
+ update_exception_bitmap(vcpu);
+
+@@ -5104,6 +5103,9 @@ static int handle_exception(struct kvm_vcpu *vcpu)
+ return handle_rmode_exception(vcpu, ex_no, error_code);
+
+ switch (ex_no) {
++ case AC_VECTOR:
++ kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
++ return 1;
+ case DB_VECTOR:
+ dr6 = vmcs_readl(EXIT_QUALIFICATION);
+ if (!(vcpu->guest_debug &
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 9a9a19830321..43609af03283 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -622,7 +622,9 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+ if ((cr0 ^ old_cr0) & update_bits)
+ kvm_mmu_reset_context(vcpu);
+
+- if ((cr0 ^ old_cr0) & X86_CR0_CD)
++ if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
++ kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
++ !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
+ kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
+
+ return 0;
+@@ -4059,6 +4061,15 @@ static int kvm_read_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+ return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, exception);
+ }
+
++static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
++ unsigned long addr, void *val, unsigned int bytes)
++{
++ struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
++ int r = kvm_vcpu_read_guest(vcpu, addr, val, bytes);
++
++ return r < 0 ? X86EMUL_IO_NEEDED : X86EMUL_CONTINUE;
++}
++
+ int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
+ gva_t addr, void *val,
+ unsigned int bytes,
+@@ -4794,6 +4805,7 @@ static const struct x86_emulate_ops emulate_ops = {
+ .write_gpr = emulator_write_gpr,
+ .read_std = kvm_read_guest_virt_system,
+ .write_std = kvm_write_guest_virt_system,
++ .read_phys = kvm_read_guest_phys_system,
+ .fetch = kvm_fetch_guest_virt,
+ .read_emulated = emulator_read_emulated,
+ .write_emulated = emulator_write_emulated,
+diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
+index 134948b0926f..71fc79a58a15 100644
+--- a/arch/x86/mm/mpx.c
++++ b/arch/x86/mm/mpx.c
+@@ -585,6 +585,29 @@ static unsigned long mpx_bd_entry_to_bt_addr(struct mm_struct *mm,
+ }
+
+ /*
++ * We only want to do a 4-byte get_user() on 32-bit. Otherwise,
++ * we might run off the end of the bounds table if we are on
++ * a 64-bit kernel and try to get 8 bytes.
++ */
++int get_user_bd_entry(struct mm_struct *mm, unsigned long *bd_entry_ret,
++ long __user *bd_entry_ptr)
++{
++ u32 bd_entry_32;
++ int ret;
++
++ if (is_64bit_mm(mm))
++ return get_user(*bd_entry_ret, bd_entry_ptr);
++
++ /*
++ * Note that get_user() uses the type of the *pointer* to
++ * establish the size of the get, not the destination.
++ */
++ ret = get_user(bd_entry_32, (u32 __user *)bd_entry_ptr);
++ *bd_entry_ret = bd_entry_32;
++ return ret;
++}
++
++/*
+ * Get the base of bounds tables pointed by specific bounds
+ * directory entry.
+ */
+@@ -604,7 +627,7 @@ static int get_bt_addr(struct mm_struct *mm,
+ int need_write = 0;
+
+ pagefault_disable();
+- ret = get_user(bd_entry, bd_entry_ptr);
++ ret = get_user_bd_entry(mm, &bd_entry, bd_entry_ptr);
+ pagefault_enable();
+ if (!ret)
+ break;
+@@ -699,11 +722,23 @@ static unsigned long mpx_get_bt_entry_offset_bytes(struct mm_struct *mm,
+ */
+ static inline unsigned long bd_entry_virt_space(struct mm_struct *mm)
+ {
+- unsigned long long virt_space = (1ULL << boot_cpu_data.x86_virt_bits);
+- if (is_64bit_mm(mm))
+- return virt_space / MPX_BD_NR_ENTRIES_64;
+- else
+- return virt_space / MPX_BD_NR_ENTRIES_32;
++ unsigned long long virt_space;
++ unsigned long long GB = (1ULL << 30);
++
++ /*
++ * This covers 32-bit emulation as well as 32-bit kernels
++ * running on 64-bit hardware.
++ */
++ if (!is_64bit_mm(mm))
++ return (4ULL * GB) / MPX_BD_NR_ENTRIES_32;
++
++ /*
++ * 'x86_virt_bits' returns what the hardware is capable
++ * of, and returns the full >32-bit address space when
++ * running 32-bit kernels on 64-bit hardware.
++ */
++ virt_space = (1ULL << boot_cpu_data.x86_virt_bits);
++ return virt_space / MPX_BD_NR_ENTRIES_64;
+ }
+
+ /*
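For context on the mpx.c hunk above: the fix works because get_user() derives the access width from the *pointer* type, not from the destination variable, so casting the pointer to (u32 __user *) narrows the read to 4 bytes. A user-space sketch of the same width rule (toy_get_user() is a hypothetical stand-in, not kernel code):

/* Illustrative only: the width of the copy is driven by the pointer
 * type, mirroring the get_user() behaviour the MPX fix depends on. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for get_user(): copies sizeof(*ptr) bytes. */
#define toy_get_user(dst, ptr) \
	(memcpy(&(dst), (ptr), sizeof(*(ptr))), 0)

int main(void)
{
	uint64_t entry = 0x1122334455667788ULL;
	uint64_t out64 = 0;
	uint32_t out32 = 0;

	/* Pointer type is uint64_t *, so 8 bytes move. */
	toy_get_user(out64, &entry);

	/* Casting the pointer narrows the access to 4 bytes, exactly
	 * like the (u32 __user *) cast in the patch. */
	toy_get_user(out32, (uint32_t *)&entry);

	printf("64-bit read: %#llx, 32-bit read: %#x\n",
	       (unsigned long long)out64, (unsigned)out32);
	return 0;
}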
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index e527a3e13939..fa893c3ec408 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -93,6 +93,7 @@ static const struct usb_device_id ath3k_table[] = {
+ { USB_DEVICE(0x04CA, 0x300f) },
+ { USB_DEVICE(0x04CA, 0x3010) },
+ { USB_DEVICE(0x0930, 0x0219) },
++ { USB_DEVICE(0x0930, 0x021c) },
+ { USB_DEVICE(0x0930, 0x0220) },
+ { USB_DEVICE(0x0930, 0x0227) },
+ { USB_DEVICE(0x0b05, 0x17d0) },
+@@ -104,6 +105,7 @@ static const struct usb_device_id ath3k_table[] = {
+ { USB_DEVICE(0x0CF3, 0x311F) },
+ { USB_DEVICE(0x0cf3, 0x3121) },
+ { USB_DEVICE(0x0CF3, 0x817a) },
++ { USB_DEVICE(0x0CF3, 0x817b) },
+ { USB_DEVICE(0x0cf3, 0xe003) },
+ { USB_DEVICE(0x0CF3, 0xE004) },
+ { USB_DEVICE(0x0CF3, 0xE005) },
+@@ -153,6 +155,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
+@@ -164,6 +167,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ { USB_DEVICE(0x0cf3, 0x311F), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x0CF3, 0x817b), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe006), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b6aceaf82aa8..3d0d643c8bff 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -195,6 +195,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
+@@ -206,6 +207,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0cf3, 0x311f), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x817a), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x0cf3, 0x817b), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/clk/bcm/clk-iproc-pll.c b/drivers/clk/bcm/clk-iproc-pll.c
+index 2dda4e8295a9..d679ab869653 100644
+--- a/drivers/clk/bcm/clk-iproc-pll.c
++++ b/drivers/clk/bcm/clk-iproc-pll.c
+@@ -345,8 +345,8 @@ static unsigned long iproc_pll_recalc_rate(struct clk_hw *hw,
+ struct iproc_pll *pll = clk->pll;
+ const struct iproc_pll_ctrl *ctrl = pll->ctrl;
+ u32 val;
+- u64 ndiv;
+- unsigned int ndiv_int, ndiv_frac, pdiv;
++ u64 ndiv, ndiv_int, ndiv_frac;
++ unsigned int pdiv;
+
+ if (parent_rate == 0)
+ return 0;
+@@ -366,22 +366,19 @@ static unsigned long iproc_pll_recalc_rate(struct clk_hw *hw,
+ val = readl(pll->pll_base + ctrl->ndiv_int.offset);
+ ndiv_int = (val >> ctrl->ndiv_int.shift) &
+ bit_mask(ctrl->ndiv_int.width);
+- ndiv = (u64)ndiv_int << ctrl->ndiv_int.shift;
++ ndiv = ndiv_int << 20;
+
+ if (ctrl->flags & IPROC_CLK_PLL_HAS_NDIV_FRAC) {
+ val = readl(pll->pll_base + ctrl->ndiv_frac.offset);
+ ndiv_frac = (val >> ctrl->ndiv_frac.shift) &
+ bit_mask(ctrl->ndiv_frac.width);
+-
+- if (ndiv_frac != 0)
+- ndiv = ((u64)ndiv_int << ctrl->ndiv_int.shift) |
+- ndiv_frac;
++ ndiv += ndiv_frac;
+ }
+
+ val = readl(pll->pll_base + ctrl->pdiv.offset);
+ pdiv = (val >> ctrl->pdiv.shift) & bit_mask(ctrl->pdiv.width);
+
+- clk->rate = (ndiv * parent_rate) >> ctrl->ndiv_int.shift;
++ clk->rate = (ndiv * parent_rate) >> 20;
+
+ if (pdiv == 0)
+ clk->rate *= 2;
+diff --git a/drivers/clk/versatile/clk-icst.c b/drivers/clk/versatile/clk-icst.c
+index a3893ea2199d..08c5ee976879 100644
+--- a/drivers/clk/versatile/clk-icst.c
++++ b/drivers/clk/versatile/clk-icst.c
+@@ -157,8 +157,10 @@ struct clk *icst_clk_register(struct device *dev,
+ icst->lockreg = base + desc->lock_offset;
+
+ clk = clk_register(dev, &icst->hw);
+- if (IS_ERR(clk))
++ if (IS_ERR(clk)) {
++ kfree(pclone);
+ kfree(icst);
++ }
+
+ return clk;
+ }
+diff --git a/drivers/mfd/twl6040.c b/drivers/mfd/twl6040.c
+index a151ee2eed2a..08a693cd38cc 100644
+--- a/drivers/mfd/twl6040.c
++++ b/drivers/mfd/twl6040.c
+@@ -647,6 +647,8 @@ static int twl6040_probe(struct i2c_client *client,
+
+ twl6040->clk32k = devm_clk_get(&client->dev, "clk32k");
+ if (IS_ERR(twl6040->clk32k)) {
++ if (PTR_ERR(twl6040->clk32k) == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
+ dev_info(&client->dev, "clk32k is not handled\n");
+ twl6040->clk32k = NULL;
+ }
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 771a449d2f56..bcd7bddbe312 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1749,6 +1749,7 @@ err_undo_flags:
+ slave_dev->dev_addr))
+ eth_hw_addr_random(bond_dev);
+ if (bond_dev->type != ARPHRD_ETHER) {
++ dev_close(bond_dev);
+ ether_setup(bond_dev);
+ bond_dev->flags |= IFF_MASTER;
+ bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
+index aede704605c6..141c2a42d7ed 100644
+--- a/drivers/net/can/dev.c
++++ b/drivers/net/can/dev.c
+@@ -915,7 +915,7 @@ static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ nla_put(skb, IFLA_CAN_BITTIMING_CONST,
+ sizeof(*priv->bittiming_const), priv->bittiming_const)) ||
+
+- nla_put(skb, IFLA_CAN_CLOCK, sizeof(cm), &priv->clock) ||
++ nla_put(skb, IFLA_CAN_CLOCK, sizeof(priv->clock), &priv->clock) ||
+ nla_put_u32(skb, IFLA_CAN_STATE, state) ||
+ nla_put(skb, IFLA_CAN_CTRLMODE, sizeof(cm), &cm) ||
+ nla_put_u32(skb, IFLA_CAN_RESTART_MS, priv->restart_ms) ||
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index 7b92e911a616..f10834be48a5 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -218,6 +218,9 @@ static void sja1000_start(struct net_device *dev)
+ priv->write_reg(priv, SJA1000_RXERR, 0x0);
+ priv->read_reg(priv, SJA1000_ECC);
+
++ /* clear interrupt flags */
++ priv->read_reg(priv, SJA1000_IR);
++
+ /* leave reset mode */
+ set_normal_mode(dev);
+ }
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 1805541b4240..5e3cd76cb69b 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -907,8 +907,10 @@ static void bcmgenet_power_up(struct bcmgenet_priv *priv,
+ }
+
+ bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+- if (mode == GENET_POWER_PASSIVE)
++ if (mode == GENET_POWER_PASSIVE) {
+ bcmgenet_phy_power_set(priv->dev, true);
++ bcmgenet_mii_reset(priv->dev);
++ }
+ }
+
+ /* ioctl handle special commands that are not present in ethtool. */
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index 7299d1075422..c739f7ebc992 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -674,6 +674,7 @@ int bcmgenet_mii_init(struct net_device *dev);
+ int bcmgenet_mii_config(struct net_device *dev);
+ int bcmgenet_mii_probe(struct net_device *dev);
+ void bcmgenet_mii_exit(struct net_device *dev);
++void bcmgenet_mii_reset(struct net_device *dev);
+ void bcmgenet_phy_power_set(struct net_device *dev, bool enable);
+ void bcmgenet_mii_setup(struct net_device *dev);
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index c8affad76f36..8bdfe53754ba 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -163,6 +163,7 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ phy_print_status(phydev);
+ }
+
++
+ static int bcmgenet_fixed_phy_link_update(struct net_device *dev,
+ struct fixed_phy_status *status)
+ {
+@@ -172,6 +173,22 @@ static int bcmgenet_fixed_phy_link_update(struct net_device *dev,
+ return 0;
+ }
+
++/* Perform a voluntary PHY software reset, since the EPHY is very finicky
++ * and will start corrupting packets if the reset is skipped
++ */
++void bcmgenet_mii_reset(struct net_device *dev)
++{
++ struct bcmgenet_priv *priv = netdev_priv(dev);
++
++ if (GENET_IS_V4(priv))
++ return;
++
++ if (priv->phydev) {
++ phy_init_hw(priv->phydev);
++ phy_start_aneg(priv->phydev);
++ }
++}
++
+ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+@@ -214,6 +231,7 @@ static void bcmgenet_internal_phy_setup(struct net_device *dev)
+ reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+ reg |= EXT_PWR_DN_EN_LD;
+ bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
++ bcmgenet_mii_reset(dev);
+ }
+
+ static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 514df76fc70f..0e49244981d6 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -949,7 +949,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
+ /* Set CPU queue access map - all CPUs have access to all RX
+ * queues and to all TX queues
+ */
+- for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++)
++ for_each_present_cpu(cpu)
+ mvreg_write(pp, MVNETA_CPU_MAP(cpu),
+ (MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
+ MVNETA_CPU_TXQ_ACCESS_ALL_MASK));
+@@ -1533,12 +1533,16 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
+ }
+
+ skb = build_skb(data, pp->frag_size > PAGE_SIZE ? 0 : pp->frag_size);
+- if (!skb)
+- goto err_drop_frame;
+
++ /* After the refill, the old buffer has to be unmapped regardless
++ * of whether the skb was successfully built or not.
++ */
+ dma_unmap_single(dev->dev.parent, phys_addr,
+ MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
+
++ if (!skb)
++ goto err_drop_frame;
++
+ rcvd_pkts++;
+ rcvd_bytes += rx_bytes;
+
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index ff649ebef637..286cc6b69d57 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1849,7 +1849,9 @@ static void efx_ef10_tx_write(struct efx_tx_queue *tx_queue)
+ unsigned int write_ptr;
+ efx_qword_t *txd;
+
+- BUG_ON(tx_queue->write_count == tx_queue->insert_count);
++ tx_queue->xmit_more_available = false;
++ if (unlikely(tx_queue->write_count == tx_queue->insert_count))
++ return;
+
+ do {
+ write_ptr = tx_queue->write_count & tx_queue->ptr_mask;
+diff --git a/drivers/net/ethernet/sfc/farch.c b/drivers/net/ethernet/sfc/farch.c
+index f08266f0eca2..5a1c5a8f278a 100644
+--- a/drivers/net/ethernet/sfc/farch.c
++++ b/drivers/net/ethernet/sfc/farch.c
+@@ -321,7 +321,9 @@ void efx_farch_tx_write(struct efx_tx_queue *tx_queue)
+ unsigned write_ptr;
+ unsigned old_write_count = tx_queue->write_count;
+
+- BUG_ON(tx_queue->write_count == tx_queue->insert_count);
++ tx_queue->xmit_more_available = false;
++ if (unlikely(tx_queue->write_count == tx_queue->insert_count))
++ return;
+
+ do {
+ write_ptr = tx_queue->write_count & tx_queue->ptr_mask;
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index c530e1c4cb4f..24038ef96d9f 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -219,6 +219,7 @@ struct efx_tx_buffer {
+ * @tso_packets: Number of packets via the TSO xmit path
+ * @pushes: Number of times the TX push feature has been used
+ * @pio_packets: Number of times the TX PIO feature has been used
++ * @xmit_more_available: Are any packets waiting to be pushed to the NIC
+ * @empty_read_count: If the completion path has seen the queue as empty
+ * and the transmission path has not yet checked this, the value of
+ * @read_count bitwise-added to %EFX_EMPTY_COUNT_VALID; otherwise 0.
+@@ -253,6 +254,7 @@ struct efx_tx_queue {
+ unsigned int tso_packets;
+ unsigned int pushes;
+ unsigned int pio_packets;
++ bool xmit_more_available;
+ /* Statistics to supplement MAC stats */
+ unsigned long tx_packets;
+
+diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
+index 1833a0146571..67f6afaa022f 100644
+--- a/drivers/net/ethernet/sfc/tx.c
++++ b/drivers/net/ethernet/sfc/tx.c
+@@ -431,8 +431,20 @@ finish_packet:
+ efx_tx_maybe_stop_queue(tx_queue);
+
+ /* Pass off to hardware */
+- if (!skb->xmit_more || netif_xmit_stopped(tx_queue->core_txq))
++ if (!skb->xmit_more || netif_xmit_stopped(tx_queue->core_txq)) {
++ struct efx_tx_queue *txq2 = efx_tx_queue_partner(tx_queue);
++
++ /* There could be packets left on the partner queue if those
++ * SKBs had skb->xmit_more set. If we do not push those they
++ * could be left for a long time and cause a netdev watchdog.
++ */
++ if (txq2->xmit_more_available)
++ efx_nic_push_buffers(txq2);
++
+ efx_nic_push_buffers(tx_queue);
++ } else {
++ tx_queue->xmit_more_available = skb->xmit_more;
++ }
+
+ tx_queue->tx_packets++;
+
+@@ -722,6 +734,7 @@ void efx_init_tx_queue(struct efx_tx_queue *tx_queue)
+ tx_queue->read_count = 0;
+ tx_queue->old_read_count = 0;
+ tx_queue->empty_read_count = 0 | EFX_EMPTY_COUNT_VALID;
++ tx_queue->xmit_more_available = false;
+
+ /* Set up TX descriptor ring */
+ efx_nic_init_tx(tx_queue);
+@@ -747,6 +760,7 @@ void efx_fini_tx_queue(struct efx_tx_queue *tx_queue)
+
+ ++tx_queue->read_count;
+ }
++ tx_queue->xmit_more_available = false;
+ netdev_tx_reset_queue(tx_queue->core_txq);
+ }
+
+@@ -1302,8 +1316,20 @@ static int efx_enqueue_skb_tso(struct efx_tx_queue *tx_queue,
+ efx_tx_maybe_stop_queue(tx_queue);
+
+ /* Pass off to hardware */
+- if (!skb->xmit_more || netif_xmit_stopped(tx_queue->core_txq))
++ if (!skb->xmit_more || netif_xmit_stopped(tx_queue->core_txq)) {
++ struct efx_tx_queue *txq2 = efx_tx_queue_partner(tx_queue);
++
++ /* There could be packets left on the partner queue if those
++ * SKBs had skb->xmit_more set. If we do not push those they
++ * could be left for a long time and cause a netdev watchdog.
++ */
++ if (txq2->xmit_more_available)
++ efx_nic_push_buffers(txq2);
++
+ efx_nic_push_buffers(tx_queue);
++ } else {
++ tx_queue->xmit_more_available = skb->xmit_more;
++ }
+
+ tx_queue->tso_bursts++;
+ return NETDEV_TX_OK;
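The xmit_more_available flag added above exists because of the xmit_more batching contract: the stack sets skb->xmit_more when more frames follow immediately, letting the driver defer the doorbell write, which in turn means deferred pushes must be remembered and flushed, including on the partner queue. A hypothetical driver-side sketch of that contract (the toy_* names and helpers are assumptions, not sfc code):

/* Generic xmit_more pattern for a hypothetical driver. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct toy_queue {
	struct netdev_queue *txq;	/* core stack queue */
	bool push_pending;		/* doorbell deferred? */
};

/* Assumed driver helpers, declared here for completeness. */
void enqueue_descriptor(struct toy_queue *q, struct sk_buff *skb);
void ring_doorbell(struct toy_queue *q);

static netdev_tx_t toy_start_xmit(struct sk_buff *skb,
				  struct toy_queue *q)
{
	enqueue_descriptor(q, skb);	/* fill a ring entry */

	if (!skb->xmit_more || netif_xmit_stopped(q->txq)) {
		ring_doorbell(q);	/* flush all batched entries */
		q->push_pending = false;
	} else {
		q->push_pending = true;	/* remember deferred push */
	}
	return NETDEV_TX_OK;
}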
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index 771cda2a48b2..2e51b816a7e8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -721,10 +721,13 @@ static int stmmac_get_ts_info(struct net_device *dev,
+ {
+ struct stmmac_priv *priv = netdev_priv(dev);
+
+- if ((priv->hwts_tx_en) && (priv->hwts_rx_en)) {
++ if ((priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp)) {
+
+- info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
++ info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
++ SOF_TIMESTAMPING_TX_HARDWARE |
++ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
++ SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ if (priv->ptp_clock)
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index b87b98617073..9e3d89d06445 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -142,12 +142,17 @@ static const char *const ath10k_core_fw_feature_str[] = {
+ [ATH10K_FW_FEATURE_IGNORE_OTP_RESULT] = "ignore-otp",
+ [ATH10K_FW_FEATURE_NO_NWIFI_DECAP_4ADDR_PADDING] = "no-4addr-pad",
+ [ATH10K_FW_FEATURE_SUPPORTS_SKIP_CLOCK_INIT] = "skip-clock-init",
++ [ATH10K_FW_FEATURE_RAW_MODE_SUPPORT] = "raw-mode",
+ };
+
+ static unsigned int ath10k_core_get_fw_feature_str(char *buf,
+ size_t buf_len,
+ enum ath10k_fw_features feat)
+ {
++ /* make sure that ath10k_core_fw_feature_str[] gets updated */
++ BUILD_BUG_ON(ARRAY_SIZE(ath10k_core_fw_feature_str) !=
++ ATH10K_FW_FEATURE_COUNT);
++
+ if (feat >= ARRAY_SIZE(ath10k_core_fw_feature_str) ||
+ WARN_ON(!ath10k_core_fw_feature_str[feat])) {
+ return scnprintf(buf, buf_len, "bit%d", feat);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 64674c955d44..e1d32de8b111 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -2083,7 +2083,8 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
+ enum ieee80211_band band;
+ const u8 *ht_mcs_mask;
+ const u16 *vht_mcs_mask;
+- int i, n, max_nss;
++ int i, n;
++ u8 max_nss;
+ u32 stbc;
+
+ lockdep_assert_held(&ar->conf_mutex);
+@@ -2168,7 +2169,7 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
+ arg->peer_ht_rates.rates[i] = i;
+ } else {
+ arg->peer_ht_rates.num_rates = n;
+- arg->peer_num_spatial_streams = max_nss;
++ arg->peer_num_spatial_streams = min(sta->rx_nss, max_nss);
+ }
+
+ ath10k_dbg(ar, ATH10K_DBG_MAC, "mac ht peer %pM mcs cnt %d nss %d\n",
+@@ -4055,7 +4056,7 @@ static int ath10k_config(struct ieee80211_hw *hw, u32 changed)
+
+ static u32 get_nss_from_chainmask(u16 chain_mask)
+ {
+- if ((chain_mask & 0x15) == 0x15)
++ if ((chain_mask & 0xf) == 0xf)
+ return 4;
+ else if ((chain_mask & 0x7) == 0x7)
+ return 3;
+diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
+index 644b58bc5226..639761fb2bfb 100644
+--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
+@@ -423,14 +423,21 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ /* 8000 Series */
+ {IWL_PCI_DEVICE(0x24F3, 0x0010, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x1010, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x0130, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x1130, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x0132, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x1132, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0110, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x01F0, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x0012, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x1012, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x1110, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0050, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0250, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x1050, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0150, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x1150, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F4, 0x0030, iwl8260_2ac_cfg)},
+- {IWL_PCI_DEVICE(0x24F4, 0x1130, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F4, 0x1030, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0xC010, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0xC110, iwl8260_2ac_cfg)},
+@@ -438,18 +445,28 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x24F3, 0xC050, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0xD050, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x8010, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x8110, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x9010, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x9110, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F4, 0x8030, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F4, 0x9030, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x8130, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x9130, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x8132, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x9132, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x8050, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x8150, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x9050, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x9150, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0004, iwl8260_2n_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x0044, iwl8260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x24F5, 0x0010, iwl4165_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F6, 0x0030, iwl4165_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0810, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0910, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0850, iwl8260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x24F3, 0x0950, iwl8260_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)},
+ #endif /* CONFIG_IWLMVM */
+
+ {0}
+diff --git a/drivers/net/wireless/iwlwifi/pcie/trans.c b/drivers/net/wireless/iwlwifi/pcie/trans.c
+index 6ba7d300b08f..90283453073c 100644
+--- a/drivers/net/wireless/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/iwlwifi/pcie/trans.c
+@@ -592,10 +592,8 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+
+ do {
+ ret = iwl_pcie_set_hw_ready(trans);
+- if (ret >= 0) {
+- ret = 0;
+- goto out;
+- }
++ if (ret >= 0)
++ return 0;
+
+ usleep_range(200, 1000);
+ t += 200;
+@@ -605,10 +603,6 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
+
+ IWL_ERR(trans, "Couldn't prepare the card\n");
+
+-out:
+- iwl_clear_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG,
+- CSR_RESET_LINK_PWR_MGMT_DISABLED);
+-
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/mwifiex/debugfs.c b/drivers/net/wireless/mwifiex/debugfs.c
+index 5a0636d43a1b..5583856fc5c4 100644
+--- a/drivers/net/wireless/mwifiex/debugfs.c
++++ b/drivers/net/wireless/mwifiex/debugfs.c
+@@ -731,7 +731,7 @@ mwifiex_rdeeprom_read(struct file *file, char __user *ubuf,
+ (struct mwifiex_private *) file->private_data;
+ unsigned long addr = get_zeroed_page(GFP_KERNEL);
+ char *buf = (char *) addr;
+- int pos = 0, ret = 0, i;
++ int pos, ret, i;
+ u8 value[MAX_EEPROM_DATA];
+
+ if (!buf)
+@@ -739,7 +739,7 @@ mwifiex_rdeeprom_read(struct file *file, char __user *ubuf,
+
+ if (saved_offset == -1) {
+ /* No command has been given */
+- pos += snprintf(buf, PAGE_SIZE, "0");
++ pos = snprintf(buf, PAGE_SIZE, "0");
+ goto done;
+ }
+
+@@ -748,17 +748,17 @@ mwifiex_rdeeprom_read(struct file *file, char __user *ubuf,
+ (u16) saved_bytes, value);
+ if (ret) {
+ ret = -EINVAL;
+- goto done;
++ goto out_free;
+ }
+
+- pos += snprintf(buf, PAGE_SIZE, "%d %d ", saved_offset, saved_bytes);
++ pos = snprintf(buf, PAGE_SIZE, "%d %d ", saved_offset, saved_bytes);
+
+ for (i = 0; i < saved_bytes; i++)
+- pos += snprintf(buf + strlen(buf), PAGE_SIZE, "%d ", value[i]);
+-
+- ret = simple_read_from_buffer(ubuf, count, ppos, buf, pos);
++ pos += scnprintf(buf + pos, PAGE_SIZE - pos, "%d ", value[i]);
+
+ done:
++ ret = simple_read_from_buffer(ubuf, count, ppos, buf, pos);
++out_free:
+ free_page(addr);
+ return ret;
+ }
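The switch from snprintf() to scnprintf() in the hunk above matters because snprintf() returns the length that *would* have been written, so accumulating "pos += snprintf(...)" can push pos past the end of the buffer on truncation. scnprintf() returns the number of bytes actually stored. A user-space approximation of that clamping behaviour (a sketch, not the kernel implementation):

/* Sketch of scnprintf() semantics: return what was stored, not what
 * was wanted, so offset accumulation can never overrun the buffer. */
#include <stdarg.h>
#include <stdio.h>

static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;	/* clamp to bytes stored */
	return i;
}

With this, "pos += my_scnprintf(buf + pos, size - pos, ...)" never advances pos past the end of buf, which is the invariant the eeprom dump loop above relies on.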
+diff --git a/drivers/net/wireless/mwifiex/pcie.c b/drivers/net/wireless/mwifiex/pcie.c
+index 408b68460716..21192b6f9c64 100644
+--- a/drivers/net/wireless/mwifiex/pcie.c
++++ b/drivers/net/wireless/mwifiex/pcie.c
+@@ -1815,7 +1815,6 @@ static int mwifiex_pcie_event_complete(struct mwifiex_adapter *adapter,
+ if (!card->evt_buf_list[rdptr]) {
+ skb_push(skb, INTF_HEADER_LEN);
+ skb_put(skb, MAX_EVENT_SIZE - skb->len);
+- memset(skb->data, 0, MAX_EVENT_SIZE);
+ if (mwifiex_map_pci_memory(adapter, skb,
+ MAX_EVENT_SIZE,
+ PCI_DMA_FROMDEVICE))
+diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c
+index 5847863a2d6b..278ec477fd90 100644
+--- a/drivers/net/wireless/mwifiex/scan.c
++++ b/drivers/net/wireless/mwifiex/scan.c
+@@ -1889,7 +1889,7 @@ mwifiex_active_scan_req_for_passive_chan(struct mwifiex_private *priv)
+ u8 id = 0;
+ struct mwifiex_user_scan_cfg *user_scan_cfg;
+
+- if (adapter->active_scan_triggered) {
++ if (adapter->active_scan_triggered || !priv->scan_request) {
+ adapter->active_scan_triggered = false;
+ return 0;
+ }
+diff --git a/drivers/nfc/st-nci/spi.c b/drivers/nfc/st-nci/spi.c
+index 598a58c4d6d1..887d308c2ec8 100644
+--- a/drivers/nfc/st-nci/spi.c
++++ b/drivers/nfc/st-nci/spi.c
+@@ -25,6 +25,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+ #include <linux/nfc.h>
++#include <net/nfc/nci.h>
+ #include <linux/platform_data/st-nci.h>
+
+ #include "ndlc.h"
+@@ -94,7 +95,8 @@ static int st_nci_spi_write(void *phy_id, struct sk_buff *skb)
+ struct st_nci_spi_phy *phy = phy_id;
+ struct spi_device *dev = phy->spi_dev;
+ struct sk_buff *skb_rx;
+- u8 buf[ST_NCI_SPI_MAX_SIZE];
++ u8 buf[ST_NCI_SPI_MAX_SIZE + NCI_DATA_HDR_SIZE +
++ ST_NCI_FRAME_HEADROOM + ST_NCI_FRAME_TAILROOM];
+ struct spi_transfer spi_xfer = {
+ .tx_buf = skb->data,
+ .rx_buf = buf,
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+index e1a3721bc8e5..d809c9eaa323 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+@@ -584,7 +584,7 @@ static void pm8xxx_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
+ }
+
+ #else
+-#define msm_gpio_dbg_show NULL
++#define pm8xxx_gpio_dbg_show NULL
+ #endif
+
+ static struct gpio_chip pm8xxx_gpio_template = {
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c b/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
+index 6652b8d7f707..8982027de8e8 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
+@@ -639,7 +639,7 @@ static void pm8xxx_mpp_dbg_show(struct seq_file *s, struct gpio_chip *chip)
+ }
+
+ #else
+-#define msm_mpp_dbg_show NULL
++#define pm8xxx_mpp_dbg_show NULL
+ #endif
+
+ static struct gpio_chip pm8xxx_mpp_template = {
+diff --git a/drivers/pinctrl/uniphier/pinctrl-uniphier-core.c b/drivers/pinctrl/uniphier/pinctrl-uniphier-core.c
+index 918f3b643f1b..589872cc8adb 100644
+--- a/drivers/pinctrl/uniphier/pinctrl-uniphier-core.c
++++ b/drivers/pinctrl/uniphier/pinctrl-uniphier-core.c
+@@ -539,6 +539,12 @@ static int uniphier_pmx_set_one_mux(struct pinctrl_dev *pctldev, unsigned pin,
+ unsigned reg, reg_end, shift, mask;
+ int ret;
+
++ /* some pins need input-enabling */
++ ret = uniphier_conf_pin_input_enable(pctldev,
++ &pctldev->desc->pins[pin], 1);
++ if (ret)
++ return ret;
++
+ reg = UNIPHIER_PINCTRL_PINMUX_BASE + pin * mux_bits / 32 * reg_stride;
+ reg_end = reg + reg_stride;
+ shift = pin * mux_bits % 32;
+@@ -563,9 +569,7 @@ static int uniphier_pmx_set_one_mux(struct pinctrl_dev *pctldev, unsigned pin,
+ return ret;
+ }
+
+- /* some pins need input-enabling */
+- return uniphier_conf_pin_input_enable(pctldev,
+- &pctldev->desc->pins[pin], 1);
++ return 0;
+ }
+
+ static int uniphier_pmx_set_mux(struct pinctrl_dev *pctldev,
+diff --git a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+index a9c9a077c77d..bc3d907fd20f 100644
+--- a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
++++ b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+@@ -680,7 +680,7 @@ void lnet_debug_peer(lnet_nid_t nid);
+ static inline void
+ lnet_peer_set_alive(lnet_peer_t *lp)
+ {
+- lp->lp_last_alive = lp->lp_last_query = get_seconds();
++ lp->lp_last_alive = lp->lp_last_query = jiffies;
+ if (!lp->lp_alive)
+ lnet_notify_locked(lp, 0, 1, lp->lp_last_alive);
+ }
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index f8b5b332e7c3..943a0e204532 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -144,6 +144,7 @@ static struct usb_device_id rtl871x_usb_id_tbl[] = {
+ {USB_DEVICE(0x0DF6, 0x0058)},
+ {USB_DEVICE(0x0DF6, 0x0049)},
+ {USB_DEVICE(0x0DF6, 0x004C)},
++ {USB_DEVICE(0x0DF6, 0x006C)},
+ {USB_DEVICE(0x0DF6, 0x0064)},
+ /* Skyworth */
+ {USB_DEVICE(0x14b2, 0x3300)},
+diff --git a/drivers/tty/mips_ejtag_fdc.c b/drivers/tty/mips_ejtag_fdc.c
+index a8c8cfd52a23..57ff5d3157a6 100644
+--- a/drivers/tty/mips_ejtag_fdc.c
++++ b/drivers/tty/mips_ejtag_fdc.c
+@@ -1048,38 +1048,6 @@ err_destroy_ports:
+ return ret;
+ }
+
+-static int mips_ejtag_fdc_tty_remove(struct mips_cdmm_device *dev)
+-{
+- struct mips_ejtag_fdc_tty *priv = mips_cdmm_get_drvdata(dev);
+- struct mips_ejtag_fdc_tty_port *dport;
+- int nport;
+- unsigned int cfg;
+-
+- if (priv->irq >= 0) {
+- raw_spin_lock_irq(&priv->lock);
+- cfg = mips_ejtag_fdc_read(priv, REG_FDCFG);
+- /* Disable interrupts */
+- cfg &= ~(REG_FDCFG_TXINTTHRES | REG_FDCFG_RXINTTHRES);
+- cfg |= REG_FDCFG_TXINTTHRES_DISABLED;
+- cfg |= REG_FDCFG_RXINTTHRES_DISABLED;
+- mips_ejtag_fdc_write(priv, REG_FDCFG, cfg);
+- raw_spin_unlock_irq(&priv->lock);
+- } else {
+- priv->removing = true;
+- del_timer_sync(&priv->poll_timer);
+- }
+- kthread_stop(priv->thread);
+- if (dev->cpu == 0)
+- mips_ejtag_fdc_con.tty_drv = NULL;
+- tty_unregister_driver(priv->driver);
+- for (nport = 0; nport < NUM_TTY_CHANNELS; nport++) {
+- dport = &priv->ports[nport];
+- tty_port_destroy(&dport->port);
+- }
+- put_tty_driver(priv->driver);
+- return 0;
+-}
+-
+ static int mips_ejtag_fdc_tty_cpu_down(struct mips_cdmm_device *dev)
+ {
+ struct mips_ejtag_fdc_tty *priv = mips_cdmm_get_drvdata(dev);
+@@ -1152,12 +1120,11 @@ static struct mips_cdmm_driver mips_ejtag_fdc_tty_driver = {
+ .name = "mips_ejtag_fdc",
+ },
+ .probe = mips_ejtag_fdc_tty_probe,
+- .remove = mips_ejtag_fdc_tty_remove,
+ .cpu_down = mips_ejtag_fdc_tty_cpu_down,
+ .cpu_up = mips_ejtag_fdc_tty_cpu_up,
+ .id_table = mips_ejtag_fdc_tty_ids,
+ };
+-module_mips_cdmm_driver(mips_ejtag_fdc_tty_driver);
++builtin_mips_cdmm_driver(mips_ejtag_fdc_tty_driver);
+
+ static int __init mips_ejtag_fdc_init_console(void)
+ {
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index b09023b07169..a0285da0244c 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -169,7 +169,7 @@ static inline int tty_copy_to_user(struct tty_struct *tty,
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+
+- tty_audit_add_data(tty, to, n, ldata->icanon);
++ tty_audit_add_data(tty, from, n, ldata->icanon);
+ return copy_to_user(to, from, n);
+ }
+
+diff --git a/drivers/tty/tty_audit.c b/drivers/tty/tty_audit.c
+index 90ca082935f6..3d245cd3d8e6 100644
+--- a/drivers/tty/tty_audit.c
++++ b/drivers/tty/tty_audit.c
+@@ -265,7 +265,7 @@ static struct tty_audit_buf *tty_audit_buf_get(struct tty_struct *tty,
+ *
+ * Audit @data of @size from @tty, if necessary.
+ */
+-void tty_audit_add_data(struct tty_struct *tty, unsigned char *data,
++void tty_audit_add_data(struct tty_struct *tty, const void *data,
+ size_t size, unsigned icanon)
+ {
+ struct tty_audit_buf *buf;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 2eefaa6e3e3a..f435977de740 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1282,18 +1282,22 @@ int tty_send_xchar(struct tty_struct *tty, char ch)
+ int was_stopped = tty->stopped;
+
+ if (tty->ops->send_xchar) {
++ down_read(&tty->termios_rwsem);
+ tty->ops->send_xchar(tty, ch);
++ up_read(&tty->termios_rwsem);
+ return 0;
+ }
+
+ if (tty_write_lock(tty, 0) < 0)
+ return -ERESTARTSYS;
+
++ down_read(&tty->termios_rwsem);
+ if (was_stopped)
+ start_tty(tty);
+ tty->ops->write(tty, &ch, 1);
+ if (was_stopped)
+ stop_tty(tty);
++ up_read(&tty->termios_rwsem);
+ tty_write_unlock(tty);
+ return 0;
+ }
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 9c5aebfe7053..1445dd39aa62 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -1147,16 +1147,12 @@ int n_tty_ioctl_helper(struct tty_struct *tty, struct file *file,
+ spin_unlock_irq(&tty->flow_lock);
+ break;
+ case TCIOFF:
+- down_read(&tty->termios_rwsem);
+ if (STOP_CHAR(tty) != __DISABLED_CHAR)
+ retval = tty_send_xchar(tty, STOP_CHAR(tty));
+- up_read(&tty->termios_rwsem);
+ break;
+ case TCION:
+- down_read(&tty->termios_rwsem);
+ if (START_CHAR(tty) != __DISABLED_CHAR)
+ retval = tty_send_xchar(tty, START_CHAR(tty));
+- up_read(&tty->termios_rwsem);
+ break;
+ default:
+ return -EINVAL;
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index dcc50c878159..ad53aed9b280 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -73,6 +73,12 @@ struct ci_hdrc_imx_data {
+ struct imx_usbmisc_data *usbmisc_data;
+ bool supports_runtime_pm;
+ bool in_lpm;
++ /* SoCs before i.MX6 (except imx23/imx28) need three clocks */
++ bool need_three_clks;
++ struct clk *clk_ipg;
++ struct clk *clk_ahb;
++ struct clk *clk_per;
++ /* --------------------------------- */
+ };
+
+ /* Common functions shared by usbmisc drivers */
+@@ -124,6 +130,102 @@ static struct imx_usbmisc_data *usbmisc_get_init_data(struct device *dev)
+ }
+
+ /* End of common functions shared by usbmisc drivers*/
++static int imx_get_clks(struct device *dev)
++{
++ struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
++ int ret = 0;
++
++ data->clk_ipg = devm_clk_get(dev, "ipg");
++ if (IS_ERR(data->clk_ipg)) {
++ /* If the platform only needs one clock */
++ data->clk = devm_clk_get(dev, NULL);
++ if (IS_ERR(data->clk)) {
++ ret = PTR_ERR(data->clk);
++ dev_err(dev,
++ "Failed to get clks, err=%ld,%ld\n",
++ PTR_ERR(data->clk), PTR_ERR(data->clk_ipg));
++ return ret;
++ }
++ return ret;
++ }
++
++ data->clk_ahb = devm_clk_get(dev, "ahb");
++ if (IS_ERR(data->clk_ahb)) {
++ ret = PTR_ERR(data->clk_ahb);
++ dev_err(dev,
++ "Failed to get ahb clock, err=%d\n", ret);
++ return ret;
++ }
++
++ data->clk_per = devm_clk_get(dev, "per");
++ if (IS_ERR(data->clk_per)) {
++ ret = PTR_ERR(data->clk_per);
++ dev_err(dev,
++ "Failed to get per clock, err=%d\n", ret);
++ return ret;
++ }
++
++ data->need_three_clks = true;
++ return ret;
++}
++
++static int imx_prepare_enable_clks(struct device *dev)
++{
++ struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
++ int ret = 0;
++
++ if (data->need_three_clks) {
++ ret = clk_prepare_enable(data->clk_ipg);
++ if (ret) {
++ dev_err(dev,
++ "Failed to prepare/enable ipg clk, err=%d\n",
++ ret);
++ return ret;
++ }
++
++ ret = clk_prepare_enable(data->clk_ahb);
++ if (ret) {
++ dev_err(dev,
++ "Failed to prepare/enable ahb clk, err=%d\n",
++ ret);
++ clk_disable_unprepare(data->clk_ipg);
++ return ret;
++ }
++
++ ret = clk_prepare_enable(data->clk_per);
++ if (ret) {
++ dev_err(dev,
++ "Failed to prepare/enable per clk, err=%d\n",
++ ret);
++ clk_disable_unprepare(data->clk_ahb);
++ clk_disable_unprepare(data->clk_ipg);
++ return ret;
++ }
++ } else {
++ ret = clk_prepare_enable(data->clk);
++ if (ret) {
++ dev_err(dev,
++ "Failed to prepare/enable clk, err=%d\n",
++ ret);
++ return ret;
++ }
++ }
++
++ return ret;
++}
++
++static void imx_disable_unprepare_clks(struct device *dev)
++{
++ struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
++
++ if (data->need_three_clks) {
++ clk_disable_unprepare(data->clk_per);
++ clk_disable_unprepare(data->clk_ahb);
++ clk_disable_unprepare(data->clk_ipg);
++ } else {
++ clk_disable_unprepare(data->clk);
++ }
++}
+
+ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ {
+@@ -142,23 +244,18 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ if (!data)
+ return -ENOMEM;
+
++ platform_set_drvdata(pdev, data);
+ data->usbmisc_data = usbmisc_get_init_data(&pdev->dev);
+ if (IS_ERR(data->usbmisc_data))
+ return PTR_ERR(data->usbmisc_data);
+
+- data->clk = devm_clk_get(&pdev->dev, NULL);
+- if (IS_ERR(data->clk)) {
+- dev_err(&pdev->dev,
+- "Failed to get clock, err=%ld\n", PTR_ERR(data->clk));
+- return PTR_ERR(data->clk);
+- }
++ ret = imx_get_clks(&pdev->dev);
++ if (ret)
++ return ret;
+
+- ret = clk_prepare_enable(data->clk);
+- if (ret) {
+- dev_err(&pdev->dev,
+- "Failed to prepare or enable clock, err=%d\n", ret);
++ ret = imx_prepare_enable_clks(&pdev->dev);
++ if (ret)
+ return ret;
+- }
+
+ data->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "fsl,usbphy", 0);
+ if (IS_ERR(data->phy)) {
+@@ -201,8 +298,6 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ goto disable_device;
+ }
+
+- platform_set_drvdata(pdev, data);
+-
+ if (data->supports_runtime_pm) {
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+@@ -215,7 +310,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ disable_device:
+ ci_hdrc_remove_device(data->ci_pdev);
+ err_clk:
+- clk_disable_unprepare(data->clk);
++ imx_disable_unprepare_clks(&pdev->dev);
+ return ret;
+ }
+
+@@ -229,7 +324,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
+ pm_runtime_put_noidle(&pdev->dev);
+ }
+ ci_hdrc_remove_device(data->ci_pdev);
+- clk_disable_unprepare(data->clk);
++ imx_disable_unprepare_clks(&pdev->dev);
+
+ return 0;
+ }
+@@ -241,7 +336,7 @@ static int imx_controller_suspend(struct device *dev)
+
+ dev_dbg(dev, "at %s\n", __func__);
+
+- clk_disable_unprepare(data->clk);
++ imx_disable_unprepare_clks(dev);
+ data->in_lpm = true;
+
+ return 0;
+@@ -259,7 +354,7 @@ static int imx_controller_resume(struct device *dev)
+ return 0;
+ }
+
+- ret = clk_prepare_enable(data->clk);
++ ret = imx_prepare_enable_clks(dev);
+ if (ret)
+ return ret;
+
+@@ -274,7 +369,7 @@ static int imx_controller_resume(struct device *dev)
+ return 0;
+
+ clk_disable:
+- clk_disable_unprepare(data->clk);
++ imx_disable_unprepare_clks(dev);
+ return ret;
+ }
+
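The ci_hdrc_imx rework above hinges on a lookup-with-fallback: probe first asks for the named "ipg" clock, and only if that fails treats the platform as a single-clock one, recording the result in need_three_clks so that enable and disable stay symmetric. A minimal sketch of the lookup pattern, assuming kernel context (devm_clk_get and IS_ERR as in the driver); the helper name get_bus_clk is hypothetical:

/* Prefer the named clock; fall back to the single anonymous clock
 * that older platforms and device trees provide. */
static struct clk *get_bus_clk(struct device *dev)
{
	struct clk *clk = devm_clk_get(dev, "ipg");

	if (IS_ERR(clk))
		clk = devm_clk_get(dev, NULL);
	return clk;
}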
+diff --git a/drivers/usb/chipidea/debug.c b/drivers/usb/chipidea/debug.c
+index 080b7be3daf0..58c8485a0715 100644
+--- a/drivers/usb/chipidea/debug.c
++++ b/drivers/usb/chipidea/debug.c
+@@ -322,8 +322,10 @@ static ssize_t ci_role_write(struct file *file, const char __user *ubuf,
+ return -EINVAL;
+
+ pm_runtime_get_sync(ci->dev);
++ disable_irq(ci->irq);
+ ci_role_stop(ci);
+ ret = ci_role_start(ci, role);
++ enable_irq(ci->irq);
+ pm_runtime_put_sync(ci->dev);
+
+ return ret ? ret : count;
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 8223fe73ea85..391a1225b0ba 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1751,6 +1751,22 @@ static int ci_udc_start(struct usb_gadget *gadget,
+ return retval;
+ }
+
++static void ci_udc_stop_for_otg_fsm(struct ci_hdrc *ci)
++{
++ if (!ci_otg_is_fsm_mode(ci))
++ return;
++
++ mutex_lock(&ci->fsm.lock);
++ if (ci->fsm.otg->state == OTG_STATE_A_PERIPHERAL) {
++ ci->fsm.a_bidl_adis_tmout = 1;
++ ci_hdrc_otg_fsm_start(ci);
++ } else if (ci->fsm.otg->state == OTG_STATE_B_PERIPHERAL) {
++ ci->fsm.protocol = PROTO_UNDEF;
++ ci->fsm.otg->state = OTG_STATE_UNDEFINED;
++ }
++ mutex_unlock(&ci->fsm.lock);
++}
++
+ /**
+ * ci_udc_stop: unregister a gadget driver
+ */
+@@ -1775,6 +1791,7 @@ static int ci_udc_stop(struct usb_gadget *gadget)
+ ci->driver = NULL;
+ spin_unlock_irqrestore(&ci->lock, flags);
+
++ ci_udc_stop_for_otg_fsm(ci);
+ return 0;
+ }
+
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 433bbc34a8a4..071964c7847f 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -884,11 +884,11 @@ static int usblp_wwait(struct usblp *usblp, int nonblock)
+
+ add_wait_queue(&usblp->wwait, &waita);
+ for (;;) {
+- set_current_state(TASK_INTERRUPTIBLE);
+ if (mutex_lock_interruptible(&usblp->mut)) {
+ rc = -EINTR;
+ break;
+ }
++ set_current_state(TASK_INTERRUPTIBLE);
+ rc = usblp_wtest(usblp, nonblock);
+ mutex_unlock(&usblp->mut);
+ if (rc <= 0)
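The usblp change reorders a classic sleep/wakeup sequence: mutex_lock_interruptible() may itself sleep, which resets the task back to TASK_RUNNING, so setting TASK_INTERRUPTIBLE before taking the mutex was ineffective and risked both a lost wakeup and a blocking call running in a non-running task state. The canonical ordering, sketched under kernel assumptions (wq, wait, mut, and check_condition are hypothetical stand-ins for the usblp fields):

add_wait_queue(&wq, &wait);
for (;;) {
	if (mutex_lock_interruptible(&mut))	/* may sleep itself */
		break;
	set_current_state(TASK_INTERRUPTIBLE);	/* mark sleeping afterwards */
	done = check_condition();
	mutex_unlock(&mut);
	if (done)
		break;
	schedule();				/* actually sleep until woken */
}
set_current_state(TASK_RUNNING);
remove_wait_queue(&wq, &wait);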
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 064123e44566..51156ec91c78 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -488,6 +488,9 @@ static int dwc3_phy_setup(struct dwc3 *dwc)
+ if (dwc->dis_u2_susphy_quirk)
+ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+
++ if (dwc->dis_enblslpm_quirk)
++ reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
++
+ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+
+ return 0;
+@@ -507,12 +510,18 @@ static int dwc3_core_init(struct dwc3 *dwc)
+
+ reg = dwc3_readl(dwc->regs, DWC3_GSNPSID);
+ /* This should read as U3 followed by revision number */
+- if ((reg & DWC3_GSNPSID_MASK) != 0x55330000) {
++ if ((reg & DWC3_GSNPSID_MASK) == 0x55330000) {
++ /* Detected DWC_usb3 IP */
++ dwc->revision = reg;
++ } else if ((reg & DWC3_GSNPSID_MASK) == 0x33310000) {
++ /* Detected DWC_usb31 IP */
++ dwc->revision = dwc3_readl(dwc->regs, DWC3_VER_NUMBER);
++ dwc->revision |= DWC3_REVISION_IS_DWC31;
++ } else {
+ dev_err(dwc->dev, "this is not a DesignWare USB3 DRD Core\n");
+ ret = -ENODEV;
+ goto err0;
+ }
+- dwc->revision = reg;
+
+ /*
+ * Write Linux Version Code to our GUID register so it's easy to figure
+@@ -879,6 +888,8 @@ static int dwc3_probe(struct platform_device *pdev)
+ "snps,dis_u3_susphy_quirk");
+ dwc->dis_u2_susphy_quirk = of_property_read_bool(node,
+ "snps,dis_u2_susphy_quirk");
++ dwc->dis_enblslpm_quirk = device_property_read_bool(dev,
++ "snps,dis_enblslpm_quirk");
+
+ dwc->tx_de_emphasis_quirk = of_property_read_bool(node,
+ "snps,tx_de_emphasis_quirk");
+@@ -909,6 +920,7 @@ static int dwc3_probe(struct platform_device *pdev)
+ dwc->rx_detect_poll_quirk = pdata->rx_detect_poll_quirk;
+ dwc->dis_u3_susphy_quirk = pdata->dis_u3_susphy_quirk;
+ dwc->dis_u2_susphy_quirk = pdata->dis_u2_susphy_quirk;
++ dwc->dis_enblslpm_quirk = pdata->dis_enblslpm_quirk;
+
+ dwc->tx_de_emphasis_quirk = pdata->tx_de_emphasis_quirk;
+ if (pdata->tx_de_emphasis)
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 044778884585..6e53ce9ce320 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -108,6 +108,9 @@
+ #define DWC3_GPRTBIMAP_FS0 0xc188
+ #define DWC3_GPRTBIMAP_FS1 0xc18c
+
++#define DWC3_VER_NUMBER 0xc1a0
++#define DWC3_VER_TYPE 0xc1a4
++
+ #define DWC3_GUSB2PHYCFG(n) (0xc200 + (n * 0x04))
+ #define DWC3_GUSB2I2CCTL(n) (0xc240 + (n * 0x04))
+
+@@ -175,6 +178,7 @@
+ #define DWC3_GUSB2PHYCFG_PHYSOFTRST (1 << 31)
+ #define DWC3_GUSB2PHYCFG_SUSPHY (1 << 6)
+ #define DWC3_GUSB2PHYCFG_ULPI_UTMI (1 << 4)
++#define DWC3_GUSB2PHYCFG_ENBLSLPM (1 << 8)
+
+ /* Global USB2 PHY Vendor Control Register */
+ #define DWC3_GUSB2PHYACC_NEWREGREQ (1 << 25)
+@@ -712,6 +716,8 @@ struct dwc3_scratchpad_array {
+ * @rx_detect_poll_quirk: set if we enable rx_detect to polling lfps quirk
+ * @dis_u3_susphy_quirk: set if we disable usb3 suspend phy
+ * @dis_u2_susphy_quirk: set if we disable usb2 suspend phy
++ * @dis_enblslpm_quirk: set if we clear enblslpm in GUSB2PHYCFG,
++ * disabling the suspend signal to the PHY.
+ * @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk
+ * @tx_de_emphasis: Tx de-emphasis value
+ * 0 - -6dB de-emphasis
+@@ -766,6 +772,14 @@ struct dwc3 {
+ u32 num_event_buffers;
+ u32 u1u2;
+ u32 maximum_speed;
++
++ /*
++ * All 3.1 IP version constants are greater than the 3.0 IP
++ * version constants. This works for most version checks in
++ * dwc3. However, in the future, this may not apply as
++ * features may be developed on newer versions of the 3.0 IP
++ * that are not in the 3.1 IP.
++ */
+ u32 revision;
+
+ #define DWC3_REVISION_173A 0x5533173a
+@@ -788,6 +802,13 @@ struct dwc3 {
+ #define DWC3_REVISION_270A 0x5533270a
+ #define DWC3_REVISION_280A 0x5533280a
+
++/*
++ * NOTICE: we're using bit 31 as a "is usb 3.1" flag. This is really
++ * just so dwc31 revisions are always larger than dwc3.
++ */
++#define DWC3_REVISION_IS_DWC31 0x80000000
++#define DWC3_USB31_REVISION_110A (0x3131302a | DWC3_REVISION_IS_DWC31)
++
+ enum dwc3_ep0_next ep0_next_event;
+ enum dwc3_ep0_state ep0state;
+ enum dwc3_link_state link_state;
+@@ -841,6 +862,7 @@ struct dwc3 {
+ unsigned rx_detect_poll_quirk:1;
+ unsigned dis_u3_susphy_quirk:1;
+ unsigned dis_u2_susphy_quirk:1;
++ unsigned dis_enblslpm_quirk:1;
+
+ unsigned tx_de_emphasis_quirk:1;
+ unsigned tx_de_emphasis:2;
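The ordering trick behind DWC3_REVISION_IS_DWC31 is worth making concrete: setting bit 31 guarantees every DWC_usb31 revision constant compares greater than every DWC_usb3 one, so existing "dwc->revision >= DWC3_REVISION_xxx" checks keep working unchanged. A standalone worked example, with the constants copied from the header above:

#include <stdio.h>

int main(void)
{
	unsigned int dwc3_280a  = 0x5533280a;               /* DWC3_REVISION_280A */
	unsigned int dwc31_110a = 0x3131302a | 0x80000000u; /* 110A | IS_DWC31 */

	/* 0xb131302a > 0x5533280a: usb31 always sorts above usb3 */
	printf("%d\n", dwc31_110a > dwc3_280a);             /* prints 1 */
	return 0;
}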
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index f62617999f3c..2cd33cb07ed9 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -27,6 +27,8 @@
+ #include "platform_data.h"
+
+ #define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 0xabcd
++#define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3_AXI 0xabce
++#define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB31 0xabcf
+ #define PCI_DEVICE_ID_INTEL_BYT 0x0f37
+ #define PCI_DEVICE_ID_INTEL_MRFLD 0x119e
+ #define PCI_DEVICE_ID_INTEL_BSW 0x22B7
+@@ -106,6 +108,22 @@ static int dwc3_pci_quirks(struct pci_dev *pdev)
+ }
+ }
+
++ if (pdev->vendor == PCI_VENDOR_ID_SYNOPSYS &&
++ (pdev->device == PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 ||
++ pdev->device == PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3_AXI ||
++ pdev->device == PCI_DEVICE_ID_SYNOPSYS_HAPSUSB31)) {
++
++ struct dwc3_platform_data pdata;
++
++ memset(&pdata, 0, sizeof(pdata));
++ pdata.usb3_lpm_capable = true;
++ pdata.has_lpm_erratum = true;
++ pdata.dis_enblslpm_quirk = true;
++
++ return platform_device_add_data(pci_get_drvdata(pdev), &pdata,
++ sizeof(pdata));
++ }
++
+ return 0;
+ }
+
+@@ -178,6 +196,14 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS,
+ PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3),
+ },
++ {
++ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS,
++ PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3_AXI),
++ },
++ {
++ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS,
++ PCI_DEVICE_ID_SYNOPSYS_HAPSUSB31),
++ },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW), },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT), },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MRFLD), },
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 1e8bdf817811..9c38e4b0a262 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1872,27 +1872,32 @@ static int dwc3_cleanup_done_reqs(struct dwc3 *dwc, struct dwc3_ep *dep,
+ unsigned int i;
+ int ret;
+
+- req = next_request(&dep->req_queued);
+- if (!req) {
+- WARN_ON_ONCE(1);
+- return 1;
+- }
+- i = 0;
+ do {
+- slot = req->start_slot + i;
+- if ((slot == DWC3_TRB_NUM - 1) &&
++ req = next_request(&dep->req_queued);
++ if (!req) {
++ WARN_ON_ONCE(1);
++ return 1;
++ }
++ i = 0;
++ do {
++ slot = req->start_slot + i;
++ if ((slot == DWC3_TRB_NUM - 1) &&
+ usb_endpoint_xfer_isoc(dep->endpoint.desc))
+- slot++;
+- slot %= DWC3_TRB_NUM;
+- trb = &dep->trb_pool[slot];
++ slot++;
++ slot %= DWC3_TRB_NUM;
++ trb = &dep->trb_pool[slot];
++
++ ret = __dwc3_cleanup_done_trbs(dwc, dep, req, trb,
++ event, status);
++ if (ret)
++ break;
++ } while (++i < req->request.num_mapped_sgs);
++
++ dwc3_gadget_giveback(dep, req, status);
+
+- ret = __dwc3_cleanup_done_trbs(dwc, dep, req, trb,
+- event, status);
+ if (ret)
+ break;
+- } while (++i < req->request.num_mapped_sgs);
+-
+- dwc3_gadget_giveback(dep, req, status);
++ } while (1);
+
+ if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+ list_empty(&dep->req_queued)) {
+@@ -2718,12 +2723,34 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ }
+
+ dwc->gadget.ops = &dwc3_gadget_ops;
+- dwc->gadget.max_speed = USB_SPEED_SUPER;
+ dwc->gadget.speed = USB_SPEED_UNKNOWN;
+ dwc->gadget.sg_supported = true;
+ dwc->gadget.name = "dwc3-gadget";
+
+ /*
++ * FIXME We might be setting max_speed to <SUPER; however, versions
++ * <2.20a of dwc3 have an issue with metastability (documented
++ * elsewhere in this driver) which tells us we can't set max speed to
++ * anything lower than SUPER.
++ *
++ * Because gadget.max_speed is only used by composite.c and function
++ * drivers (i.e. it won't go into dwc3's registers) we are allowing this
++ * to happen so we avoid sending SuperSpeed Capability descriptor
++ * together with our BOS descriptor as that could confuse host into
++ * thinking we can handle super speed.
++ *
++ * Note that, in fact, we won't even support GetBOS requests when speed
++ * is less than super speed because we don't have means, yet, to tell
++ * composite.c that we are USB 2.0 + LPM ECN.
++ */
++ if (dwc->revision < DWC3_REVISION_220A)
++ dwc3_trace(trace_dwc3_gadget,
++ "Changing max_speed on rev %08x\n",
++ dwc->revision);
++
++ dwc->gadget.max_speed = dwc->maximum_speed;
++
++ /*
+ * Per databook, DWC3 needs buffer size to be aligned to MaxPacketSize
+ * on ep out.
+ */
+diff --git a/drivers/usb/dwc3/platform_data.h b/drivers/usb/dwc3/platform_data.h
+index d3614ecbb9ca..db2938002260 100644
+--- a/drivers/usb/dwc3/platform_data.h
++++ b/drivers/usb/dwc3/platform_data.h
+@@ -42,6 +42,7 @@ struct dwc3_platform_data {
+ unsigned rx_detect_poll_quirk:1;
+ unsigned dis_u3_susphy_quirk:1;
+ unsigned dis_u2_susphy_quirk:1;
++ unsigned dis_enblslpm_quirk:1;
+
+ unsigned tx_de_emphasis_quirk:1;
+ unsigned tx_de_emphasis:2;
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index f0f2b066ac08..f92f5aff0dd5 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -1633,7 +1633,7 @@ static irqreturn_t usba_udc_irq(int irq, void *devid)
+ spin_lock(&udc->lock);
+
+ int_enb = usba_int_enb_get(udc);
+- status = usba_readl(udc, INT_STA) & int_enb;
++ status = usba_readl(udc, INT_STA) & (int_enb | USBA_HIGH_SPEED);
+ DBG(DBG_INT, "irq, status=%#08x\n", status);
+
+ if (status & USBA_DET_SUSPEND) {
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index cf0ed42f5591..6706aef907f4 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -1913,7 +1913,7 @@ static void defect7374_disable_data_eps(struct net2280 *dev)
+
+ for (i = 1; i < 5; i++) {
+ ep = &dev->ep[i];
+- writel(0, &ep->cfg->ep_cfg);
++ writel(i, &ep->cfg->ep_cfg);
+ }
+
+ /* CSROUT, CSRIN, PCIOUT, PCIIN, STATIN, RCIN */
+diff --git a/drivers/usb/host/ehci-orion.c b/drivers/usb/host/ehci-orion.c
+index bfcbb9aa8816..ee8d5faa0194 100644
+--- a/drivers/usb/host/ehci-orion.c
++++ b/drivers/usb/host/ehci-orion.c
+@@ -224,7 +224,8 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ priv->phy = devm_phy_optional_get(&pdev->dev, "usb");
+ if (IS_ERR(priv->phy)) {
+ err = PTR_ERR(priv->phy);
+- goto err_phy_get;
++ if (err != -ENOSYS)
++ goto err_phy_get;
+ } else {
+ err = phy_init(priv->phy);
+ if (err)
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 97ffe3997273..fe9e2d3a223c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -3914,28 +3914,6 @@ cleanup:
+ return ret;
+ }
+
+-static int ep_ring_is_processing(struct xhci_hcd *xhci,
+- int slot_id, unsigned int ep_index)
+-{
+- struct xhci_virt_device *xdev;
+- struct xhci_ring *ep_ring;
+- struct xhci_ep_ctx *ep_ctx;
+- struct xhci_virt_ep *xep;
+- dma_addr_t hw_deq;
+-
+- xdev = xhci->devs[slot_id];
+- xep = &xhci->devs[slot_id]->eps[ep_index];
+- ep_ring = xep->ring;
+- ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+-
+- if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) != EP_STATE_RUNNING)
+- return 0;
+-
+- hw_deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK;
+- return (hw_deq !=
+- xhci_trb_virt_to_dma(ep_ring->enq_seg, ep_ring->enqueue));
+-}
+-
+ /*
+ * Check transfer ring to guarantee there is enough room for the urb.
+ * Update ISO URB start_frame and interval.
+@@ -4001,10 +3979,12 @@ int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags,
+ }
+
+ /* Calculate the start frame and put it in urb->start_frame. */
+- if (HCC_CFC(xhci->hcc_params) &&
+- ep_ring_is_processing(xhci, slot_id, ep_index)) {
+- urb->start_frame = xep->next_frame_id;
+- goto skip_start_over;
++ if (HCC_CFC(xhci->hcc_params) && !list_empty(&ep_ring->td_list)) {
++ if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) ==
++ EP_STATE_RUNNING) {
++ urb->start_frame = xep->next_frame_id;
++ goto skip_start_over;
++ }
+ }
+
+ start_frame = readl(&xhci->run_regs->microframe_index);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 9957bd96d4bc..385f9f5d6715 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -175,6 +175,16 @@ int xhci_reset(struct xhci_hcd *xhci)
+ command |= CMD_RESET;
+ writel(command, &xhci->op_regs->command);
+
++ /* Existing Intel xHCI controllers require a delay of 1 ms,
++ * after setting the CMD_RESET bit, and before accessing any
++ * HC registers. This allows the HC to complete the
++ * reset operation and be ready for HC register access.
++ * Without this delay, the subsequent HC register access
++ * may, very rarely, result in a system hang.
++ */
++ if (xhci->quirks & XHCI_INTEL_HOST)
++ udelay(1000);
++
+ ret = xhci_handshake(&xhci->op_regs->command,
+ CMD_RESET, 0, 10 * 1000 * 1000);
+ if (ret)
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 4a518ff12310..2c624a10748d 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -132,7 +132,7 @@ static inline struct musb *dev_to_musb(struct device *dev)
+ /*-------------------------------------------------------------------------*/
+
+ #ifndef CONFIG_BLACKFIN
+-static int musb_ulpi_read(struct usb_phy *phy, u32 offset)
++static int musb_ulpi_read(struct usb_phy *phy, u32 reg)
+ {
+ void __iomem *addr = phy->io_priv;
+ int i = 0;
+@@ -151,7 +151,7 @@ static int musb_ulpi_read(struct usb_phy *phy, u32 offset)
+ * ULPICarKitControlDisableUTMI after clearing POWER_SUSPENDM.
+ */
+
+- musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)offset);
++ musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)reg);
+ musb_writeb(addr, MUSB_ULPI_REG_CONTROL,
+ MUSB_ULPI_REG_REQ | MUSB_ULPI_RDN_WR);
+
+@@ -176,7 +176,7 @@ out:
+ return ret;
+ }
+
+-static int musb_ulpi_write(struct usb_phy *phy, u32 offset, u32 data)
++static int musb_ulpi_write(struct usb_phy *phy, u32 val, u32 reg)
+ {
+ void __iomem *addr = phy->io_priv;
+ int i = 0;
+@@ -191,8 +191,8 @@ static int musb_ulpi_write(struct usb_phy *phy, u32 offset, u32 data)
+ power &= ~MUSB_POWER_SUSPENDM;
+ musb_writeb(addr, MUSB_POWER, power);
+
+- musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)offset);
+- musb_writeb(addr, MUSB_ULPI_REG_DATA, (u8)data);
++ musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)reg);
++ musb_writeb(addr, MUSB_ULPI_REG_DATA, (u8)val);
+ musb_writeb(addr, MUSB_ULPI_REG_CONTROL, MUSB_ULPI_REG_REQ);
+
+ while (!(musb_readb(addr, MUSB_ULPI_REG_CONTROL)
+diff --git a/drivers/usb/phy/phy-omap-otg.c b/drivers/usb/phy/phy-omap-otg.c
+index 1270906ccb95..c4bf2de6d14e 100644
+--- a/drivers/usb/phy/phy-omap-otg.c
++++ b/drivers/usb/phy/phy-omap-otg.c
+@@ -105,7 +105,6 @@ static int omap_otg_probe(struct platform_device *pdev)
+ extcon = extcon_get_extcon_dev(config->extcon);
+ if (!extcon)
+ return -EPROBE_DEFER;
+- otg_dev->extcon = extcon;
+
+ otg_dev = devm_kzalloc(&pdev->dev, sizeof(*otg_dev), GFP_KERNEL);
+ if (!otg_dev)
+@@ -115,6 +114,7 @@ static int omap_otg_probe(struct platform_device *pdev)
+ if (IS_ERR(otg_dev->base))
+ return PTR_ERR(otg_dev->base);
+
++ otg_dev->extcon = extcon;
+ otg_dev->id_nb.notifier_call = omap_otg_id_notifier;
+ otg_dev->vbus_nb.notifier_call = omap_otg_vbus_notifier;
+
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 6956c4f62216..e945b5195258 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -162,6 +162,7 @@ static void option_instat_callback(struct urb *urb);
+ #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001
+ #define NOVATELWIRELESS_PRODUCT_E362 0x9010
+ #define NOVATELWIRELESS_PRODUCT_E371 0x9011
++#define NOVATELWIRELESS_PRODUCT_U620L 0x9022
+ #define NOVATELWIRELESS_PRODUCT_G2 0xA010
+ #define NOVATELWIRELESS_PRODUCT_MC551 0xB001
+
+@@ -357,6 +358,7 @@ static void option_instat_callback(struct urb *urb);
+ /* This is the 4G XS Stick W14 a.k.a. Mobilcom Debitel Surf-Stick *
+ * It seems to contain a Qualcomm QSC6240/6290 chipset */
+ #define FOUR_G_SYSTEMS_PRODUCT_W14 0x9603
++#define FOUR_G_SYSTEMS_PRODUCT_W100 0x9b01
+
+ /* iBall 3.5G connect wireless modem */
+ #define IBALL_3_5G_CONNECT 0x9605
+@@ -522,6 +524,11 @@ static const struct option_blacklist_info four_g_w14_blacklist = {
+ .sendsetup = BIT(0) | BIT(1),
+ };
+
++static const struct option_blacklist_info four_g_w100_blacklist = {
++ .sendsetup = BIT(1) | BIT(2),
++ .reserved = BIT(3),
++};
++
+ static const struct option_blacklist_info alcatel_x200_blacklist = {
+ .sendsetup = BIT(0) | BIT(1),
+ .reserved = BIT(4),
+@@ -1060,6 +1067,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E362, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E371, 0xff, 0xff, 0xff) },
++ { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_U620L, 0xff, 0x00, 0x00) },
+
+ { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) },
+ { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) },
+@@ -1653,6 +1661,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
+ .driver_info = (kernel_ulong_t)&four_g_w14_blacklist
+ },
++ { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
++ .driver_info = (kernel_ulong_t)&four_g_w100_blacklist
++ },
+ { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) },
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index ebcec8cda858..514fa91cf74e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -22,6 +22,8 @@
+ #define DRIVER_AUTHOR "Qualcomm Inc"
+ #define DRIVER_DESC "Qualcomm USB Serial driver"
+
++#define QUECTEL_EC20_PID 0x9215
++
+ /* standard device layouts supported by this driver */
+ enum qcserial_layouts {
+ QCSERIAL_G2K = 0, /* Gobi 2000 */
+@@ -153,6 +155,8 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x1199, 0x9056)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9060)}, /* Sierra Wireless Modem */
+ {DEVICE_SWI(0x1199, 0x9061)}, /* Sierra Wireless Modem */
++ {DEVICE_SWI(0x1199, 0x9070)}, /* Sierra Wireless MC74xx/EM74xx */
++ {DEVICE_SWI(0x1199, 0x9071)}, /* Sierra Wireless MC74xx/EM74xx */
+ {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+@@ -167,6 +171,38 @@ static const struct usb_device_id id_table[] = {
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+
++static int handle_quectel_ec20(struct device *dev, int ifnum)
++{
++ int altsetting = 0;
++
++ /*
++ * Quectel EC20 Mini PCIe LTE module layout:
++ * 0: DM/DIAG (use libqcdm from ModemManager for communication)
++ * 1: NMEA
++ * 2: AT-capable modem port
++ * 3: Modem interface
++ * 4: NDIS
++ */
++ switch (ifnum) {
++ case 0:
++ dev_dbg(dev, "Quectel EC20 DM/DIAG interface found\n");
++ break;
++ case 1:
++ dev_dbg(dev, "Quectel EC20 NMEA GPS interface found\n");
++ break;
++ case 2:
++ case 3:
++ dev_dbg(dev, "Quectel EC20 Modem port found\n");
++ break;
++ case 4:
++ /* Don't claim the QMI/net interface */
++ altsetting = -1;
++ break;
++ }
++
++ return altsetting;
++}
++
+ static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
+ {
+ struct usb_host_interface *intf = serial->interface->cur_altsetting;
+@@ -176,6 +212,10 @@ static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
+ __u8 ifnum;
+ int altsetting = -1;
+
++ /* we only support vendor specific functions */
++ if (intf->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
++ goto done;
++
+ nintf = serial->dev->actconfig->desc.bNumInterfaces;
+ dev_dbg(dev, "Num Interfaces = %d\n", nintf);
+ ifnum = intf->desc.bInterfaceNumber;
+@@ -235,6 +275,12 @@ static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
+ altsetting = -1;
+ break;
+ case QCSERIAL_G2K:
++ /* handle non-standard layouts */
++ if (nintf == 5 && id->idProduct == QUECTEL_EC20_PID) {
++ altsetting = handle_quectel_ec20(dev, ifnum);
++ goto done;
++ }
++
+ /*
+ * Gobi 2K+ USB layout:
+ * 0: QMI/net
+@@ -295,29 +341,39 @@ static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
+ break;
+ case QCSERIAL_HWI:
+ /*
+- * Huawei layout:
+- * 0: AT-capable modem port
+- * 1: DM/DIAG
+- * 2: AT-capable modem port
+- * 3: CCID-compatible PCSC interface
+- * 4: QMI/net
+- * 5: NMEA
++ * Huawei devices map functions by subclass + protocol
++ * instead of interface numbers. The protocol identifies
++ * a specific function, while the subclass indicates a
++ * specific firmware source
++ *
++ * This is a blacklist of functions known to be
++ * non-serial. The rest are assumed to be serial and
++ * will be handled by this driver.
+ */
+- switch (ifnum) {
+- case 0:
+- case 2:
+- dev_dbg(dev, "Modem port found\n");
+- break;
+- case 1:
+- dev_dbg(dev, "DM/DIAG interface found\n");
+- break;
+- case 5:
+- dev_dbg(dev, "NMEA GPS interface found\n");
+- break;
+- default:
+- /* don't claim any unsupported interface */
++ switch (intf->desc.bInterfaceProtocol) {
++ /* QMI combined (qmi_wwan) */
++ case 0x07:
++ case 0x37:
++ case 0x67:
++ /* QMI data (qmi_wwan) */
++ case 0x08:
++ case 0x38:
++ case 0x68:
++ /* QMI control (qmi_wwan) */
++ case 0x09:
++ case 0x39:
++ case 0x69:
++ /* NCM like (huawei_cdc_ncm) */
++ case 0x16:
++ case 0x46:
++ case 0x76:
+ altsetting = -1;
+ break;
++ default:
++ dev_dbg(dev, "Huawei type serial port found (%02x/%02x/%02x)\n",
++ intf->desc.bInterfaceClass,
++ intf->desc.bInterfaceSubClass,
++ intf->desc.bInterfaceProtocol);
+ }
+ break;
+ default:
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index e9da41d9fe7f..2694df2f4559 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -159,6 +159,7 @@ static const struct usb_device_id ti_id_table_3410[] = {
+ { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STEREO_PLUG_ID) },
+ { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) },
++ { USB_DEVICE(HONEYWELL_VENDOR_ID, HONEYWELL_HGI80_PRODUCT_ID) },
+ { } /* terminator */
+ };
+
+@@ -191,6 +192,7 @@ static const struct usb_device_id ti_id_table_combined[] = {
+ { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) },
+ { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) },
++ { USB_DEVICE(HONEYWELL_VENDOR_ID, HONEYWELL_HGI80_PRODUCT_ID) },
+ { } /* terminator */
+ };
+
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.h b/drivers/usb/serial/ti_usb_3410_5052.h
+index 4a2423e84d55..98f35c656c02 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.h
++++ b/drivers/usb/serial/ti_usb_3410_5052.h
+@@ -56,6 +56,10 @@
+ #define ABBOTT_PRODUCT_ID ABBOTT_STEREO_PLUG_ID
+ #define ABBOTT_STRIP_PORT_ID 0x3420
+
++/* Honeywell vendor and product IDs */
++#define HONEYWELL_VENDOR_ID 0x10ac
++#define HONEYWELL_HGI80_PRODUCT_ID 0x0102 /* Honeywell HGI80 */
++
+ /* Commands */
+ #define TI_GET_VERSION 0x01
+ #define TI_GET_PORT_STATUS 0x02
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 6cd5e65c4aff..fb236239972f 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -39,6 +39,7 @@
+ #include <asm/irq.h>
+ #include <asm/idle.h>
+ #include <asm/io_apic.h>
++#include <asm/i8259.h>
+ #include <asm/xen/pci.h>
+ #include <xen/page.h>
+ #endif
+@@ -420,7 +421,7 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
+ return xen_allocate_irq_dynamic();
+
+ /* Legacy IRQ descriptors are already allocated by the arch. */
+- if (gsi < NR_IRQS_LEGACY)
++ if (gsi < nr_legacy_irqs())
+ irq = gsi;
+ else
+ irq = irq_alloc_desc_at(gsi, -1);
+@@ -446,7 +447,7 @@ static void xen_free_irq(unsigned irq)
+ kfree(info);
+
+ /* Legacy IRQ descriptors are managed by the arch. */
+- if (irq < NR_IRQS_LEGACY)
++ if (irq < nr_legacy_irqs())
+ return;
+
+ irq_free_desc(irq);
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index f60f0121e331..eed2050db9be 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -375,7 +375,7 @@ int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
+ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ struct pid *pid, struct task_struct *task, int whole)
+ {
+- unsigned long vsize, eip, esp, wchan = ~0UL;
++ unsigned long vsize, eip, esp, wchan = 0;
+ int priority, nice;
+ int tty_pgrp = -1, tty_nr = 0;
+ sigset_t sigign, sigcatch;
+@@ -507,7 +507,19 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ seq_put_decimal_ull(m, ' ', task->blocked.sig[0] & 0x7fffffffUL);
+ seq_put_decimal_ull(m, ' ', sigign.sig[0] & 0x7fffffffUL);
+ seq_put_decimal_ull(m, ' ', sigcatch.sig[0] & 0x7fffffffUL);
+- seq_put_decimal_ull(m, ' ', wchan);
++
++ /*
++ * We used to output the absolute kernel address, but that's an
++ * information leak - so instead we show a 0/1 flag here, to signal
++ * to user-space whether there's a wchan field in /proc/PID/wchan.
++ *
++ * This works with older implementations of procps as well.
++ */
++ if (wchan)
++ seq_puts(m, " 1");
++ else
++ seq_puts(m, " 0");
++
+ seq_put_decimal_ull(m, ' ', 0);
+ seq_put_decimal_ull(m, ' ', 0);
+ seq_put_decimal_ll(m, ' ', task->exit_signal);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b25eee4cead5..29595af32866 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -430,13 +430,10 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
+
+ wchan = get_wchan(task);
+
+- if (lookup_symbol_name(wchan, symname) < 0) {
+- if (!ptrace_may_access(task, PTRACE_MODE_READ))
+- return 0;
+- seq_printf(m, "%lu", wchan);
+- } else {
++ if (wchan && ptrace_may_access(task, PTRACE_MODE_READ) && !lookup_symbol_name(wchan, symname))
+ seq_printf(m, "%s", symname);
+- }
++ else
++ seq_putc(m, '0');
+
+ return 0;
+ }
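After these two /proc changes, neither the wchan field of /proc/PID/stat nor /proc/PID/wchan ever exposes a raw kernel address: stat reports a 0/1 flag and wchan reports a symbol name or "0". A small userspace check, assuming the patched behaviour:

#include <stdio.h>

int main(void)
{
	char buf[64] = "";
	FILE *f = fopen("/proc/self/wchan", "r");

	if (f) {
		fgets(buf, sizeof(buf), f);
		fclose(f);
	}
	/* A running task prints "0"; a sleeping one a symbol name. */
	printf("wchan: %s\n", buf);
	return 0;
}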
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 1bef9e21e725..e480f9fbdd62 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -442,6 +442,17 @@ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
+ (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
+ idx++)
+
++static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
++{
++ struct kvm_vcpu *vcpu;
++ int i;
++
++ kvm_for_each_vcpu(i, vcpu, kvm)
++ if (vcpu->vcpu_id == id)
++ return vcpu;
++ return NULL;
++}
++
+ #define kvm_for_each_memslot(memslot, slots) \
+ for (memslot = &slots->memslots[0]; \
+ memslot < slots->memslots + KVM_MEM_SLOTS_NUM && memslot->npages;\
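kvm_get_vcpu_by_id() exists because a vcpu's user-assigned id (the APIC ID on x86) need not match its index in the vcpus array, so a lookup by id requires a scan. A hypothetical in-kernel caller, using the real kvm_vcpu_kick() helper; dest_id is assumed to come from the emulated interrupt path:

/* Resolve a destination by vcpu id rather than by array index. */
struct kvm_vcpu *dest = kvm_get_vcpu_by_id(kvm, dest_id);

if (dest)
	kvm_vcpu_kick(dest);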
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index d072ded41678..9bddda029b45 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -605,7 +605,7 @@ extern void n_tty_inherit_ops(struct tty_ldisc_ops *ops);
+
+ /* tty_audit.c */
+ #ifdef CONFIG_AUDIT
+-extern void tty_audit_add_data(struct tty_struct *tty, unsigned char *data,
++extern void tty_audit_add_data(struct tty_struct *tty, const void *data,
+ size_t size, unsigned icanon);
+ extern void tty_audit_exit(void);
+ extern void tty_audit_fork(struct signal_struct *sig);
+@@ -613,8 +613,8 @@ extern void tty_audit_tiocsti(struct tty_struct *tty, char ch);
+ extern void tty_audit_push(struct tty_struct *tty);
+ extern int tty_audit_push_current(void);
+ #else
+-static inline void tty_audit_add_data(struct tty_struct *tty,
+- unsigned char *data, size_t size, unsigned icanon)
++static inline void tty_audit_add_data(struct tty_struct *tty, const void *data,
++ size_t size, unsigned icanon)
+ {
+ }
+ static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 9e1a59e01fa2..544a0201abdb 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -469,6 +469,7 @@ struct hci_conn {
+ struct delayed_work auto_accept_work;
+ struct delayed_work idle_work;
+ struct delayed_work le_conn_timeout;
++ struct work_struct le_scan_cleanup;
+
+ struct device dev;
+ struct dentry *debugfs;
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index ce009710120c..6816f0fa5693 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -63,12 +63,13 @@ static inline struct metadata_dst *tun_rx_dst(int md_size)
+ static inline struct metadata_dst *tun_dst_unclone(struct sk_buff *skb)
+ {
+ struct metadata_dst *md_dst = skb_metadata_dst(skb);
+- int md_size = md_dst->u.tun_info.options_len;
++ int md_size;
+ struct metadata_dst *new_md;
+
+ if (!md_dst)
+ return ERR_PTR(-EINVAL);
+
++ md_size = md_dst->u.tun_info.options_len;
+ new_md = metadata_dst_alloc(md_size, GFP_ATOMIC);
+ if (!new_md)
+ return ERR_PTR(-ENOMEM);
+diff --git a/include/net/inet_common.h b/include/net/inet_common.h
+index 279f83591971..109e3ee9108c 100644
+--- a/include/net/inet_common.h
++++ b/include/net/inet_common.h
+@@ -41,7 +41,8 @@ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len,
+
+ static inline void inet_ctl_sock_destroy(struct sock *sk)
+ {
+- sock_release(sk->sk_socket);
++ if (sk)
++ sock_release(sk->sk_socket);
+ }
+
+ #endif
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 727d6e9a9685..965fa5b1a274 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -317,7 +317,7 @@ void fib_flush_external(struct net *net);
+
+ /* Exported by fib_semantics.c */
+ int ip_fib_check_default(__be32 gw, struct net_device *dev);
+-int fib_sync_down_dev(struct net_device *dev, unsigned long event);
++int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
+ int fib_sync_down_addr(struct net *net, __be32 local);
+ int fib_sync_up(struct net_device *dev, unsigned int nh_flags);
+ void fib_select_multipath(struct fib_result *res);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 2dda439c8cb8..ec4836f243bc 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -137,18 +137,51 @@ static void hci_conn_cleanup(struct hci_conn *conn)
+ hci_conn_put(conn);
+ }
+
+-/* This function requires the caller holds hdev->lock */
+-static void hci_connect_le_scan_remove(struct hci_conn *conn)
++static void le_scan_cleanup(struct work_struct *work)
+ {
+- hci_connect_le_scan_cleanup(conn);
++ struct hci_conn *conn = container_of(work, struct hci_conn,
++ le_scan_cleanup);
++ struct hci_dev *hdev = conn->hdev;
++ struct hci_conn *c = NULL;
+
+- /* We can't call hci_conn_del here since that would deadlock
+- * with trying to call cancel_delayed_work_sync(&conn->disc_work).
+- * Instead, call just hci_conn_cleanup() which contains the bare
+- * minimum cleanup operations needed for a connection in this
+- * state.
++ BT_DBG("%s hcon %p", hdev->name, conn);
++
++ hci_dev_lock(hdev);
++
++ /* Check that the hci_conn is still around */
++ rcu_read_lock();
++ list_for_each_entry_rcu(c, &hdev->conn_hash.list, list) {
++ if (c == conn)
++ break;
++ }
++ rcu_read_unlock();
++
++ if (c == conn) {
++ hci_connect_le_scan_cleanup(conn);
++ hci_conn_cleanup(conn);
++ }
++
++ hci_dev_unlock(hdev);
++ hci_dev_put(hdev);
++ hci_conn_put(conn);
++}
++
++static void hci_connect_le_scan_remove(struct hci_conn *conn)
++{
++ BT_DBG("%s hcon %p", conn->hdev->name, conn);
++
++ /* We can't call hci_conn_del/hci_conn_cleanup here since that
++ * could deadlock with another hci_conn_del() call that's holding
++ * hci_dev_lock and doing cancel_delayed_work_sync(&conn->disc_work).
++ * Instead, grab temporary extra references to the hci_dev and
++ * hci_conn and perform the necessary cleanup in a separate work
++ * callback.
+ */
+- hci_conn_cleanup(conn);
++
++ hci_dev_hold(conn->hdev);
++ hci_conn_get(conn);
++
++ schedule_work(&conn->le_scan_cleanup);
+ }
+
+ static void hci_acl_create_connection(struct hci_conn *conn)
+@@ -580,6 +613,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ INIT_DELAYED_WORK(&conn->auto_accept_work, hci_conn_auto_accept);
+ INIT_DELAYED_WORK(&conn->idle_work, hci_conn_idle);
+ INIT_DELAYED_WORK(&conn->le_conn_timeout, le_conn_timeout);
++ INIT_WORK(&conn->le_scan_cleanup, le_scan_cleanup);
+
+ atomic_set(&conn->refcnt, 0);
+
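The hci_conn rework above is an instance of a general pattern: a path that may run under hdev->lock cannot synchronously cancel work that itself takes hdev->lock, so teardown is deferred to a work item after pinning the objects with extra references, and the worker re-validates that the connection is still on the list before cleaning it up. A generic sketch of the pattern; obj, still_listed, do_cleanup, obj_get, and obj_put are hypothetical names standing in for the hci_dev/hci_conn equivalents:

static void obj_cleanup_work(struct work_struct *work)
{
	struct obj *o = container_of(work, struct obj, cleanup_work);

	mutex_lock(&o->parent->lock);
	if (still_listed(o))		/* re-validate: it may be gone */
		do_cleanup(o);
	mutex_unlock(&o->parent->lock);
	obj_put(o);			/* drop the reference taken below */
}

static void obj_schedule_cleanup(struct obj *o)
{
	obj_get(o);			/* keep o alive until the work runs */
	schedule_work(&o->cleanup_work);
}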
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index f1a117f8cad2..0bec4588c3c8 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -401,6 +401,20 @@ static void hidp_idle_timeout(unsigned long arg)
+ {
+ struct hidp_session *session = (struct hidp_session *) arg;
+
++ /* The HIDP user-space API only contains calls to add and remove
++ * devices. There is no way to forward events of any kind. Therefore,
++ * we have to forcefully disconnect a device on idle-timeouts. This is
++ * unfortunate and weird API design, but it is spec-compliant and
++ * required for backwards-compatibility. Hence, on idle-timeout, we
++ * signal driver-detach events, so poll() will be woken up with an
++ * error-condition on both sockets.
++ */
++
++ session->intr_sock->sk->sk_err = EUNATCH;
++ session->ctrl_sock->sk->sk_err = EUNATCH;
++ wake_up_interruptible(sk_sleep(session->intr_sock->sk));
++ wake_up_interruptible(sk_sleep(session->ctrl_sock->sk));
++
+ hidp_session_terminate(session);
+ }
+
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index c4fe2fee753f..72c9376ec03c 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -3090,6 +3090,11 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ } else {
+ u8 addr_type;
+
++ if (cp->addr.type == BDADDR_LE_PUBLIC)
++ addr_type = ADDR_LE_DEV_PUBLIC;
++ else
++ addr_type = ADDR_LE_DEV_RANDOM;
++
+ conn = hci_conn_hash_lookup_ba(hdev, LE_LINK,
+ &cp->addr.bdaddr);
+ if (conn) {
+@@ -3105,13 +3110,10 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ */
+ if (!cp->disconnect)
+ conn = NULL;
++ } else {
++ hci_conn_params_del(hdev, &cp->addr.bdaddr, addr_type);
+ }
+
+- if (cp->addr.type == BDADDR_LE_PUBLIC)
+- addr_type = ADDR_LE_DEV_PUBLIC;
+- else
+- addr_type = ADDR_LE_DEV_RANDOM;
+-
+ hci_remove_irk(hdev, &cp->addr.bdaddr, addr_type);
+
+ err = hci_remove_ltk(hdev, &cp->addr.bdaddr, addr_type);
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 0771c8cb9307..d6a5a0bc7df5 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -306,7 +306,7 @@ void dst_release(struct dst_entry *dst)
+ if (unlikely(newrefcnt < 0))
+ net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
+ __func__, dst, newrefcnt);
+- if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt)
++ if (!newrefcnt && unlikely(dst->flags & DST_NOCACHE))
+ call_rcu(&dst->rcu_head, dst_destroy_rcu);
+ }
+ }
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 690bcbc59f26..457b2cd75b85 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1110,9 +1110,10 @@ static void nl_fib_lookup_exit(struct net *net)
+ net->ipv4.fibnl = NULL;
+ }
+
+-static void fib_disable_ip(struct net_device *dev, unsigned long event)
++static void fib_disable_ip(struct net_device *dev, unsigned long event,
++ bool force)
+ {
+- if (fib_sync_down_dev(dev, event))
++ if (fib_sync_down_dev(dev, event, force))
+ fib_flush(dev_net(dev));
+ rt_cache_flush(dev_net(dev));
+ arp_ifdown(dev);
+@@ -1140,7 +1141,7 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
+ /* Last address was deleted from this interface.
+ * Disable IP.
+ */
+- fib_disable_ip(dev, event);
++ fib_disable_ip(dev, event, true);
+ } else {
+ rt_cache_flush(dev_net(dev));
+ }
+@@ -1157,7 +1158,7 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
+ unsigned int flags;
+
+ if (event == NETDEV_UNREGISTER) {
+- fib_disable_ip(dev, event);
++ fib_disable_ip(dev, event, true);
+ rt_flush_dev(dev);
+ return NOTIFY_DONE;
+ }
+@@ -1178,14 +1179,14 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
+ rt_cache_flush(net);
+ break;
+ case NETDEV_DOWN:
+- fib_disable_ip(dev, event);
++ fib_disable_ip(dev, event, false);
+ break;
+ case NETDEV_CHANGE:
+ flags = dev_get_flags(dev);
+ if (flags & (IFF_RUNNING | IFF_LOWER_UP))
+ fib_sync_up(dev, RTNH_F_LINKDOWN);
+ else
+- fib_sync_down_dev(dev, event);
++ fib_sync_down_dev(dev, event, false);
+ /* fall through */
+ case NETDEV_CHANGEMTU:
+ rt_cache_flush(net);
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 064bd3caaa4f..ef5892f5e046 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -864,14 +864,21 @@ static bool fib_valid_prefsrc(struct fib_config *cfg, __be32 fib_prefsrc)
+ if (cfg->fc_type != RTN_LOCAL || !cfg->fc_dst ||
+ fib_prefsrc != cfg->fc_dst) {
+ u32 tb_id = cfg->fc_table;
++ int rc;
+
+ if (tb_id == RT_TABLE_MAIN)
+ tb_id = RT_TABLE_LOCAL;
+
+- if (inet_addr_type_table(cfg->fc_nlinfo.nl_net,
+- fib_prefsrc, tb_id) != RTN_LOCAL) {
+- return false;
++ rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
++ fib_prefsrc, tb_id);
++
++ if (rc != RTN_LOCAL && tb_id != RT_TABLE_LOCAL) {
++ rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
++ fib_prefsrc, RT_TABLE_LOCAL);
+ }
++
++ if (rc != RTN_LOCAL)
++ return false;
+ }
+ return true;
+ }
+@@ -1281,7 +1288,13 @@ int fib_sync_down_addr(struct net *net, __be32 local)
+ return ret;
+ }
+
+-int fib_sync_down_dev(struct net_device *dev, unsigned long event)
++/* Event force Flags Description
++ * NETDEV_CHANGE 0 LINKDOWN Carrier OFF, not for scope host
++ * NETDEV_DOWN 0 LINKDOWN|DEAD Link down, not for scope host
++ * NETDEV_DOWN 1 LINKDOWN|DEAD Last address removed
++ * NETDEV_UNREGISTER 1 LINKDOWN|DEAD Device removed
++ */
++int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
+ {
+ int ret = 0;
+ int scope = RT_SCOPE_NOWHERE;
+@@ -1290,8 +1303,7 @@ int fib_sync_down_dev(struct net_device *dev, unsigned long event)
+ struct hlist_head *head = &fib_info_devhash[hash];
+ struct fib_nh *nh;
+
+- if (event == NETDEV_UNREGISTER ||
+- event == NETDEV_DOWN)
++ if (force)
+ scope = -1;
+
+ hlist_for_each_entry(nh, head, nh_hash) {
+@@ -1440,6 +1452,13 @@ int fib_sync_up(struct net_device *dev, unsigned int nh_flags)
+ if (!(dev->flags & IFF_UP))
+ return 0;
+
++ if (nh_flags & RTNH_F_DEAD) {
++ unsigned int flags = dev_get_flags(dev);
++
++ if (flags & (IFF_RUNNING | IFF_LOWER_UP))
++ nh_flags |= RTNH_F_LINKDOWN;
++ }
++
+ prev_fi = NULL;
+ hash = fib_devindex_hashfn(dev->ifindex);
+ head = &fib_info_devhash[hash];
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 866ee89f5254..8e8203d5c520 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1682,8 +1682,8 @@ static inline int ipmr_forward_finish(struct sock *sk, struct sk_buff *skb)
+ {
+ struct ip_options *opt = &(IPCB(skb)->opt);
+
+- IP_INC_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
+- IP_ADD_STATS_BH(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTOCTETS, skb->len);
++ IP_INC_STATS(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
++ IP_ADD_STATS(dev_net(skb_dst(skb)->dev), IPSTATS_MIB_OUTOCTETS, skb->len);
+
+ if (unlikely(opt->optlen))
+ ip_forward_options(skb);
+@@ -1745,7 +1745,7 @@ static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
+ * to blackhole.
+ */
+
+- IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
++ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
+ ip_rt_put(rt);
+ goto out_free;
+ }
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 894da3a70aff..ade773744b98 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -48,14 +48,14 @@ static void set_local_port_range(struct net *net, int range[2])
+ {
+ bool same_parity = !((range[0] ^ range[1]) & 1);
+
+- write_seqlock(&net->ipv4.ip_local_ports.lock);
++ write_seqlock_bh(&net->ipv4.ip_local_ports.lock);
+ if (same_parity && !net->ipv4.ip_local_ports.warned) {
+ net->ipv4.ip_local_ports.warned = true;
+ pr_err_ratelimited("ip_local_port_range: prefer different parity for start/end values.\n");
+ }
+ net->ipv4.ip_local_ports.range[0] = range[0];
+ net->ipv4.ip_local_ports.range[1] = range[1];
+- write_sequnlock(&net->ipv4.ip_local_ports.lock);
++ write_sequnlock_bh(&net->ipv4.ip_local_ports.lock);
+ }
+
+ /* Validate changes from /proc interface. */
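The switch to write_seqlock_bh() matters because the port-range seqlock is also read from softirq context; if a softirq reader preempted the writer on the same CPU, the read_seqretry() loop would spin forever. For reference, the matching read side (mirroring the shape of inet_get_local_port_range()) looks like this:

unsigned int seq;
int low, high;

do {
	seq = read_seqbegin(&net->ipv4.ip_local_ports.lock);
	low  = net->ipv4.ip_local_ports.range[0];
	high = net->ipv4.ip_local_ports.range[1];
} while (read_seqretry(&net->ipv4.ip_local_ports.lock, seq));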
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 36b85bd05ac8..dd00828863a0 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -417,6 +417,7 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
+ if (err) {
+ ipv6_mc_destroy_dev(ndev);
+ del_timer(&ndev->regen_timer);
++ snmp6_unregister_dev(ndev);
+ goto err_release;
+ }
+ /* protected by rtnl_lock */
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 94428fd85b2f..dcccae86190f 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1394,34 +1394,20 @@ static int ipip6_tunnel_init(struct net_device *dev)
+ return 0;
+ }
+
+-static int __net_init ipip6_fb_tunnel_init(struct net_device *dev)
++static void __net_init ipip6_fb_tunnel_init(struct net_device *dev)
+ {
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+ struct iphdr *iph = &tunnel->parms.iph;
+ struct net *net = dev_net(dev);
+ struct sit_net *sitn = net_generic(net, sit_net_id);
+
+- tunnel->dev = dev;
+- tunnel->net = dev_net(dev);
+-
+ iph->version = 4;
+ iph->protocol = IPPROTO_IPV6;
+ iph->ihl = 5;
+ iph->ttl = 64;
+
+- dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+- if (!dev->tstats)
+- return -ENOMEM;
+-
+- tunnel->dst_cache = alloc_percpu(struct ip_tunnel_dst);
+- if (!tunnel->dst_cache) {
+- free_percpu(dev->tstats);
+- return -ENOMEM;
+- }
+-
+ dev_hold(dev);
+ rcu_assign_pointer(sitn->tunnels_wc[0], tunnel);
+- return 0;
+ }
+
+ static int ipip6_validate(struct nlattr *tb[], struct nlattr *data[])
+@@ -1831,23 +1817,19 @@ static int __net_init sit_init_net(struct net *net)
+ */
+ sitn->fb_tunnel_dev->features |= NETIF_F_NETNS_LOCAL;
+
+- err = ipip6_fb_tunnel_init(sitn->fb_tunnel_dev);
+- if (err)
+- goto err_dev_free;
+-
+- ipip6_tunnel_clone_6rd(sitn->fb_tunnel_dev, sitn);
+ err = register_netdev(sitn->fb_tunnel_dev);
+ if (err)
+ goto err_reg_dev;
+
++ ipip6_tunnel_clone_6rd(sitn->fb_tunnel_dev, sitn);
++ ipip6_fb_tunnel_init(sitn->fb_tunnel_dev);
++
+ t = netdev_priv(sitn->fb_tunnel_dev);
+
+ strcpy(t->parms.name, sitn->fb_tunnel_dev->name);
+ return 0;
+
+ err_reg_dev:
+- dev_put(sitn->fb_tunnel_dev);
+-err_dev_free:
+ ipip6_dev_free(sitn->fb_tunnel_dev);
+ err_alloc_dev:
+ return err;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index cd7e55e08a23..d011bc539197 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -3391,7 +3391,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
+
+ if (ifmgd->rssi_min_thold != ifmgd->rssi_max_thold &&
+ ifmgd->count_beacon_signal >= IEEE80211_SIGNAL_AVE_MIN_COUNT) {
+- int sig = ifmgd->ave_beacon_signal;
++ int sig = ifmgd->ave_beacon_signal / 16;
+ int last_sig = ifmgd->last_ave_beacon_signal;
+ struct ieee80211_event event = {
+ .type = RSSI_EVENT,
+@@ -5028,6 +5028,25 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
+ return 0;
+ }
+
++ if (ifmgd->assoc_data &&
++ ether_addr_equal(ifmgd->assoc_data->bss->bssid, req->bssid)) {
++ sdata_info(sdata,
++ "aborting association with %pM by local choice (Reason: %u=%s)\n",
++ req->bssid, req->reason_code,
++ ieee80211_get_reason_code_string(req->reason_code));
++
++ drv_mgd_prepare_tx(sdata->local, sdata);
++ ieee80211_send_deauth_disassoc(sdata, req->bssid,
++ IEEE80211_STYPE_DEAUTH,
++ req->reason_code, tx,
++ frame_buf);
++ ieee80211_destroy_assoc_data(sdata, false);
++ ieee80211_report_disconnect(sdata, frame_buf,
++ sizeof(frame_buf), true,
++ req->reason_code);
++ return 0;
++ }
++
+ if (ifmgd->associated &&
+ ether_addr_equal(ifmgd->associated->bssid, req->bssid)) {
+ sdata_info(sdata,
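In the mlme.c hunk above, ave_beacon_signal is an exponentially weighted average kept in fixed point, scaled by 16 (units of 1/16 dBm), while the rssi_min_thold/rssi_max_thold values it is compared against are plain dBm; the division converts between the two representations. In fixed-point terms, with an assumed average of -70 dBm:

/* Illustration of the scaling only; values are hypothetical. */
int ave_beacon_signal = -70 * 16;	/* stored form, 1/16 dBm units */
int sig = ave_beacon_signal / 16;	/* -70 dBm, same units as thresholds */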
+diff --git a/net/mac80211/trace.h b/net/mac80211/trace.h
+index 6f14591d8ca9..0b13bfa6f32f 100644
+--- a/net/mac80211/trace.h
++++ b/net/mac80211/trace.h
+@@ -33,11 +33,11 @@
+ __field(u32, chan_width) \
+ __field(u32, center_freq1) \
+ __field(u32, center_freq2)
+-#define CHANDEF_ASSIGN(c) \
+- __entry->control_freq = (c)->chan ? (c)->chan->center_freq : 0; \
+- __entry->chan_width = (c)->width; \
+- __entry->center_freq1 = (c)->center_freq1; \
+- __entry->center_freq2 = (c)->center_freq2;
++#define CHANDEF_ASSIGN(c) \
++ __entry->control_freq = (c) ? ((c)->chan ? (c)->chan->center_freq : 0) : 0; \
++ __entry->chan_width = (c) ? (c)->width : 0; \
++ __entry->center_freq1 = (c) ? (c)->center_freq1 : 0; \
++ __entry->center_freq2 = (c) ? (c)->center_freq2 : 0;
+ #define CHANDEF_PR_FMT " control:%d MHz width:%d center: %d/%d MHz"
+ #define CHANDEF_PR_ARG __entry->control_freq, __entry->chan_width, \
+ __entry->center_freq1, __entry->center_freq2
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 1104421bc525..cd90ece80772 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -2951,6 +2951,13 @@ ieee80211_extend_noa_desc(struct ieee80211_noa_data *data, u32 tsf, int i)
+ if (end > 0)
+ return false;
+
++ /* One shot NOA */
++ if (data->count[i] == 1)
++ return false;
++
++ if (data->desc[i].interval == 0)
++ return false;
++
+ /* End time is in the past, check for repetitions */
+ skip = DIV_ROUND_UP(-end, data->desc[i].interval);
+ if (data->count[i] < 255) {
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index 609f92283d1b..30b09f04c142 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -101,6 +101,20 @@ struct nci_hcp_packet {
+ #define NCI_HCP_MSG_GET_CMD(header) (header & 0x3f)
+ #define NCI_HCP_MSG_GET_PIPE(header) (header & 0x7f)
+
++static int nci_hci_result_to_errno(u8 result)
++{
++ switch (result) {
++ case NCI_HCI_ANY_OK:
++ return 0;
++ case NCI_HCI_ANY_E_REG_PAR_UNKNOWN:
++ return -EOPNOTSUPP;
++ case NCI_HCI_ANY_E_TIMEOUT:
++ return -ETIME;
++ default:
++ return -1;
++ }
++}
++
+ /* HCI core */
+ static void nci_hci_reset_pipes(struct nci_hci_dev *hdev)
+ {
+@@ -146,18 +160,18 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+ if (!conn_info)
+ return -EPROTO;
+
+- skb = nci_skb_alloc(ndev, 2 + conn_info->max_pkt_payload_len +
++ i = 0;
++ skb = nci_skb_alloc(ndev, conn_info->max_pkt_payload_len +
+ NCI_DATA_HDR_SIZE, GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+- skb_reserve(skb, 2 + NCI_DATA_HDR_SIZE);
++ skb_reserve(skb, NCI_DATA_HDR_SIZE + 2);
+ *skb_push(skb, 1) = data_type;
+
+- i = 0;
+- len = conn_info->max_pkt_payload_len;
+-
+ do {
++ len = conn_info->max_pkt_payload_len;
++
+ /* If last packet add NCI_HFP_NO_CHAINING */
+ if (i + conn_info->max_pkt_payload_len -
+ (skb->len + 1) >= data_len) {
+@@ -177,9 +191,15 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+ return r;
+
+ i += len;
++
+ if (i < data_len) {
+- skb_trim(skb, 0);
+- skb_pull(skb, len);
++ skb = nci_skb_alloc(ndev,
++ conn_info->max_pkt_payload_len +
++ NCI_DATA_HDR_SIZE, GFP_KERNEL);
++ if (!skb)
++ return -ENOMEM;
++
++ skb_reserve(skb, NCI_DATA_HDR_SIZE + 1);
+ }
+ } while (i < data_len);
+
+@@ -212,7 +232,8 @@ int nci_hci_send_cmd(struct nci_dev *ndev, u8 gate, u8 cmd,
+ const u8 *param, size_t param_len,
+ struct sk_buff **skb)
+ {
+- struct nci_conn_info *conn_info;
++ struct nci_hcp_message *message;
++ struct nci_conn_info *conn_info;
+ struct nci_data data;
+ int r;
+ u8 pipe = ndev->hci_dev->gate2pipe[gate];
+@@ -232,9 +253,15 @@ int nci_hci_send_cmd(struct nci_dev *ndev, u8 gate, u8 cmd,
+
+ r = nci_request(ndev, nci_hci_send_data_req, (unsigned long)&data,
+ msecs_to_jiffies(NCI_DATA_TIMEOUT));
+-
+- if (r == NCI_STATUS_OK && skb)
+- *skb = conn_info->rx_skb;
++ if (r == NCI_STATUS_OK) {
++ message = (struct nci_hcp_message *)conn_info->rx_skb->data;
++ r = nci_hci_result_to_errno(
++ NCI_HCP_MSG_GET_CMD(message->header));
++ skb_pull(conn_info->rx_skb, NCI_HCI_HCP_MESSAGE_HEADER_LEN);
++
++ if (!r && skb)
++ *skb = conn_info->rx_skb;
++ }
+
+ return r;
+ }
+@@ -328,9 +355,6 @@ static void nci_hci_resp_received(struct nci_dev *ndev, u8 pipe,
+ struct nci_conn_info *conn_info;
+ u8 status = result;
+
+- if (result != NCI_HCI_ANY_OK)
+- goto exit;
+-
+ conn_info = ndev->hci_dev->conn_info;
+ if (!conn_info) {
+ status = NCI_STATUS_REJECTED;
+@@ -340,7 +364,7 @@ static void nci_hci_resp_received(struct nci_dev *ndev, u8 pipe,
+ conn_info->rx_skb = skb;
+
+ exit:
+- nci_req_complete(ndev, status);
++ nci_req_complete(ndev, NCI_STATUS_OK);
+ }
+
+ /* Receive hcp message for pipe, with type and cmd.
+@@ -378,7 +402,7 @@ static void nci_hci_msg_rx_work(struct work_struct *work)
+ u8 pipe, type, instruction;
+
+ while ((skb = skb_dequeue(&hdev->msg_rx_queue)) != NULL) {
+- pipe = skb->data[0];
++ pipe = NCI_HCP_MSG_GET_PIPE(skb->data[0]);
+ skb_pull(skb, NCI_HCI_HCP_PACKET_HEADER_LEN);
+ message = (struct nci_hcp_message *)skb->data;
+ type = NCI_HCP_MSG_GET_TYPE(message->header);
+@@ -395,7 +419,7 @@ void nci_hci_data_received_cb(void *context,
+ {
+ struct nci_dev *ndev = (struct nci_dev *)context;
+ struct nci_hcp_packet *packet;
+- u8 pipe, type, instruction;
++ u8 pipe, type;
+ struct sk_buff *hcp_skb;
+ struct sk_buff *frag_skb;
+ int msg_len;
+@@ -415,7 +439,7 @@ void nci_hci_data_received_cb(void *context,
+
+ /* it's the last fragment. Does it need re-aggregation? */
+ if (skb_queue_len(&ndev->hci_dev->rx_hcp_frags)) {
+- pipe = packet->header & NCI_HCI_FRAGMENT;
++ pipe = NCI_HCP_MSG_GET_PIPE(packet->header);
+ skb_queue_tail(&ndev->hci_dev->rx_hcp_frags, skb);
+
+ msg_len = 0;
+@@ -434,7 +458,7 @@ void nci_hci_data_received_cb(void *context,
+ *skb_put(hcp_skb, NCI_HCI_HCP_PACKET_HEADER_LEN) = pipe;
+
+ skb_queue_walk(&ndev->hci_dev->rx_hcp_frags, frag_skb) {
+- msg_len = frag_skb->len - NCI_HCI_HCP_PACKET_HEADER_LEN;
++ msg_len = frag_skb->len - NCI_HCI_HCP_PACKET_HEADER_LEN;
+ memcpy(skb_put(hcp_skb, msg_len), frag_skb->data +
+ NCI_HCI_HCP_PACKET_HEADER_LEN, msg_len);
+ }
+@@ -452,11 +476,10 @@ void nci_hci_data_received_cb(void *context,
+ packet = (struct nci_hcp_packet *)hcp_skb->data;
+ type = NCI_HCP_MSG_GET_TYPE(packet->message.header);
+ if (type == NCI_HCI_HCP_RESPONSE) {
+- pipe = packet->header;
+- instruction = NCI_HCP_MSG_GET_CMD(packet->message.header);
+- skb_pull(hcp_skb, NCI_HCI_HCP_PACKET_HEADER_LEN +
+- NCI_HCI_HCP_MESSAGE_HEADER_LEN);
+- nci_hci_hcp_message_rx(ndev, pipe, type, instruction, hcp_skb);
++ pipe = NCI_HCP_MSG_GET_PIPE(packet->header);
++ skb_pull(hcp_skb, NCI_HCI_HCP_PACKET_HEADER_LEN);
++ nci_hci_hcp_message_rx(ndev, pipe, type,
++ NCI_STATUS_OK, hcp_skb);
+ } else {
+ skb_queue_tail(&ndev->hci_dev->msg_rx_queue, hcp_skb);
+ schedule_work(&ndev->hci_dev->msg_rx_work);
+@@ -488,6 +511,7 @@ EXPORT_SYMBOL(nci_hci_open_pipe);
+ int nci_hci_set_param(struct nci_dev *ndev, u8 gate, u8 idx,
+ const u8 *param, size_t param_len)
+ {
++ struct nci_hcp_message *message;
+ struct nci_conn_info *conn_info;
+ struct nci_data data;
+ int r;
+@@ -520,6 +544,12 @@ int nci_hci_set_param(struct nci_dev *ndev, u8 gate, u8 idx,
+ r = nci_request(ndev, nci_hci_send_data_req,
+ (unsigned long)&data,
+ msecs_to_jiffies(NCI_DATA_TIMEOUT));
++ if (r == NCI_STATUS_OK) {
++ message = (struct nci_hcp_message *)conn_info->rx_skb->data;
++ r = nci_hci_result_to_errno(
++ NCI_HCP_MSG_GET_CMD(message->header));
++ skb_pull(conn_info->rx_skb, NCI_HCI_HCP_MESSAGE_HEADER_LEN);
++ }
+
+ kfree(tmp);
+ return r;
+@@ -529,6 +559,7 @@ EXPORT_SYMBOL(nci_hci_set_param);
+ int nci_hci_get_param(struct nci_dev *ndev, u8 gate, u8 idx,
+ struct sk_buff **skb)
+ {
++ struct nci_hcp_message *message;
+ struct nci_conn_info *conn_info;
+ struct nci_data data;
+ int r;
+@@ -553,8 +584,15 @@ int nci_hci_get_param(struct nci_dev *ndev, u8 gate, u8 idx,
+ r = nci_request(ndev, nci_hci_send_data_req, (unsigned long)&data,
+ msecs_to_jiffies(NCI_DATA_TIMEOUT));
+
+- if (r == NCI_STATUS_OK)
+- *skb = conn_info->rx_skb;
++ if (r == NCI_STATUS_OK) {
++ message = (struct nci_hcp_message *)conn_info->rx_skb->data;
++ r = nci_hci_result_to_errno(
++ NCI_HCP_MSG_GET_CMD(message->header));
++ skb_pull(conn_info->rx_skb, NCI_HCI_HCP_MESSAGE_HEADER_LEN);
++
++ if (!r && skb)
++ *skb = conn_info->rx_skb;
++ }
+
+ return r;
+ }
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index aa4b15c35884..27b2898f275c 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2903,22 +2903,40 @@ static int packet_release(struct socket *sock)
+ * Attach a packet hook.
+ */
+
+-static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
++static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
++ __be16 proto)
+ {
+ struct packet_sock *po = pkt_sk(sk);
+ struct net_device *dev_curr;
+ __be16 proto_curr;
+ bool need_rehook;
++ struct net_device *dev = NULL;
++ int ret = 0;
++ bool unlisted = false;
+
+- if (po->fanout) {
+- if (dev)
+- dev_put(dev);
+-
++ if (po->fanout)
+ return -EINVAL;
+- }
+
+ lock_sock(sk);
+ spin_lock(&po->bind_lock);
++ rcu_read_lock();
++
++ if (name) {
++ dev = dev_get_by_name_rcu(sock_net(sk), name);
++ if (!dev) {
++ ret = -ENODEV;
++ goto out_unlock;
++ }
++ } else if (ifindex) {
++ dev = dev_get_by_index_rcu(sock_net(sk), ifindex);
++ if (!dev) {
++ ret = -ENODEV;
++ goto out_unlock;
++ }
++ }
++
++ if (dev)
++ dev_hold(dev);
+
+ proto_curr = po->prot_hook.type;
+ dev_curr = po->prot_hook.dev;
+@@ -2926,14 +2944,29 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
+ need_rehook = proto_curr != proto || dev_curr != dev;
+
+ if (need_rehook) {
+- unregister_prot_hook(sk, true);
++ if (po->running) {
++ rcu_read_unlock();
++ __unregister_prot_hook(sk, true);
++ rcu_read_lock();
++ dev_curr = po->prot_hook.dev;
++ if (dev)
++ unlisted = !dev_get_by_index_rcu(sock_net(sk),
++ dev->ifindex);
++ }
+
+ po->num = proto;
+ po->prot_hook.type = proto;
+- po->prot_hook.dev = dev;
+
+- po->ifindex = dev ? dev->ifindex : 0;
+- packet_cached_dev_assign(po, dev);
++ if (unlikely(unlisted)) {
++ dev_put(dev);
++ po->prot_hook.dev = NULL;
++ po->ifindex = -1;
++ packet_cached_dev_reset(po);
++ } else {
++ po->prot_hook.dev = dev;
++ po->ifindex = dev ? dev->ifindex : 0;
++ packet_cached_dev_assign(po, dev);
++ }
+ }
+ if (dev_curr)
+ dev_put(dev_curr);
+@@ -2941,7 +2974,7 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
+ if (proto == 0 || !need_rehook)
+ goto out_unlock;
+
+- if (!dev || (dev->flags & IFF_UP)) {
++ if (!unlisted && (!dev || (dev->flags & IFF_UP))) {
+ register_prot_hook(sk);
+ } else {
+ sk->sk_err = ENETDOWN;
+@@ -2950,9 +2983,10 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
+ }
+
+ out_unlock:
++ rcu_read_unlock();
+ spin_unlock(&po->bind_lock);
+ release_sock(sk);
+- return 0;
++ return ret;
+ }
+
+ /*
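The rework above closes a race in packet bind: the device is now resolved by name or ifindex under bind_lock and an RCU read section, and the reference is taken in the same critical section that installs the protocol hook, so the device cannot be unregistered between lookup and use. A userspace sketch of the lookup-under-lock pattern, with illustrative names standing in for the kernel primitives:

#include <pthread.h>
#include <stddef.h>
#include <string.h>

struct dev { char name[16]; int refcnt; };

static pthread_mutex_t bind_lock = PTHREAD_MUTEX_INITIALIZER;
static struct dev *dev_table[8];
static struct dev *bound;

static struct dev *lookup(const char *name)
{
    for (int i = 0; i < 8; i++)
        if (dev_table[i] && strcmp(dev_table[i]->name, name) == 0)
            return dev_table[i];
    return NULL;
}

static int do_bind(const char *name)
{
    int ret = 0;

    pthread_mutex_lock(&bind_lock);
    struct dev *d = lookup(name);   /* resolve inside the critical section */
    if (!d) {
        ret = -1;                   /* -ENODEV in the kernel */
    } else {
        d->refcnt++;                /* dev_hold() equivalent */
        bound = d;
    }
    pthread_mutex_unlock(&bind_lock);
    return ret;
}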
+@@ -2964,8 +2998,6 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ {
+ struct sock *sk = sock->sk;
+ char name[15];
+- struct net_device *dev;
+- int err = -ENODEV;
+
+ /*
+ * Check legality
+@@ -2975,19 +3007,13 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ return -EINVAL;
+ strlcpy(name, uaddr->sa_data, sizeof(name));
+
+- dev = dev_get_by_name(sock_net(sk), name);
+- if (dev)
+- err = packet_do_bind(sk, dev, pkt_sk(sk)->num);
+- return err;
++ return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
+ }
+
+ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ {
+ struct sockaddr_ll *sll = (struct sockaddr_ll *)uaddr;
+ struct sock *sk = sock->sk;
+- struct net_device *dev = NULL;
+- int err;
+-
+
+ /*
+ * Check legality
+@@ -2998,16 +3024,8 @@ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len
+ if (sll->sll_family != AF_PACKET)
+ return -EINVAL;
+
+- if (sll->sll_ifindex) {
+- err = -ENODEV;
+- dev = dev_get_by_index(sock_net(sk), sll->sll_ifindex);
+- if (dev == NULL)
+- goto out;
+- }
+- err = packet_do_bind(sk, dev, sll->sll_protocol ? : pkt_sk(sk)->num);
+-
+-out:
+- return err;
++ return packet_do_bind(sk, NULL, sll->sll_ifindex,
++ sll->sll_protocol ? : pkt_sk(sk)->num);
+ }
+
+ static struct proto packet_proto = {
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 6e648d90297a..cd7c5f131e72 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -48,6 +48,7 @@
+ #include <linux/tipc_netlink.h>
+ #include "core.h"
+ #include "bearer.h"
++#include "msg.h"
+
+ /* IANA assigned UDP port */
+ #define UDP_PORT_DEFAULT 6118
+@@ -222,6 +223,10 @@ static int tipc_udp_recv(struct sock *sk, struct sk_buff *skb)
+ {
+ struct udp_bearer *ub;
+ struct tipc_bearer *b;
++ int usr = msg_user(buf_msg(skb));
++
++ if ((usr == LINK_PROTOCOL) || (usr == NAME_DISTRIBUTOR))
++ skb_linearize(skb);
+
+ ub = rcu_dereference_sk_user_data(sk);
+ if (!ub) {
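The tipc_udp_recv() change linearizes LINK_PROTOCOL and NAME_DISTRIBUTOR messages before parsing, since header accesses assume contiguous bytes while received skbs may be paged. A stand-alone sketch of what linearization buys the parser, using hypothetical types rather than the skb API:

#include <stdlib.h>
#include <string.h>

struct frag { const unsigned char *data; size_t len; };

static unsigned char *linearize(const struct frag *frags, int n,
                                size_t *out_len)
{
    size_t total = 0;

    for (int i = 0; i < n; i++)
        total += frags[i].len;

    unsigned char *flat = malloc(total);
    if (!flat)
        return NULL;

    size_t off = 0;
    for (int i = 0; i < n; i++) {
        memcpy(flat + off, frags[i].data, frags[i].len);
        off += frags[i].len;
    }
    *out_len = total;
    return flat;    /* parser can now read headers at fixed offsets */
}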
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 5d8748b4c8a2..6a1040daa9b9 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3409,12 +3409,6 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ wdev->iftype))
+ return -EINVAL;
+
+- if (info->attrs[NL80211_ATTR_ACL_POLICY]) {
+- params.acl = parse_acl_data(&rdev->wiphy, info);
+- if (IS_ERR(params.acl))
+- return PTR_ERR(params.acl);
+- }
+-
+ if (info->attrs[NL80211_ATTR_SMPS_MODE]) {
+ params.smps_mode =
+ nla_get_u8(info->attrs[NL80211_ATTR_SMPS_MODE]);
+@@ -3438,6 +3432,12 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
+ params.smps_mode = NL80211_SMPS_OFF;
+ }
+
++ if (info->attrs[NL80211_ATTR_ACL_POLICY]) {
++ params.acl = parse_acl_data(&rdev->wiphy, info);
++ if (IS_ERR(params.acl))
++ return PTR_ERR(params.acl);
++ }
++
+ wdev_lock(wdev);
+ err = rdev_start_ap(rdev, dev, &params);
+ if (!err) {
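Moving parse_acl_data() after the SMPS checks means the early -EINVAL returns no longer leak the ACL allocation. The general shape, sketched with illustrative names: validate everything that can fail with a bare return first, and allocate last.

#include <stdlib.h>

struct acl { int n; };

static int start_op(int smps_ok, int want_acl, struct acl **out)
{
    if (!smps_ok)
        return -1;      /* nothing allocated yet, safe to bail */

    if (want_acl) {
        *out = malloc(sizeof(**out));   /* allocated only after checks */
        if (!*out)
            return -1;
    }
    return 0;
}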
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 417ebb11cf48..bec63e0d2605 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -174,6 +174,8 @@ struct snd_usb_midi_in_endpoint {
+ u8 running_status_length;
+ } ports[0x10];
+ u8 seen_f5;
++ bool in_sysex;
++ u8 last_cin;
+ u8 error_resubmit;
+ int current_port;
+ };
+@@ -468,6 +470,39 @@ static void snd_usbmidi_maudio_broken_running_status_input(
+ }
+
+ /*
++ * QinHeng CH345 is buggy: every second packet inside a SysEx has not CIN 4
++ * but the previously seen CIN, but still with three data bytes.
++ */
++static void ch345_broken_sysex_input(struct snd_usb_midi_in_endpoint *ep,
++ uint8_t *buffer, int buffer_length)
++{
++ unsigned int i, cin, length;
++
++ for (i = 0; i + 3 < buffer_length; i += 4) {
++ if (buffer[i] == 0 && i > 0)
++ break;
++ cin = buffer[i] & 0x0f;
++ if (ep->in_sysex &&
++ cin == ep->last_cin &&
++ (buffer[i + 1 + (cin == 0x6)] & 0x80) == 0)
++ cin = 0x4;
++#if 0
++ if (buffer[i + 1] == 0x90) {
++ /*
++ * Either a corrupted running status or a real note-on
++ * message; impossible to detect reliably.
++ */
++ }
++#endif
++ length = snd_usbmidi_cin_length[cin];
++ snd_usbmidi_input_data(ep, 0, &buffer[i + 1], length);
++ ep->in_sysex = cin == 0x4;
++ if (!ep->in_sysex)
++ ep->last_cin = cin;
++ }
++}
++
++/*
+ * CME protocol: like the standard protocol, but SysEx commands are sent as a
+ * single USB packet preceded by a 0x0F byte.
+ */
+@@ -660,6 +695,12 @@ static struct usb_protocol_ops snd_usbmidi_cme_ops = {
+ .output_packet = snd_usbmidi_output_standard_packet,
+ };
+
++static struct usb_protocol_ops snd_usbmidi_ch345_broken_sysex_ops = {
++ .input = ch345_broken_sysex_input,
++ .output = snd_usbmidi_standard_output,
++ .output_packet = snd_usbmidi_output_standard_packet,
++};
++
+ /*
+ * AKAI MPD16 protocol:
+ *
+@@ -1341,6 +1382,7 @@ static int snd_usbmidi_out_endpoint_create(struct snd_usb_midi *umidi,
+ * Various chips declare a packet size larger than 4 bytes, but
+ * do not actually work with larger packets:
+ */
++ case USB_ID(0x0a67, 0x5011): /* Medeli DD305 */
+ case USB_ID(0x0a92, 0x1020): /* ESI M4U */
+ case USB_ID(0x1430, 0x474b): /* RedOctane GH MIDI INTERFACE */
+ case USB_ID(0x15ca, 0x0101): /* Textech USB Midi Cable */
+@@ -2375,6 +2417,10 @@ int snd_usbmidi_create(struct snd_card *card,
+
+ err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints);
+ break;
++ case QUIRK_MIDI_CH345:
++ umidi->usb_protocol_ops = &snd_usbmidi_ch345_broken_sysex_ops;
++ err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints);
++ break;
+ default:
+ dev_err(&umidi->dev->dev, "invalid quirk type %d\n",
+ quirk->type);
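The CH345 support above follows the driver's usual quirk pattern: a device-specific usb_protocol_ops table is selected by quirk type while endpoint detection stays generic. A toy sketch of that ops-table dispatch, with hypothetical names:

#include <stdio.h>

struct proto_ops {
    void (*input)(const unsigned char *buf, int len);
};

static void standard_input(const unsigned char *buf, int len)
{
    (void)buf;
    printf("standard packet, %d bytes\n", len);
}

static void workaround_input(const unsigned char *buf, int len)
{
    (void)buf;
    printf("workaround path, %d bytes\n", len);
}

static const struct proto_ops standard_ops = { standard_input };
static const struct proto_ops quirk_ops    = { workaround_input };

static const struct proto_ops *select_ops(int has_broken_sysex)
{
    return has_broken_sysex ? &quirk_ops : &standard_ops;
}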
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index e4756651a52c..ecc2a4ea014d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2820,6 +2820,17 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .idProduct = 0x1020,
+ },
+
++/* QinHeng devices */
++{
++ USB_DEVICE(0x1a86, 0x752d),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .vendor_name = "QinHeng",
++ .product_name = "CH345",
++ .ifnum = 1,
++ .type = QUIRK_MIDI_CH345
++ }
++},
++
+ /* KeithMcMillen Stringport */
+ {
+ USB_DEVICE(0x1f38, 0x0001),
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 00ebc0ca008e..eef9b8e4b949 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -535,6 +535,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ [QUIRK_MIDI_CME] = create_any_midi_quirk,
+ [QUIRK_MIDI_AKAI] = create_any_midi_quirk,
+ [QUIRK_MIDI_FTDI] = create_any_midi_quirk,
++ [QUIRK_MIDI_CH345] = create_any_midi_quirk,
+ [QUIRK_AUDIO_STANDARD_INTERFACE] = create_standard_audio_quirk,
+ [QUIRK_AUDIO_FIXED_ENDPOINT] = create_fixed_stream_quirk,
+ [QUIRK_AUDIO_EDIROL_UAXX] = create_uaxx_quirk,
+@@ -1271,6 +1272,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case USB_ID(0x20b1, 0x000a): /* Gustard DAC-X20U */
+ case USB_ID(0x20b1, 0x2009): /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
+ case USB_ID(0x20b1, 0x2023): /* JLsounds I2SoverUSB */
++ case USB_ID(0x20b1, 0x3023): /* Aune X1S 32BIT/384 DSD DAC */
+ if (fp->altsetting == 3)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 33a176437e2e..4b4327b45606 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -94,6 +94,7 @@ enum quirk_type {
+ QUIRK_MIDI_AKAI,
+ QUIRK_MIDI_US122L,
+ QUIRK_MIDI_FTDI,
++ QUIRK_MIDI_CH345,
+ QUIRK_AUDIO_STANDARD_INTERFACE,
+ QUIRK_AUDIO_FIXED_ENDPOINT,
+ QUIRK_AUDIO_EDIROL_UAXX,
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2015-12-11 0:31 Mike Pagano
From: Mike Pagano @ 2015-12-11 0:31 UTC
To: gentoo-commits
commit: 69c780db4978344f372352a5887d76d8c3ef0ea6
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 11 00:31:17 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 11 00:31:17 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=69c780db
Linux patch 4.3.2
0000_README | 8 +++++++-
1001_linux-4.3.2.patch | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/0000_README b/0000_README
index eb12e02..5fc79da 100644
--- a/0000_README
+++ b/0000_README
@@ -43,7 +43,13 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
-Patch: 1000_linux-4.3.1.patch From: http://www.kernel.org Desc: Linux 4.3.1
+Patch: 1000_linux-4.3.1.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.1
+
+Patch: 1001_linux-4.3.2.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.2
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
diff --git a/1001_linux-4.3.2.patch b/1001_linux-4.3.2.patch
new file mode 100644
index 0000000..c3c1b19
--- /dev/null
+++ b/1001_linux-4.3.2.patch
@@ -0,0 +1,49 @@
+diff --git a/Makefile b/Makefile
+index 266eeacc1490..1a4953b3e10f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
+index af71878dc15b..021d39c0ba75 100644
+--- a/crypto/asymmetric_keys/x509_cert_parser.c
++++ b/crypto/asymmetric_keys/x509_cert_parser.c
+@@ -531,7 +531,11 @@ int x509_decode_time(time64_t *_t, size_t hdrlen,
+ if (*p != 'Z')
+ goto unsupported_time;
+
+- mon_len = month_lengths[mon];
++ if (year < 1970 ||
++ mon < 1 || mon > 12)
++ goto invalid_time;
++
++ mon_len = month_lengths[mon - 1];
+ if (mon == 2) {
+ if (year % 4 == 0) {
+ mon_len = 29;
+@@ -543,14 +547,12 @@ int x509_decode_time(time64_t *_t, size_t hdrlen,
+ }
+ }
+
+- if (year < 1970 ||
+- mon < 1 || mon > 12 ||
+- day < 1 || day > mon_len ||
+- hour < 0 || hour > 23 ||
+- min < 0 || min > 59 ||
+- sec < 0 || sec > 59)
++ if (day < 1 || day > mon_len ||
++ hour > 23 ||
++ min > 59 ||
++ sec > 59)
+ goto invalid_time;
+-
++
+ *_t = mktime64(year, mon, day, hour, min, sec);
+ return 0;
+
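The 4.3.2 fix above cures an out-of-bounds read: mon is 1-based (1..12) while month_lengths[] is a 12-entry zero-based array, so it must be range-checked before indexing and indexed as mon - 1. A compact illustration of the corrected ordering (full Gregorian leap rule shown here; not the kernel function):

static const int month_len[12] = {
    31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31
};

static int days_in_month(int year, int mon)
{
    if (year < 1970 || mon < 1 || mon > 12)
        return -1;                      /* reject before indexing */
    if (mon == 2 && year % 4 == 0 &&
        (year % 100 != 0 || year % 400 == 0))
        return 29;                      /* leap-year February */
    return month_len[mon - 1];
}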
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2015-12-15 11:14 Mike Pagano
From: Mike Pagano @ 2015-12-15 11:14 UTC
To: gentoo-commits
commit: adfe38f2fe47d59f83fb2135810d41e997022b61
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 15 11:13:57 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Dec 15 11:13:57 2015 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=adfe38f2
Linux patch 4.3.3
0000_README | 4 +
1002_linux-4.3.3.patch | 4424 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4428 insertions(+)
diff --git a/0000_README b/0000_README
index 5fc79da..7b7e0b4 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-4.3.2.patch
From: http://www.kernel.org
Desc: Linux 4.3.2
+Patch: 1002_linux-4.3.3.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-4.3.3.patch b/1002_linux-4.3.3.patch
new file mode 100644
index 0000000..7a2500e
--- /dev/null
+++ b/1002_linux-4.3.3.patch
@@ -0,0 +1,4424 @@
+diff --git a/Makefile b/Makefile
+index 1a4953b3e10f..2070d16bb5a4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index c4e9c37f3e38..0e5f4fc12449 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -91,7 +91,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
+
+ seg_size += bv.bv_len;
+ bvprv = bv;
+- bvprvp = &bv;
++ bvprvp = &bvprv;
+ sectors += bv.bv_len >> 9;
+ continue;
+ }
+@@ -101,7 +101,7 @@ new_segment:
+
+ nsegs++;
+ bvprv = bv;
+- bvprvp = &bv;
++ bvprvp = &bvprv;
+ seg_size = bv.bv_len;
+ sectors += bv.bv_len >> 9;
+ }
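The blk_bio_segment_split() fix is a stale-pointer bug: bvprvp pointed at the loop variable bv, which is overwritten on every iteration, so the "previous" bvec being compared was really the current one; pointing at the stable copy bvprv fixes it. A miniature of the pattern in plain C:

struct vec { int len; };

static int total_len(const struct vec *in, int n)
{
    struct vec prev;                 /* stable copy, like bvprv */
    const struct vec *prevp = 0;     /* like bvprvp */
    int total = 0;

    for (int i = 0; i < n; i++) {
        struct vec bv = in[i];       /* overwritten on every pass */

        if (prevp && prevp->len == bv.len) {
            /* comparisons against the previous element stay valid
             * only because prevp points at the copy, not at bv */
        }
        prev = bv;
        prevp = &prev;               /* the fix: &prev, never &bv */
        total += bv.len;
    }
    return total;
}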
+diff --git a/certs/.gitignore b/certs/.gitignore
+new file mode 100644
+index 000000000000..f51aea4a71ec
+--- /dev/null
++++ b/certs/.gitignore
+@@ -0,0 +1,4 @@
++#
++# Generated files
++#
++x509_certificate_list
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 128e7df5b807..8630a77ea462 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3444,6 +3444,7 @@ static void rbd_queue_workfn(struct work_struct *work)
+ goto err_rq;
+ }
+ img_request->rq = rq;
++ snapc = NULL; /* img_request consumes a ref */
+
+ if (op_type == OBJ_OP_DISCARD)
+ result = rbd_img_request_fill(img_request, OBJ_REQUEST_NODATA,
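Clearing snapc after the image request is created records that ownership of the snapshot context has moved into img_request, so the function's shared error path will not drop the reference a second time. A small sketch of the hand-off convention, with illustrative names:

#include <stdlib.h>

struct holder { void *res; };

static void attach(struct holder *h, void **res)
{
    h->res = *res;
    *res = NULL;          /* caller no longer owns the resource */
}

static void example(void)
{
    void *res = malloc(16);
    struct holder h = { 0 };

    if (res)
        attach(&h, &res);

    free(res);            /* safe: NULL after a successful hand-off */
    free(h.res);          /* the holder releases it exactly once */
}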
+diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
+index f51d376d10ba..c2f5117fd8cb 100644
+--- a/drivers/firewire/ohci.c
++++ b/drivers/firewire/ohci.c
+@@ -3675,6 +3675,11 @@ static int pci_probe(struct pci_dev *dev,
+
+ reg_write(ohci, OHCI1394_IsoXmitIntMaskSet, ~0);
+ ohci->it_context_support = reg_read(ohci, OHCI1394_IsoXmitIntMaskSet);
++ /* JMicron JMB38x often shows 0 at first read, just ignore it */
++ if (!ohci->it_context_support) {
++ ohci_notice(ohci, "overriding IsoXmitIntMask\n");
++ ohci->it_context_support = 0xf;
++ }
+ reg_write(ohci, OHCI1394_IsoXmitIntMaskClear, ~0);
+ ohci->it_context_mask = ohci->it_context_support;
+ ohci->n_it = hweight32(ohci->it_context_mask);
+diff --git a/drivers/media/pci/cobalt/Kconfig b/drivers/media/pci/cobalt/Kconfig
+index 1f88ccc174da..a01f0cc745cc 100644
+--- a/drivers/media/pci/cobalt/Kconfig
++++ b/drivers/media/pci/cobalt/Kconfig
+@@ -1,6 +1,6 @@
+ config VIDEO_COBALT
+ tristate "Cisco Cobalt support"
+- depends on VIDEO_V4L2 && I2C && MEDIA_CONTROLLER
++ depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API
+ depends on PCI_MSI && MTD_COMPLEX_MAPPINGS
+ depends on GPIOLIB || COMPILE_TEST
+ depends on SND
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index a9377727c11c..7f709cbdcd87 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1583,8 +1583,14 @@ err_disable_device:
+ static void nicvf_remove(struct pci_dev *pdev)
+ {
+ struct net_device *netdev = pci_get_drvdata(pdev);
+- struct nicvf *nic = netdev_priv(netdev);
+- struct net_device *pnetdev = nic->pnicvf->netdev;
++ struct nicvf *nic;
++ struct net_device *pnetdev;
++
++ if (!netdev)
++ return;
++
++ nic = netdev_priv(netdev);
++ pnetdev = nic->pnicvf->netdev;
+
+ /* Check if this Qset is assigned to different VF.
+ * If yes, clean primary and all secondary Qsets.
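The nicvf_remove() guard covers probe having bailed out before the netdev was registered, in which case pci_get_drvdata() returns NULL and dereferencing it would crash on unbind. The shape, with hypothetical names:

struct card { int id; };

static void teardown(struct card *priv)
{
    if (!priv)
        return;           /* probe failed before priv was installed */

    /* release resources keyed off priv */
}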
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 731423ca575d..8bead97373ab 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -4934,26 +4934,41 @@ static void rem_slave_counters(struct mlx4_dev *dev, int slave)
+ struct res_counter *counter;
+ struct res_counter *tmp;
+ int err;
+- int index;
++ int *counters_arr = NULL;
++ int i, j;
+
+ err = move_all_busy(dev, slave, RES_COUNTER);
+ if (err)
+ mlx4_warn(dev, "rem_slave_counters: Could not move all counters - too busy for slave %d\n",
+ slave);
+
+- spin_lock_irq(mlx4_tlock(dev));
+- list_for_each_entry_safe(counter, tmp, counter_list, com.list) {
+- if (counter->com.owner == slave) {
+- index = counter->com.res_id;
+- rb_erase(&counter->com.node,
+- &tracker->res_tree[RES_COUNTER]);
+- list_del(&counter->com.list);
+- kfree(counter);
+- __mlx4_counter_free(dev, index);
++ counters_arr = kmalloc_array(dev->caps.max_counters,
++ sizeof(*counters_arr), GFP_KERNEL);
++ if (!counters_arr)
++ return;
++
++ do {
++ i = 0;
++ j = 0;
++ spin_lock_irq(mlx4_tlock(dev));
++ list_for_each_entry_safe(counter, tmp, counter_list, com.list) {
++ if (counter->com.owner == slave) {
++ counters_arr[i++] = counter->com.res_id;
++ rb_erase(&counter->com.node,
++ &tracker->res_tree[RES_COUNTER]);
++ list_del(&counter->com.list);
++ kfree(counter);
++ }
++ }
++ spin_unlock_irq(mlx4_tlock(dev));
++
++ while (j < i) {
++ __mlx4_counter_free(dev, counters_arr[j++]);
+ mlx4_release_resource(dev, slave, RES_COUNTER, 1, 0);
+ }
+- }
+- spin_unlock_irq(mlx4_tlock(dev));
++ } while (i);
++
++ kfree(counters_arr);
+ }
+
+ static void rem_slave_xrcdns(struct mlx4_dev *dev, int slave)
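rem_slave_counters() is restructured so the counter ids are collected and the tracker entries unlinked under the spinlock, while __mlx4_counter_free(), which may sleep, runs only after the lock is dropped. A userspace sketch of the collect-then-release pattern, with a mutex standing in for the spinlock and a stub for the release call:

#include <pthread.h>

#define MAX_IDS 64

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pending[MAX_IDS];
static int npending;

/* stand-in for __mlx4_counter_free(), which may sleep */
static void release_id(int id)
{
    (void)id;
}

static void drain(void)
{
    int ids[MAX_IDS];
    int n = 0;

    pthread_mutex_lock(&lock);
    while (npending > 0)
        ids[n++] = pending[--npending];   /* unlink under the lock */
    pthread_mutex_unlock(&lock);

    while (n > 0)
        release_id(ids[--n]);             /* release outside the lock */
}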
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 59874d666cff..443632df2010 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1332,6 +1332,42 @@ static int mlx5e_modify_tir_lro(struct mlx5e_priv *priv, int tt)
+ return err;
+ }
+
++static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
++ u32 tirn)
++{
++ void *in;
++ int inlen;
++ int err;
++
++ inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
++ in = mlx5_vzalloc(inlen);
++ if (!in)
++ return -ENOMEM;
++
++ MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
++
++ err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
++
++ kvfree(in);
++
++ return err;
++}
++
++static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
++{
++ int err;
++ int i;
++
++ for (i = 0; i < MLX5E_NUM_TT; i++) {
++ err = mlx5e_refresh_tir_self_loopback_enable(priv->mdev,
++ priv->tirn[i]);
++ if (err)
++ return err;
++ }
++
++ return 0;
++}
++
+ static int mlx5e_set_dev_port_mtu(struct net_device *netdev)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+@@ -1367,13 +1403,20 @@ int mlx5e_open_locked(struct net_device *netdev)
+
+ err = mlx5e_set_dev_port_mtu(netdev);
+ if (err)
+- return err;
++ goto err_clear_state_opened_flag;
+
+ err = mlx5e_open_channels(priv);
+ if (err) {
+ netdev_err(netdev, "%s: mlx5e_open_channels failed, %d\n",
+ __func__, err);
+- return err;
++ goto err_clear_state_opened_flag;
++ }
++
++ err = mlx5e_refresh_tirs_self_loopback_enable(priv);
++ if (err) {
++ netdev_err(netdev, "%s: mlx5e_refresh_tirs_self_loopback_enable failed, %d\n",
++ __func__, err);
++ goto err_close_channels;
+ }
+
+ mlx5e_update_carrier(priv);
+@@ -1382,6 +1425,12 @@ int mlx5e_open_locked(struct net_device *netdev)
+ schedule_delayed_work(&priv->update_stats_work, 0);
+
+ return 0;
++
++err_close_channels:
++ mlx5e_close_channels(priv);
++err_clear_state_opened_flag:
++ clear_bit(MLX5E_STATE_OPENED, &priv->state);
++ return err;
+ }
+
+ static int mlx5e_open(struct net_device *netdev)
+@@ -1899,6 +1948,9 @@ static int mlx5e_check_required_hca_cap(struct mlx5_core_dev *mdev)
+ "Not creating net device, some required device capabilities are missing\n");
+ return -ENOTSUPP;
+ }
++ if (!MLX5_CAP_ETH(mdev, self_lb_en_modifiable))
++ mlx5_core_warn(mdev, "Self loop back prevention is not supported\n");
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index b4f21232019a..79ef799f88ab 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7429,15 +7429,15 @@ process_pkt:
+
+ rtl8169_rx_vlan_tag(desc, skb);
+
++ if (skb->pkt_type == PACKET_MULTICAST)
++ dev->stats.multicast++;
++
+ napi_gro_receive(&tp->napi, skb);
+
+ u64_stats_update_begin(&tp->rx_stats.syncp);
+ tp->rx_stats.packets++;
+ tp->rx_stats.bytes += pkt_size;
+ u64_stats_update_end(&tp->rx_stats.syncp);
+-
+- if (skb->pkt_type == PACKET_MULTICAST)
+- dev->stats.multicast++;
+ }
+ release_descriptor:
+ desc->opts2 = 0;
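The r8169 hunk moves the multicast accounting above napi_gro_receive(): once the skb is handed to GRO it may be consumed or freed, so its fields must not be read afterwards. In miniature:

#include <stdlib.h>

struct pkt { int multicast; };

static long mcast_count;

static void deliver(struct pkt *p)
{
    free(p);              /* the next layer owns and may free it */
}

static void rx_one(struct pkt *p)
{
    if (p->multicast)     /* read packet fields first... */
        mcast_count++;
    deliver(p);           /* ...then hand the packet off */
}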
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 9c71295f2fef..85e640440bd9 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -675,7 +675,7 @@ static struct mdio_device_id __maybe_unused broadcom_tbl[] = {
+ { PHY_ID_BCM5461, 0xfffffff0 },
+ { PHY_ID_BCM54616S, 0xfffffff0 },
+ { PHY_ID_BCM5464, 0xfffffff0 },
+- { PHY_ID_BCM5482, 0xfffffff0 },
++ { PHY_ID_BCM5481, 0xfffffff0 },
+ { PHY_ID_BCM5482, 0xfffffff0 },
+ { PHY_ID_BCM50610, 0xfffffff0 },
+ { PHY_ID_BCM50610M, 0xfffffff0 },
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 2a7c1be23c4f..66e0853d1680 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -775,6 +775,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */
++ {QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)}, /* XS Stick W100-2 from 4G Systems */
+ {QMI_FIXED_INTF(0x0b3c, 0xc000, 4)}, /* Olivetti Olicard 100 */
+ {QMI_FIXED_INTF(0x0b3c, 0xc001, 4)}, /* Olivetti Olicard 120 */
+ {QMI_FIXED_INTF(0x0b3c, 0xc002, 4)}, /* Olivetti Olicard 140 */
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 488c6f50df73..c9e309cd9d82 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -581,7 +581,6 @@ static int vrf_newlink(struct net *src_net, struct net_device *dev,
+ {
+ struct net_vrf *vrf = netdev_priv(dev);
+ struct net_vrf_dev *vrf_ptr;
+- int err;
+
+ if (!data || !data[IFLA_VRF_TABLE])
+ return -EINVAL;
+@@ -590,26 +589,16 @@ static int vrf_newlink(struct net *src_net, struct net_device *dev,
+
+ dev->priv_flags |= IFF_VRF_MASTER;
+
+- err = -ENOMEM;
+ vrf_ptr = kmalloc(sizeof(*dev->vrf_ptr), GFP_KERNEL);
+ if (!vrf_ptr)
+- goto out_fail;
++ return -ENOMEM;
+
+ vrf_ptr->ifindex = dev->ifindex;
+ vrf_ptr->tb_id = vrf->tb_id;
+
+- err = register_netdevice(dev);
+- if (err < 0)
+- goto out_fail;
+-
+ rcu_assign_pointer(dev->vrf_ptr, vrf_ptr);
+
+- return 0;
+-
+-out_fail:
+- kfree(vrf_ptr);
+- free_netdev(dev);
+- return err;
++ return register_netdev(dev);
+ }
+
+ static size_t vrf_nl_getsize(const struct net_device *dev)
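vrf_newlink() is simplified by making registration the final step: with nothing left to do afterwards, the manual out_fail unwind (which could end up freeing the netdev twice) goes away. Sketched shape, with illustrative names and a stub assumed to unwind itself on failure the way register_netdev() does:

#include <stdlib.h>

struct obj { int table; };

/* stand-in for register_netdev(); assumed to clean up on failure */
static int register_obj(struct obj *o)
{
    (void)o;
    return 0;
}

static int create(int table)
{
    struct obj *o = malloc(sizeof(*o));

    if (!o)
        return -1;
    o->table = table;
    return register_obj(o);   /* final step: no manual unwind needed */
}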
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 938efe33be80..94eea1f43280 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3398,7 +3398,7 @@ int btrfs_set_disk_extent_flags(struct btrfs_trans_handle *trans,
+ int btrfs_free_extent(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 bytenr, u64 num_bytes, u64 parent, u64 root_objectid,
+- u64 owner, u64 offset, int no_quota);
++ u64 owner, u64 offset);
+
+ int btrfs_free_reserved_extent(struct btrfs_root *root, u64 start, u64 len,
+ int delalloc);
+@@ -3411,7 +3411,7 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans,
+ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 bytenr, u64 num_bytes, u64 parent,
+- u64 root_objectid, u64 owner, u64 offset, int no_quota);
++ u64 root_objectid, u64 owner, u64 offset);
+
+ int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root);
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index ac3e81da6d4e..7832031fef68 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -197,6 +197,119 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
+ trans->delayed_ref_updates--;
+ }
+
++static bool merge_ref(struct btrfs_trans_handle *trans,
++ struct btrfs_delayed_ref_root *delayed_refs,
++ struct btrfs_delayed_ref_head *head,
++ struct btrfs_delayed_ref_node *ref,
++ u64 seq)
++{
++ struct btrfs_delayed_ref_node *next;
++ bool done = false;
++
++ next = list_first_entry(&head->ref_list, struct btrfs_delayed_ref_node,
++ list);
++ while (!done && &next->list != &head->ref_list) {
++ int mod;
++ struct btrfs_delayed_ref_node *next2;
++
++ next2 = list_next_entry(next, list);
++
++ if (next == ref)
++ goto next;
++
++ if (seq && next->seq >= seq)
++ goto next;
++
++ if (next->type != ref->type)
++ goto next;
++
++ if ((ref->type == BTRFS_TREE_BLOCK_REF_KEY ||
++ ref->type == BTRFS_SHARED_BLOCK_REF_KEY) &&
++ comp_tree_refs(btrfs_delayed_node_to_tree_ref(ref),
++ btrfs_delayed_node_to_tree_ref(next),
++ ref->type))
++ goto next;
++ if ((ref->type == BTRFS_EXTENT_DATA_REF_KEY ||
++ ref->type == BTRFS_SHARED_DATA_REF_KEY) &&
++ comp_data_refs(btrfs_delayed_node_to_data_ref(ref),
++ btrfs_delayed_node_to_data_ref(next)))
++ goto next;
++
++ if (ref->action == next->action) {
++ mod = next->ref_mod;
++ } else {
++ if (ref->ref_mod < next->ref_mod) {
++ swap(ref, next);
++ done = true;
++ }
++ mod = -next->ref_mod;
++ }
++
++ drop_delayed_ref(trans, delayed_refs, head, next);
++ ref->ref_mod += mod;
++ if (ref->ref_mod == 0) {
++ drop_delayed_ref(trans, delayed_refs, head, ref);
++ done = true;
++ } else {
++ /*
++ * Can't have multiples of the same ref on a tree block.
++ */
++ WARN_ON(ref->type == BTRFS_TREE_BLOCK_REF_KEY ||
++ ref->type == BTRFS_SHARED_BLOCK_REF_KEY);
++ }
++next:
++ next = next2;
++ }
++
++ return done;
++}
++
++void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans,
++ struct btrfs_fs_info *fs_info,
++ struct btrfs_delayed_ref_root *delayed_refs,
++ struct btrfs_delayed_ref_head *head)
++{
++ struct btrfs_delayed_ref_node *ref;
++ u64 seq = 0;
++
++ assert_spin_locked(&head->lock);
++
++ if (list_empty(&head->ref_list))
++ return;
++
++ /* We don't have too many refs to merge for data. */
++ if (head->is_data)
++ return;
++
++ spin_lock(&fs_info->tree_mod_seq_lock);
++ if (!list_empty(&fs_info->tree_mod_seq_list)) {
++ struct seq_list *elem;
++
++ elem = list_first_entry(&fs_info->tree_mod_seq_list,
++ struct seq_list, list);
++ seq = elem->seq;
++ }
++ spin_unlock(&fs_info->tree_mod_seq_lock);
++
++ ref = list_first_entry(&head->ref_list, struct btrfs_delayed_ref_node,
++ list);
++ while (&ref->list != &head->ref_list) {
++ if (seq && ref->seq >= seq)
++ goto next;
++
++ if (merge_ref(trans, delayed_refs, head, ref, seq)) {
++ if (list_empty(&head->ref_list))
++ break;
++ ref = list_first_entry(&head->ref_list,
++ struct btrfs_delayed_ref_node,
++ list);
++ continue;
++ }
++next:
++ ref = list_next_entry(ref, list);
++ }
++}
++
+ int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info,
+ struct btrfs_delayed_ref_root *delayed_refs,
+ u64 seq)
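The new btrfs_merge_delayed_refs()/merge_ref() pass collapses matching add/drop entries on a delayed-ref head before they run, so relocation cannot re-add an implicit ref while its drop is still pending. The bookkeeping at its core, reduced to a toy: opposing modifications on the same ref net out, and a zero net count means both entries can be discarded. This illustrates only the arithmetic, not the btrfs structures:

struct pending { int sign; int count; };   /* sign: +1 add, -1 drop */

/*
 * Fold b into a. Returns 1 when the two entries fully cancel and both
 * can be discarded, 0 when a now carries the net modification.
 */
static int merge_pending(struct pending *a, const struct pending *b)
{
    int net = a->sign * a->count + b->sign * b->count;

    if (net == 0)
        return 1;
    a->sign  = net > 0 ? 1 : -1;
    a->count = net > 0 ? net : -net;
    return 0;
}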
+@@ -292,8 +405,7 @@ add_delayed_ref_tail_merge(struct btrfs_trans_handle *trans,
+ exist = list_entry(href->ref_list.prev, struct btrfs_delayed_ref_node,
+ list);
+ /* No need to compare bytenr nor is_head */
+- if (exist->type != ref->type || exist->no_quota != ref->no_quota ||
+- exist->seq != ref->seq)
++ if (exist->type != ref->type || exist->seq != ref->seq)
+ goto add_tail;
+
+ if ((exist->type == BTRFS_TREE_BLOCK_REF_KEY ||
+@@ -524,7 +636,7 @@ add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_delayed_ref_head *head_ref,
+ struct btrfs_delayed_ref_node *ref, u64 bytenr,
+ u64 num_bytes, u64 parent, u64 ref_root, int level,
+- int action, int no_quota)
++ int action)
+ {
+ struct btrfs_delayed_tree_ref *full_ref;
+ struct btrfs_delayed_ref_root *delayed_refs;
+@@ -546,7 +658,6 @@ add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
+ ref->action = action;
+ ref->is_head = 0;
+ ref->in_tree = 1;
+- ref->no_quota = no_quota;
+ ref->seq = seq;
+
+ full_ref = btrfs_delayed_node_to_tree_ref(ref);
+@@ -579,7 +690,7 @@ add_delayed_data_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_delayed_ref_head *head_ref,
+ struct btrfs_delayed_ref_node *ref, u64 bytenr,
+ u64 num_bytes, u64 parent, u64 ref_root, u64 owner,
+- u64 offset, int action, int no_quota)
++ u64 offset, int action)
+ {
+ struct btrfs_delayed_data_ref *full_ref;
+ struct btrfs_delayed_ref_root *delayed_refs;
+@@ -602,7 +713,6 @@ add_delayed_data_ref(struct btrfs_fs_info *fs_info,
+ ref->action = action;
+ ref->is_head = 0;
+ ref->in_tree = 1;
+- ref->no_quota = no_quota;
+ ref->seq = seq;
+
+ full_ref = btrfs_delayed_node_to_data_ref(ref);
+@@ -633,17 +743,13 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_trans_handle *trans,
+ u64 bytenr, u64 num_bytes, u64 parent,
+ u64 ref_root, int level, int action,
+- struct btrfs_delayed_extent_op *extent_op,
+- int no_quota)
++ struct btrfs_delayed_extent_op *extent_op)
+ {
+ struct btrfs_delayed_tree_ref *ref;
+ struct btrfs_delayed_ref_head *head_ref;
+ struct btrfs_delayed_ref_root *delayed_refs;
+ struct btrfs_qgroup_extent_record *record = NULL;
+
+- if (!is_fstree(ref_root) || !fs_info->quota_enabled)
+- no_quota = 0;
+-
+ BUG_ON(extent_op && extent_op->is_data);
+ ref = kmem_cache_alloc(btrfs_delayed_tree_ref_cachep, GFP_NOFS);
+ if (!ref)
+@@ -672,8 +778,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
+ bytenr, num_bytes, action, 0);
+
+ add_delayed_tree_ref(fs_info, trans, head_ref, &ref->node, bytenr,
+- num_bytes, parent, ref_root, level, action,
+- no_quota);
++ num_bytes, parent, ref_root, level, action);
+ spin_unlock(&delayed_refs->lock);
+
+ return 0;
+@@ -694,17 +799,13 @@ int btrfs_add_delayed_data_ref(struct btrfs_fs_info *fs_info,
+ u64 bytenr, u64 num_bytes,
+ u64 parent, u64 ref_root,
+ u64 owner, u64 offset, int action,
+- struct btrfs_delayed_extent_op *extent_op,
+- int no_quota)
++ struct btrfs_delayed_extent_op *extent_op)
+ {
+ struct btrfs_delayed_data_ref *ref;
+ struct btrfs_delayed_ref_head *head_ref;
+ struct btrfs_delayed_ref_root *delayed_refs;
+ struct btrfs_qgroup_extent_record *record = NULL;
+
+- if (!is_fstree(ref_root) || !fs_info->quota_enabled)
+- no_quota = 0;
+-
+ BUG_ON(extent_op && !extent_op->is_data);
+ ref = kmem_cache_alloc(btrfs_delayed_data_ref_cachep, GFP_NOFS);
+ if (!ref)
+@@ -740,7 +841,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_fs_info *fs_info,
+
+ add_delayed_data_ref(fs_info, trans, head_ref, &ref->node, bytenr,
+ num_bytes, parent, ref_root, owner, offset,
+- action, no_quota);
++ action);
+ spin_unlock(&delayed_refs->lock);
+
+ return 0;
+diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
+index 13fb5e6090fe..930887a4275f 100644
+--- a/fs/btrfs/delayed-ref.h
++++ b/fs/btrfs/delayed-ref.h
+@@ -68,7 +68,6 @@ struct btrfs_delayed_ref_node {
+
+ unsigned int action:8;
+ unsigned int type:8;
+- unsigned int no_quota:1;
+ /* is this node still in the rbtree? */
+ unsigned int is_head:1;
+ unsigned int in_tree:1;
+@@ -233,15 +232,13 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_trans_handle *trans,
+ u64 bytenr, u64 num_bytes, u64 parent,
+ u64 ref_root, int level, int action,
+- struct btrfs_delayed_extent_op *extent_op,
+- int no_quota);
++ struct btrfs_delayed_extent_op *extent_op);
+ int btrfs_add_delayed_data_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_trans_handle *trans,
+ u64 bytenr, u64 num_bytes,
+ u64 parent, u64 ref_root,
+ u64 owner, u64 offset, int action,
+- struct btrfs_delayed_extent_op *extent_op,
+- int no_quota);
++ struct btrfs_delayed_extent_op *extent_op);
+ int btrfs_add_delayed_extent_op(struct btrfs_fs_info *fs_info,
+ struct btrfs_trans_handle *trans,
+ u64 bytenr, u64 num_bytes,
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 601d7d45d164..cadacf643bd0 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -95,8 +95,7 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 parent, u64 root_objectid,
+ u64 flags, struct btrfs_disk_key *key,
+- int level, struct btrfs_key *ins,
+- int no_quota);
++ int level, struct btrfs_key *ins);
+ static int do_chunk_alloc(struct btrfs_trans_handle *trans,
+ struct btrfs_root *extent_root, u64 flags,
+ int force);
+@@ -2009,8 +2008,7 @@ int btrfs_discard_extent(struct btrfs_root *root, u64 bytenr,
+ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 bytenr, u64 num_bytes, u64 parent,
+- u64 root_objectid, u64 owner, u64 offset,
+- int no_quota)
++ u64 root_objectid, u64 owner, u64 offset)
+ {
+ int ret;
+ struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -2022,12 +2020,12 @@ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ ret = btrfs_add_delayed_tree_ref(fs_info, trans, bytenr,
+ num_bytes,
+ parent, root_objectid, (int)owner,
+- BTRFS_ADD_DELAYED_REF, NULL, no_quota);
++ BTRFS_ADD_DELAYED_REF, NULL);
+ } else {
+ ret = btrfs_add_delayed_data_ref(fs_info, trans, bytenr,
+ num_bytes,
+ parent, root_objectid, owner, offset,
+- BTRFS_ADD_DELAYED_REF, NULL, no_quota);
++ BTRFS_ADD_DELAYED_REF, NULL);
+ }
+ return ret;
+ }
+@@ -2048,15 +2046,11 @@ static int __btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
+ u64 num_bytes = node->num_bytes;
+ u64 refs;
+ int ret;
+- int no_quota = node->no_quota;
+
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
+- if (!is_fstree(root_objectid) || !root->fs_info->quota_enabled)
+- no_quota = 1;
+-
+ path->reada = 1;
+ path->leave_spinning = 1;
+ /* this will setup the path even if it fails to insert the back ref */
+@@ -2291,8 +2285,7 @@ static int run_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ parent, ref_root,
+ extent_op->flags_to_set,
+ &extent_op->key,
+- ref->level, &ins,
+- node->no_quota);
++ ref->level, &ins);
+ } else if (node->action == BTRFS_ADD_DELAYED_REF) {
+ ret = __btrfs_inc_extent_ref(trans, root, node,
+ parent, ref_root,
+@@ -2433,7 +2426,21 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
+ }
+ }
+
++ /*
++ * We need to try and merge add/drops of the same ref since we
++ * can run into issues with relocate dropping the implicit ref
++ * and then it being added back again before the drop can
++ * finish. If we merged anything we need to re-loop so we can
++ * get a good ref.
++ * Or we can get node references of the same type that weren't
++ * merged when created due to bumps in the tree mod seq, and
++ * we need to merge them to prevent adding an inline extent
++ * backref before dropping it (triggering a BUG_ON at
++ * insert_inline_extent_backref()).
++ */
+ spin_lock(&locked_ref->lock);
++ btrfs_merge_delayed_refs(trans, fs_info, delayed_refs,
++ locked_ref);
+
+ /*
+ * locked_ref is the head node, so we have to go one
+@@ -3109,7 +3116,7 @@ static int __btrfs_mod_ref(struct btrfs_trans_handle *trans,
+ int level;
+ int ret = 0;
+ int (*process_func)(struct btrfs_trans_handle *, struct btrfs_root *,
+- u64, u64, u64, u64, u64, u64, int);
++ u64, u64, u64, u64, u64, u64);
+
+
+ if (btrfs_test_is_dummy_root(root))
+@@ -3150,15 +3157,14 @@ static int __btrfs_mod_ref(struct btrfs_trans_handle *trans,
+ key.offset -= btrfs_file_extent_offset(buf, fi);
+ ret = process_func(trans, root, bytenr, num_bytes,
+ parent, ref_root, key.objectid,
+- key.offset, 1);
++ key.offset);
+ if (ret)
+ goto fail;
+ } else {
+ bytenr = btrfs_node_blockptr(buf, i);
+ num_bytes = root->nodesize;
+ ret = process_func(trans, root, bytenr, num_bytes,
+- parent, ref_root, level - 1, 0,
+- 1);
++ parent, ref_root, level - 1, 0);
+ if (ret)
+ goto fail;
+ }
+@@ -6233,7 +6239,6 @@ static int __btrfs_free_extent(struct btrfs_trans_handle *trans,
+ int extent_slot = 0;
+ int found_extent = 0;
+ int num_to_del = 1;
+- int no_quota = node->no_quota;
+ u32 item_size;
+ u64 refs;
+ u64 bytenr = node->bytenr;
+@@ -6242,9 +6247,6 @@ static int __btrfs_free_extent(struct btrfs_trans_handle *trans,
+ bool skinny_metadata = btrfs_fs_incompat(root->fs_info,
+ SKINNY_METADATA);
+
+- if (!info->quota_enabled || !is_fstree(root_objectid))
+- no_quota = 1;
+-
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+@@ -6570,7 +6572,7 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ buf->start, buf->len,
+ parent, root->root_key.objectid,
+ btrfs_header_level(buf),
+- BTRFS_DROP_DELAYED_REF, NULL, 0);
++ BTRFS_DROP_DELAYED_REF, NULL);
+ BUG_ON(ret); /* -ENOMEM */
+ }
+
+@@ -6618,7 +6620,7 @@ out:
+ /* Can return -ENOMEM */
+ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ u64 bytenr, u64 num_bytes, u64 parent, u64 root_objectid,
+- u64 owner, u64 offset, int no_quota)
++ u64 owner, u64 offset)
+ {
+ int ret;
+ struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -6641,13 +6643,13 @@ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ ret = btrfs_add_delayed_tree_ref(fs_info, trans, bytenr,
+ num_bytes,
+ parent, root_objectid, (int)owner,
+- BTRFS_DROP_DELAYED_REF, NULL, no_quota);
++ BTRFS_DROP_DELAYED_REF, NULL);
+ } else {
+ ret = btrfs_add_delayed_data_ref(fs_info, trans, bytenr,
+ num_bytes,
+ parent, root_objectid, owner,
+ offset, BTRFS_DROP_DELAYED_REF,
+- NULL, no_quota);
++ NULL);
+ }
+ return ret;
+ }
+@@ -7429,8 +7431,7 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ u64 parent, u64 root_objectid,
+ u64 flags, struct btrfs_disk_key *key,
+- int level, struct btrfs_key *ins,
+- int no_quota)
++ int level, struct btrfs_key *ins)
+ {
+ int ret;
+ struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -7520,7 +7521,7 @@ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
+ ret = btrfs_add_delayed_data_ref(root->fs_info, trans, ins->objectid,
+ ins->offset, 0,
+ root_objectid, owner, offset,
+- BTRFS_ADD_DELAYED_EXTENT, NULL, 0);
++ BTRFS_ADD_DELAYED_EXTENT, NULL);
+ return ret;
+ }
+
+@@ -7734,7 +7735,7 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ ins.objectid, ins.offset,
+ parent, root_objectid, level,
+ BTRFS_ADD_DELAYED_EXTENT,
+- extent_op, 0);
++ extent_op);
+ if (ret)
+ goto out_free_delayed;
+ }
+@@ -8282,7 +8283,7 @@ skip:
+ }
+ }
+ ret = btrfs_free_extent(trans, root, bytenr, blocksize, parent,
+- root->root_key.objectid, level - 1, 0, 0);
++ root->root_key.objectid, level - 1, 0);
+ BUG_ON(ret); /* -ENOMEM */
+ }
+ btrfs_tree_unlock(next);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 8c6f247ba81d..e27ea7ae7f26 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -756,8 +756,16 @@ next_slot:
+ }
+
+ btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+- if (key.objectid > ino ||
+- key.type > BTRFS_EXTENT_DATA_KEY || key.offset >= end)
++
++ if (key.objectid > ino)
++ break;
++ if (WARN_ON_ONCE(key.objectid < ino) ||
++ key.type < BTRFS_EXTENT_DATA_KEY) {
++ ASSERT(del_nr == 0);
++ path->slots[0]++;
++ goto next_slot;
++ }
++ if (key.type > BTRFS_EXTENT_DATA_KEY || key.offset >= end)
+ break;
+
+ fi = btrfs_item_ptr(leaf, path->slots[0],
+@@ -776,8 +784,8 @@ next_slot:
+ btrfs_file_extent_inline_len(leaf,
+ path->slots[0], fi);
+ } else {
+- WARN_ON(1);
+- extent_end = search_start;
++ /* can't happen */
++ BUG();
+ }
+
+ /*
+@@ -847,7 +855,7 @@ next_slot:
+ disk_bytenr, num_bytes, 0,
+ root->root_key.objectid,
+ new_key.objectid,
+- start - extent_offset, 1);
++ start - extent_offset);
+ BUG_ON(ret); /* -ENOMEM */
+ }
+ key.offset = start;
+@@ -925,7 +933,7 @@ delete_extent_item:
+ disk_bytenr, num_bytes, 0,
+ root->root_key.objectid,
+ key.objectid, key.offset -
+- extent_offset, 0);
++ extent_offset);
+ BUG_ON(ret); /* -ENOMEM */
+ inode_sub_bytes(inode,
+ extent_end - key.offset);
+@@ -1204,7 +1212,7 @@ again:
+
+ ret = btrfs_inc_extent_ref(trans, root, bytenr, num_bytes, 0,
+ root->root_key.objectid,
+- ino, orig_offset, 1);
++ ino, orig_offset);
+ BUG_ON(ret); /* -ENOMEM */
+
+ if (split == start) {
+@@ -1231,7 +1239,7 @@ again:
+ del_nr++;
+ ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
+ 0, root->root_key.objectid,
+- ino, orig_offset, 0);
++ ino, orig_offset);
+ BUG_ON(ret); /* -ENOMEM */
+ }
+ other_start = 0;
+@@ -1248,7 +1256,7 @@ again:
+ del_nr++;
+ ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
+ 0, root->root_key.objectid,
+- ino, orig_offset, 0);
++ ino, orig_offset);
+ BUG_ON(ret); /* -ENOMEM */
+ }
+ if (del_nr == 0) {
+@@ -1868,8 +1876,13 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ struct btrfs_log_ctx ctx;
+ int ret = 0;
+ bool full_sync = 0;
+- const u64 len = end - start + 1;
++ u64 len;
+
++ /*
++ * The range length can be represented by u64, we have to do the typecasts
++ * to avoid signed overflow if it's [0, LLONG_MAX] eg. from fsync()
++ */
++ len = (u64)end - (u64)start + 1;
+ trace_btrfs_sync_file(file, datasync);
+
+ /*
+@@ -2057,8 +2070,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ }
+ }
+ if (!full_sync) {
+- ret = btrfs_wait_ordered_range(inode, start,
+- end - start + 1);
++ ret = btrfs_wait_ordered_range(inode, start, len);
+ if (ret) {
+ btrfs_end_transaction(trans, root);
+ goto out;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 611b66d73e80..396e3d5c4e83 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1294,8 +1294,14 @@ next_slot:
+ num_bytes = 0;
+ btrfs_item_key_to_cpu(leaf, &found_key, path->slots[0]);
+
+- if (found_key.objectid > ino ||
+- found_key.type > BTRFS_EXTENT_DATA_KEY ||
++ if (found_key.objectid > ino)
++ break;
++ if (WARN_ON_ONCE(found_key.objectid < ino) ||
++ found_key.type < BTRFS_EXTENT_DATA_KEY) {
++ path->slots[0]++;
++ goto next_slot;
++ }
++ if (found_key.type > BTRFS_EXTENT_DATA_KEY ||
+ found_key.offset > end)
+ break;
+
+@@ -2573,7 +2579,7 @@ again:
+ ret = btrfs_inc_extent_ref(trans, root, new->bytenr,
+ new->disk_len, 0,
+ backref->root_id, backref->inum,
+- new->file_pos, 0); /* start - extent_offset */
++ new->file_pos); /* start - extent_offset */
+ if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ goto out_free_path;
+@@ -4217,6 +4223,47 @@ static int truncate_space_check(struct btrfs_trans_handle *trans,
+
+ }
+
++static int truncate_inline_extent(struct inode *inode,
++ struct btrfs_path *path,
++ struct btrfs_key *found_key,
++ const u64 item_end,
++ const u64 new_size)
++{
++ struct extent_buffer *leaf = path->nodes[0];
++ int slot = path->slots[0];
++ struct btrfs_file_extent_item *fi;
++ u32 size = (u32)(new_size - found_key->offset);
++ struct btrfs_root *root = BTRFS_I(inode)->root;
++
++ fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
++
++ if (btrfs_file_extent_compression(leaf, fi) != BTRFS_COMPRESS_NONE) {
++ loff_t offset = new_size;
++ loff_t page_end = ALIGN(offset, PAGE_CACHE_SIZE);
++
++ /*
++ * Zero out the remaining of the last page of our inline extent,
++ * instead of directly truncating our inline extent here - that
++ * would be much more complex (decompressing all the data, then
++ * compressing the truncated data, which might be bigger than
++ * the size of the inline extent, resize the extent, etc).
++ * We release the path because to get the page we might need to
++ * read the extent item from disk (data not in the page cache).
++ */
++ btrfs_release_path(path);
++ return btrfs_truncate_page(inode, offset, page_end - offset, 0);
++ }
++
++ btrfs_set_file_extent_ram_bytes(leaf, fi, size);
++ size = btrfs_file_extent_calc_inline_size(size);
++ btrfs_truncate_item(root, path, size, 1);
++
++ if (test_bit(BTRFS_ROOT_REF_COWS, &root->state))
++ inode_sub_bytes(inode, item_end + 1 - new_size);
++
++ return 0;
++}
++
+ /*
+ * this can truncate away extent items, csum items and directory items.
+ * It starts at a high offset and removes keys until it can't find
+@@ -4411,27 +4458,40 @@ search_again:
+ * special encodings
+ */
+ if (!del_item &&
+- btrfs_file_extent_compression(leaf, fi) == 0 &&
+ btrfs_file_extent_encryption(leaf, fi) == 0 &&
+ btrfs_file_extent_other_encoding(leaf, fi) == 0) {
+- u32 size = new_size - found_key.offset;
+-
+- if (test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+- inode_sub_bytes(inode, item_end + 1 -
+- new_size);
+
+ /*
+- * update the ram bytes to properly reflect
+- * the new size of our item
++ * Need to release path in order to truncate a
++ * compressed extent. So delete any accumulated
++ * extent items so far.
+ */
+- btrfs_set_file_extent_ram_bytes(leaf, fi, size);
+- size =
+- btrfs_file_extent_calc_inline_size(size);
+- btrfs_truncate_item(root, path, size, 1);
++ if (btrfs_file_extent_compression(leaf, fi) !=
++ BTRFS_COMPRESS_NONE && pending_del_nr) {
++ err = btrfs_del_items(trans, root, path,
++ pending_del_slot,
++ pending_del_nr);
++ if (err) {
++ btrfs_abort_transaction(trans,
++ root,
++ err);
++ goto error;
++ }
++ pending_del_nr = 0;
++ }
++
++ err = truncate_inline_extent(inode, path,
++ &found_key,
++ item_end,
++ new_size);
++ if (err) {
++ btrfs_abort_transaction(trans,
++ root, err);
++ goto error;
++ }
+ } else if (test_bit(BTRFS_ROOT_REF_COWS,
+ &root->state)) {
+- inode_sub_bytes(inode, item_end + 1 -
+- found_key.offset);
++ inode_sub_bytes(inode, item_end + 1 - new_size);
+ }
+ }
+ delete:
+@@ -4461,7 +4521,7 @@ delete:
+ ret = btrfs_free_extent(trans, root, extent_start,
+ extent_num_bytes, 0,
+ btrfs_header_owner(leaf),
+- ino, extent_offset, 0);
++ ino, extent_offset);
+ BUG_ON(ret);
+ if (btrfs_should_throttle_delayed_refs(trans, root))
+ btrfs_async_run_delayed_refs(root,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 8d20f3b1cab0..6548a36823bc 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3203,41 +3203,6 @@ out:
+ return ret;
+ }
+
+-/* Helper to check and see if this root currently has a ref on the given disk
+- * bytenr. If it does then we need to update the quota for this root. This
+- * doesn't do anything if quotas aren't enabled.
+- */
+-static int check_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+- u64 disko)
+-{
+- struct seq_list tree_mod_seq_elem = SEQ_LIST_INIT(tree_mod_seq_elem);
+- struct ulist *roots;
+- struct ulist_iterator uiter;
+- struct ulist_node *root_node = NULL;
+- int ret;
+-
+- if (!root->fs_info->quota_enabled)
+- return 1;
+-
+- btrfs_get_tree_mod_seq(root->fs_info, &tree_mod_seq_elem);
+- ret = btrfs_find_all_roots(trans, root->fs_info, disko,
+- tree_mod_seq_elem.seq, &roots);
+- if (ret < 0)
+- goto out;
+- ret = 0;
+- ULIST_ITER_INIT(&uiter);
+- while ((root_node = ulist_next(roots, &uiter))) {
+- if (root_node->val == root->objectid) {
+- ret = 1;
+- break;
+- }
+- }
+- ulist_free(roots);
+-out:
+- btrfs_put_tree_mod_seq(root->fs_info, &tree_mod_seq_elem);
+- return ret;
+-}
+-
+ static int clone_finish_inode_update(struct btrfs_trans_handle *trans,
+ struct inode *inode,
+ u64 endoff,
+@@ -3328,6 +3293,150 @@ static void clone_update_extent_map(struct inode *inode,
+ &BTRFS_I(inode)->runtime_flags);
+ }
+
++/*
++ * Make sure we do not end up inserting an inline extent into a file that has
++ * already other (non-inline) extents. If a file has an inline extent it can
++ * not have any other extents and the (single) inline extent must start at the
++ * file offset 0. Failing to respect these rules will lead to file corruption,
++ * resulting in EIO errors on read/write operations, hitting BUG_ON's in mm, etc
++ *
++ * We can have extents that have been already written to disk or we can have
++ * dirty ranges still in delalloc, in which case the extent maps and items are
++ * created only when we run delalloc, and the delalloc ranges might fall outside
++ * the range we are currently locking in the inode's io tree. So we check the
++ * inode's i_size because of that (i_size updates are done while holding the
++ * i_mutex, which we are holding here).
++ * We also check to see if the inode has a size not greater than "datal" but has
++ * extents beyond it, due to an fallocate with FALLOC_FL_KEEP_SIZE (and we are
++ * protected against such concurrent fallocate calls by the i_mutex).
++ *
++ * If the file has no extents but a size greater than datal, do not allow the
++ * copy because we would need turn the inline extent into a non-inline one (even
++ * with NO_HOLES enabled). If we find our destination inode only has one inline
++ * extent, just overwrite it with the source inline extent if its size is less
++ * than the source extent's size, or we could copy the source inline extent's
++ * data into the destination inode's inline extent if the later is greater then
++ * the former.
++ */
++static int clone_copy_inline_extent(struct inode *src,
++ struct inode *dst,
++ struct btrfs_trans_handle *trans,
++ struct btrfs_path *path,
++ struct btrfs_key *new_key,
++ const u64 drop_start,
++ const u64 datal,
++ const u64 skip,
++ const u64 size,
++ char *inline_data)
++{
++ struct btrfs_root *root = BTRFS_I(dst)->root;
++ const u64 aligned_end = ALIGN(new_key->offset + datal,
++ root->sectorsize);
++ int ret;
++ struct btrfs_key key;
++
++ if (new_key->offset > 0)
++ return -EOPNOTSUPP;
++
++ key.objectid = btrfs_ino(dst);
++ key.type = BTRFS_EXTENT_DATA_KEY;
++ key.offset = 0;
++ ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
++ if (ret < 0) {
++ return ret;
++ } else if (ret > 0) {
++ if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
++ ret = btrfs_next_leaf(root, path);
++ if (ret < 0)
++ return ret;
++ else if (ret > 0)
++ goto copy_inline_extent;
++ }
++ btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
++ if (key.objectid == btrfs_ino(dst) &&
++ key.type == BTRFS_EXTENT_DATA_KEY) {
++ ASSERT(key.offset > 0);
++ return -EOPNOTSUPP;
++ }
++ } else if (i_size_read(dst) <= datal) {
++ struct btrfs_file_extent_item *ei;
++ u64 ext_len;
++
++ /*
++ * If the file size is <= datal, make sure there are no other
++ * extents following (can happen do to an fallocate call with
++ * the flag FALLOC_FL_KEEP_SIZE).
++ */
++ ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
++ struct btrfs_file_extent_item);
++ /*
++ * If it's an inline extent, it can not have other extents
++ * following it.
++ */
++ if (btrfs_file_extent_type(path->nodes[0], ei) ==
++ BTRFS_FILE_EXTENT_INLINE)
++ goto copy_inline_extent;
++
++ ext_len = btrfs_file_extent_num_bytes(path->nodes[0], ei);
++ if (ext_len > aligned_end)
++ return -EOPNOTSUPP;
++
++ ret = btrfs_next_item(root, path);
++ if (ret < 0) {
++ return ret;
++ } else if (ret == 0) {
++ btrfs_item_key_to_cpu(path->nodes[0], &key,
++ path->slots[0]);
++ if (key.objectid == btrfs_ino(dst) &&
++ key.type == BTRFS_EXTENT_DATA_KEY)
++ return -EOPNOTSUPP;
++ }
++ }
++
++copy_inline_extent:
++ /*
++ * We have no extent items, or we have an extent at offset 0 which may
++ * or may not be inlined. All these cases are dealt the same way.
++ */
++ if (i_size_read(dst) > datal) {
++ /*
++ * If the destination inode has an inline extent...
++ * This would require copying the data from the source inline
++ * extent into the beginning of the destination's inline extent.
++ * But this is really complex, both extents can be compressed
++ * or just one of them, which would require decompressing and
++ * re-compressing data (which could increase the new compressed
++ * size, not allowing the compressed data to fit anymore in an
++ * inline extent).
++ * So just don't support this case for now (it should be rare,
++ * we are not really saving space when cloning inline extents).
++ */
++ return -EOPNOTSUPP;
++ }
++
++ btrfs_release_path(path);
++ ret = btrfs_drop_extents(trans, root, dst, drop_start, aligned_end, 1);
++ if (ret)
++ return ret;
++ ret = btrfs_insert_empty_item(trans, root, path, new_key, size);
++ if (ret)
++ return ret;
++
++ if (skip) {
++ const u32 start = btrfs_file_extent_calc_inline_size(0);
++
++ memmove(inline_data + start, inline_data + start + skip, datal);
++ }
++
++ write_extent_buffer(path->nodes[0], inline_data,
++ btrfs_item_ptr_offset(path->nodes[0],
++ path->slots[0]),
++ size);
++ inode_add_bytes(dst, datal);
++
++ return 0;
++}
++
+ /**
+ * btrfs_clone() - clone a range from inode file to another
+ *
+@@ -3352,9 +3461,7 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
+ u32 nritems;
+ int slot;
+ int ret;
+- int no_quota;
+ const u64 len = olen_aligned;
+- u64 last_disko = 0;
+ u64 last_dest_end = destoff;
+
+ ret = -ENOMEM;
+@@ -3400,7 +3507,6 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
+
+ nritems = btrfs_header_nritems(path->nodes[0]);
+ process_slot:
+- no_quota = 1;
+ if (path->slots[0] >= nritems) {
+ ret = btrfs_next_leaf(BTRFS_I(src)->root, path);
+ if (ret < 0)
+@@ -3552,35 +3658,13 @@ process_slot:
+ btrfs_set_file_extent_num_bytes(leaf, extent,
+ datal);
+
+- /*
+- * We need to look up the roots that point at
+- * this bytenr and see if the new root does. If
+- * it does not we need to make sure we update
+- * quotas appropriately.
+- */
+- if (disko && root != BTRFS_I(src)->root &&
+- disko != last_disko) {
+- no_quota = check_ref(trans, root,
+- disko);
+- if (no_quota < 0) {
+- btrfs_abort_transaction(trans,
+- root,
+- ret);
+- btrfs_end_transaction(trans,
+- root);
+- ret = no_quota;
+- goto out;
+- }
+- }
+-
+ if (disko) {
+ inode_add_bytes(inode, datal);
+ ret = btrfs_inc_extent_ref(trans, root,
+ disko, diskl, 0,
+ root->root_key.objectid,
+ btrfs_ino(inode),
+- new_key.offset - datao,
+- no_quota);
++ new_key.offset - datao);
+ if (ret) {
+ btrfs_abort_transaction(trans,
+ root,
+@@ -3594,21 +3678,6 @@ process_slot:
+ } else if (type == BTRFS_FILE_EXTENT_INLINE) {
+ u64 skip = 0;
+ u64 trim = 0;
+- u64 aligned_end = 0;
+-
+- /*
+- * Don't copy an inline extent into an offset
+- * greater than zero. Having an inline extent
+- * at such an offset results in chaos as btrfs
+- * isn't prepared for such cases. Just skip
+- * this case for the same reasons as commented
+- * at btrfs_ioctl_clone().
+- */
+- if (last_dest_end > 0) {
+- ret = -EOPNOTSUPP;
+- btrfs_end_transaction(trans, root);
+- goto out;
+- }
+
+ if (off > key.offset) {
+ skip = off - key.offset;
+@@ -3626,42 +3695,22 @@ process_slot:
+ size -= skip + trim;
+ datal -= skip + trim;
+
+- aligned_end = ALIGN(new_key.offset + datal,
+- root->sectorsize);
+- ret = btrfs_drop_extents(trans, root, inode,
+- drop_start,
+- aligned_end,
+- 1);
++ ret = clone_copy_inline_extent(src, inode,
++ trans, path,
++ &new_key,
++ drop_start,
++ datal,
++ skip, size, buf);
+ if (ret) {
+ if (ret != -EOPNOTSUPP)
+ btrfs_abort_transaction(trans,
+- root, ret);
+- btrfs_end_transaction(trans, root);
+- goto out;
+- }
+-
+- ret = btrfs_insert_empty_item(trans, root, path,
+- &new_key, size);
+- if (ret) {
+- btrfs_abort_transaction(trans, root,
+- ret);
++ root,
++ ret);
+ btrfs_end_transaction(trans, root);
+ goto out;
+ }
+-
+- if (skip) {
+- u32 start =
+- btrfs_file_extent_calc_inline_size(0);
+- memmove(buf+start, buf+start+skip,
+- datal);
+- }
+-
+ leaf = path->nodes[0];
+ slot = path->slots[0];
+- write_extent_buffer(leaf, buf,
+- btrfs_item_ptr_offset(leaf, slot),
+- size);
+- inode_add_bytes(inode, datal);
+ }
+
+ /* If we have an implicit hole (NO_HOLES feature). */
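A hedged userspace sketch of the case the new clone_copy_inline_extent() helper refuses: cloning a small (inline) source extent to a nonzero destination offset. The mount point, file names and offset are illustrative assumptions; BTRFS_IOC_CLONE_RANGE is the pre-reflink clone ioctl from the <linux/btrfs.h> uapi header.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
	int src = open("/mnt/btrfs/small-file", O_RDONLY);
	int dst = open("/mnt/btrfs/dest-file", O_RDWR | O_CREAT, 0644);
	struct btrfs_ioctl_clone_range_args args = {
		.src_fd = src,
		.src_offset = 0,
		.src_length = 0,	/* 0 means "up to source EOF" */
		.dest_offset = 4096,	/* nonzero offset: should be refused */
	};

	if (src < 0 || dst < 0)
		return 1;
	if (ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args) < 0)
		printf("clone refused: %s\n", strerror(errno));
	return 0;
}

With the fix applied the kernel returns -EOPNOTSUPP for this case instead of leaving the destination with an inline extent at a nonzero offset.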
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 303babeef505..ab507e3d536b 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1716,7 +1716,7 @@ int replace_file_extents(struct btrfs_trans_handle *trans,
+ ret = btrfs_inc_extent_ref(trans, root, new_bytenr,
+ num_bytes, parent,
+ btrfs_header_owner(leaf),
+- key.objectid, key.offset, 1);
++ key.objectid, key.offset);
+ if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ break;
+@@ -1724,7 +1724,7 @@ int replace_file_extents(struct btrfs_trans_handle *trans,
+
+ ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
+ parent, btrfs_header_owner(leaf),
+- key.objectid, key.offset, 1);
++ key.objectid, key.offset);
+ if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ break;
+@@ -1900,23 +1900,21 @@ again:
+
+ ret = btrfs_inc_extent_ref(trans, src, old_bytenr, blocksize,
+ path->nodes[level]->start,
+- src->root_key.objectid, level - 1, 0,
+- 1);
++ src->root_key.objectid, level - 1, 0);
+ BUG_ON(ret);
+ ret = btrfs_inc_extent_ref(trans, dest, new_bytenr, blocksize,
+ 0, dest->root_key.objectid, level - 1,
+- 0, 1);
++ 0);
+ BUG_ON(ret);
+
+ ret = btrfs_free_extent(trans, src, new_bytenr, blocksize,
+ path->nodes[level]->start,
+- src->root_key.objectid, level - 1, 0,
+- 1);
++ src->root_key.objectid, level - 1, 0);
+ BUG_ON(ret);
+
+ ret = btrfs_free_extent(trans, dest, old_bytenr, blocksize,
+ 0, dest->root_key.objectid, level - 1,
+- 0, 1);
++ 0);
+ BUG_ON(ret);
+
+ btrfs_unlock_up_safe(path, 0);
+@@ -2745,7 +2743,7 @@ static int do_relocation(struct btrfs_trans_handle *trans,
+ node->eb->start, blocksize,
+ upper->eb->start,
+ btrfs_header_owner(upper->eb),
+- node->level, 0, 1);
++ node->level, 0);
+ BUG_ON(ret);
+
+ ret = btrfs_drop_subtree(trans, root, eb, upper->eb);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index a739b825bdd3..23bb2e4b911b 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -2353,8 +2353,14 @@ static int send_subvol_begin(struct send_ctx *sctx)
+ }
+
+ TLV_PUT_STRING(sctx, BTRFS_SEND_A_PATH, name, namelen);
+- TLV_PUT_UUID(sctx, BTRFS_SEND_A_UUID,
+- sctx->send_root->root_item.uuid);
++
++ if (!btrfs_is_empty_uuid(sctx->send_root->root_item.received_uuid))
++ TLV_PUT_UUID(sctx, BTRFS_SEND_A_UUID,
++ sctx->send_root->root_item.received_uuid);
++ else
++ TLV_PUT_UUID(sctx, BTRFS_SEND_A_UUID,
++ sctx->send_root->root_item.uuid);
++
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_CTRANSID,
+ le64_to_cpu(sctx->send_root->root_item.ctransid));
+ if (parent_root) {
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 1bbaace73383..6f8af2de5912 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -691,7 +691,7 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ ret = btrfs_inc_extent_ref(trans, root,
+ ins.objectid, ins.offset,
+ 0, root->root_key.objectid,
+- key->objectid, offset, 0);
++ key->objectid, offset);
+ if (ret)
+ goto out;
+ } else {
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 6f518c90e1c1..1fcd7b6e7564 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -313,8 +313,10 @@ ssize_t btrfs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ /* check to make sure this item is what we want */
+ if (found_key.objectid != key.objectid)
+ break;
+- if (found_key.type != BTRFS_XATTR_ITEM_KEY)
++ if (found_key.type > BTRFS_XATTR_ITEM_KEY)
+ break;
++ if (found_key.type < BTRFS_XATTR_ITEM_KEY)
++ goto next;
+
+ di = btrfs_item_ptr(leaf, slot, struct btrfs_dir_item);
+ if (verify_dir_item(root, leaf, di))
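The listxattr() hunk above depends on leaf items being sorted by key type, so types below BTRFS_XATTR_ITEM_KEY are skipped and the walk stops only once a higher type appears. A hedged toy version of that rule (the numeric values mirror btrfs's on-disk key constants, but treat them as assumptions here):

#include <stdio.h>

int main(void)
{
	/* sorted item types in a leaf: inode ref, two xattrs, extent data */
	int types[] = { 12, 24, 24, 108 };
	int xattr_key = 24;	/* BTRFS_XATTR_ITEM_KEY */

	for (int i = 0; i < 4; i++) {
		if (types[i] > xattr_key)
			break;		/* past all xattrs: stop */
		if (types[i] < xattr_key)
			continue;	/* lower type: skip, don't stop */
		printf("xattr item in slot %d\n", i);
	}
	return 0;
}

The old code broke out of the loop on the first non-xattr item, so a lower-type item for the same inode could end the listing before any xattrs were seen.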
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 51cb02da75d9..fe2c982764e7 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1935,7 +1935,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_client *mdsc,
+
+ len = sizeof(*head) +
+ pathlen1 + pathlen2 + 2*(1 + sizeof(u32) + sizeof(u64)) +
+- sizeof(struct timespec);
++ sizeof(struct ceph_timespec);
+
+ /* calculate (max) length for cap releases */
+ len += sizeof(struct ceph_mds_request_release) *
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index c711be8d6a3c..9c8d23316da1 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -271,8 +271,12 @@ static struct dentry *start_creating(const char *name, struct dentry *parent)
+ dput(dentry);
+ dentry = ERR_PTR(-EEXIST);
+ }
+- if (IS_ERR(dentry))
++
++ if (IS_ERR(dentry)) {
+ mutex_unlock(&d_inode(parent)->i_mutex);
++ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
++ }
++
+ return dentry;
+ }
+
+diff --git a/fs/ext4/crypto.c b/fs/ext4/crypto.c
+index 45731558138c..2fab243a4c9e 100644
+--- a/fs/ext4/crypto.c
++++ b/fs/ext4/crypto.c
+@@ -411,7 +411,13 @@ int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex)
+ ext4_lblk_t lblk = ex->ee_block;
+ ext4_fsblk_t pblk = ext4_ext_pblock(ex);
+ unsigned int len = ext4_ext_get_actual_len(ex);
+- int err = 0;
++ int ret, err = 0;
++
++#if 0
++ ext4_msg(inode->i_sb, KERN_CRIT,
++ "ext4_encrypted_zeroout ino %lu lblk %u len %u",
++ (unsigned long) inode->i_ino, lblk, len);
++#endif
+
+ BUG_ON(inode->i_sb->s_blocksize != PAGE_CACHE_SIZE);
+
+@@ -437,17 +443,26 @@ int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex)
+ goto errout;
+ }
+ bio->bi_bdev = inode->i_sb->s_bdev;
+- bio->bi_iter.bi_sector = pblk;
+- err = bio_add_page(bio, ciphertext_page,
++ bio->bi_iter.bi_sector =
++ pblk << (inode->i_sb->s_blocksize_bits - 9);
++ ret = bio_add_page(bio, ciphertext_page,
+ inode->i_sb->s_blocksize, 0);
+- if (err) {
++ if (ret != inode->i_sb->s_blocksize) {
++ /* should never happen! */
++ ext4_msg(inode->i_sb, KERN_ERR,
++ "bio_add_page failed: %d", ret);
++ WARN_ON(1);
+ bio_put(bio);
++ err = -EIO;
+ goto errout;
+ }
+ err = submit_bio_wait(WRITE, bio);
++ if ((err == 0) && bio->bi_error)
++ err = -EIO;
+ bio_put(bio);
+ if (err)
+ goto errout;
++ lblk++; pblk++;
+ }
+ err = 0;
+ errout:
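The bi_sector hunk above fixes a unit mismatch: ext4 block numbers count filesystem blocks, while bio->bi_iter.bi_sector counts 512-byte sectors. A hedged arithmetic sketch of the conversion (the block size is an assumption):

#include <stdio.h>

int main(void)
{
	unsigned long long pblk = 1000;		/* physical block number */
	unsigned int blocksize_bits = 12;	/* 4096-byte blocks */
	unsigned long long sector = pblk << (blocksize_bits - 9);

	/* 4096/512 = 8 sectors per block, so block 1000 starts at sector 8000 */
	printf("block %llu -> sector %llu\n", pblk, sector);
	return 0;
}

Before the fix the block number was used as a sector number directly, so with 4K blocks every zeroout landed at one-eighth of the intended disk offset.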
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index d41843181818..e770c1ee4613 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -88,13 +88,13 @@ int __ext4_journal_stop(const char *where, unsigned int line, handle_t *handle)
+ return 0;
+ }
+
++ err = handle->h_err;
+ if (!handle->h_transaction) {
+- err = jbd2_journal_stop(handle);
+- return handle->h_err ? handle->h_err : err;
++ rc = jbd2_journal_stop(handle);
++ return err ? err : rc;
+ }
+
+ sb = handle->h_transaction->t_journal->j_private;
+- err = handle->h_err;
+ rc = jbd2_journal_stop(handle);
+
+ if (!err)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 2553aa8b608d..7f486e350d15 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3558,6 +3558,9 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ max_zeroout = sbi->s_extent_max_zeroout_kb >>
+ (inode->i_sb->s_blocksize_bits - 10);
+
++ if (ext4_encrypted_inode(inode))
++ max_zeroout = 0;
++
+ /* If extent is less than s_max_zeroout_kb, zeroout directly */
+ if (max_zeroout && (ee_len <= max_zeroout)) {
+ err = ext4_ext_zeroout(inode, ex);
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 84ba4d2b3a35..17fbe3882b8e 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -425,6 +425,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ struct buffer_head *bh, *head;
+ int ret = 0;
+ int nr_submitted = 0;
++ int nr_to_submit = 0;
+
+ blocksize = 1 << inode->i_blkbits;
+
+@@ -477,11 +478,13 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
+ unmap_underlying_metadata(bh->b_bdev, bh->b_blocknr);
+ }
+ set_buffer_async_write(bh);
++ nr_to_submit++;
+ } while ((bh = bh->b_this_page) != head);
+
+ bh = head = page_buffers(page);
+
+- if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode)) {
++ if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode) &&
++ nr_to_submit) {
+ data_page = ext4_encrypt(inode, page);
+ if (IS_ERR(data_page)) {
+ ret = PTR_ERR(data_page);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a63c7b0a10cf..df84bd256c9f 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -394,9 +394,13 @@ static void ext4_handle_error(struct super_block *sb)
+ smp_wmb();
+ sb->s_flags |= MS_RDONLY;
+ }
+- if (test_opt(sb, ERRORS_PANIC))
++ if (test_opt(sb, ERRORS_PANIC)) {
++ if (EXT4_SB(sb)->s_journal &&
++ !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
++ return;
+ panic("EXT4-fs (device %s): panic forced after error\n",
+ sb->s_id);
++ }
+ }
+
+ #define ext4_error_ratelimit(sb) \
+@@ -585,8 +589,12 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
+ save_error_info(sb, function, line);
+ }
+- if (test_opt(sb, ERRORS_PANIC))
++ if (test_opt(sb, ERRORS_PANIC)) {
++ if (EXT4_SB(sb)->s_journal &&
++ !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
++ return;
+ panic("EXT4-fs panic from previous error\n");
++ }
+ }
+
+ void __ext4_msg(struct super_block *sb,
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 8270fe9e3641..37023d0bdae4 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -2071,8 +2071,12 @@ static void __journal_abort_soft (journal_t *journal, int errno)
+
+ __jbd2_journal_abort_hard(journal);
+
+- if (errno)
++ if (errno) {
+ jbd2_journal_update_sb_errno(journal);
++ write_lock(&journal->j_state_lock);
++ journal->j_flags |= JBD2_REC_ERR;
++ write_unlock(&journal->j_state_lock);
++ }
+ }
+
+ /**
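Combined with the ext4_handle_error()/__ext4_abort() hunks earlier, the new JBD2_REC_ERR flag defers an errors=panic crash until the error number has actually been recorded in the journal superblock, so the failure survives the reboot. A much-simplified, hedged sketch of that ordering:

#include <stdbool.h>
#include <stdio.h>

static bool rec_err;	/* stands in for JBD2_REC_ERR */

static void journal_abort(int errnum)
{
	if (errnum) {
		/* jbd2_journal_update_sb_errno() persists errnum here */
		rec_err = true;
	}
}

static void handle_error(bool errors_panic)
{
	if (errors_panic) {
		if (!rec_err)
			return;	/* error not yet on disk: defer the panic */
		printf("panic: forced after error\n");
	}
}

int main(void)
{
	handle_error(true);	/* too early, nothing recorded yet */
	journal_abort(5);	/* record the error */
	handle_error(true);	/* now the panic may fire */
	return 0;
}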
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 326d9e10d833..ffdf9b9e88ab 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1824,7 +1824,11 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
+ nfsi->attr_gencount = fattr->gencount;
+ }
+- invalid &= ~NFS_INO_INVALID_ATTR;
++
++ /* Don't declare attrcache up to date if there were no attrs! */
++ if (fattr->valid != 0)
++ invalid &= ~NFS_INO_INVALID_ATTR;
++
+ /* Don't invalidate the data if we were to blame */
+ if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode)
+ || S_ISLNK(inode->i_mode)))
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 223bedda64ae..10410e8b5853 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -33,7 +33,7 @@ static int nfs_get_cb_ident_idr(struct nfs_client *clp, int minorversion)
+ return ret;
+ idr_preload(GFP_KERNEL);
+ spin_lock(&nn->nfs_client_lock);
+- ret = idr_alloc(&nn->cb_ident_idr, clp, 0, 0, GFP_NOWAIT);
++ ret = idr_alloc(&nn->cb_ident_idr, clp, 1, 0, GFP_NOWAIT);
+ if (ret >= 0)
+ clp->cl_cb_ident = ret;
+ spin_unlock(&nn->nfs_client_lock);
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 8abe27165ad0..abf5caea20c9 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -872,33 +872,38 @@ send_layoutget(struct pnfs_layout_hdr *lo,
+
+ dprintk("--> %s\n", __func__);
+
+- lgp = kzalloc(sizeof(*lgp), gfp_flags);
+- if (lgp == NULL)
+- return NULL;
++ /*
++ * Synchronously retrieve layout information from the server and
++ * store in lseg. If we race with a concurrent seqid morphing
++ * op, then re-send the LAYOUTGET.
++ */
++ do {
++ lgp = kzalloc(sizeof(*lgp), gfp_flags);
++ if (lgp == NULL)
++ return NULL;
++
++ i_size = i_size_read(ino);
++
++ lgp->args.minlength = PAGE_CACHE_SIZE;
++ if (lgp->args.minlength > range->length)
++ lgp->args.minlength = range->length;
++ if (range->iomode == IOMODE_READ) {
++ if (range->offset >= i_size)
++ lgp->args.minlength = 0;
++ else if (i_size - range->offset < lgp->args.minlength)
++ lgp->args.minlength = i_size - range->offset;
++ }
++ lgp->args.maxcount = PNFS_LAYOUT_MAXSIZE;
++ lgp->args.range = *range;
++ lgp->args.type = server->pnfs_curr_ld->id;
++ lgp->args.inode = ino;
++ lgp->args.ctx = get_nfs_open_context(ctx);
++ lgp->gfp_flags = gfp_flags;
++ lgp->cred = lo->plh_lc_cred;
+
+- i_size = i_size_read(ino);
++ lseg = nfs4_proc_layoutget(lgp, gfp_flags);
++ } while (lseg == ERR_PTR(-EAGAIN));
+
+- lgp->args.minlength = PAGE_CACHE_SIZE;
+- if (lgp->args.minlength > range->length)
+- lgp->args.minlength = range->length;
+- if (range->iomode == IOMODE_READ) {
+- if (range->offset >= i_size)
+- lgp->args.minlength = 0;
+- else if (i_size - range->offset < lgp->args.minlength)
+- lgp->args.minlength = i_size - range->offset;
+- }
+- lgp->args.maxcount = PNFS_LAYOUT_MAXSIZE;
+- lgp->args.range = *range;
+- lgp->args.type = server->pnfs_curr_ld->id;
+- lgp->args.inode = ino;
+- lgp->args.ctx = get_nfs_open_context(ctx);
+- lgp->gfp_flags = gfp_flags;
+- lgp->cred = lo->plh_lc_cred;
+-
+- /* Synchronously retrieve layout information from server and
+- * store in lseg.
+- */
+- lseg = nfs4_proc_layoutget(lgp, gfp_flags);
+ if (IS_ERR(lseg)) {
+ switch (PTR_ERR(lseg)) {
+ case -ENOMEM:
+@@ -1687,6 +1692,7 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ /* existing state ID, make sure the sequence number matches. */
+ if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
+ dprintk("%s forget reply due to sequence\n", __func__);
++ status = -EAGAIN;
+ goto out_forget_reply;
+ }
+ pnfs_set_layout_stateid(lo, &res->stateid, false);
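The send_layoutget() rework above wraps the request in a retry loop keyed on -EAGAIN, which pnfs_layout_process() now returns when a reply is forgotten because of a concurrent seqid change. A hedged sketch of that retry shape, with try_layoutget() as a hypothetical stand-in for the rebuilt-and-resent RPC:

#include <errno.h>
#include <stdio.h>

static int attempts;

static int try_layoutget(void)
{
	/* pretend the first two replies race with a seqid change */
	return ++attempts < 3 ? -EAGAIN : 0;
}

int main(void)
{
	int ret;

	do {
		ret = try_layoutget();
	} while (ret == -EAGAIN);

	printf("layout obtained after %d attempts\n", attempts);
	return 0;
}

Note that the real loop reallocates and refills the nfs4_layoutget arguments on every pass, since nfs4_proc_layoutget() consumes them.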
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 0f1d5691b795..0dea0c254ddf 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -765,16 +765,68 @@ void nfs4_unhash_stid(struct nfs4_stid *s)
+ s->sc_type = 0;
+ }
+
+-static void
++/**
++ * nfs4_get_existing_delegation - Discover if this delegation already exists
++ * @clp: a pointer to the nfs4_client we're granting a delegation to
++ * @fp: a pointer to the nfs4_file we're granting a delegation on
++ *
++ * Return:
++ * On success: 0 if an existing delegation was not found.
++ *
++ * On error: -EAGAIN if one was previously granted to this nfs4_client
++ * for this nfs4_file.
++ *
++ */
++
++static int
++nfs4_get_existing_delegation(struct nfs4_client *clp, struct nfs4_file *fp)
++{
++ struct nfs4_delegation *searchdp = NULL;
++ struct nfs4_client *searchclp = NULL;
++
++ lockdep_assert_held(&state_lock);
++ lockdep_assert_held(&fp->fi_lock);
++
++ list_for_each_entry(searchdp, &fp->fi_delegations, dl_perfile) {
++ searchclp = searchdp->dl_stid.sc_client;
++ if (clp == searchclp) {
++ return -EAGAIN;
++ }
++ }
++ return 0;
++}
++
++/**
++ * hash_delegation_locked - Add a delegation to the appropriate lists
++ * @dp: a pointer to the nfs4_delegation we are adding.
++ * @fp: a pointer to the nfs4_file we're granting a delegation on
++ *
++ * Return:
++ * On success: 0 if the delegation was successfully hashed.
++ *
++ * On error: -EAGAIN if one was previously granted to this
++ * nfs4_client for this nfs4_file. Delegation is not hashed.
++ *
++ */
++
++static int
+ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
+ {
++ int status;
++ struct nfs4_client *clp = dp->dl_stid.sc_client;
++
+ lockdep_assert_held(&state_lock);
+ lockdep_assert_held(&fp->fi_lock);
+
++ status = nfs4_get_existing_delegation(clp, fp);
++ if (status)
++ return status;
++ ++fp->fi_delegees;
+ atomic_inc(&dp->dl_stid.sc_count);
+ dp->dl_stid.sc_type = NFS4_DELEG_STID;
+ list_add(&dp->dl_perfile, &fp->fi_delegations);
+- list_add(&dp->dl_perclnt, &dp->dl_stid.sc_client->cl_delegations);
++ list_add(&dp->dl_perclnt, &clp->cl_delegations);
++ return 0;
+ }
+
+ static bool
+@@ -3360,6 +3412,7 @@ static void init_open_stateid(struct nfs4_ol_stateid *stp, struct nfs4_file *fp,
+ stp->st_access_bmap = 0;
+ stp->st_deny_bmap = 0;
+ stp->st_openstp = NULL;
++ init_rwsem(&stp->st_rwsem);
+ spin_lock(&oo->oo_owner.so_client->cl_lock);
+ list_add(&stp->st_perstateowner, &oo->oo_owner.so_stateids);
+ spin_lock(&fp->fi_lock);
+@@ -3945,6 +3998,18 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_file *fp, int flag)
+ return fl;
+ }
+
++/**
++ * nfs4_setlease - Obtain a delegation by requesting lease from vfs layer
++ * @dp: a pointer to the nfs4_delegation we're adding.
++ *
++ * Return:
++ * On success: 0.
++ *
++ * On error: -EAGAIN if there was an existing delegation;
++ * another nonzero error code in all other cases.
++ *
++ */
++
+ static int nfs4_setlease(struct nfs4_delegation *dp)
+ {
+ struct nfs4_file *fp = dp->dl_stid.sc_file;
+@@ -3976,16 +4041,19 @@ static int nfs4_setlease(struct nfs4_delegation *dp)
+ goto out_unlock;
+ /* Race breaker */
+ if (fp->fi_deleg_file) {
+- status = 0;
+- ++fp->fi_delegees;
+- hash_delegation_locked(dp, fp);
++ status = hash_delegation_locked(dp, fp);
+ goto out_unlock;
+ }
+ fp->fi_deleg_file = filp;
+- fp->fi_delegees = 1;
+- hash_delegation_locked(dp, fp);
++ fp->fi_delegees = 0;
++ status = hash_delegation_locked(dp, fp);
+ spin_unlock(&fp->fi_lock);
+ spin_unlock(&state_lock);
++ if (status) {
++ /* Should never happen, this is a new fi_deleg_file */
++ WARN_ON_ONCE(1);
++ goto out_fput;
++ }
+ return 0;
+ out_unlock:
+ spin_unlock(&fp->fi_lock);
+@@ -4005,6 +4073,15 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ if (fp->fi_had_conflict)
+ return ERR_PTR(-EAGAIN);
+
++ spin_lock(&state_lock);
++ spin_lock(&fp->fi_lock);
++ status = nfs4_get_existing_delegation(clp, fp);
++ spin_unlock(&fp->fi_lock);
++ spin_unlock(&state_lock);
++
++ if (status)
++ return ERR_PTR(status);
++
+ dp = alloc_init_deleg(clp, fh, odstate);
+ if (!dp)
+ return ERR_PTR(-ENOMEM);
+@@ -4023,9 +4100,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ status = -EAGAIN;
+ goto out_unlock;
+ }
+- ++fp->fi_delegees;
+- hash_delegation_locked(dp, fp);
+- status = 0;
++ status = hash_delegation_locked(dp, fp);
+ out_unlock:
+ spin_unlock(&fp->fi_lock);
+ spin_unlock(&state_lock);
+@@ -4187,15 +4262,20 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ */
+ if (stp) {
+ /* Stateid was found, this is an OPEN upgrade */
++ down_read(&stp->st_rwsem);
+ status = nfs4_upgrade_open(rqstp, fp, current_fh, stp, open);
+- if (status)
++ if (status) {
++ up_read(&stp->st_rwsem);
+ goto out;
++ }
+ } else {
+ stp = open->op_stp;
+ open->op_stp = NULL;
+ init_open_stateid(stp, fp, open);
++ down_read(&stp->st_rwsem);
+ status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+ if (status) {
++ up_read(&stp->st_rwsem);
+ release_open_stateid(stp);
+ goto out;
+ }
+@@ -4207,6 +4287,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ }
+ update_stateid(&stp->st_stid.sc_stateid);
+ memcpy(&open->op_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
++ up_read(&stp->st_rwsem);
+
+ if (nfsd4_has_session(&resp->cstate)) {
+ if (open->op_deleg_want & NFS4_SHARE_WANT_NO_DELEG) {
+@@ -4819,10 +4900,13 @@ static __be32 nfs4_seqid_op_checks(struct nfsd4_compound_state *cstate, stateid_
+ * revoked delegations are kept only for free_stateid.
+ */
+ return nfserr_bad_stateid;
++ down_write(&stp->st_rwsem);
+ status = check_stateid_generation(stateid, &stp->st_stid.sc_stateid, nfsd4_has_session(cstate));
+- if (status)
+- return status;
+- return nfs4_check_fh(current_fh, &stp->st_stid);
++ if (status == nfs_ok)
++ status = nfs4_check_fh(current_fh, &stp->st_stid);
++ if (status != nfs_ok)
++ up_write(&stp->st_rwsem);
++ return status;
+ }
+
+ /*
+@@ -4869,6 +4953,7 @@ static __be32 nfs4_preprocess_confirmed_seqid_op(struct nfsd4_compound_state *cs
+ return status;
+ oo = openowner(stp->st_stateowner);
+ if (!(oo->oo_flags & NFS4_OO_CONFIRMED)) {
++ up_write(&stp->st_rwsem);
+ nfs4_put_stid(&stp->st_stid);
+ return nfserr_bad_stateid;
+ }
+@@ -4899,11 +4984,14 @@ nfsd4_open_confirm(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ goto out;
+ oo = openowner(stp->st_stateowner);
+ status = nfserr_bad_stateid;
+- if (oo->oo_flags & NFS4_OO_CONFIRMED)
++ if (oo->oo_flags & NFS4_OO_CONFIRMED) {
++ up_write(&stp->st_rwsem);
+ goto put_stateid;
++ }
+ oo->oo_flags |= NFS4_OO_CONFIRMED;
+ update_stateid(&stp->st_stid.sc_stateid);
+ memcpy(&oc->oc_resp_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
++ up_write(&stp->st_rwsem);
+ dprintk("NFSD: %s: success, seqid=%d stateid=" STATEID_FMT "\n",
+ __func__, oc->oc_seqid, STATEID_VAL(&stp->st_stid.sc_stateid));
+
+@@ -4982,6 +5070,7 @@ nfsd4_open_downgrade(struct svc_rqst *rqstp,
+ memcpy(&od->od_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
+ status = nfs_ok;
+ put_stateid:
++ up_write(&stp->st_rwsem);
+ nfs4_put_stid(&stp->st_stid);
+ out:
+ nfsd4_bump_seqid(cstate, status);
+@@ -5035,6 +5124,7 @@ nfsd4_close(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ goto out;
+ update_stateid(&stp->st_stid.sc_stateid);
+ memcpy(&close->cl_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
++ up_write(&stp->st_rwsem);
+
+ nfsd4_close_open_stateid(stp);
+
+@@ -5260,6 +5350,7 @@ init_lock_stateid(struct nfs4_ol_stateid *stp, struct nfs4_lockowner *lo,
+ stp->st_access_bmap = 0;
+ stp->st_deny_bmap = open_stp->st_deny_bmap;
+ stp->st_openstp = open_stp;
++ init_rwsem(&stp->st_rwsem);
+ list_add(&stp->st_locks, &open_stp->st_locks);
+ list_add(&stp->st_perstateowner, &lo->lo_owner.so_stateids);
+ spin_lock(&fp->fi_lock);
+@@ -5428,6 +5519,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ &open_stp, nn);
+ if (status)
+ goto out;
++ up_write(&open_stp->st_rwsem);
+ open_sop = openowner(open_stp->st_stateowner);
+ status = nfserr_bad_stateid;
+ if (!same_clid(&open_sop->oo_owner.so_client->cl_clientid,
+@@ -5435,6 +5527,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ goto out;
+ status = lookup_or_create_lock_state(cstate, open_stp, lock,
+ &lock_stp, &new);
++ if (status == nfs_ok)
++ down_write(&lock_stp->st_rwsem);
+ } else {
+ status = nfs4_preprocess_seqid_op(cstate,
+ lock->lk_old_lock_seqid,
+@@ -5540,6 +5634,8 @@ out:
+ seqid_mutating_err(ntohl(status)))
+ lock_sop->lo_owner.so_seqid++;
+
++ up_write(&lock_stp->st_rwsem);
++
+ /*
+ * If this is a new, never-before-used stateid, and we are
+ * returning an error, then just go ahead and release it.
+@@ -5709,6 +5805,7 @@ nfsd4_locku(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ fput:
+ fput(filp);
+ put_stateid:
++ up_write(&stp->st_rwsem);
+ nfs4_put_stid(&stp->st_stid);
+ out:
+ nfsd4_bump_seqid(cstate, status);
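A hedged toy version of the duplicate-delegation check that hash_delegation_locked() now performs before granting: walk the file's existing delegations and refuse with -EAGAIN if this client already holds one. The flat array replaces the kernel's locked lists and is purely illustrative:

#include <errno.h>
#include <stdio.h>

#define MAX_DELEG 8

struct file_state {
	int client_ids[MAX_DELEG];
	int ndeleg;
};

static int hash_delegation(struct file_state *fp, int client_id)
{
	for (int i = 0; i < fp->ndeleg; i++)
		if (fp->client_ids[i] == client_id)
			return -EAGAIN;	/* already granted to this client */
	fp->client_ids[fp->ndeleg++] = client_id;
	return 0;
}

int main(void)
{
	struct file_state fp = { .ndeleg = 0 };

	printf("first grant:  %d\n", hash_delegation(&fp, 42));	/* 0 */
	printf("second grant: %d\n", hash_delegation(&fp, 42));	/* -EAGAIN */
	return 0;
}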
+diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
+index 583ffc13cae2..31bde12feefe 100644
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -534,15 +534,16 @@ struct nfs4_file {
+ * Better suggestions welcome.
+ */
+ struct nfs4_ol_stateid {
+- struct nfs4_stid st_stid; /* must be first field */
+- struct list_head st_perfile;
+- struct list_head st_perstateowner;
+- struct list_head st_locks;
+- struct nfs4_stateowner * st_stateowner;
+- struct nfs4_clnt_odstate * st_clnt_odstate;
+- unsigned char st_access_bmap;
+- unsigned char st_deny_bmap;
+- struct nfs4_ol_stateid * st_openstp;
++ struct nfs4_stid st_stid;
++ struct list_head st_perfile;
++ struct list_head st_perstateowner;
++ struct list_head st_locks;
++ struct nfs4_stateowner *st_stateowner;
++ struct nfs4_clnt_odstate *st_clnt_odstate;
++ unsigned char st_access_bmap;
++ unsigned char st_deny_bmap;
++ struct nfs4_ol_stateid *st_openstp;
++ struct rw_semaphore st_rwsem;
+ };
+
+ static inline struct nfs4_ol_stateid *openlockstateid(struct nfs4_stid *s)
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index b7dfac226b1e..12bfa9ca5583 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -374,6 +374,8 @@ static int ocfs2_mknod(struct inode *dir,
+ mlog_errno(status);
+ goto leave;
+ }
++ /* update inode->i_mode after it has been masked with the umask. */
++ inode->i_mode = mode;
+
+ handle = ocfs2_start_trans(osb, ocfs2_mknod_credits(osb->sb,
+ S_ISDIR(mode),
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index f1f32af6d9b9..3e4ff3f1d314 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -227,7 +227,7 @@ struct ipv6_pinfo {
+ struct ipv6_ac_socklist *ipv6_ac_list;
+ struct ipv6_fl_socklist __rcu *ipv6_fl_list;
+
+- struct ipv6_txoptions *opt;
++ struct ipv6_txoptions __rcu *opt;
+ struct sk_buff *pktoptions;
+ struct sk_buff *rxpmtu;
+ struct inet6_cork cork;
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index df07e78487d5..1abeb820a630 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1046,6 +1046,7 @@ struct journal_s
+ #define JBD2_ABORT_ON_SYNCDATA_ERR 0x040 /* Abort the journal on file
+ * data write error in ordered
+ * mode */
++#define JBD2_REC_ERR 0x080 /* The errno in the sb has been recorded */
+
+ /*
+ * Function declarations for the journaling transaction and buffer
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index dd2097455a2e..1565324eb620 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -453,26 +453,28 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ u8 lro_cap[0x1];
+ u8 lro_psh_flag[0x1];
+ u8 lro_time_stamp[0x1];
+- u8 reserved_0[0x6];
++ u8 reserved_0[0x3];
++ u8 self_lb_en_modifiable[0x1];
++ u8 reserved_1[0x2];
+ u8 max_lso_cap[0x5];
+- u8 reserved_1[0x4];
++ u8 reserved_2[0x4];
+ u8 rss_ind_tbl_cap[0x4];
+- u8 reserved_2[0x3];
++ u8 reserved_3[0x3];
+ u8 tunnel_lso_const_out_ip_id[0x1];
+- u8 reserved_3[0x2];
++ u8 reserved_4[0x2];
+ u8 tunnel_statless_gre[0x1];
+ u8 tunnel_stateless_vxlan[0x1];
+
+- u8 reserved_4[0x20];
++ u8 reserved_5[0x20];
+
+- u8 reserved_5[0x10];
++ u8 reserved_6[0x10];
+ u8 lro_min_mss_size[0x10];
+
+- u8 reserved_6[0x120];
++ u8 reserved_7[0x120];
+
+ u8 lro_timer_supported_periods[4][0x20];
+
+- u8 reserved_7[0x600];
++ u8 reserved_8[0x600];
+ };
+
+ struct mlx5_ifc_roce_cap_bits {
+@@ -4051,9 +4053,11 @@ struct mlx5_ifc_modify_tis_in_bits {
+ };
+
+ struct mlx5_ifc_modify_tir_bitmask_bits {
+- u8 reserved[0x20];
++ u8 reserved_0[0x20];
+
+- u8 reserved1[0x1f];
++ u8 reserved_1[0x1b];
++ u8 self_lb_en[0x1];
++ u8 reserved_2[0x3];
+ u8 lro[0x1];
+ };
+
+diff --git a/include/net/af_unix.h b/include/net/af_unix.h
+index b36d837c701e..2a91a0561a47 100644
+--- a/include/net/af_unix.h
++++ b/include/net/af_unix.h
+@@ -62,6 +62,7 @@ struct unix_sock {
+ #define UNIX_GC_CANDIDATE 0
+ #define UNIX_GC_MAYBE_CYCLE 1
+ struct socket_wq peer_wq;
++ wait_queue_t peer_wake;
+ };
+
+ static inline struct unix_sock *unix_sk(const struct sock *sk)
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index aaf9700fc9e5..fb961a576abe 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -167,7 +167,8 @@ static inline void rt6_update_expires(struct rt6_info *rt0, int timeout)
+
+ static inline u32 rt6_get_cookie(const struct rt6_info *rt)
+ {
+- if (rt->rt6i_flags & RTF_PCPU || unlikely(rt->dst.flags & DST_NOCACHE))
++ if (rt->rt6i_flags & RTF_PCPU ||
++ (unlikely(rt->dst.flags & DST_NOCACHE) && rt->dst.from))
+ rt = (struct rt6_info *)(rt->dst.from);
+
+ return rt->rt6i_node ? rt->rt6i_node->fn_sernum : 0;
+diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
+index fa915fa0f703..d49a8f8fae45 100644
+--- a/include/net/ip6_tunnel.h
++++ b/include/net/ip6_tunnel.h
+@@ -90,11 +90,12 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
+ err = ip6_local_out_sk(sk, skb);
+
+ if (net_xmit_eval(err) == 0) {
+- struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
++ struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);
+ u64_stats_update_begin(&tstats->syncp);
+ tstats->tx_bytes += pkt_len;
+ tstats->tx_packets++;
+ u64_stats_update_end(&tstats->syncp);
++ put_cpu_ptr(tstats);
+ } else {
+ stats->tx_errors++;
+ stats->tx_aborted_errors++;
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index f6dafec9102c..62a750a6a8f8 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -287,12 +287,13 @@ static inline void iptunnel_xmit_stats(int err,
+ struct pcpu_sw_netstats __percpu *stats)
+ {
+ if (err > 0) {
+- struct pcpu_sw_netstats *tstats = this_cpu_ptr(stats);
++ struct pcpu_sw_netstats *tstats = get_cpu_ptr(stats);
+
+ u64_stats_update_begin(&tstats->syncp);
+ tstats->tx_bytes += err;
+ tstats->tx_packets++;
+ u64_stats_update_end(&tstats->syncp);
++ put_cpu_ptr(tstats);
+ } else if (err < 0) {
+ err_stats->tx_errors++;
+ err_stats->tx_aborted_errors++;
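Both tunnel stats hunks swap this_cpu_ptr() for the get_cpu_ptr()/put_cpu_ptr() pair, which disables preemption around the per-CPU update so the task cannot migrate between picking its CPU's counters and bumping them. Userspace has no preempt_disable(), but a hedged demo with sched_getcpu() shows the instability being closed:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	for (int i = 0; i < 1000000; i++) {
		int a = sched_getcpu();
		int b = sched_getcpu();	/* may run on a different CPU */

		if (a != b) {
			printf("migrated between calls: %d -> %d\n", a, b);
			return 0;
		}
	}
	printf("no migration observed in this run\n");
	return 0;
}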
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index 711cca428cc8..b14e1581c477 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -205,6 +205,7 @@ extern rwlock_t ip6_ra_lock;
+ */
+
+ struct ipv6_txoptions {
++ atomic_t refcnt;
+ /* Length of this structure */
+ int tot_len;
+
+@@ -217,7 +218,7 @@ struct ipv6_txoptions {
+ struct ipv6_opt_hdr *dst0opt;
+ struct ipv6_rt_hdr *srcrt; /* Routing Header */
+ struct ipv6_opt_hdr *dst1opt;
+-
++ struct rcu_head rcu;
+ /* Option buffer, as read by IPV6_PKTOPTIONS, starts here. */
+ };
+
+@@ -252,6 +253,24 @@ struct ipv6_fl_socklist {
+ struct rcu_head rcu;
+ };
+
++static inline struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np)
++{
++ struct ipv6_txoptions *opt;
++
++ rcu_read_lock();
++ opt = rcu_dereference(np->opt);
++ if (opt && !atomic_inc_not_zero(&opt->refcnt))
++ opt = NULL;
++ rcu_read_unlock();
++ return opt;
++}
++
++static inline void txopt_put(struct ipv6_txoptions *opt)
++{
++ if (opt && atomic_dec_and_test(&opt->refcnt))
++ kfree_rcu(opt, rcu);
++}
++
+ struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk, __be32 label);
+ struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space,
+ struct ip6_flowlabel *fl,
+@@ -490,6 +509,7 @@ struct ip6_create_arg {
+ u32 user;
+ const struct in6_addr *src;
+ const struct in6_addr *dst;
++ int iif;
+ u8 ecn;
+ };
+
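txopt_get()/txopt_put() above implement the classic RCU-plus-refcount pattern: a reader may take a reference only while the count is still nonzero, and the final put frees the object (via kfree_rcu() in the kernel). A hedged standalone sketch using C11 atomics, with the RCU grace period elided:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct txoptions {
	atomic_int refcnt;
};

static struct txoptions *txopt_get(struct txoptions *opt)
{
	int old;

	if (!opt)
		return NULL;
	old = atomic_load(&opt->refcnt);
	/* equivalent of the kernel's atomic_inc_not_zero() */
	while (old != 0 &&
	       !atomic_compare_exchange_weak(&opt->refcnt, &old, old + 1))
		;
	return old ? opt : NULL;
}

static void txopt_put(struct txoptions *opt)
{
	if (opt && atomic_fetch_sub(&opt->refcnt, 1) == 1)
		free(opt);	/* the kernel defers this through kfree_rcu() */
}

int main(void)
{
	struct txoptions *opt = malloc(sizeof(*opt));

	atomic_init(&opt->refcnt, 1);	/* owner's reference */
	txopt_get(opt);			/* a sender takes a second one */
	txopt_put(opt);			/* sender done */
	txopt_put(opt);			/* owner drops the last reference */
	return 0;
}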
+diff --git a/include/net/ndisc.h b/include/net/ndisc.h
+index aba5695fadb0..b3a7751251b4 100644
+--- a/include/net/ndisc.h
++++ b/include/net/ndisc.h
+@@ -182,8 +182,7 @@ int ndisc_rcv(struct sk_buff *skb);
+
+ void ndisc_send_ns(struct net_device *dev, struct neighbour *neigh,
+ const struct in6_addr *solicit,
+- const struct in6_addr *daddr, const struct in6_addr *saddr,
+- struct sk_buff *oskb);
++ const struct in6_addr *daddr, const struct in6_addr *saddr);
+
+ void ndisc_send_rs(struct net_device *dev,
+ const struct in6_addr *saddr, const struct in6_addr *daddr);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 444faa89a55f..f1ad8f8fd4f1 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -61,6 +61,9 @@ struct Qdisc {
+ */
+ #define TCQ_F_WARN_NONWC (1 << 16)
+ #define TCQ_F_CPUSTATS 0x20 /* run using percpu statistics */
++#define TCQ_F_NOPARENT 0x40 /* root of its hierarchy:
++ * qdisc_tree_decrease_qlen() should stop.
++ */
+ u32 limit;
+ const struct Qdisc_ops *ops;
+ struct qdisc_size_table __rcu *stab;
+diff --git a/include/net/switchdev.h b/include/net/switchdev.h
+index 319baab3b48e..731c40e34bf2 100644
+--- a/include/net/switchdev.h
++++ b/include/net/switchdev.h
+@@ -272,7 +272,7 @@ static inline int switchdev_port_fdb_dump(struct sk_buff *skb,
+ struct net_device *filter_dev,
+ int idx)
+ {
+- return -EOPNOTSUPP;
++ return idx;
+ }
+
+ static inline void switchdev_port_fwd_mark_set(struct net_device *dev,
+diff --git a/kernel/.gitignore b/kernel/.gitignore
+index 790d83c7d160..b3097bde4e9c 100644
+--- a/kernel/.gitignore
++++ b/kernel/.gitignore
+@@ -5,4 +5,3 @@ config_data.h
+ config_data.gz
+ timeconst.h
+ hz.bc
+-x509_certificate_list
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 29ace107f236..7a0decf47110 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -104,7 +104,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
+ /* all elements already exist */
+ return -EEXIST;
+
+- memcpy(array->value + array->elem_size * index, value, array->elem_size);
++ memcpy(array->value + array->elem_size * index, value, map->value_size);
+ return 0;
+ }
+
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 2b515ba7e94f..c169bba44e05 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -2215,7 +2215,7 @@ static int pneigh_fill_info(struct sk_buff *skb, struct pneigh_entry *pn,
+ ndm->ndm_pad2 = 0;
+ ndm->ndm_flags = pn->flags | NTF_PROXY;
+ ndm->ndm_type = RTN_UNICAST;
+- ndm->ndm_ifindex = pn->dev->ifindex;
++ ndm->ndm_ifindex = pn->dev ? pn->dev->ifindex : 0;
+ ndm->ndm_state = NUD_NONE;
+
+ if (nla_put(skb, NDA_DST, tbl->key_len, pn->key))
+@@ -2290,7 +2290,7 @@ static int pneigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
+ if (h > s_h)
+ s_idx = 0;
+ for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) {
+- if (dev_net(n->dev) != net)
++ if (pneigh_net(n) != net)
+ continue;
+ if (idx < s_idx)
+ goto next;
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 3b6899b7d810..8a1741b14302 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -305,6 +305,8 @@ void scm_detach_fds(struct msghdr *msg, struct scm_cookie *scm)
+ err = put_user(cmlen, &cm->cmsg_len);
+ if (!err) {
+ cmlen = CMSG_SPACE(i*sizeof(int));
++ if (msg->msg_controllen < cmlen)
++ cmlen = msg->msg_controllen;
+ msg->msg_control += cmlen;
+ msg->msg_controllen -= cmlen;
+ }
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 5165571f397a..a0490508d213 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -202,7 +202,9 @@ static int dccp_v6_send_response(struct sock *sk, struct request_sock *req)
+ security_req_classify_flow(req, flowi6_to_flowi(&fl6));
+
+
+- final_p = fl6_update_dst(&fl6, np->opt, &final);
++ rcu_read_lock();
++ final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final);
++ rcu_read_unlock();
+
+ dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+@@ -219,7 +221,10 @@ static int dccp_v6_send_response(struct sock *sk, struct request_sock *req)
+ &ireq->ir_v6_loc_addr,
+ &ireq->ir_v6_rmt_addr);
+ fl6.daddr = ireq->ir_v6_rmt_addr;
+- err = ip6_xmit(sk, skb, &fl6, np->opt, np->tclass);
++ rcu_read_lock();
++ err = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt),
++ np->tclass);
++ rcu_read_unlock();
+ err = net_xmit_eval(err);
+ }
+
+@@ -415,6 +420,7 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk,
+ {
+ struct inet_request_sock *ireq = inet_rsk(req);
+ struct ipv6_pinfo *newnp, *np = inet6_sk(sk);
++ struct ipv6_txoptions *opt;
+ struct inet_sock *newinet;
+ struct dccp6_sock *newdp6;
+ struct sock *newsk;
+@@ -534,13 +540,15 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk,
+ * Yes, keeping a reference count would be much more clever, but we do
+ * one more thing here: reattach optmem to newsk.
+ */
+- if (np->opt != NULL)
+- newnp->opt = ipv6_dup_options(newsk, np->opt);
+-
++ opt = rcu_dereference(np->opt);
++ if (opt) {
++ opt = ipv6_dup_options(newsk, opt);
++ RCU_INIT_POINTER(newnp->opt, opt);
++ }
+ inet_csk(newsk)->icsk_ext_hdr_len = 0;
+- if (newnp->opt != NULL)
+- inet_csk(newsk)->icsk_ext_hdr_len = (newnp->opt->opt_nflen +
+- newnp->opt->opt_flen);
++ if (opt)
++ inet_csk(newsk)->icsk_ext_hdr_len = opt->opt_nflen +
++ opt->opt_flen;
+
+ dccp_sync_mss(newsk, dst_mtu(dst));
+
+@@ -793,6 +801,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ struct ipv6_pinfo *np = inet6_sk(sk);
+ struct dccp_sock *dp = dccp_sk(sk);
+ struct in6_addr *saddr = NULL, *final_p, final;
++ struct ipv6_txoptions *opt;
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+ int addr_type;
+@@ -892,7 +901,8 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ fl6.fl6_sport = inet->inet_sport;
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+- final_p = fl6_update_dst(&fl6, np->opt, &final);
++ opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk));
++ final_p = fl6_update_dst(&fl6, opt, &final);
+
+ dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+@@ -912,9 +922,8 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ __ip6_dst_store(sk, dst, NULL, NULL);
+
+ icsk->icsk_ext_hdr_len = 0;
+- if (np->opt != NULL)
+- icsk->icsk_ext_hdr_len = (np->opt->opt_flen +
+- np->opt->opt_nflen);
++ if (opt)
++ icsk->icsk_ext_hdr_len = opt->opt_flen + opt->opt_nflen;
+
+ inet->inet_dport = usin->sin6_port;
+
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 8e8203d5c520..ef7e2c4342cb 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -134,7 +134,7 @@ static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
+ struct mfc_cache *c, struct rtmsg *rtm);
+ static void mroute_netlink_event(struct mr_table *mrt, struct mfc_cache *mfc,
+ int cmd);
+-static void mroute_clean_tables(struct mr_table *mrt);
++static void mroute_clean_tables(struct mr_table *mrt, bool all);
+ static void ipmr_expire_process(unsigned long arg);
+
+ #ifdef CONFIG_IP_MROUTE_MULTIPLE_TABLES
+@@ -350,7 +350,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
+ static void ipmr_free_table(struct mr_table *mrt)
+ {
+ del_timer_sync(&mrt->ipmr_expire_timer);
+- mroute_clean_tables(mrt);
++ mroute_clean_tables(mrt, true);
+ kfree(mrt);
+ }
+
+@@ -1208,7 +1208,7 @@ static int ipmr_mfc_add(struct net *net, struct mr_table *mrt,
+ * Close the multicast socket, and clear the vif tables etc
+ */
+
+-static void mroute_clean_tables(struct mr_table *mrt)
++static void mroute_clean_tables(struct mr_table *mrt, bool all)
+ {
+ int i;
+ LIST_HEAD(list);
+@@ -1217,8 +1217,9 @@ static void mroute_clean_tables(struct mr_table *mrt)
+ /* Shut down all active vif entries */
+
+ for (i = 0; i < mrt->maxvif; i++) {
+- if (!(mrt->vif_table[i].flags & VIFF_STATIC))
+- vif_delete(mrt, i, 0, &list);
++ if (!all && (mrt->vif_table[i].flags & VIFF_STATIC))
++ continue;
++ vif_delete(mrt, i, 0, &list);
+ }
+ unregister_netdevice_many(&list);
+
+@@ -1226,7 +1227,7 @@ static void mroute_clean_tables(struct mr_table *mrt)
+
+ for (i = 0; i < MFC_LINES; i++) {
+ list_for_each_entry_safe(c, next, &mrt->mfc_cache_array[i], list) {
+- if (c->mfc_flags & MFC_STATIC)
++ if (!all && (c->mfc_flags & MFC_STATIC))
+ continue;
+ list_del_rcu(&c->list);
+ mroute_netlink_event(mrt, c, RTM_DELROUTE);
+@@ -1261,7 +1262,7 @@ static void mrtsock_destruct(struct sock *sk)
+ NETCONFA_IFINDEX_ALL,
+ net->ipv4.devconf_all);
+ RCU_INIT_POINTER(mrt->mroute_sk, NULL);
+- mroute_clean_tables(mrt);
++ mroute_clean_tables(mrt, false);
+ }
+ }
+ rtnl_unlock();
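The new bool argument distinguishes the two callers of mroute_clean_tables(): socket teardown (all=false) must preserve VIFF_STATIC/MFC_STATIC entries, while ipmr_free_table() (all=true) has to flush them too or they would leak with the table. A hedged sketch of the flag's effect:

#include <stdbool.h>
#include <stdio.h>

#define MFC_STATIC 1

struct entry { int flags; };

static int clean(struct entry *e, int n, bool all)
{
	int removed = 0;

	for (int i = 0; i < n; i++) {
		if (!all && (e[i].flags & MFC_STATIC))
			continue;	/* keep static entries on socket close */
		removed++;
	}
	return removed;
}

int main(void)
{
	struct entry tbl[] = { {0}, {MFC_STATIC}, {0} };

	printf("socket close: %d removed\n", clean(tbl, 3, false));	/* 2 */
	printf("table free:   %d removed\n", clean(tbl, 3, true));	/* 3 */
	return 0;
}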
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index a8f515bb19c4..0a2b61dbcd4e 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4457,19 +4457,34 @@ static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb, int
+ int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size)
+ {
+ struct sk_buff *skb;
++ int err = -ENOMEM;
++ int data_len = 0;
+ bool fragstolen;
+
+ if (size == 0)
+ return 0;
+
+- skb = alloc_skb(size, sk->sk_allocation);
++ if (size > PAGE_SIZE) {
++ int npages = min_t(size_t, size >> PAGE_SHIFT, MAX_SKB_FRAGS);
++
++ data_len = npages << PAGE_SHIFT;
++ size = data_len + (size & ~PAGE_MASK);
++ }
++ skb = alloc_skb_with_frags(size - data_len, data_len,
++ PAGE_ALLOC_COSTLY_ORDER,
++ &err, sk->sk_allocation);
+ if (!skb)
+ goto err;
+
++ skb_put(skb, size - data_len);
++ skb->data_len = data_len;
++ skb->len = size;
++
+ if (tcp_try_rmem_schedule(sk, skb, skb->truesize))
+ goto err_free;
+
+- if (memcpy_from_msg(skb_put(skb, size), msg, size))
++ err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size);
++ if (err)
+ goto err_free;
+
+ TCP_SKB_CB(skb)->seq = tcp_sk(sk)->rcv_nxt;
+@@ -4485,7 +4500,8 @@ int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size)
+ err_free:
+ kfree_skb(skb);
+ err:
+- return -ENOMEM;
++ return err;
++
+ }
+
+ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+@@ -5643,6 +5659,7 @@ discard:
+ }
+
+ tp->rcv_nxt = TCP_SKB_CB(skb)->seq + 1;
++ tp->copied_seq = tp->rcv_nxt;
+ tp->rcv_wup = TCP_SKB_CB(skb)->seq + 1;
+
+ /* RFC1323: The window in SYN & SYN/ACK segments is
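The tcp_send_rcvq() hunk stops allocating huge linear skbs when queueing data directly into the receive queue (TCP repair): anything beyond a page is split into page fragments via alloc_skb_with_frags(). A hedged sketch of the size split (PAGE_SHIFT and MAX_SKB_FRAGS mirror a common 4K-page config and are assumptions here):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define MAX_SKB_FRAGS	17

int main(void)
{
	unsigned long size = 70000, data_len = 0;

	if (size > PAGE_SIZE) {
		unsigned long npages = size >> PAGE_SHIFT;

		if (npages > MAX_SKB_FRAGS)
			npages = MAX_SKB_FRAGS;
		data_len = npages << PAGE_SHIFT;	/* paged portion */
		size = data_len + (size & ~PAGE_MASK);	/* plus the tail */
	}
	printf("linear %lu bytes + paged %lu bytes\n",
	       size - data_len, data_len);	/* 368 + 69632 */
	return 0;
}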
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 93898e093d4e..a7739c83aa84 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -922,7 +922,8 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
+ }
+
+ md5sig = rcu_dereference_protected(tp->md5sig_info,
+- sock_owned_by_user(sk));
++ sock_owned_by_user(sk) ||
++ lockdep_is_held(&sk->sk_lock.slock));
+ if (!md5sig) {
+ md5sig = kmalloc(sizeof(*md5sig), gfp);
+ if (!md5sig)
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 7149ebc820c7..04f0a052b524 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -176,6 +176,18 @@ static int tcp_write_timeout(struct sock *sk)
+ syn_set = true;
+ } else {
+ if (retransmits_timed_out(sk, sysctl_tcp_retries1, 0, 0)) {
++ /* Some middle-boxes may black-hole Fast Open _after_
++ * the handshake. Therefore we conservatively disable
++ * Fast Open on this path on recurring timeouts with
++ * few or zero bytes acked after Fast Open.
++ */
++ if (tp->syn_data_acked &&
++ tp->bytes_acked <= tp->rx_opt.mss_clamp) {
++ tcp_fastopen_cache_set(sk, 0, NULL, true, 0);
++ if (icsk->icsk_retransmits == sysctl_tcp_retries1)
++ NET_INC_STATS_BH(sock_net(sk),
++ LINUX_MIB_TCPFASTOPENACTIVEFAIL);
++ }
+ /* Black hole detection */
+ tcp_mtu_probing(icsk, sk);
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index dd00828863a0..3939dd290c44 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3628,7 +3628,7 @@ static void addrconf_dad_work(struct work_struct *w)
+
+ /* send a neighbour solicitation for our addr */
+ addrconf_addr_solict_mult(&ifp->addr, &mcaddr);
+- ndisc_send_ns(ifp->idev->dev, NULL, &ifp->addr, &mcaddr, &in6addr_any, NULL);
++ ndisc_send_ns(ifp->idev->dev, NULL, &ifp->addr, &mcaddr, &in6addr_any);
+ out:
+ in6_ifa_put(ifp);
+ rtnl_unlock();
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 44bb66bde0e2..38d66ddfb937 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -428,9 +428,11 @@ void inet6_destroy_sock(struct sock *sk)
+
+ /* Free tx options */
+
+- opt = xchg(&np->opt, NULL);
+- if (opt)
+- sock_kfree_s(sk, opt, opt->tot_len);
++ opt = xchg((__force struct ipv6_txoptions **)&np->opt, NULL);
++ if (opt) {
++ atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
++ txopt_put(opt);
++ }
+ }
+ EXPORT_SYMBOL_GPL(inet6_destroy_sock);
+
+@@ -659,7 +661,10 @@ int inet6_sk_rebuild_header(struct sock *sk)
+ fl6.fl6_sport = inet->inet_sport;
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+- final_p = fl6_update_dst(&fl6, np->opt, &final);
++ rcu_read_lock();
++ final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt),
++ &final);
++ rcu_read_unlock();
+
+ dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 9aadd57808a5..a42a673aa547 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -167,8 +167,10 @@ ipv4_connected:
+
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+- opt = flowlabel ? flowlabel->opt : np->opt;
++ rcu_read_lock();
++ opt = flowlabel ? flowlabel->opt : rcu_dereference(np->opt);
+ final_p = fl6_update_dst(&fl6, opt, &final);
++ rcu_read_unlock();
+
+ dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
+ err = 0;
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index ce203b0402be..ea7c4d64a00a 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -727,6 +727,7 @@ ipv6_dup_options(struct sock *sk, struct ipv6_txoptions *opt)
+ *((char **)&opt2->dst1opt) += dif;
+ if (opt2->srcrt)
+ *((char **)&opt2->srcrt) += dif;
++ atomic_set(&opt2->refcnt, 1);
+ }
+ return opt2;
+ }
+@@ -790,7 +791,7 @@ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
+ return ERR_PTR(-ENOBUFS);
+
+ memset(opt2, 0, tot_len);
+-
++ atomic_set(&opt2->refcnt, 1);
+ opt2->tot_len = tot_len;
+ p = (char *)(opt2 + 1);
+
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index 6927f3fb5597..9beed302eb36 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -77,7 +77,9 @@ struct dst_entry *inet6_csk_route_req(struct sock *sk,
+ memset(fl6, 0, sizeof(*fl6));
+ fl6->flowi6_proto = IPPROTO_TCP;
+ fl6->daddr = ireq->ir_v6_rmt_addr;
+- final_p = fl6_update_dst(fl6, np->opt, &final);
++ rcu_read_lock();
++ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
++ rcu_read_unlock();
+ fl6->saddr = ireq->ir_v6_loc_addr;
+ fl6->flowi6_oif = ireq->ir_iif;
+ fl6->flowi6_mark = ireq->ir_mark;
+@@ -207,7 +209,9 @@ static struct dst_entry *inet6_csk_route_socket(struct sock *sk,
+ fl6->fl6_dport = inet->inet_dport;
+ security_sk_classify_flow(sk, flowi6_to_flowi(fl6));
+
+- final_p = fl6_update_dst(fl6, np->opt, &final);
++ rcu_read_lock();
++ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
++ rcu_read_unlock();
+
+ dst = __inet6_csk_dst_check(sk, np->dst_cookie);
+ if (!dst) {
+@@ -240,7 +244,8 @@ int inet6_csk_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl_unused
+ /* Restore final destination back after routing done */
+ fl6.daddr = sk->sk_v6_daddr;
+
+- res = ip6_xmit(sk, skb, &fl6, np->opt, np->tclass);
++ res = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt),
++ np->tclass);
+ rcu_read_unlock();
+ return res;
+ }
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index eabffbb89795..137fca42aaa6 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -177,7 +177,7 @@ void ip6_tnl_dst_reset(struct ip6_tnl *t)
+ int i;
+
+ for_each_possible_cpu(i)
+- ip6_tnl_per_cpu_dst_set(raw_cpu_ptr(t->dst_cache), NULL);
++ ip6_tnl_per_cpu_dst_set(per_cpu_ptr(t->dst_cache, i), NULL);
+ }
+ EXPORT_SYMBOL_GPL(ip6_tnl_dst_reset);
+
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 0e004cc42a22..35eee72ab4af 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -118,7 +118,7 @@ static void mr6_netlink_event(struct mr6_table *mrt, struct mfc6_cache *mfc,
+ int cmd);
+ static int ip6mr_rtm_dumproute(struct sk_buff *skb,
+ struct netlink_callback *cb);
+-static void mroute_clean_tables(struct mr6_table *mrt);
++static void mroute_clean_tables(struct mr6_table *mrt, bool all);
+ static void ipmr_expire_process(unsigned long arg);
+
+ #ifdef CONFIG_IPV6_MROUTE_MULTIPLE_TABLES
+@@ -334,7 +334,7 @@ static struct mr6_table *ip6mr_new_table(struct net *net, u32 id)
+ static void ip6mr_free_table(struct mr6_table *mrt)
+ {
+ del_timer_sync(&mrt->ipmr_expire_timer);
+- mroute_clean_tables(mrt);
++ mroute_clean_tables(mrt, true);
+ kfree(mrt);
+ }
+
+@@ -1542,7 +1542,7 @@ static int ip6mr_mfc_add(struct net *net, struct mr6_table *mrt,
+ * Close the multicast socket, and clear the vif tables etc
+ */
+
+-static void mroute_clean_tables(struct mr6_table *mrt)
++static void mroute_clean_tables(struct mr6_table *mrt, bool all)
+ {
+ int i;
+ LIST_HEAD(list);
+@@ -1552,8 +1552,9 @@ static void mroute_clean_tables(struct mr6_table *mrt)
+ * Shut down all active vif entries
+ */
+ for (i = 0; i < mrt->maxvif; i++) {
+- if (!(mrt->vif6_table[i].flags & VIFF_STATIC))
+- mif6_delete(mrt, i, &list);
++ if (!all && (mrt->vif6_table[i].flags & VIFF_STATIC))
++ continue;
++ mif6_delete(mrt, i, &list);
+ }
+ unregister_netdevice_many(&list);
+
+@@ -1562,7 +1563,7 @@ static void mroute_clean_tables(struct mr6_table *mrt)
+ */
+ for (i = 0; i < MFC6_LINES; i++) {
+ list_for_each_entry_safe(c, next, &mrt->mfc6_cache_array[i], list) {
+- if (c->mfc_flags & MFC_STATIC)
++ if (!all && (c->mfc_flags & MFC_STATIC))
+ continue;
+ write_lock_bh(&mrt_lock);
+ list_del(&c->list);
+@@ -1625,7 +1626,7 @@ int ip6mr_sk_done(struct sock *sk)
+ net->ipv6.devconf_all);
+ write_unlock_bh(&mrt_lock);
+
+- mroute_clean_tables(mrt);
++ mroute_clean_tables(mrt, false);
+ err = 0;
+ break;
+ }
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 63e6956917c9..4449ad1f8114 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -111,7 +111,8 @@ struct ipv6_txoptions *ipv6_update_options(struct sock *sk,
+ icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie);
+ }
+ }
+- opt = xchg(&inet6_sk(sk)->opt, opt);
++ opt = xchg((__force struct ipv6_txoptions **)&inet6_sk(sk)->opt,
++ opt);
+ sk_dst_reset(sk);
+
+ return opt;
+@@ -231,9 +232,12 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ sk->sk_socket->ops = &inet_dgram_ops;
+ sk->sk_family = PF_INET;
+ }
+- opt = xchg(&np->opt, NULL);
+- if (opt)
+- sock_kfree_s(sk, opt, opt->tot_len);
++ opt = xchg((__force struct ipv6_txoptions **)&np->opt,
++ NULL);
++ if (opt) {
++ atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
++ txopt_put(opt);
++ }
+ pktopt = xchg(&np->pktoptions, NULL);
+ kfree_skb(pktopt);
+
+@@ -403,7 +407,8 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW))
+ break;
+
+- opt = ipv6_renew_options(sk, np->opt, optname,
++ opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk));
++ opt = ipv6_renew_options(sk, opt, optname,
+ (struct ipv6_opt_hdr __user *)optval,
+ optlen);
+ if (IS_ERR(opt)) {
+@@ -432,8 +437,10 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ retv = 0;
+ opt = ipv6_update_options(sk, opt);
+ sticky_done:
+- if (opt)
+- sock_kfree_s(sk, opt, opt->tot_len);
++ if (opt) {
++ atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
++ txopt_put(opt);
++ }
+ break;
+ }
+
+@@ -486,6 +493,7 @@ sticky_done:
+ break;
+
+ memset(opt, 0, sizeof(*opt));
++ atomic_set(&opt->refcnt, 1);
+ opt->tot_len = sizeof(*opt) + optlen;
+ retv = -EFAULT;
+ if (copy_from_user(opt+1, optval, optlen))
+@@ -502,8 +510,10 @@ update:
+ retv = 0;
+ opt = ipv6_update_options(sk, opt);
+ done:
+- if (opt)
+- sock_kfree_s(sk, opt, opt->tot_len);
++ if (opt) {
++ atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
++ txopt_put(opt);
++ }
+ break;
+ }
+ case IPV6_UNICAST_HOPS:
+@@ -1110,10 +1120,11 @@ static int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
+ case IPV6_RTHDR:
+ case IPV6_DSTOPTS:
+ {
++ struct ipv6_txoptions *opt;
+
+ lock_sock(sk);
+- len = ipv6_getsockopt_sticky(sk, np->opt,
+- optname, optval, len);
++ opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk));
++ len = ipv6_getsockopt_sticky(sk, opt, optname, optval, len);
+ release_sock(sk);
+ /* check if ipv6_getsockopt_sticky() returns err code */
+ if (len < 0)
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 083b2927fc67..41e3b5ee8d0b 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1651,7 +1651,6 @@ out:
+ if (!err) {
+ ICMP6MSGOUT_INC_STATS(net, idev, ICMPV6_MLD2_REPORT);
+ ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS);
+- IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUTMCAST, payload_len);
+ } else {
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
+ }
+@@ -2014,7 +2013,6 @@ out:
+ if (!err) {
+ ICMP6MSGOUT_INC_STATS(net, idev, type);
+ ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS);
+- IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUTMCAST, full_len);
+ } else
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
+
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 64a71354b069..9ad46cd7930d 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -553,8 +553,7 @@ static void ndisc_send_unsol_na(struct net_device *dev)
+
+ void ndisc_send_ns(struct net_device *dev, struct neighbour *neigh,
+ const struct in6_addr *solicit,
+- const struct in6_addr *daddr, const struct in6_addr *saddr,
+- struct sk_buff *oskb)
++ const struct in6_addr *daddr, const struct in6_addr *saddr)
+ {
+ struct sk_buff *skb;
+ struct in6_addr addr_buf;
+@@ -590,9 +589,6 @@ void ndisc_send_ns(struct net_device *dev, struct neighbour *neigh,
+ ndisc_fill_addr_option(skb, ND_OPT_SOURCE_LL_ADDR,
+ dev->dev_addr);
+
+- if (!(dev->priv_flags & IFF_XMIT_DST_RELEASE) && oskb)
+- skb_dst_copy(skb, oskb);
+-
+ ndisc_send_skb(skb, daddr, saddr);
+ }
+
+@@ -679,12 +675,12 @@ static void ndisc_solicit(struct neighbour *neigh, struct sk_buff *skb)
+ "%s: trying to ucast probe in NUD_INVALID: %pI6\n",
+ __func__, target);
+ }
+- ndisc_send_ns(dev, neigh, target, target, saddr, skb);
++ ndisc_send_ns(dev, neigh, target, target, saddr);
+ } else if ((probes -= NEIGH_VAR(neigh->parms, APP_PROBES)) < 0) {
+ neigh_app_ns(neigh);
+ } else {
+ addrconf_addr_solict_mult(target, &mcaddr);
+- ndisc_send_ns(dev, NULL, target, &mcaddr, saddr, skb);
++ ndisc_send_ns(dev, NULL, target, &mcaddr, saddr);
+ }
+ }
+
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index c7196ad1d69f..dc50143f50f2 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -190,7 +190,7 @@ static void nf_ct_frag6_expire(unsigned long data)
+ /* Creation primitives. */
+ static inline struct frag_queue *fq_find(struct net *net, __be32 id,
+ u32 user, struct in6_addr *src,
+- struct in6_addr *dst, u8 ecn)
++ struct in6_addr *dst, int iif, u8 ecn)
+ {
+ struct inet_frag_queue *q;
+ struct ip6_create_arg arg;
+@@ -200,6 +200,7 @@ static inline struct frag_queue *fq_find(struct net *net, __be32 id,
+ arg.user = user;
+ arg.src = src;
+ arg.dst = dst;
++ arg.iif = iif;
+ arg.ecn = ecn;
+
+ local_bh_disable();
+@@ -603,7 +604,7 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb, u32 user)
+ fhdr = (struct frag_hdr *)skb_transport_header(clone);
+
+ fq = fq_find(net, fhdr->identification, user, &hdr->saddr, &hdr->daddr,
+- ip6_frag_ecn(hdr));
++ skb->dev ? skb->dev->ifindex : 0, ip6_frag_ecn(hdr));
+ if (fq == NULL) {
+ pr_debug("Can't find and can't create new queue\n");
+ goto ret_orig;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index fdbada1569a3..fe977299551e 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -732,6 +732,7 @@ static int raw6_getfrag(void *from, char *to, int offset, int len, int odd,
+
+ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ {
++ struct ipv6_txoptions *opt_to_free = NULL;
+ struct ipv6_txoptions opt_space;
+ DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name);
+ struct in6_addr *daddr, *final_p, final;
+@@ -838,8 +839,10 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ if (!(opt->opt_nflen|opt->opt_flen))
+ opt = NULL;
+ }
+- if (!opt)
+- opt = np->opt;
++ if (!opt) {
++ opt = txopt_get(np);
++ opt_to_free = opt;
++ }
+ if (flowlabel)
+ opt = fl6_merge_options(&opt_space, flowlabel, opt);
+ opt = ipv6_fixup_options(&opt_space, opt);
+@@ -905,6 +908,7 @@ done:
+ dst_release(dst);
+ out:
+ fl6_sock_release(flowlabel);
++ txopt_put(opt_to_free);
+ return err < 0 ? err : len;
+ do_confirm:
+ dst_confirm(dst);
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index f1159bb76e0a..04013a910ce5 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -108,7 +108,10 @@ bool ip6_frag_match(const struct inet_frag_queue *q, const void *a)
+ return fq->id == arg->id &&
+ fq->user == arg->user &&
+ ipv6_addr_equal(&fq->saddr, arg->src) &&
+- ipv6_addr_equal(&fq->daddr, arg->dst);
++ ipv6_addr_equal(&fq->daddr, arg->dst) &&
++ (arg->iif == fq->iif ||
++ !(ipv6_addr_type(arg->dst) & (IPV6_ADDR_MULTICAST |
++ IPV6_ADDR_LINKLOCAL)));
+ }
+ EXPORT_SYMBOL(ip6_frag_match);
+
+@@ -180,7 +183,7 @@ static void ip6_frag_expire(unsigned long data)
+
+ static struct frag_queue *
+ fq_find(struct net *net, __be32 id, const struct in6_addr *src,
+- const struct in6_addr *dst, u8 ecn)
++ const struct in6_addr *dst, int iif, u8 ecn)
+ {
+ struct inet_frag_queue *q;
+ struct ip6_create_arg arg;
+@@ -190,6 +193,7 @@ fq_find(struct net *net, __be32 id, const struct in6_addr *src,
+ arg.user = IP6_DEFRAG_LOCAL_DELIVER;
+ arg.src = src;
+ arg.dst = dst;
++ arg.iif = iif;
+ arg.ecn = ecn;
+
+ hash = inet6_hash_frag(id, src, dst);
+@@ -551,7 +555,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ }
+
+ fq = fq_find(net, fhdr->identification, &hdr->saddr, &hdr->daddr,
+- ip6_frag_ecn(hdr));
++ skb->dev ? skb->dev->ifindex : 0, ip6_frag_ecn(hdr));
+ if (fq) {
+ int ret;
+
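The two fq_find()/ip6_frag_match() hunks above (nf_conntrack_reasm.c and reassembly.c) add the incoming interface to the IPv6 fragment-queue key: link-local and multicast destinations are only unambiguous per link, so fragments arriving on different interfaces must not be merged into one queue. A standalone C sketch of the address test (simplified; the kernel's ipv6_addr_type() covers more classes — the prefixes below are the standard ff00::/8 multicast and fe80::/10 link-local ranges):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Queues for these destinations must also match on the receiving
     * interface, mirroring the new arg->iif comparison above. */
    static int needs_iif_match(const struct in6_addr *dst)
    {
            const unsigned char *b = dst->s6_addr;

            return b[0] == 0xff ||                          /* ff00::/8  multicast  */
                   (b[0] == 0xfe && (b[1] & 0xc0) == 0x80); /* fe80::/10 link-local */
    }

    int main(void)
    {
            struct in6_addr a;

            inet_pton(AF_INET6, "fe80::1", &a);
            printf("fe80::1     -> %d\n", needs_iif_match(&a)); /* 1 */
            inet_pton(AF_INET6, "2001:db8::1", &a);
            printf("2001:db8::1 -> %d\n", needs_iif_match(&a)); /* 0 */
            return 0;
    }

For global destinations the interface comparison is skipped, so pre-existing behaviour is preserved there.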
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 946880ad48ac..fd0e6746d0cf 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -403,6 +403,14 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev,
+ }
+ }
+
++static bool __rt6_check_expired(const struct rt6_info *rt)
++{
++ if (rt->rt6i_flags & RTF_EXPIRES)
++ return time_after(jiffies, rt->dst.expires);
++ else
++ return false;
++}
++
+ static bool rt6_check_expired(const struct rt6_info *rt)
+ {
+ if (rt->rt6i_flags & RTF_EXPIRES) {
+@@ -538,7 +546,7 @@ static void rt6_probe_deferred(struct work_struct *w)
+ container_of(w, struct __rt6_probe_work, work);
+
+ addrconf_addr_solict_mult(&work->target, &mcaddr);
+- ndisc_send_ns(work->dev, NULL, &work->target, &mcaddr, NULL, NULL);
++ ndisc_send_ns(work->dev, NULL, &work->target, &mcaddr, NULL);
+ dev_put(work->dev);
+ kfree(work);
+ }
+@@ -1270,7 +1278,8 @@ static struct dst_entry *rt6_check(struct rt6_info *rt, u32 cookie)
+
+ static struct dst_entry *rt6_dst_from_check(struct rt6_info *rt, u32 cookie)
+ {
+- if (rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK &&
++ if (!__rt6_check_expired(rt) &&
++ rt->dst.obsolete == DST_OBSOLETE_FORCE_CHK &&
+ rt6_check((struct rt6_info *)(rt->dst.from), cookie))
+ return &rt->dst;
+ else
+@@ -1290,7 +1299,8 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie)
+
+ rt6_dst_from_metrics_check(rt);
+
+- if ((rt->rt6i_flags & RTF_PCPU) || unlikely(dst->flags & DST_NOCACHE))
++ if (rt->rt6i_flags & RTF_PCPU ||
++ (unlikely(dst->flags & DST_NOCACHE) && rt->dst.from))
+ return rt6_dst_from_check(rt, cookie);
+ else
+ return rt6_check(rt, cookie);
+@@ -1340,6 +1350,12 @@ static void rt6_do_update_pmtu(struct rt6_info *rt, u32 mtu)
+ rt6_update_expires(rt, net->ipv6.sysctl.ip6_rt_mtu_expires);
+ }
+
++static bool rt6_cache_allowed_for_pmtu(const struct rt6_info *rt)
++{
++ return !(rt->rt6i_flags & RTF_CACHE) &&
++ (rt->rt6i_flags & RTF_PCPU || rt->rt6i_node);
++}
++
+ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
+ const struct ipv6hdr *iph, u32 mtu)
+ {
+@@ -1353,7 +1369,7 @@ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
+ if (mtu >= dst_mtu(dst))
+ return;
+
+- if (rt6->rt6i_flags & RTF_CACHE) {
++ if (!rt6_cache_allowed_for_pmtu(rt6)) {
+ rt6_do_update_pmtu(rt6, mtu);
+ } else {
+ const struct in6_addr *daddr, *saddr;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 0909f4e0d53c..f30bfdcdea54 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -225,7 +225,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ memset(&fl6, 0, sizeof(fl6));
+ fl6.flowi6_proto = IPPROTO_TCP;
+ fl6.daddr = ireq->ir_v6_rmt_addr;
+- final_p = fl6_update_dst(&fl6, np->opt, &final);
++ final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final);
+ fl6.saddr = ireq->ir_v6_loc_addr;
+ fl6.flowi6_oif = sk->sk_bound_dev_if;
+ fl6.flowi6_mark = ireq->ir_mark;
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 97d9314ea361..9e9b77bd2d0a 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -120,6 +120,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ struct ipv6_pinfo *np = inet6_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct in6_addr *saddr = NULL, *final_p, final;
++ struct ipv6_txoptions *opt;
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+ int addr_type;
+@@ -235,7 +236,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ fl6.fl6_dport = usin->sin6_port;
+ fl6.fl6_sport = inet->inet_sport;
+
+- final_p = fl6_update_dst(&fl6, np->opt, &final);
++ opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk));
++ final_p = fl6_update_dst(&fl6, opt, &final);
+
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+@@ -263,9 +265,9 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ tcp_fetch_timewait_stamp(sk, dst);
+
+ icsk->icsk_ext_hdr_len = 0;
+- if (np->opt)
+- icsk->icsk_ext_hdr_len = (np->opt->opt_flen +
+- np->opt->opt_nflen);
++ if (opt)
++ icsk->icsk_ext_hdr_len = opt->opt_flen +
++ opt->opt_nflen;
+
+ tp->rx_opt.mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr);
+
+@@ -461,7 +463,8 @@ static int tcp_v6_send_synack(struct sock *sk, struct dst_entry *dst,
+ fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
+
+ skb_set_queue_mapping(skb, queue_mapping);
+- err = ip6_xmit(sk, skb, fl6, np->opt, np->tclass);
++ err = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt),
++ np->tclass);
+ err = net_xmit_eval(err);
+ }
+
+@@ -991,6 +994,7 @@ static struct sock *tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
+ struct inet_request_sock *ireq;
+ struct ipv6_pinfo *newnp, *np = inet6_sk(sk);
+ struct tcp6_sock *newtcp6sk;
++ struct ipv6_txoptions *opt;
+ struct inet_sock *newinet;
+ struct tcp_sock *newtp;
+ struct sock *newsk;
+@@ -1126,13 +1130,15 @@ static struct sock *tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
+ but we make one more one thing there: reattach optmem
+ to newsk.
+ */
+- if (np->opt)
+- newnp->opt = ipv6_dup_options(newsk, np->opt);
+-
++ opt = rcu_dereference(np->opt);
++ if (opt) {
++ opt = ipv6_dup_options(newsk, opt);
++ RCU_INIT_POINTER(newnp->opt, opt);
++ }
+ inet_csk(newsk)->icsk_ext_hdr_len = 0;
+- if (newnp->opt)
+- inet_csk(newsk)->icsk_ext_hdr_len = (newnp->opt->opt_nflen +
+- newnp->opt->opt_flen);
++ if (opt)
++ inet_csk(newsk)->icsk_ext_hdr_len = opt->opt_nflen +
++ opt->opt_flen;
+
+ tcp_ca_openreq_child(newsk, dst);
+
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 0aba654f5b91..8379fc2f4b1d 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1107,6 +1107,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name);
+ struct in6_addr *daddr, *final_p, final;
+ struct ipv6_txoptions *opt = NULL;
++ struct ipv6_txoptions *opt_to_free = NULL;
+ struct ip6_flowlabel *flowlabel = NULL;
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+@@ -1260,8 +1261,10 @@ do_udp_sendmsg:
+ opt = NULL;
+ connected = 0;
+ }
+- if (!opt)
+- opt = np->opt;
++ if (!opt) {
++ opt = txopt_get(np);
++ opt_to_free = opt;
++ }
+ if (flowlabel)
+ opt = fl6_merge_options(&opt_space, flowlabel, opt);
+ opt = ipv6_fixup_options(&opt_space, opt);
+@@ -1370,6 +1373,7 @@ release_dst:
+ out:
+ dst_release(dst);
+ fl6_sock_release(flowlabel);
++ txopt_put(opt_to_free);
+ if (!err)
+ return len;
+ /*
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index d1ded3777815..0ce9da948ad7 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -486,6 +486,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ DECLARE_SOCKADDR(struct sockaddr_l2tpip6 *, lsa, msg->msg_name);
+ struct in6_addr *daddr, *final_p, final;
+ struct ipv6_pinfo *np = inet6_sk(sk);
++ struct ipv6_txoptions *opt_to_free = NULL;
+ struct ipv6_txoptions *opt = NULL;
+ struct ip6_flowlabel *flowlabel = NULL;
+ struct dst_entry *dst = NULL;
+@@ -575,8 +576,10 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ opt = NULL;
+ }
+
+- if (opt == NULL)
+- opt = np->opt;
++ if (!opt) {
++ opt = txopt_get(np);
++ opt_to_free = opt;
++ }
+ if (flowlabel)
+ opt = fl6_merge_options(&opt_space, flowlabel, opt);
+ opt = ipv6_fixup_options(&opt_space, opt);
+@@ -631,6 +634,7 @@ done:
+ dst_release(dst);
+ out:
+ fl6_sock_release(flowlabel);
++ txopt_put(opt_to_free);
+
+ return err < 0 ? err : len;
+
+diff --git a/net/openvswitch/dp_notify.c b/net/openvswitch/dp_notify.c
+index a7a80a6b77b0..653d073bae45 100644
+--- a/net/openvswitch/dp_notify.c
++++ b/net/openvswitch/dp_notify.c
+@@ -58,7 +58,7 @@ void ovs_dp_notify_wq(struct work_struct *work)
+ struct hlist_node *n;
+
+ hlist_for_each_entry_safe(vport, n, &dp->ports[i], dp_hash_node) {
+- if (vport->ops->type != OVS_VPORT_TYPE_NETDEV)
++ if (vport->ops->type == OVS_VPORT_TYPE_INTERNAL)
+ continue;
+
+ if (!(vport->dev->priv_flags & IFF_OVS_DATAPATH))
+diff --git a/net/openvswitch/vport-netdev.c b/net/openvswitch/vport-netdev.c
+index f7e8dcce7ada..ac14c488669c 100644
+--- a/net/openvswitch/vport-netdev.c
++++ b/net/openvswitch/vport-netdev.c
+@@ -180,9 +180,13 @@ void ovs_netdev_tunnel_destroy(struct vport *vport)
+ if (vport->dev->priv_flags & IFF_OVS_DATAPATH)
+ ovs_netdev_detach_dev(vport);
+
+- /* Early release so we can unregister the device */
++ /* We can be invoked by both explicit vport deletion and
++ * underlying netdev deregistration; delete the link only
++ * if it's not already shutting down.
++ */
++ if (vport->dev->reg_state == NETREG_REGISTERED)
++ rtnl_delete_link(vport->dev);
+ dev_put(vport->dev);
+- rtnl_delete_link(vport->dev);
+ vport->dev = NULL;
+ rtnl_unlock();
+
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 27b2898f275c..4695a36eeca3 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1741,6 +1741,20 @@ static void fanout_release(struct sock *sk)
+ kfree_rcu(po->rollover, rcu);
+ }
+
++static bool packet_extra_vlan_len_allowed(const struct net_device *dev,
++ struct sk_buff *skb)
++{
++ /* Earlier code assumed this would be a VLAN pkt, double-check
++ * this now that we have the actual packet in hand. We can only
++ * do this check on Ethernet devices.
++ */
++ if (unlikely(dev->type != ARPHRD_ETHER))
++ return false;
++
++ skb_reset_mac_header(skb);
++ return likely(eth_hdr(skb)->h_proto == htons(ETH_P_8021Q));
++}
++
+ static const struct proto_ops packet_ops;
+
+ static const struct proto_ops packet_ops_spkt;
+@@ -1902,18 +1916,10 @@ retry:
+ goto retry;
+ }
+
+- if (len > (dev->mtu + dev->hard_header_len + extra_len)) {
+- /* Earlier code assumed this would be a VLAN pkt,
+- * double-check this now that we have the actual
+- * packet in hand.
+- */
+- struct ethhdr *ehdr;
+- skb_reset_mac_header(skb);
+- ehdr = eth_hdr(skb);
+- if (ehdr->h_proto != htons(ETH_P_8021Q)) {
+- err = -EMSGSIZE;
+- goto out_unlock;
+- }
++ if (len > (dev->mtu + dev->hard_header_len + extra_len) &&
++ !packet_extra_vlan_len_allowed(dev, skb)) {
++ err = -EMSGSIZE;
++ goto out_unlock;
+ }
+
+ skb->protocol = proto;
+@@ -2332,6 +2338,15 @@ static bool ll_header_truncated(const struct net_device *dev, int len)
+ return false;
+ }
+
++static void tpacket_set_protocol(const struct net_device *dev,
++ struct sk_buff *skb)
++{
++ if (dev->type == ARPHRD_ETHER) {
++ skb_reset_mac_header(skb);
++ skb->protocol = eth_hdr(skb)->h_proto;
++ }
++}
++
+ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb,
+ void *frame, struct net_device *dev, int size_max,
+ __be16 proto, unsigned char *addr, int hlen)
+@@ -2368,8 +2383,6 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb,
+ skb_reserve(skb, hlen);
+ skb_reset_network_header(skb);
+
+- if (!packet_use_direct_xmit(po))
+- skb_probe_transport_header(skb, 0);
+ if (unlikely(po->tp_tx_has_off)) {
+ int off_min, off_max, off;
+ off_min = po->tp_hdrlen - sizeof(struct sockaddr_ll);
+@@ -2415,6 +2428,8 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb,
+ dev->hard_header_len);
+ if (unlikely(err))
+ return err;
++ if (!skb->protocol)
++ tpacket_set_protocol(dev, skb);
+
+ data += dev->hard_header_len;
+ to_write -= dev->hard_header_len;
+@@ -2449,6 +2464,8 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb,
+ len = ((to_write > len_max) ? len_max : to_write);
+ }
+
++ skb_probe_transport_header(skb, 0);
++
+ return tp_len;
+ }
+
+@@ -2493,12 +2510,13 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ if (unlikely(!(dev->flags & IFF_UP)))
+ goto out_put;
+
+- reserve = dev->hard_header_len + VLAN_HLEN;
++ if (po->sk.sk_socket->type == SOCK_RAW)
++ reserve = dev->hard_header_len;
+ size_max = po->tx_ring.frame_size
+ - (po->tp_hdrlen - sizeof(struct sockaddr_ll));
+
+- if (size_max > dev->mtu + reserve)
+- size_max = dev->mtu + reserve;
++ if (size_max > dev->mtu + reserve + VLAN_HLEN)
++ size_max = dev->mtu + reserve + VLAN_HLEN;
+
+ do {
+ ph = packet_current_frame(po, &po->tx_ring,
+@@ -2525,18 +2543,10 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ tp_len = tpacket_fill_skb(po, skb, ph, dev, size_max, proto,
+ addr, hlen);
+ if (likely(tp_len >= 0) &&
+- tp_len > dev->mtu + dev->hard_header_len) {
+- struct ethhdr *ehdr;
+- /* Earlier code assumed this would be a VLAN pkt,
+- * double-check this now that we have the actual
+- * packet in hand.
+- */
++ tp_len > dev->mtu + reserve &&
++ !packet_extra_vlan_len_allowed(dev, skb))
++ tp_len = -EMSGSIZE;
+
+- skb_reset_mac_header(skb);
+- ehdr = eth_hdr(skb);
+- if (ehdr->h_proto != htons(ETH_P_8021Q))
+- tp_len = -EMSGSIZE;
+- }
+ if (unlikely(tp_len < 0)) {
+ if (po->tp_loss) {
+ __packet_set_status(po, ph,
+@@ -2757,18 +2767,10 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+
+ sock_tx_timestamp(sk, &skb_shinfo(skb)->tx_flags);
+
+- if (!gso_type && (len > dev->mtu + reserve + extra_len)) {
+- /* Earlier code assumed this would be a VLAN pkt,
+- * double-check this now that we have the actual
+- * packet in hand.
+- */
+- struct ethhdr *ehdr;
+- skb_reset_mac_header(skb);
+- ehdr = eth_hdr(skb);
+- if (ehdr->h_proto != htons(ETH_P_8021Q)) {
+- err = -EMSGSIZE;
+- goto out_free;
+- }
++ if (!gso_type && (len > dev->mtu + reserve + extra_len) &&
++ !packet_extra_vlan_len_allowed(dev, skb)) {
++ err = -EMSGSIZE;
++ goto out_free;
+ }
+
+ skb->protocol = proto;
+@@ -2799,8 +2801,8 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ len += vnet_hdr_len;
+ }
+
+- if (!packet_use_direct_xmit(po))
+- skb_probe_transport_header(skb, reserve);
++ skb_probe_transport_header(skb, reserve);
++
+ if (unlikely(extra_len == 4))
+ skb->no_fcs = 1;
+
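The af_packet hunks above consolidate three copies of the same oversize-frame check into packet_extra_vlan_len_allowed(): a frame longer than MTU + header is tolerated only if it actually carries an 802.1Q tag (the extra VLAN_HLEN bytes), and the tag test is only meaningful on Ethernet devices. A userspace analogue of the core test (sketch only; ETH_P_8021Q written out as 0x8100):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* The EtherType lives at byte offset 12 of an untagged Ethernet header. */
    static int frame_is_8021q(const unsigned char *frame)
    {
            unsigned short proto;

            memcpy(&proto, frame + 12, sizeof(proto)); /* avoids alignment traps */
            return proto == htons(0x8100);             /* ETH_P_8021Q */
    }

    int main(void)
    {
            unsigned char frame[14] = { 0 };

            frame[12] = 0x81;
            frame[13] = 0x00;
            printf("tagged: %d\n", frame_is_8021q(frame)); /* 1 */
            frame[12] = 0x08;                              /* 0x0800 = IPv4 */
            printf("ipv4:   %d\n", frame_is_8021q(frame)); /* 0 */
            return 0;
    }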
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index 49adeef8090c..9b2de5e67d79 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -190,12 +190,6 @@ new_conn:
+ }
+ }
+
+- if (trans == NULL) {
+- kmem_cache_free(rds_conn_slab, conn);
+- conn = ERR_PTR(-ENODEV);
+- goto out;
+- }
+-
+ conn->c_trans = trans;
+
+ ret = trans->conn_alloc(conn, gfp);
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 4df61a515b83..859de6f32521 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1009,11 +1009,13 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ release_sock(sk);
+ }
+
+- /* racing with another thread binding seems ok here */
++ lock_sock(sk);
+ if (daddr == 0 || rs->rs_bound_addr == 0) {
++ release_sock(sk);
+ ret = -ENOTCONN; /* XXX not a great errno */
+ goto out;
+ }
++ release_sock(sk);
+
+ if (payload_len > rds_sk_sndbuf(rs)) {
+ ret = -EMSGSIZE;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index f43c8f33f09e..7ec667dd4ce1 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -253,7 +253,8 @@ int qdisc_set_default(const char *name)
+ }
+
+ /* We know handle. Find qdisc among all qdisc's attached to device
+- (root qdisc, all its children, children of children etc.)
++ * (root qdisc, all its children, children of children etc.)
++ * Note: caller either uses rtnl or rcu_read_lock()
+ */
+
+ static struct Qdisc *qdisc_match_from_root(struct Qdisc *root, u32 handle)
+@@ -264,7 +265,7 @@ static struct Qdisc *qdisc_match_from_root(struct Qdisc *root, u32 handle)
+ root->handle == handle)
+ return root;
+
+- list_for_each_entry(q, &root->list, list) {
++ list_for_each_entry_rcu(q, &root->list, list) {
+ if (q->handle == handle)
+ return q;
+ }
+@@ -277,15 +278,18 @@ void qdisc_list_add(struct Qdisc *q)
+ struct Qdisc *root = qdisc_dev(q)->qdisc;
+
+ WARN_ON_ONCE(root == &noop_qdisc);
+- list_add_tail(&q->list, &root->list);
++ ASSERT_RTNL();
++ list_add_tail_rcu(&q->list, &root->list);
+ }
+ }
+ EXPORT_SYMBOL(qdisc_list_add);
+
+ void qdisc_list_del(struct Qdisc *q)
+ {
+- if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS))
+- list_del(&q->list);
++ if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) {
++ ASSERT_RTNL();
++ list_del_rcu(&q->list);
++ }
+ }
+ EXPORT_SYMBOL(qdisc_list_del);
+
+@@ -750,14 +754,18 @@ void qdisc_tree_decrease_qlen(struct Qdisc *sch, unsigned int n)
+ if (n == 0)
+ return;
+ drops = max_t(int, n, 0);
++ rcu_read_lock();
+ while ((parentid = sch->parent)) {
+ if (TC_H_MAJ(parentid) == TC_H_MAJ(TC_H_INGRESS))
+- return;
++ break;
+
++ if (sch->flags & TCQ_F_NOPARENT)
++ break;
++ /* TODO: perform the search on a per txq basis */
+ sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));
+ if (sch == NULL) {
+- WARN_ON(parentid != TC_H_ROOT);
+- return;
++ WARN_ON_ONCE(parentid != TC_H_ROOT);
++ break;
+ }
+ cops = sch->ops->cl_ops;
+ if (cops->qlen_notify) {
+@@ -768,6 +776,7 @@ void qdisc_tree_decrease_qlen(struct Qdisc *sch, unsigned int n)
+ sch->q.qlen -= n;
+ __qdisc_qstats_drop(sch, drops);
+ }
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(qdisc_tree_decrease_qlen);
+
+@@ -941,7 +950,7 @@ qdisc_create(struct net_device *dev, struct netdev_queue *dev_queue,
+ }
+ lockdep_set_class(qdisc_lock(sch), &qdisc_tx_lock);
+ if (!netif_is_multiqueue(dev))
+- sch->flags |= TCQ_F_ONETXQUEUE;
++ sch->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+ }
+
+ sch->handle = handle;
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index cb5d4ad32946..e82a1ad80aa5 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -737,7 +737,7 @@ static void attach_one_default_qdisc(struct net_device *dev,
+ return;
+ }
+ if (!netif_is_multiqueue(dev))
+- qdisc->flags |= TCQ_F_ONETXQUEUE;
++ qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+ dev_queue->qdisc_sleeping = qdisc;
+ }
+
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index f3cbaecd283a..3e82f047caaf 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -63,7 +63,7 @@ static int mq_init(struct Qdisc *sch, struct nlattr *opt)
+ if (qdisc == NULL)
+ goto err;
+ priv->qdiscs[ntx] = qdisc;
+- qdisc->flags |= TCQ_F_ONETXQUEUE;
++ qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+ }
+
+ sch->flags |= TCQ_F_MQROOT;
+@@ -156,7 +156,7 @@ static int mq_graft(struct Qdisc *sch, unsigned long cl, struct Qdisc *new,
+
+ *old = dev_graft_qdisc(dev_queue, new);
+ if (new)
+- new->flags |= TCQ_F_ONETXQUEUE;
++ new->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+ if (dev->flags & IFF_UP)
+ dev_activate(dev);
+ return 0;
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 3811a745452c..ad70ecf57ce7 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -132,7 +132,7 @@ static int mqprio_init(struct Qdisc *sch, struct nlattr *opt)
+ goto err;
+ }
+ priv->qdiscs[i] = qdisc;
+- qdisc->flags |= TCQ_F_ONETXQUEUE;
++ qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+ }
+
+ /* If the mqprio options indicate that hardware should own
+@@ -209,7 +209,7 @@ static int mqprio_graft(struct Qdisc *sch, unsigned long cl, struct Qdisc *new,
+ *old = dev_graft_qdisc(dev_queue, new);
+
+ if (new)
+- new->flags |= TCQ_F_ONETXQUEUE;
++ new->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
+
+ if (dev->flags & IFF_UP)
+ dev_activate(dev);
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 4f15b7d730e1..1543e39f47c3 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -809,8 +809,8 @@ int sctp_auth_ep_set_hmacs(struct sctp_endpoint *ep,
+ if (!has_sha1)
+ return -EINVAL;
+
+- memcpy(ep->auth_hmacs_list->hmac_ids, &hmacs->shmac_idents[0],
+- hmacs->shmac_num_idents * sizeof(__u16));
++ for (i = 0; i < hmacs->shmac_num_idents; i++)
++ ep->auth_hmacs_list->hmac_ids[i] = htons(hmacs->shmac_idents[i]);
+ ep->auth_hmacs_list->param_hdr.length = htons(sizeof(sctp_paramhdr_t) +
+ hmacs->shmac_num_idents * sizeof(__u16));
+ return 0;
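The sctp_auth_ep_set_hmacs() hunk above replaces a raw memcpy of the HMAC identifier list with a per-entry htons(), so the identifiers land in the AUTH parameter in network byte order on every host. A minimal userspace illustration (not part of the patch) of what the raw copy got wrong on little-endian machines:

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void)
    {
            unsigned short idents[2] = { 1, 3 }; /* e.g. SHA-1 = 1, SHA-256 = 3 */
            unsigned short raw[2], wire[2];
            int i;

            /* old behaviour: bytes copied in host order */
            memcpy(raw, idents, sizeof(idents));

            /* fixed behaviour: each ident converted to network order */
            for (i = 0; i < 2; i++)
                    wire[i] = htons(idents[i]);

            for (i = 0; i < 2; i++)
                    printf("ident %hu: raw=0x%04hx wire=0x%04hx\n",
                           idents[i], raw[i], wire[i]);
            return 0;
    }

On big-endian hosts htons() is the identity, so raw and wire coincide and the bug was invisible there; on little-endian hosts the peer saw byte-swapped identifiers.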
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 17bef01b9aa3..3ec88be0faec 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -7375,6 +7375,13 @@ struct proto sctp_prot = {
+
+ #if IS_ENABLED(CONFIG_IPV6)
+
++#include <net/transp_v6.h>
++static void sctp_v6_destroy_sock(struct sock *sk)
++{
++ sctp_destroy_sock(sk);
++ inet6_destroy_sock(sk);
++}
++
+ struct proto sctpv6_prot = {
+ .name = "SCTPv6",
+ .owner = THIS_MODULE,
+@@ -7384,7 +7391,7 @@ struct proto sctpv6_prot = {
+ .accept = sctp_accept,
+ .ioctl = sctp_ioctl,
+ .init = sctp_init_sock,
+- .destroy = sctp_destroy_sock,
++ .destroy = sctp_v6_destroy_sock,
+ .shutdown = sctp_shutdown,
+ .setsockopt = sctp_setsockopt,
+ .getsockopt = sctp_getsockopt,
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index cd7c5f131e72..86f2e7c44694 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -159,8 +159,11 @@ static int tipc_udp_send_msg(struct net *net, struct sk_buff *skb,
+ struct sk_buff *clone;
+ struct rtable *rt;
+
+- if (skb_headroom(skb) < UDP_MIN_HEADROOM)
+- pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC);
++ if (skb_headroom(skb) < UDP_MIN_HEADROOM) {
++ err = pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC);
++ if (err)
++ goto tx_error;
++ }
+
+ clone = skb_clone(skb, GFP_ATOMIC);
+ skb_set_inner_protocol(clone, htons(ETH_P_TIPC));
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 94f658235fb4..128b0982c96b 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -326,6 +326,118 @@ found:
+ return s;
+ }
+
++/* Support code for asymmetrically connected dgram sockets
++ *
++ * If a datagram socket is connected to a socket not itself connected
++ * to the first socket (eg, /dev/log), clients may only enqueue more
++ * messages if the present receive queue of the server socket is not
++ * "too large". This means there's a second writeability condition
++ * poll and sendmsg need to test. The dgram recv code will do a wake
++ * up on the peer_wait wait queue of a socket upon reception of a
++ * datagram which needs to be propagated to sleeping would-be writers
++ * since these might not have sent anything so far. This can't be
++ * accomplished via poll_wait because the lifetime of the server
++ * socket might be less than that of its clients if these break their
++ * association with it or if the server socket is closed while clients
++ * are still connected to it and there's no way to inform "a polling
++ * implementation" that it should let go of a certain wait queue
++ *
++ * In order to propagate a wake up, a wait_queue_t of the client
++ * socket is enqueued on the peer_wait queue of the server socket
++ * whose wake function does a wake_up on the ordinary client socket
++ * wait queue. This connection is established whenever a write (or
++ * poll for write) hit the flow control condition and broken when the
++ * association to the server socket is dissolved or after a wake up
++ * was relayed.
++ */
++
++static int unix_dgram_peer_wake_relay(wait_queue_t *q, unsigned mode, int flags,
++ void *key)
++{
++ struct unix_sock *u;
++ wait_queue_head_t *u_sleep;
++
++ u = container_of(q, struct unix_sock, peer_wake);
++
++ __remove_wait_queue(&unix_sk(u->peer_wake.private)->peer_wait,
++ q);
++ u->peer_wake.private = NULL;
++
++ /* relaying can only happen while the wq still exists */
++ u_sleep = sk_sleep(&u->sk);
++ if (u_sleep)
++ wake_up_interruptible_poll(u_sleep, key);
++
++ return 0;
++}
++
++static int unix_dgram_peer_wake_connect(struct sock *sk, struct sock *other)
++{
++ struct unix_sock *u, *u_other;
++ int rc;
++
++ u = unix_sk(sk);
++ u_other = unix_sk(other);
++ rc = 0;
++ spin_lock(&u_other->peer_wait.lock);
++
++ if (!u->peer_wake.private) {
++ u->peer_wake.private = other;
++ __add_wait_queue(&u_other->peer_wait, &u->peer_wake);
++
++ rc = 1;
++ }
++
++ spin_unlock(&u_other->peer_wait.lock);
++ return rc;
++}
++
++static void unix_dgram_peer_wake_disconnect(struct sock *sk,
++ struct sock *other)
++{
++ struct unix_sock *u, *u_other;
++
++ u = unix_sk(sk);
++ u_other = unix_sk(other);
++ spin_lock(&u_other->peer_wait.lock);
++
++ if (u->peer_wake.private == other) {
++ __remove_wait_queue(&u_other->peer_wait, &u->peer_wake);
++ u->peer_wake.private = NULL;
++ }
++
++ spin_unlock(&u_other->peer_wait.lock);
++}
++
++static void unix_dgram_peer_wake_disconnect_wakeup(struct sock *sk,
++ struct sock *other)
++{
++ unix_dgram_peer_wake_disconnect(sk, other);
++ wake_up_interruptible_poll(sk_sleep(sk),
++ POLLOUT |
++ POLLWRNORM |
++ POLLWRBAND);
++}
++
++/* preconditions:
++ * - unix_peer(sk) == other
++ * - association is stable
++ */
++static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
++{
++ int connected;
++
++ connected = unix_dgram_peer_wake_connect(sk, other);
++
++ if (unix_recvq_full(other))
++ return 1;
++
++ if (connected)
++ unix_dgram_peer_wake_disconnect(sk, other);
++
++ return 0;
++}
++
+ static inline int unix_writable(struct sock *sk)
+ {
+ return (atomic_read(&sk->sk_wmem_alloc) << 2) <= sk->sk_sndbuf;
+@@ -430,6 +542,8 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ skpair->sk_state_change(skpair);
+ sk_wake_async(skpair, SOCK_WAKE_WAITD, POLL_HUP);
+ }
++
++ unix_dgram_peer_wake_disconnect(sk, skpair);
+ sock_put(skpair); /* It may now die */
+ unix_peer(sk) = NULL;
+ }
+@@ -440,6 +554,7 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ if (state == TCP_LISTEN)
+ unix_release_sock(skb->sk, 1);
+ /* passed fds are erased in the kfree_skb hook */
++ UNIXCB(skb).consumed = skb->len;
+ kfree_skb(skb);
+ }
+
+@@ -664,6 +779,7 @@ static struct sock *unix_create1(struct net *net, struct socket *sock, int kern)
+ INIT_LIST_HEAD(&u->link);
+ mutex_init(&u->readlock); /* single task reading lock */
+ init_waitqueue_head(&u->peer_wait);
++ init_waitqueue_func_entry(&u->peer_wake, unix_dgram_peer_wake_relay);
+ unix_insert_socket(unix_sockets_unbound(sk), sk);
+ out:
+ if (sk == NULL)
+@@ -1031,6 +1147,8 @@ restart:
+ if (unix_peer(sk)) {
+ struct sock *old_peer = unix_peer(sk);
+ unix_peer(sk) = other;
++ unix_dgram_peer_wake_disconnect_wakeup(sk, old_peer);
++
+ unix_state_double_unlock(sk, other);
+
+ if (other != old_peer)
+@@ -1432,6 +1550,14 @@ static int unix_scm_to_skb(struct scm_cookie *scm, struct sk_buff *skb, bool sen
+ return err;
+ }
+
++static bool unix_passcred_enabled(const struct socket *sock,
++ const struct sock *other)
++{
++ return test_bit(SOCK_PASSCRED, &sock->flags) ||
++ !other->sk_socket ||
++ test_bit(SOCK_PASSCRED, &other->sk_socket->flags);
++}
++
+ /*
+ * Some apps rely on write() giving SCM_CREDENTIALS
+ * We include credentials if source or destination socket
+@@ -1442,14 +1568,41 @@ static void maybe_add_creds(struct sk_buff *skb, const struct socket *sock,
+ {
+ if (UNIXCB(skb).pid)
+ return;
+- if (test_bit(SOCK_PASSCRED, &sock->flags) ||
+- !other->sk_socket ||
+- test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) {
++ if (unix_passcred_enabled(sock, other)) {
+ UNIXCB(skb).pid = get_pid(task_tgid(current));
+ current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid);
+ }
+ }
+
++static int maybe_init_creds(struct scm_cookie *scm,
++ struct socket *socket,
++ const struct sock *other)
++{
++ int err;
++ struct msghdr msg = { .msg_controllen = 0 };
++
++ err = scm_send(socket, &msg, scm, false);
++ if (err)
++ return err;
++
++ if (unix_passcred_enabled(socket, other)) {
++ scm->pid = get_pid(task_tgid(current));
++ current_uid_gid(&scm->creds.uid, &scm->creds.gid);
++ }
++ return err;
++}
++
++static bool unix_skb_scm_eq(struct sk_buff *skb,
++ struct scm_cookie *scm)
++{
++ const struct unix_skb_parms *u = &UNIXCB(skb);
++
++ return u->pid == scm->pid &&
++ uid_eq(u->uid, scm->creds.uid) &&
++ gid_eq(u->gid, scm->creds.gid) &&
++ unix_secdata_eq(scm, skb);
++}
++
+ /*
+ * Send AF_UNIX data.
+ */
+@@ -1470,6 +1623,7 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
+ struct scm_cookie scm;
+ int max_level;
+ int data_len = 0;
++ int sk_locked;
+
+ wait_for_unix_gc();
+ err = scm_send(sock, msg, &scm, false);
+@@ -1548,12 +1702,14 @@ restart:
+ goto out_free;
+ }
+
++ sk_locked = 0;
+ unix_state_lock(other);
++restart_locked:
+ err = -EPERM;
+ if (!unix_may_send(sk, other))
+ goto out_unlock;
+
+- if (sock_flag(other, SOCK_DEAD)) {
++ if (unlikely(sock_flag(other, SOCK_DEAD))) {
+ /*
+ * Check with 1003.1g - what should
+ * datagram error
+@@ -1561,10 +1717,14 @@ restart:
+ unix_state_unlock(other);
+ sock_put(other);
+
++ if (!sk_locked)
++ unix_state_lock(sk);
++
+ err = 0;
+- unix_state_lock(sk);
+ if (unix_peer(sk) == other) {
+ unix_peer(sk) = NULL;
++ unix_dgram_peer_wake_disconnect_wakeup(sk, other);
++
+ unix_state_unlock(sk);
+
+ unix_dgram_disconnected(sk, other);
+@@ -1590,21 +1750,38 @@ restart:
+ goto out_unlock;
+ }
+
+- if (unix_peer(other) != sk && unix_recvq_full(other)) {
+- if (!timeo) {
+- err = -EAGAIN;
+- goto out_unlock;
++ if (unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
++ if (timeo) {
++ timeo = unix_wait_for_peer(other, timeo);
++
++ err = sock_intr_errno(timeo);
++ if (signal_pending(current))
++ goto out_free;
++
++ goto restart;
+ }
+
+- timeo = unix_wait_for_peer(other, timeo);
++ if (!sk_locked) {
++ unix_state_unlock(other);
++ unix_state_double_lock(sk, other);
++ }
+
+- err = sock_intr_errno(timeo);
+- if (signal_pending(current))
+- goto out_free;
++ if (unix_peer(sk) != other ||
++ unix_dgram_peer_wake_me(sk, other)) {
++ err = -EAGAIN;
++ sk_locked = 1;
++ goto out_unlock;
++ }
+
+- goto restart;
++ if (!sk_locked) {
++ sk_locked = 1;
++ goto restart_locked;
++ }
+ }
+
++ if (unlikely(sk_locked))
++ unix_state_unlock(sk);
++
+ if (sock_flag(other, SOCK_RCVTSTAMP))
+ __net_timestamp(skb);
+ maybe_add_creds(skb, sock, other);
+@@ -1618,6 +1795,8 @@ restart:
+ return len;
+
+ out_unlock:
++ if (sk_locked)
++ unix_state_unlock(sk);
+ unix_state_unlock(other);
+ out_free:
+ kfree_skb(skb);
+@@ -1739,8 +1918,10 @@ out_err:
+ static ssize_t unix_stream_sendpage(struct socket *socket, struct page *page,
+ int offset, size_t size, int flags)
+ {
+- int err = 0;
+- bool send_sigpipe = true;
++ int err;
++ bool send_sigpipe = false;
++ bool init_scm = true;
++ struct scm_cookie scm;
+ struct sock *other, *sk = socket->sk;
+ struct sk_buff *skb, *newskb = NULL, *tail = NULL;
+
+@@ -1758,7 +1939,7 @@ alloc_skb:
+ newskb = sock_alloc_send_pskb(sk, 0, 0, flags & MSG_DONTWAIT,
+ &err, 0);
+ if (!newskb)
+- return err;
++ goto err;
+ }
+
+ /* we must acquire readlock as we modify already present
+@@ -1767,12 +1948,12 @@ alloc_skb:
+ err = mutex_lock_interruptible(&unix_sk(other)->readlock);
+ if (err) {
+ err = flags & MSG_DONTWAIT ? -EAGAIN : -ERESTARTSYS;
+- send_sigpipe = false;
+ goto err;
+ }
+
+ if (sk->sk_shutdown & SEND_SHUTDOWN) {
+ err = -EPIPE;
++ send_sigpipe = true;
+ goto err_unlock;
+ }
+
+@@ -1781,23 +1962,34 @@ alloc_skb:
+ if (sock_flag(other, SOCK_DEAD) ||
+ other->sk_shutdown & RCV_SHUTDOWN) {
+ err = -EPIPE;
++ send_sigpipe = true;
+ goto err_state_unlock;
+ }
+
++ if (init_scm) {
++ err = maybe_init_creds(&scm, socket, other);
++ if (err)
++ goto err_state_unlock;
++ init_scm = false;
++ }
++
+ skb = skb_peek_tail(&other->sk_receive_queue);
+ if (tail && tail == skb) {
+ skb = newskb;
+- } else if (!skb) {
+- if (newskb)
++ } else if (!skb || !unix_skb_scm_eq(skb, &scm)) {
++ if (newskb) {
+ skb = newskb;
+- else
++ } else {
++ tail = skb;
+ goto alloc_skb;
++ }
+ } else if (newskb) {
+ /* this is fast path, we don't necessarily need to
+ * call to kfree_skb even though with newskb == NULL
+ * this - does no harm
+ */
+ consume_skb(newskb);
++ newskb = NULL;
+ }
+
+ if (skb_append_pagefrags(skb, page, offset, size)) {
+@@ -1810,14 +2002,20 @@ alloc_skb:
+ skb->truesize += size;
+ atomic_add(size, &sk->sk_wmem_alloc);
+
+- if (newskb)
++ if (newskb) {
++ err = unix_scm_to_skb(&scm, skb, false);
++ if (err)
++ goto err_state_unlock;
++ spin_lock(&other->sk_receive_queue.lock);
+ __skb_queue_tail(&other->sk_receive_queue, newskb);
++ spin_unlock(&other->sk_receive_queue.lock);
++ }
+
+ unix_state_unlock(other);
+ mutex_unlock(&unix_sk(other)->readlock);
+
+ other->sk_data_ready(other);
+-
++ scm_destroy(&scm);
+ return size;
+
+ err_state_unlock:
+@@ -1828,6 +2026,8 @@ err:
+ kfree_skb(newskb);
+ if (send_sigpipe && !(flags & MSG_NOSIGNAL))
+ send_sig(SIGPIPE, current, 0);
++ if (!init_scm)
++ scm_destroy(&scm);
+ return err;
+ }
+
+@@ -2071,6 +2271,7 @@ static int unix_stream_read_generic(struct unix_stream_read_state *state)
+
+ do {
+ int chunk;
++ bool drop_skb;
+ struct sk_buff *skb, *last;
+
+ unix_state_lock(sk);
+@@ -2130,10 +2331,7 @@ unlock:
+
+ if (check_creds) {
+ /* Never glue messages from different writers */
+- if ((UNIXCB(skb).pid != scm.pid) ||
+- !uid_eq(UNIXCB(skb).uid, scm.creds.uid) ||
+- !gid_eq(UNIXCB(skb).gid, scm.creds.gid) ||
+- !unix_secdata_eq(&scm, skb))
++ if (!unix_skb_scm_eq(skb, &scm))
+ break;
+ } else if (test_bit(SOCK_PASSCRED, &sock->flags)) {
+ /* Copy credentials */
+@@ -2151,7 +2349,11 @@ unlock:
+ }
+
+ chunk = min_t(unsigned int, unix_skb_len(skb) - skip, size);
++ skb_get(skb);
+ chunk = state->recv_actor(skb, skip, chunk, state);
++ drop_skb = !unix_skb_len(skb);
++ /* skb is only safe to use if !drop_skb */
++ consume_skb(skb);
+ if (chunk < 0) {
+ if (copied == 0)
+ copied = -EFAULT;
+@@ -2160,6 +2362,18 @@ unlock:
+ copied += chunk;
+ size -= chunk;
+
++ if (drop_skb) {
++ /* the skb was touched by a concurrent reader;
++ * we should not expect anything from this skb
++ * anymore and assume it invalid - we can be
++ * sure it was dropped from the socket queue
++ *
++ * let's report a short read
++ */
++ err = 0;
++ break;
++ }
++
+ /* Mark read part of skb as used */
+ if (!(flags & MSG_PEEK)) {
+ UNIXCB(skb).consumed += chunk;
+@@ -2453,14 +2667,16 @@ static unsigned int unix_dgram_poll(struct file *file, struct socket *sock,
+ return mask;
+
+ writable = unix_writable(sk);
+- other = unix_peer_get(sk);
+- if (other) {
+- if (unix_peer(other) != sk) {
+- sock_poll_wait(file, &unix_sk(other)->peer_wait, wait);
+- if (unix_recvq_full(other))
+- writable = 0;
+- }
+- sock_put(other);
++ if (writable) {
++ unix_state_lock(sk);
++
++ other = unix_peer(sk);
++ if (other && unix_peer(other) != sk &&
++ unix_recvq_full(other) &&
++ unix_dgram_peer_wake_me(sk, other))
++ writable = 0;
++
++ unix_state_unlock(sk);
+ }
+
+ if (writable)
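The af_unix.c changes above implement the comment block near the top of that diff: a datagram sender connected to a socket that is not connected back (the /dev/log case) is throttled by the receiver's queue, so POLLOUT must stay clear while that queue is full, and the new peer_wake entry relays the receiver's wake-up to blocked writers instead of letting them sleep directly on the peer's wait queue, whose lifetime they do not control. A self-contained userspace demonstration of the scenario (sketch only; socket path arbitrary, error handling omitted, and net.unix.max_dgram_qlen assumed small enough for the queue to fill before the sender's sndbuf):

    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            struct pollfd pfd;
            char buf[512];
            int srv, cli, n;

            strcpy(sa.sun_path, "/tmp/dgram-wake-demo");
            unlink(sa.sun_path);

            /* receiver that is bound but not connected back, like /dev/log */
            srv = socket(AF_UNIX, SOCK_DGRAM, 0);
            bind(srv, (struct sockaddr *)&sa, sizeof(sa));

            cli = socket(AF_UNIX, SOCK_DGRAM, 0);
            connect(cli, (struct sockaddr *)&sa, sizeof(sa));

            /* fill the receiver's queue; sends fail with EAGAIN once full */
            memset(buf, 'x', sizeof(buf));
            while (send(cli, buf, sizeof(buf), MSG_DONTWAIT) > 0)
                    ;

            pfd.fd = cli;
            pfd.events = POLLOUT;
            n = poll(&pfd, 1, 100);
            printf("queue full: poll=%d\n", n);     /* 0: not writable */

            /* once the receiver drains a datagram the sender is writable
             * again; a sender blocked inside poll() relies on the new
             * peer_wake relay to be woken at exactly this point */
            recv(srv, buf, sizeof(buf), 0);
            n = poll(&pfd, 1, 100);
            printf("after read: poll=%d\n", n);     /* 1: POLLOUT again */

            close(cli);
            close(srv);
            unlink(sa.sun_path);
            return 0;
    }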
+diff --git a/sound/pci/Kconfig b/sound/pci/Kconfig
+index edfc1b8d553e..656ce39bddbc 100644
+--- a/sound/pci/Kconfig
++++ b/sound/pci/Kconfig
+@@ -25,7 +25,7 @@ config SND_ALS300
+ select SND_PCM
+ select SND_AC97_CODEC
+ select SND_OPL3_LIB
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say 'Y' or 'M' to include support for Avance Logic ALS300/ALS300+
+
+@@ -50,7 +50,7 @@ config SND_ALI5451
+ tristate "ALi M5451 PCI Audio Controller"
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for the integrated AC97 sound
+ device on motherboards using the ALi M5451 Audio Controller
+@@ -155,7 +155,7 @@ config SND_AZT3328
+ select SND_PCM
+ select SND_RAWMIDI
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for Aztech AZF3328 (PCI168)
+ soundcards.
+@@ -463,7 +463,7 @@ config SND_EMU10K1
+ select SND_HWDEP
+ select SND_RAWMIDI
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y to include support for Sound Blaster PCI 512, Live!,
+ Audigy and E-mu APS (partially supported) soundcards.
+@@ -479,7 +479,7 @@ config SND_EMU10K1X
+ tristate "Emu10k1X (Dell OEM Version)"
+ select SND_AC97_CODEC
+ select SND_RAWMIDI
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for the Dell OEM version of the
+ Sound Blaster Live!.
+@@ -513,7 +513,7 @@ config SND_ES1938
+ select SND_OPL3_LIB
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on ESS Solo-1
+ (ES1938, ES1946, ES1969) chips.
+@@ -525,7 +525,7 @@ config SND_ES1968
+ tristate "ESS ES1968/1978 (Maestro-1/2/2E)"
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on ESS Maestro
+ 1/2/2E chips.
+@@ -612,7 +612,7 @@ config SND_ICE1712
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+ select BITREVERSE
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on the
+ ICE1712 (Envy24) chip.
+@@ -700,7 +700,7 @@ config SND_LX6464ES
+ config SND_MAESTRO3
+ tristate "ESS Allegro/Maestro3"
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on ESS Maestro 3
+ (Allegro) chips.
+@@ -806,7 +806,7 @@ config SND_SIS7019
+ tristate "SiS 7019 Audio Accelerator"
+ depends on X86_32
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for the SiS 7019 Audio Accelerator.
+
+@@ -818,7 +818,7 @@ config SND_SONICVIBES
+ select SND_OPL3_LIB
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on the S3
+ SonicVibes chip.
+@@ -830,7 +830,7 @@ config SND_TRIDENT
+ tristate "Trident 4D-Wave DX/NX; SiS 7018"
+ select SND_MPU401_UART
+ select SND_AC97_CODEC
+- select ZONE_DMA
++ depends on ZONE_DMA
+ help
+ Say Y here to include support for soundcards based on Trident
+ 4D-Wave DX/NX or SiS 7018 chips.
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index acbfbe087ee8..f22f5c409447 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -50,8 +50,9 @@ MODULE_PARM_DESC(static_hdmi_pcm, "Don't restrict PCM parameters per ELD info");
+ #define is_haswell(codec) ((codec)->core.vendor_id == 0x80862807)
+ #define is_broadwell(codec) ((codec)->core.vendor_id == 0x80862808)
+ #define is_skylake(codec) ((codec)->core.vendor_id == 0x80862809)
++#define is_broxton(codec) ((codec)->core.vendor_id == 0x8086280a)
+ #define is_haswell_plus(codec) (is_haswell(codec) || is_broadwell(codec) \
+- || is_skylake(codec))
++ || is_skylake(codec) || is_broxton(codec))
+
+ #define is_valleyview(codec) ((codec)->core.vendor_id == 0x80862882)
+ #define is_cherryview(codec) ((codec)->core.vendor_id == 0x80862883)
+diff --git a/tools/net/Makefile b/tools/net/Makefile
+index ee577ea03ba5..ddf888010652 100644
+--- a/tools/net/Makefile
++++ b/tools/net/Makefile
+@@ -4,6 +4,9 @@ CC = gcc
+ LEX = flex
+ YACC = bison
+
++CFLAGS += -Wall -O2
++CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include
++
+ %.yacc.c: %.y
+ $(YACC) -o $@ -d $<
+
+@@ -12,15 +15,13 @@ YACC = bison
+
+ all : bpf_jit_disasm bpf_dbg bpf_asm
+
+-bpf_jit_disasm : CFLAGS = -Wall -O2 -DPACKAGE='bpf_jit_disasm'
++bpf_jit_disasm : CFLAGS += -DPACKAGE='bpf_jit_disasm'
+ bpf_jit_disasm : LDLIBS = -lopcodes -lbfd -ldl
+ bpf_jit_disasm : bpf_jit_disasm.o
+
+-bpf_dbg : CFLAGS = -Wall -O2
+ bpf_dbg : LDLIBS = -lreadline
+ bpf_dbg : bpf_dbg.o
+
+-bpf_asm : CFLAGS = -Wall -O2 -I.
+ bpf_asm : LDLIBS =
+ bpf_asm : bpf_asm.o bpf_exp.yacc.o bpf_exp.lex.o
+ bpf_exp.lex.o : bpf_exp.yacc.c
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2016-01-20 13:15 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2016-01-20 13:15 UTC (permalink / raw
To: gentoo-commits
commit: 385b45091bf33a9a1b2f145dedd9be4e5decb3fc
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan 20 13:15:37 2016 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan 20 13:15:37 2016 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=385b4509
Ensure that a thread joining a session keyring does not leak the keyring reference. CVE-2016-0728.
0000_README | 4 ++
...ing-refleak-in-join-session-CVE-2016-0728.patch | 81 ++++++++++++++++++++++
2 files changed, 85 insertions(+)
diff --git a/0000_README b/0000_README
index 7b7e0b4..3caf321 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=572384
+Desc: Ensure that a thread joining a session keyring does not leak the keyring reference. CVE-2016-0728.
+
Patch: 2700_ThinkPad-30-brightness-control-fix.patch
From: Seth Forshee <seth.forshee@canonical.com>
Desc: ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads.
diff --git a/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch b/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
new file mode 100644
index 0000000..49020d7
--- /dev/null
+++ b/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
@@ -0,0 +1,81 @@
+From 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2 Mon Sep 17 00:00:00 2001
+From: Yevgeny Pats <yevgeny@perception-point.io>
+Date: Tue, 19 Jan 2016 22:09:04 +0000
+Subject: KEYS: Fix keyring ref leak in join_session_keyring()
+
+This fixes CVE-2016-0728.
+
+If a thread is asked to join, as a session keyring, the keyring that's already
+set as its session, we leak a keyring reference.
+
+This can be tested with the following program:
+
+ #include <stddef.h>
+ #include <stdio.h>
+ #include <sys/types.h>
+ #include <keyutils.h>
+
+ int main(int argc, const char *argv[])
+ {
+ int i = 0;
+ key_serial_t serial;
+
+ serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING,
+ "leaked-keyring");
+ if (serial < 0) {
+ perror("keyctl");
+ return -1;
+ }
+
+ if (keyctl(KEYCTL_SETPERM, serial,
+ KEY_POS_ALL | KEY_USR_ALL) < 0) {
+ perror("keyctl");
+ return -1;
+ }
+
+ for (i = 0; i < 100; i++) {
+ serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING,
+ "leaked-keyring");
+ if (serial < 0) {
+ perror("keyctl");
+ return -1;
+ }
+ }
+
+ return 0;
+ }
+
+If, after the program has run, there is something like the following line in
+/proc/keys:
+
+3f3d898f I--Q--- 100 perm 3f3f0000 0 0 keyring leaked-keyring: empty
+
+with a usage count of 100 * the number of times the program has been run,
+then the kernel is malfunctioning. If leaked-keyring has zero usages or
+has been garbage collected, then the problem is fixed.
+
+Reported-by: Yevgeny Pats <yevgeny@perception-point.io>
+Signed-off-by: David Howells <dhowells@redhat.com>
+Acked-by: Don Zickus <dzickus@redhat.com>
+Acked-by: Prarit Bhargava <prarit@redhat.com>
+Acked-by: Jarod Wilson <jarod@redhat.com>
+Signed-off-by: James Morris <james.l.morris@oracle.com>
+---
+ security/keys/process_keys.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
+index a3f85d2..e6d50172 100644
+--- a/security/keys/process_keys.c
++++ b/security/keys/process_keys.c
+@@ -794,6 +794,7 @@ long join_session_keyring(const char *name)
+ ret = PTR_ERR(keyring);
+ goto error2;
+ } else if (keyring == new->session_keyring) {
++ key_put(keyring);
+ ret = 0;
+ goto error2;
+ }
+--
+cgit v0.12
+
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2016-01-23 17:49 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2016-01-23 17:49 UTC (permalink / raw
To: gentoo-commits
commit: ec088f99f55fda2cc620379c5492bb85214ff32e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 23 17:49:35 2016 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 23 17:49:35 2016 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ec088f99
Linux 4.3.4. Includes patch for CVE-2016-0728
0000_README | 8 +-
1003_linux-4.3.4.patch | 1863 ++++++++++++++++++++
...ing-refleak-in-join-session-CVE-2016-0728.patch | 81 -
3 files changed, 1867 insertions(+), 85 deletions(-)
diff --git a/0000_README b/0000_README
index 3caf321..5f4c1bc 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-4.3.3.patch
From: http://www.kernel.org
Desc: Linux 4.3.3
+Patch: 1003_linux-4.3.4.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
@@ -63,10 +67,6 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
-Patch: 1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
-From: https://bugs.gentoo.org/show_bug.cgi?id=572384
-Desc: Ensure that a thread joining a session keyring does not leak the keyring reference. CVE-2016-0728.
-
Patch: 2700_ThinkPad-30-brightness-control-fix.patch
From: Seth Forshee <seth.forshee@canonical.com>
Desc: ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads.
diff --git a/1003_linux-4.3.4.patch b/1003_linux-4.3.4.patch
new file mode 100644
index 0000000..95478de
--- /dev/null
+++ b/1003_linux-4.3.4.patch
@@ -0,0 +1,1863 @@
+diff --git a/Makefile b/Makefile
+index 2070d16bb5a4..69430ed64270 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 739a4a6b3b9b..2f6e3c6ad39b 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -81,6 +81,7 @@ static struct workqueue_struct *kacpid_wq;
+ static struct workqueue_struct *kacpi_notify_wq;
+ static struct workqueue_struct *kacpi_hotplug_wq;
+ static bool acpi_os_initialized;
++unsigned int acpi_sci_irq = INVALID_ACPI_IRQ;
+
+ /*
+ * This list of permanent mappings is for memory that may be accessed from
+@@ -856,17 +857,19 @@ acpi_os_install_interrupt_handler(u32 gsi, acpi_osd_handler handler,
+ acpi_irq_handler = NULL;
+ return AE_NOT_ACQUIRED;
+ }
++ acpi_sci_irq = irq;
+
+ return AE_OK;
+ }
+
+-acpi_status acpi_os_remove_interrupt_handler(u32 irq, acpi_osd_handler handler)
++acpi_status acpi_os_remove_interrupt_handler(u32 gsi, acpi_osd_handler handler)
+ {
+- if (irq != acpi_gbl_FADT.sci_interrupt)
++ if (gsi != acpi_gbl_FADT.sci_interrupt || !acpi_sci_irq_valid())
+ return AE_BAD_PARAMETER;
+
+- free_irq(irq, acpi_irq);
++ free_irq(acpi_sci_irq, acpi_irq);
+ acpi_irq_handler = NULL;
++ acpi_sci_irq = INVALID_ACPI_IRQ;
+
+ return AE_OK;
+ }
+@@ -1180,8 +1183,8 @@ void acpi_os_wait_events_complete(void)
+ * Make sure the GPE handler or the fixed event handler is not used
+ * on another CPU after removal.
+ */
+- if (acpi_irq_handler)
+- synchronize_hardirq(acpi_gbl_FADT.sci_interrupt);
++ if (acpi_sci_irq_valid())
++ synchronize_hardirq(acpi_sci_irq);
+ flush_workqueue(kacpid_wq);
+ flush_workqueue(kacpi_notify_wq);
+ }
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 2f0d4db40a9e..3fe1fbec7677 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -632,14 +632,16 @@ static int acpi_freeze_prepare(void)
+ acpi_enable_wakeup_devices(ACPI_STATE_S0);
+ acpi_enable_all_wakeup_gpes();
+ acpi_os_wait_events_complete();
+- enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
++ if (acpi_sci_irq_valid())
++ enable_irq_wake(acpi_sci_irq);
+ return 0;
+ }
+
+ static void acpi_freeze_restore(void)
+ {
+ acpi_disable_wakeup_devices(ACPI_STATE_S0);
+- disable_irq_wake(acpi_gbl_FADT.sci_interrupt);
++ if (acpi_sci_irq_valid())
++ disable_irq_wake(acpi_sci_irq);
+ acpi_enable_all_runtime_gpes();
+ }
+
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index f8319a0860fd..39be5acc9c48 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -115,6 +115,13 @@ enum tpm2_startup_types {
+ TPM2_SU_STATE = 0x0001,
+ };
+
++enum tpm2_start_method {
++ TPM2_START_ACPI = 2,
++ TPM2_START_FIFO = 6,
++ TPM2_START_CRB = 7,
++ TPM2_START_CRB_WITH_ACPI = 8,
++};
++
+ struct tpm_chip;
+
+ struct tpm_vendor_specific {
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 1267322595da..2b971b3e5c1c 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -34,12 +34,6 @@ enum crb_defaults {
+ CRB_ACPI_START_INDEX = 1,
+ };
+
+-enum crb_start_method {
+- CRB_SM_ACPI_START = 2,
+- CRB_SM_CRB = 7,
+- CRB_SM_CRB_WITH_ACPI_START = 8,
+-};
+-
+ struct acpi_tpm2 {
+ struct acpi_table_header hdr;
+ u16 platform_class;
+@@ -220,12 +214,6 @@ static int crb_acpi_add(struct acpi_device *device)
+ u64 pa;
+ int rc;
+
+- chip = tpmm_chip_alloc(dev, &tpm_crb);
+- if (IS_ERR(chip))
+- return PTR_ERR(chip);
+-
+- chip->flags = TPM_CHIP_FLAG_TPM2;
+-
+ status = acpi_get_table(ACPI_SIG_TPM2, 1,
+ (struct acpi_table_header **) &buf);
+ if (ACPI_FAILURE(status)) {
+@@ -233,13 +221,15 @@ static int crb_acpi_add(struct acpi_device *device)
+ return -ENODEV;
+ }
+
+- /* At least some versions of AMI BIOS have a bug that TPM2 table has
+- * zero address for the control area and therefore we must fail.
+- */
+- if (!buf->control_area_pa) {
+- dev_err(dev, "TPM2 ACPI table has a zero address for the control area\n");
+- return -EINVAL;
+- }
++ /* Should the FIFO driver handle this? */
++ if (buf->start_method == TPM2_START_FIFO)
++ return -ENODEV;
++
++ chip = tpmm_chip_alloc(dev, &tpm_crb);
++ if (IS_ERR(chip))
++ return PTR_ERR(chip);
++
++ chip->flags = TPM_CHIP_FLAG_TPM2;
+
+ if (buf->hdr.length < sizeof(struct acpi_tpm2)) {
+ dev_err(dev, "TPM2 ACPI table has wrong size");
+@@ -259,11 +249,11 @@ static int crb_acpi_add(struct acpi_device *device)
+ * report only ACPI start but in practice seems to require both
+ * ACPI start and CRB start.
+ */
+- if (sm == CRB_SM_CRB || sm == CRB_SM_CRB_WITH_ACPI_START ||
++ if (sm == TPM2_START_CRB || sm == TPM2_START_FIFO ||
+ !strcmp(acpi_device_hid(device), "MSFT0101"))
+ priv->flags |= CRB_FL_CRB_START;
+
+- if (sm == CRB_SM_ACPI_START || sm == CRB_SM_CRB_WITH_ACPI_START)
++ if (sm == TPM2_START_ACPI || sm == TPM2_START_CRB_WITH_ACPI)
+ priv->flags |= CRB_FL_ACPI_START;
+
+ priv->cca = (struct crb_control_area __iomem *)
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index f2dffa770b8e..696ef1d56b4f 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -1,6 +1,6 @@
+ /*
+ * Copyright (C) 2005, 2006 IBM Corporation
+- * Copyright (C) 2014 Intel Corporation
++ * Copyright (C) 2014, 2015 Intel Corporation
+ *
+ * Authors:
+ * Leendert van Doorn <leendert@watson.ibm.com>
+@@ -28,6 +28,7 @@
+ #include <linux/wait.h>
+ #include <linux/acpi.h>
+ #include <linux/freezer.h>
++#include <acpi/actbl2.h>
+ #include "tpm.h"
+
+ enum tis_access {
+@@ -65,6 +66,17 @@ enum tis_defaults {
+ TIS_LONG_TIMEOUT = 2000, /* 2 sec */
+ };
+
++struct tpm_info {
++ unsigned long start;
++ unsigned long len;
++ unsigned int irq;
++};
++
++static struct tpm_info tis_default_info = {
++ .start = TIS_MEM_BASE,
++ .len = TIS_MEM_LEN,
++ .irq = 0,
++};
+
+ /* Some timeout values are needed before it is known whether the chip is
+ * TPM 1.0 or TPM 2.0.
+@@ -91,26 +103,54 @@ struct priv_data {
+ };
+
+ #if defined(CONFIG_PNP) && defined(CONFIG_ACPI)
+-static int is_itpm(struct pnp_dev *dev)
++static int has_hid(struct acpi_device *dev, const char *hid)
+ {
+- struct acpi_device *acpi = pnp_acpi_device(dev);
+ struct acpi_hardware_id *id;
+
+- if (!acpi)
+- return 0;
+-
+- list_for_each_entry(id, &acpi->pnp.ids, list) {
+- if (!strcmp("INTC0102", id->id))
++ list_for_each_entry(id, &dev->pnp.ids, list)
++ if (!strcmp(hid, id->id))
+ return 1;
+- }
+
+ return 0;
+ }
++
++static inline int is_itpm(struct acpi_device *dev)
++{
++ return has_hid(dev, "INTC0102");
++}
++
++static inline int is_fifo(struct acpi_device *dev)
++{
++ struct acpi_table_tpm2 *tbl;
++ acpi_status st;
++
++ /* TPM 1.2 FIFO */
++ if (!has_hid(dev, "MSFT0101"))
++ return 1;
++
++ st = acpi_get_table(ACPI_SIG_TPM2, 1,
++ (struct acpi_table_header **) &tbl);
++ if (ACPI_FAILURE(st)) {
++ dev_err(&dev->dev, "failed to get TPM2 ACPI table\n");
++ return 0;
++ }
++
++ if (le32_to_cpu(tbl->start_method) != TPM2_START_FIFO)
++ return 0;
++
++ /* TPM 2.0 FIFO */
++ return 1;
++}
+ #else
+-static inline int is_itpm(struct pnp_dev *dev)
++static inline int is_itpm(struct acpi_device *dev)
+ {
+ return 0;
+ }
++
++static inline int is_fifo(struct acpi_device *dev)
++{
++ return 1;
++}
+ #endif
+
+ /* Before we attempt to access the TPM we must see that the valid bit is set.
+@@ -600,9 +640,8 @@ static void tpm_tis_remove(struct tpm_chip *chip)
+ release_locality(chip, chip->vendor.locality, 1);
+ }
+
+-static int tpm_tis_init(struct device *dev, acpi_handle acpi_dev_handle,
+- resource_size_t start, resource_size_t len,
+- unsigned int irq)
++static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
++ acpi_handle acpi_dev_handle)
+ {
+ u32 vendor, intfcaps, intmask;
+ int rc, i, irq_s, irq_e, probe;
+@@ -622,7 +661,7 @@ static int tpm_tis_init(struct device *dev, acpi_handle acpi_dev_handle,
+ chip->acpi_dev_handle = acpi_dev_handle;
+ #endif
+
+- chip->vendor.iobase = devm_ioremap(dev, start, len);
++ chip->vendor.iobase = devm_ioremap(dev, tpm_info->start, tpm_info->len);
+ if (!chip->vendor.iobase)
+ return -EIO;
+
+@@ -707,7 +746,7 @@ static int tpm_tis_init(struct device *dev, acpi_handle acpi_dev_handle,
+ chip->vendor.iobase +
+ TPM_INT_ENABLE(chip->vendor.locality));
+ if (interrupts)
+- chip->vendor.irq = irq;
++ chip->vendor.irq = tpm_info->irq;
+ if (interrupts && !chip->vendor.irq) {
+ irq_s =
+ ioread8(chip->vendor.iobase +
+@@ -890,27 +929,27 @@ static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume);
+ static int tpm_tis_pnp_init(struct pnp_dev *pnp_dev,
+ const struct pnp_device_id *pnp_id)
+ {
+- resource_size_t start, len;
+- unsigned int irq = 0;
++ struct tpm_info tpm_info = tis_default_info;
+ acpi_handle acpi_dev_handle = NULL;
+
+- start = pnp_mem_start(pnp_dev, 0);
+- len = pnp_mem_len(pnp_dev, 0);
++ tpm_info.start = pnp_mem_start(pnp_dev, 0);
++ tpm_info.len = pnp_mem_len(pnp_dev, 0);
+
+ if (pnp_irq_valid(pnp_dev, 0))
+- irq = pnp_irq(pnp_dev, 0);
++ tpm_info.irq = pnp_irq(pnp_dev, 0);
+ else
+ interrupts = false;
+
+- if (is_itpm(pnp_dev))
+- itpm = true;
+-
+ #ifdef CONFIG_ACPI
+- if (pnp_acpi_device(pnp_dev))
++ if (pnp_acpi_device(pnp_dev)) {
++ if (is_itpm(pnp_acpi_device(pnp_dev)))
++ itpm = true;
++
+ acpi_dev_handle = pnp_acpi_device(pnp_dev)->handle;
++ }
+ #endif
+
+- return tpm_tis_init(&pnp_dev->dev, acpi_dev_handle, start, len, irq);
++ return tpm_tis_init(&pnp_dev->dev, &tpm_info, acpi_dev_handle);
+ }
+
+ static struct pnp_device_id tpm_pnp_tbl[] = {
+@@ -930,6 +969,7 @@ MODULE_DEVICE_TABLE(pnp, tpm_pnp_tbl);
+ static void tpm_tis_pnp_remove(struct pnp_dev *dev)
+ {
+ struct tpm_chip *chip = pnp_get_drvdata(dev);
++
+ tpm_chip_unregister(chip);
+ tpm_tis_remove(chip);
+ }
+@@ -950,6 +990,79 @@ module_param_string(hid, tpm_pnp_tbl[TIS_HID_USR_IDX].id,
+ MODULE_PARM_DESC(hid, "Set additional specific HID for this driver to probe");
+ #endif
+
++#ifdef CONFIG_ACPI
++static int tpm_check_resource(struct acpi_resource *ares, void *data)
++{
++ struct tpm_info *tpm_info = (struct tpm_info *) data;
++ struct resource res;
++
++ if (acpi_dev_resource_interrupt(ares, 0, &res)) {
++ tpm_info->irq = res.start;
++ } else if (acpi_dev_resource_memory(ares, &res)) {
++ tpm_info->start = res.start;
++ tpm_info->len = resource_size(&res);
++ }
++
++ return 1;
++}
++
++static int tpm_tis_acpi_init(struct acpi_device *acpi_dev)
++{
++ struct list_head resources;
++ struct tpm_info tpm_info = tis_default_info;
++ int ret;
++
++ if (!is_fifo(acpi_dev))
++ return -ENODEV;
++
++ INIT_LIST_HEAD(&resources);
++ ret = acpi_dev_get_resources(acpi_dev, &resources, tpm_check_resource,
++ &tpm_info);
++ if (ret < 0)
++ return ret;
++
++ acpi_dev_free_resource_list(&resources);
++
++ if (!tpm_info.irq)
++ interrupts = false;
++
++ if (is_itpm(acpi_dev))
++ itpm = true;
++
++ return tpm_tis_init(&acpi_dev->dev, &tpm_info, acpi_dev->handle);
++}
++
++static int tpm_tis_acpi_remove(struct acpi_device *dev)
++{
++ struct tpm_chip *chip = dev_get_drvdata(&dev->dev);
++
++ tpm_chip_unregister(chip);
++ tpm_tis_remove(chip);
++
++ return 0;
++}
++
++static struct acpi_device_id tpm_acpi_tbl[] = {
++ {"MSFT0101", 0}, /* TPM 2.0 */
++	/* Add new entries here */
++ {"", 0}, /* User Specified */
++ {"", 0} /* Terminator */
++};
++MODULE_DEVICE_TABLE(acpi, tpm_acpi_tbl);
++
++static struct acpi_driver tis_acpi_driver = {
++ .name = "tpm_tis",
++ .ids = tpm_acpi_tbl,
++ .ops = {
++ .add = tpm_tis_acpi_init,
++ .remove = tpm_tis_acpi_remove,
++ },
++ .drv = {
++ .pm = &tpm_tis_pm,
++ },
++};
++#endif
++
+ static struct platform_driver tis_drv = {
+ .driver = {
+ .name = "tpm_tis",
+@@ -966,9 +1079,25 @@ static int __init init_tis(void)
+ {
+ int rc;
+ #ifdef CONFIG_PNP
+- if (!force)
+- return pnp_register_driver(&tis_pnp_driver);
++ if (!force) {
++ rc = pnp_register_driver(&tis_pnp_driver);
++ if (rc)
++ return rc;
++ }
++#endif
++#ifdef CONFIG_ACPI
++ if (!force) {
++ rc = acpi_bus_register_driver(&tis_acpi_driver);
++ if (rc) {
++#ifdef CONFIG_PNP
++ pnp_unregister_driver(&tis_pnp_driver);
+ #endif
++ return rc;
++ }
++ }
++#endif
++ if (!force)
++ return 0;
+
+ rc = platform_driver_register(&tis_drv);
+ if (rc < 0)
+@@ -978,7 +1107,7 @@ static int __init init_tis(void)
+ rc = PTR_ERR(pdev);
+ goto err_dev;
+ }
+- rc = tpm_tis_init(&pdev->dev, NULL, TIS_MEM_BASE, TIS_MEM_LEN, 0);
++ rc = tpm_tis_init(&pdev->dev, &tis_default_info, NULL);
+ if (rc)
+ goto err_init;
+ return 0;
+@@ -992,9 +1121,14 @@ err_dev:
+ static void __exit cleanup_tis(void)
+ {
+ struct tpm_chip *chip;
+-#ifdef CONFIG_PNP
++#if defined(CONFIG_PNP) || defined(CONFIG_ACPI)
+ if (!force) {
++#ifdef CONFIG_ACPI
++ acpi_bus_unregister_driver(&tis_acpi_driver);
++#endif
++#ifdef CONFIG_PNP
+ pnp_unregister_driver(&tis_pnp_driver);
++#endif
+ return;
+ }
+ #endif
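+
+For readers following the init_tis() rework above: when two bus drivers are
+registered back to back, a failure registering the second must unregister the
+first, or the module bails out with the PNP driver left behind. A minimal
+sketch of that unwind pattern (my_pnp_driver and my_acpi_driver are
+placeholder names, not from the patch):
+
+    static int __init my_init(void)
+    {
+        int rc;
+
+        rc = pnp_register_driver(&my_pnp_driver);
+        if (rc)
+            return rc;
+
+        rc = acpi_bus_register_driver(&my_acpi_driver);
+        if (rc) {
+            /* unwind the earlier registration before failing */
+            pnp_unregister_driver(&my_pnp_driver);
+            return rc;
+        }
+        return 0;
+    }
+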
+diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+index 2795d6db10e1..8b5988e210d5 100644
+--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+@@ -1016,13 +1016,12 @@ static int atl1c_setup_ring_resources(struct atl1c_adapter *adapter)
+ sizeof(struct atl1c_recv_ret_status) * rx_desc_count +
+ 8 * 4;
+
+- ring_header->desc = pci_alloc_consistent(pdev, ring_header->size,
+- &ring_header->dma);
++ ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,
++ &ring_header->dma, GFP_KERNEL);
+ if (unlikely(!ring_header->desc)) {
+- dev_err(&pdev->dev, "pci_alloc_consistend failed\n");
++ dev_err(&pdev->dev, "could not get memory for DMA buffer\n");
+ goto err_nomem;
+ }
+- memset(ring_header->desc, 0, ring_header->size);
+ /* init TPD ring */
+
+ tpd_ring[0].dma = roundup(ring_header->dma, 8);
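+
+The atl1c hunk above swaps pci_alloc_consistent() plus a manual memset() for a
+single dma_zalloc_coherent() call, which returns memory already zeroed and
+takes explicit gfp flags. Roughly equivalent, as a sketch (pdev being the
+driver's struct pci_dev):
+
+    /* old: allocate, then zero by hand */
+    desc = pci_alloc_consistent(pdev, size, &dma);
+    if (desc)
+        memset(desc, 0, size);
+
+    /* new: one call, zeroed on success */
+    desc = dma_zalloc_coherent(&pdev->dev, size, &dma, GFP_KERNEL);
+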
+diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
+index ce38d266f931..bcb933e2f0fe 100644
+--- a/drivers/net/ethernet/freescale/gianfar.c
++++ b/drivers/net/ethernet/freescale/gianfar.c
+@@ -894,7 +894,8 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
+ FSL_GIANFAR_DEV_HAS_VLAN |
+ FSL_GIANFAR_DEV_HAS_MAGIC_PACKET |
+ FSL_GIANFAR_DEV_HAS_EXTENDED_HASH |
+- FSL_GIANFAR_DEV_HAS_TIMER;
++ FSL_GIANFAR_DEV_HAS_TIMER |
++ FSL_GIANFAR_DEV_HAS_RX_FILER;
+
+ err = of_property_read_string(np, "phy-connection-type", &ctype);
+
+@@ -1393,8 +1394,9 @@ static int gfar_probe(struct platform_device *ofdev)
+ priv->rx_queue[i]->rxic = DEFAULT_RXIC;
+ }
+
+- /* always enable rx filer */
+- priv->rx_filer_enable = 1;
++ /* Always enable rx filer if available */
++ priv->rx_filer_enable =
++ (priv->device_flags & FSL_GIANFAR_DEV_HAS_RX_FILER) ? 1 : 0;
+ /* Enable most messages by default */
+ priv->msg_enable = (NETIF_MSG_IFUP << 1 ) - 1;
+ /* use pritority h/w tx queue scheduling for single queue devices */
+diff --git a/drivers/net/ethernet/freescale/gianfar.h b/drivers/net/ethernet/freescale/gianfar.h
+index 8c1994856e93..37553723d586 100644
+--- a/drivers/net/ethernet/freescale/gianfar.h
++++ b/drivers/net/ethernet/freescale/gianfar.h
+@@ -917,6 +917,7 @@ struct gfar {
+ #define FSL_GIANFAR_DEV_HAS_BD_STASHING 0x00000200
+ #define FSL_GIANFAR_DEV_HAS_BUF_STASHING 0x00000400
+ #define FSL_GIANFAR_DEV_HAS_TIMER 0x00000800
++#define FSL_GIANFAR_DEV_HAS_RX_FILER 0x00002000
+
+ #if (MAXGROUPS == 2)
+ #define DEFAULT_MAPPING 0xAA
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 2f87909f5186..60ccc2980da1 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -736,9 +736,8 @@ qcaspi_netdev_tx_timeout(struct net_device *dev)
+ netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
+ jiffies, jiffies - dev->trans_start);
+ qca->net_dev->stats.tx_errors++;
+- /* wake the queue if there is room */
+- if (qcaspi_tx_ring_has_space(&qca->txr))
+- netif_wake_queue(dev);
++ /* Trigger tx queue flush and QCA7000 reset */
++ qca->sync = QCASPI_SYNC_UNKNOWN;
+ }
+
+ static int
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index a484d8beb855..f3cbf90c8201 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -1481,6 +1481,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
+ if (mdp->cd->shift_rd0)
+ desc_status >>= 16;
+
++ skb = mdp->rx_skbuff[entry];
+ if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 |
+ RD_RFS5 | RD_RFS6 | RD_RFS10)) {
+ ndev->stats.rx_errors++;
+@@ -1496,12 +1497,11 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
+ ndev->stats.rx_missed_errors++;
+ if (desc_status & RD_RFS10)
+ ndev->stats.rx_over_errors++;
+- } else {
++ } else if (skb) {
+ if (!mdp->cd->hw_swap)
+ sh_eth_soft_swap(
+ phys_to_virt(ALIGN(rxdesc->addr, 4)),
+ pkt_len + 2);
+- skb = mdp->rx_skbuff[entry];
+ mdp->rx_skbuff[entry] = NULL;
+ if (mdp->cd->rpadir)
+ skb_reserve(skb, NET_IP_ALIGN);
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index cf6312fafea5..e13ad6cdcc22 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -339,9 +339,18 @@ static int ksz9021_config_init(struct phy_device *phydev)
+ {
+ const struct device *dev = &phydev->dev;
+ const struct device_node *of_node = dev->of_node;
++ const struct device *dev_walker;
+
+- if (!of_node && dev->parent->of_node)
+- of_node = dev->parent->of_node;
++ /* The Micrel driver has a deprecated option to place phy OF
++ * properties in the MAC node. Walk up the tree of devices to
++ * find a device with an OF node.
++ */
++ dev_walker = &phydev->dev;
++ do {
++ of_node = dev_walker->of_node;
++ dev_walker = dev_walker->parent;
++
++ } while (!of_node && dev_walker);
+
+ if (of_node) {
+ ksz9021_load_values_from_of(phydev, of_node,
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index 5e0b43283bce..0a37f840fcc5 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -568,6 +568,9 @@ static int pppoe_create(struct net *net, struct socket *sock, int kern)
+ sk->sk_family = PF_PPPOX;
+ sk->sk_protocol = PX_PROTO_OE;
+
++ INIT_WORK(&pppox_sk(sk)->proto.pppoe.padt_work,
++ pppoe_unbind_sock_work);
++
+ return 0;
+ }
+
+@@ -632,8 +635,6 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
+
+ lock_sock(sk);
+
+- INIT_WORK(&po->proto.pppoe.padt_work, pppoe_unbind_sock_work);
+-
+ error = -EINVAL;
+ if (sp->sa_protocol != PX_PROTO_OE)
+ goto end;
+@@ -663,8 +664,13 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
+ po->pppoe_dev = NULL;
+ }
+
+- memset(sk_pppox(po) + 1, 0,
+- sizeof(struct pppox_sock) - sizeof(struct sock));
++ po->pppoe_ifindex = 0;
++ memset(&po->pppoe_pa, 0, sizeof(po->pppoe_pa));
++ memset(&po->pppoe_relay, 0, sizeof(po->pppoe_relay));
++ memset(&po->chan, 0, sizeof(po->chan));
++ po->next = NULL;
++ po->num = 0;
++
+ sk->sk_state = PPPOX_NONE;
+ }
+
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 686f37daa262..b910caef3a38 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -418,6 +418,9 @@ static int pptp_bind(struct socket *sock, struct sockaddr *uservaddr,
+ struct pptp_opt *opt = &po->proto.pptp;
+ int error = 0;
+
++ if (sockaddr_len < sizeof(struct sockaddr_pppox))
++ return -EINVAL;
++
+ lock_sock(sk);
+
+ opt->src_addr = sp->sa_addr.pptp;
+@@ -439,6 +442,9 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ struct flowi4 fl4;
+ int error = 0;
+
++ if (sockaddr_len < sizeof(struct sockaddr_pppox))
++ return -EINVAL;
++
+ if (sp->sa_protocol != PX_PROTO_PPTP)
+ return -EINVAL;
+
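+The two pptp hunks add the same guard: bind() and connect() handlers receive a
+caller-supplied address length, and fixed-size fields must not be read before
+that length has been validated. A condensed sketch of the pattern (the handler
+name is illustrative):
+
+    static int my_connect(struct socket *sock, struct sockaddr *uaddr,
+                          int sockaddr_len, int flags)
+    {
+        struct sockaddr_pppox *sp = (struct sockaddr_pppox *)uaddr;
+
+        /* reject short buffers before dereferencing sp's fields */
+        if (sockaddr_len < sizeof(struct sockaddr_pppox))
+            return -EINVAL;
+
+        if (sp->sa_protocol != PX_PROTO_PPTP)
+            return -EINVAL;
+        /* ... */
+        return 0;
+    }
+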
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index efc18e05af0a..b6ea6ff7fb7b 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -158,7 +158,7 @@ static int cdc_mbim_bind(struct usbnet *dev, struct usb_interface *intf)
+ if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
+ goto err;
+
+- ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
++ ret = cdc_ncm_bind_common(dev, intf, data_altsetting, dev->driver_info->data);
+ if (ret)
+ goto err;
+
+@@ -582,6 +582,26 @@ static const struct driver_info cdc_mbim_info_zlp = {
+ .tx_fixup = cdc_mbim_tx_fixup,
+ };
+
++/* The specification explicitly allows NDPs to be placed anywhere in the
++ * frame, but some devices fail unless the NDP is placed after the IP
++ * packets. Use the CDC_NCM_FLAG_NDP_TO_END flag to force this
++ * behaviour.
++ *
++ * Note: The current implementation of this feature restricts each NTB
++ * to a single NDP, implying that multiplexed sessions cannot share an
++ * NTB. This might affect performance for multiplexed sessions.
++ */
++static const struct driver_info cdc_mbim_info_ndp_to_end = {
++ .description = "CDC MBIM",
++ .flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN,
++ .bind = cdc_mbim_bind,
++ .unbind = cdc_mbim_unbind,
++ .manage_power = cdc_mbim_manage_power,
++ .rx_fixup = cdc_mbim_rx_fixup,
++ .tx_fixup = cdc_mbim_tx_fixup,
++ .data = CDC_NCM_FLAG_NDP_TO_END,
++};
++
+ static const struct usb_device_id mbim_devs[] = {
+ /* This duplicate NCM entry is intentional. MBIM devices can
+ * be disguised as NCM by default, and this is necessary to
+@@ -597,6 +617,10 @@ static const struct usb_device_id mbim_devs[] = {
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0bdb, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&cdc_mbim_info,
+ },
++ /* Huawei E3372 fails unless NDP comes after the IP packets */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x12d1, 0x157d, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++ .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end,
++ },
+ /* default entry */
+ { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index db40175b1a0b..fa41a6d2a3e5 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -1006,10 +1006,18 @@ static struct usb_cdc_ncm_ndp16 *cdc_ncm_ndp(struct cdc_ncm_ctx *ctx, struct sk_
+ * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and
+ * the wNdpIndex field in the header is actually not consistent with reality. It will be later.
+ */
+- if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
++ if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+ if (ctx->delayed_ndp16->dwSignature == sign)
+ return ctx->delayed_ndp16;
+
++ /* We can only push a single NDP to the end. Return
++ * NULL to send what we've already got and queue this
++ * skb for later.
++ */
++ else if (ctx->delayed_ndp16->dwSignature)
++ return NULL;
++ }
++
+ /* follow the chain of NDPs, looking for a match */
+ while (ndpoffset) {
+ ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index d9427ca3dba7..2e32c41536ae 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3067,17 +3067,6 @@ static int rtl8152_open(struct net_device *netdev)
+
+ mutex_lock(&tp->control);
+
+- /* The WORK_ENABLE may be set when autoresume occurs */
+- if (test_bit(WORK_ENABLE, &tp->flags)) {
+- clear_bit(WORK_ENABLE, &tp->flags);
+- usb_kill_urb(tp->intr_urb);
+- cancel_delayed_work_sync(&tp->schedule);
+-
+- /* disable the tx/rx, if the workqueue has enabled them. */
+- if (netif_carrier_ok(netdev))
+- tp->rtl_ops.disable(tp);
+- }
+-
+ tp->rtl_ops.up(tp);
+
+ rtl8152_set_speed(tp, AUTONEG_ENABLE,
+@@ -3124,12 +3113,6 @@ static int rtl8152_close(struct net_device *netdev)
+ } else {
+ mutex_lock(&tp->control);
+
+- /* The autosuspend may have been enabled and wouldn't
+- * be disable when autoresume occurs, because the
+- * netif_running() would be false.
+- */
+- rtl_runtime_suspend_enable(tp, false);
+-
+ tp->rtl_ops.down(tp);
+
+ mutex_unlock(&tp->control);
+@@ -3512,7 +3495,7 @@ static int rtl8152_resume(struct usb_interface *intf)
+ netif_device_attach(tp->netdev);
+ }
+
+- if (netif_running(tp->netdev)) {
++ if (netif_running(tp->netdev) && tp->netdev->flags & IFF_UP) {
+ if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
+ rtl_runtime_suspend_enable(tp, false);
+ clear_bit(SELECTIVE_SUSPEND, &tp->flags);
+@@ -3532,6 +3515,8 @@ static int rtl8152_resume(struct usb_interface *intf)
+ }
+ usb_submit_urb(tp->intr_urb, GFP_KERNEL);
+ } else if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
++ if (tp->netdev->flags & IFF_UP)
++ rtl_runtime_suspend_enable(tp, false);
+ clear_bit(SELECTIVE_SUSPEND, &tp->flags);
+ }
+
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index c9e309cd9d82..374feba02565 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -581,6 +581,7 @@ static int vrf_newlink(struct net *src_net, struct net_device *dev,
+ {
+ struct net_vrf *vrf = netdev_priv(dev);
+ struct net_vrf_dev *vrf_ptr;
++ int err;
+
+ if (!data || !data[IFLA_VRF_TABLE])
+ return -EINVAL;
+@@ -589,16 +590,25 @@ static int vrf_newlink(struct net *src_net, struct net_device *dev,
+
+ dev->priv_flags |= IFF_VRF_MASTER;
+
++ err = -ENOMEM;
+ vrf_ptr = kmalloc(sizeof(*dev->vrf_ptr), GFP_KERNEL);
+ if (!vrf_ptr)
+- return -ENOMEM;
++ goto out_fail;
+
+ vrf_ptr->ifindex = dev->ifindex;
+ vrf_ptr->tb_id = vrf->tb_id;
+
++ err = register_netdevice(dev);
++ if (err < 0)
++ goto out_fail;
++
+ rcu_assign_pointer(dev->vrf_ptr, vrf_ptr);
+
+- return register_netdev(dev);
++ return 0;
++
++out_fail:
++ kfree(vrf_ptr);
++ return err;
+ }
+
+ static size_t vrf_nl_getsize(const struct net_device *dev)
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index f2372f400ddb..d2de91cc61c2 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -2676,6 +2676,7 @@ static int toshiba_acpi_add(struct acpi_device *acpi_dev)
+ ret = toshiba_function_keys_get(dev, &special_functions);
+ dev->kbd_function_keys_supported = !ret;
+
++ dev->hotkey_event_type = 0;
+ if (toshiba_acpi_setup_keyboard(dev))
+ pr_info("Unable to activate hotkeys\n");
+
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index b30e7423549b..26ca4f910cb0 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1838,6 +1838,11 @@ static const struct usb_device_id acm_ids[] = {
+ },
+ #endif
+
++ /* Exclude Infineon Flash Loader utility */
++ { USB_DEVICE(0x058b, 0x0041),
++ .driver_info = IGNORE_DEVICE,
++ },
++
+ /* control interfaces without any protocol set */
+ { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+ USB_CDC_PROTO_NONE) },
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index b9ddf0c1ffe5..894894f2ff93 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -115,7 +115,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ USB_SS_MULT(desc->bmAttributes) > 3) {
+ dev_warn(ddev, "Isoc endpoint has Mult of %d in "
+ "config %d interface %d altsetting %d ep %d: "
+- "setting to 3\n", desc->bmAttributes + 1,
++ "setting to 3\n",
++ USB_SS_MULT(desc->bmAttributes),
+ cfgno, inum, asnum, ep->desc.bEndpointAddress);
+ ep->ss_ep_comp.bmAttributes = 2;
+ }
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 431839bd291f..522f766a7d07 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -124,6 +124,10 @@ struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev)
+
+ int usb_device_supports_lpm(struct usb_device *udev)
+ {
++ /* Some devices have trouble with LPM */
++ if (udev->quirks & USB_QUIRK_NO_LPM)
++ return 0;
++
+ /* USB 2.1 (and greater) devices indicate LPM support through
+ * their USB 2.0 Extended Capabilities BOS descriptor.
+ */
+@@ -4503,6 +4507,8 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1,
+ goto fail;
+ }
+
++ usb_detect_quirks(udev);
++
+ if (udev->wusb == 0 && le16_to_cpu(udev->descriptor.bcdUSB) >= 0x0201) {
+ retval = usb_get_bos_descriptor(udev);
+ if (!retval) {
+@@ -4701,7 +4707,6 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ if (status < 0)
+ goto loop;
+
+- usb_detect_quirks(udev);
+ if (udev->quirks & USB_QUIRK_DELAY_INIT)
+ msleep(1000);
+
+@@ -5317,9 +5322,6 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ if (udev->usb2_hw_lpm_enabled == 1)
+ usb_set_usb2_hardware_lpm(udev, 0);
+
+- bos = udev->bos;
+- udev->bos = NULL;
+-
+ /* Disable LPM and LTM while we reset the device and reinstall the alt
+ * settings. Device-initiated LPM settings, and system exit latency
+ * settings are cleared when the device is reset, so we have to set
+@@ -5328,15 +5330,18 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ ret = usb_unlocked_disable_lpm(udev);
+ if (ret) {
+ dev_err(&udev->dev, "%s Failed to disable LPM\n.", __func__);
+- goto re_enumerate;
++ goto re_enumerate_no_bos;
+ }
+ ret = usb_disable_ltm(udev);
+ if (ret) {
+ dev_err(&udev->dev, "%s Failed to disable LTM\n.",
+ __func__);
+- goto re_enumerate;
++ goto re_enumerate_no_bos;
+ }
+
++ bos = udev->bos;
++ udev->bos = NULL;
++
+ for (i = 0; i < SET_CONFIG_TRIES; ++i) {
+
+ /* ep0 maxpacket size may change; let the HCD know about it.
+@@ -5433,10 +5438,11 @@ done:
+ return 0;
+
+ re_enumerate:
+- /* LPM state doesn't matter when we're about to destroy the device. */
+- hub_port_logical_disconnect(parent_hub, port1);
+ usb_release_bos_descriptor(udev);
+ udev->bos = bos;
++re_enumerate_no_bos:
++ /* LPM state doesn't matter when we're about to destroy the device. */
++ hub_port_logical_disconnect(parent_hub, port1);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index f5a381945db2..017c1de53aa5 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -199,6 +199,12 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x1a0a, 0x0200), .driver_info =
+ USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+
++ /* Blackmagic Design Intensity Shuttle */
++ { USB_DEVICE(0x1edb, 0xbd3b), .driver_info = USB_QUIRK_NO_LPM },
++
++ /* Blackmagic Design UltraStudio SDI */
++ { USB_DEVICE(0x1edb, 0xbd4f), .driver_info = USB_QUIRK_NO_LPM },
++
+ { } /* terminating entry must be last */
+ };
+
+diff --git a/drivers/usb/gadget/udc/pxa27x_udc.c b/drivers/usb/gadget/udc/pxa27x_udc.c
+index 670ac0b12f00..001a3b74a993 100644
+--- a/drivers/usb/gadget/udc/pxa27x_udc.c
++++ b/drivers/usb/gadget/udc/pxa27x_udc.c
+@@ -2536,6 +2536,9 @@ static int pxa_udc_suspend(struct platform_device *_dev, pm_message_t state)
+ udc->pullup_resume = udc->pullup_on;
+ dplus_pullup(udc, 0);
+
++ if (udc->driver)
++ udc->driver->disconnect(&udc->gadget);
++
+ return 0;
+ }
+
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 342ffd140122..8c6e15bd6ff0 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -473,6 +473,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ if (!pdata)
+ return -ENOMEM;
+
++ pdev->dev.platform_data = pdata;
++
+ if (!of_property_read_u32(np, "num-ports", &ports))
+ pdata->ports = ports;
+
+@@ -483,6 +485,7 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ */
+ if (i >= pdata->ports) {
+ pdata->vbus_pin[i] = -EINVAL;
++ pdata->overcurrent_pin[i] = -EINVAL;
+ continue;
+ }
+
+@@ -513,10 +516,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ }
+
+ at91_for_each_port(i) {
+- if (i >= pdata->ports) {
+- pdata->overcurrent_pin[i] = -EINVAL;
+- continue;
+- }
++ if (i >= pdata->ports)
++ break;
+
+ pdata->overcurrent_pin[i] =
+ of_get_named_gpio_flags(np, "atmel,oc-gpio", i, &flags);
+@@ -552,8 +553,6 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ }
+ }
+
+- pdev->dev.platform_data = pdata;
+-
+ device_init_wakeup(&pdev->dev, 1);
+ return usb_hcd_at91_probe(&ohci_at91_hc_driver, pdev);
+ }
+diff --git a/drivers/usb/host/whci/qset.c b/drivers/usb/host/whci/qset.c
+index dc31c425ce01..9f1c0538b211 100644
+--- a/drivers/usb/host/whci/qset.c
++++ b/drivers/usb/host/whci/qset.c
+@@ -377,6 +377,10 @@ static int qset_fill_page_list(struct whc *whc, struct whc_std *std, gfp_t mem_f
+ if (std->pl_virt == NULL)
+ return -ENOMEM;
+ std->dma_addr = dma_map_single(whc->wusbhc.dev, std->pl_virt, pl_len, DMA_TO_DEVICE);
++ if (dma_mapping_error(whc->wusbhc.dev, std->dma_addr)) {
++ kfree(std->pl_virt);
++ return -EFAULT;
++ }
+
+ for (p = 0; p < std->num_pointers; p++) {
+ std->pl_virt[p].buf_ptr = cpu_to_le64(dma_addr);
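+
+The qset.c hunk adds the mandatory failure check after dma_map_single(): a DMA
+mapping can fail, and the returned handle must be tested with
+dma_mapping_error() before it is used or stored. In sketch form:
+
+    dma_addr_t addr;
+
+    addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
+    if (dma_mapping_error(dev, addr)) {
+        /* mapping failed: release the buffer, never touch addr */
+        kfree(buf);
+        return -EFAULT;
+    }
+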
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index 1f2037bbeb0d..45c83baf675d 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -159,7 +159,7 @@ config USB_TI_CPPI_DMA
+
+ config USB_TI_CPPI41_DMA
+ bool 'TI CPPI 4.1 (AM335x)'
+- depends on ARCH_OMAP
++ depends on ARCH_OMAP && DMADEVICES
+ select TI_CPPI41
+
+ config USB_TUSB_OMAP_DMA
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index eac7ccaa3c85..7d4f51a32e66 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -132,7 +132,6 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+ { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
+ { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
+- { USB_DEVICE(0x10C4, 0xEA80) }, /* Silicon Labs factory default */
+ { USB_DEVICE(0x10C4, 0xEA71) }, /* Infinity GPS-MIC-1 Radio Monophone */
+ { USB_DEVICE(0x10C4, 0xF001) }, /* Elan Digital Systems USBscope50 */
+ { USB_DEVICE(0x10C4, 0xF002) }, /* Elan Digital Systems USBwave12 */
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 3658662898fc..a204782ae530 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -53,6 +53,7 @@ DEVICE(funsoft, FUNSOFT_IDS);
+
+ /* Infineon Flashloader driver */
+ #define FLASHLOADER_IDS() \
++ { USB_DEVICE_INTERFACE_CLASS(0x058b, 0x0041, USB_CLASS_CDC_DATA) }, \
+ { USB_DEVICE(0x8087, 0x0716) }
+ DEVICE(flashloader, FLASHLOADER_IDS);
+
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index f68921909552..43b1cafbfcc7 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -796,6 +796,10 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ if (devinfo->flags & US_FL_NO_REPORT_OPCODES)
+ sdev->no_report_opcodes = 1;
+
++ /* A few buggy USB-ATA bridges don't understand FUA */
++ if (devinfo->flags & US_FL_BROKEN_FUA)
++ sdev->broken_fua = 1;
++
+ scsi_change_queue_depth(sdev, devinfo->qdepth - 2);
+ return 0;
+ }
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 6b2479123de7..7ffe4209067b 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -1987,7 +1987,7 @@ UNUSUAL_DEV( 0x14cd, 0x6600, 0x0201, 0x0201,
+ US_FL_IGNORE_RESIDUE ),
+
+ /* Reported by Michael Büsch <m@bues.ch> */
+-UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0114,
++UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0116,
+ "JMicron",
+ "USB to ATA/ATAPI Bridge",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c85ea530085f..ccc113e83d88 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -132,7 +132,7 @@ UNUSUAL_DEV(0x152d, 0x0567, 0x0000, 0x9999,
+ "JMicron",
+ "JMS567",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+- US_FL_NO_REPORT_OPCODES),
++ US_FL_BROKEN_FUA | US_FL_NO_REPORT_OPCODES),
+
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 43856d19cf4d..1ae6ba06f9ac 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -193,6 +193,12 @@ int acpi_ioapic_registered(acpi_handle handle, u32 gsi_base);
+ void acpi_irq_stats_init(void);
+ extern u32 acpi_irq_handled;
+ extern u32 acpi_irq_not_handled;
++extern unsigned int acpi_sci_irq;
++#define INVALID_ACPI_IRQ ((unsigned)-1)
++static inline bool acpi_sci_irq_valid(void)
++{
++ return acpi_sci_irq != INVALID_ACPI_IRQ;
++}
+
+ extern int sbf_port;
+ extern unsigned long acpi_realmode_flags;
+diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h
+index 9948c874e3f1..1d0043dc34e4 100644
+--- a/include/linux/usb/quirks.h
++++ b/include/linux/usb/quirks.h
+@@ -47,4 +47,7 @@
+ /* device generates spurious wakeup, ignore remote wakeup capability */
+ #define USB_QUIRK_IGNORE_REMOTE_WAKEUP BIT(9)
+
++/* device can't handle Link Power Management */
++#define USB_QUIRK_NO_LPM BIT(10)
++
+ #endif /* __LINUX_USB_QUIRKS_H */
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 9261d928303d..e7fa2e26dfc1 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -322,6 +322,39 @@ static inline void skb_dst_force(struct sk_buff *skb)
+ }
+ }
+
++/**
++ * dst_hold_safe - Take a reference on a dst if possible
++ * @dst: pointer to dst entry
++ *
++ * This helper returns false if it could not safely
++ * take a reference on a dst.
++ */
++static inline bool dst_hold_safe(struct dst_entry *dst)
++{
++ if (dst->flags & DST_NOCACHE)
++ return atomic_inc_not_zero(&dst->__refcnt);
++ dst_hold(dst);
++ return true;
++}
++
++/**
++ * skb_dst_force_safe - makes sure skb dst is refcounted
++ * @skb: buffer
++ *
++ * If dst is not yet refcounted and not destroyed, grab a ref on it.
++ */
++static inline void skb_dst_force_safe(struct sk_buff *skb)
++{
++ if (skb_dst_is_noref(skb)) {
++ struct dst_entry *dst = skb_dst(skb);
++
++ if (!dst_hold_safe(dst))
++ dst = NULL;
++
++ skb->_skb_refdst = (unsigned long)dst;
++ }
++}
++
+
+ /**
+ * __skb_tunnel_rx - prepare skb for rx reinsert
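+
+dst_hold_safe() above builds on atomic_inc_not_zero(): for DST_NOCACHE entries
+the reference is taken only if the count has not already hit zero, so a dst
+that is being destroyed is never resurrected. The same take-if-alive idiom in
+isolation (struct obj is hypothetical):
+
+    struct obj {
+        atomic_t refcnt;
+    };
+
+    static bool obj_get_safe(struct obj *o)
+    {
+        /* returns false once refcnt has reached zero, so a dying
+         * object cannot gain a new reference */
+        return atomic_inc_not_zero(&o->refcnt);
+    }
+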
+diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h
+index 4a6009d4486b..235c7811a86a 100644
+--- a/include/net/inetpeer.h
++++ b/include/net/inetpeer.h
+@@ -78,6 +78,7 @@ void inet_initpeers(void) __init;
+ static inline void inetpeer_set_addr_v4(struct inetpeer_addr *iaddr, __be32 ip)
+ {
+ iaddr->a4.addr = ip;
++ iaddr->a4.vif = 0;
+ iaddr->family = AF_INET;
+ }
+
+diff --git a/include/net/sock.h b/include/net/sock.h
+index e23717013a4e..bca709a65f68 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -387,6 +387,7 @@ struct sock {
+ sk_no_check_rx : 1,
+ sk_userlocks : 4,
+ sk_protocol : 8,
++#define SK_PROTOCOL_MAX U8_MAX
+ sk_type : 16;
+ kmemcheck_bitfield_end(flags);
+ int sk_wmem_queued;
+@@ -724,6 +725,8 @@ enum sock_flags {
+ SOCK_SELECT_ERR_QUEUE, /* Wake select on error queue */
+ };
+
++#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
++
+ static inline void sock_copy_flags(struct sock *nsk, struct sock *osk)
+ {
+ nsk->sk_flags = osk->sk_flags;
+@@ -798,7 +801,7 @@ void sk_stream_write_space(struct sock *sk);
+ static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+ {
+ /* dont let skb dst not refcounted, we are going to leave rcu lock */
+- skb_dst_force(skb);
++ skb_dst_force_safe(skb);
+
+ if (!sk->sk_backlog.tail)
+ sk->sk_backlog.head = skb;
+diff --git a/include/net/vxlan.h b/include/net/vxlan.h
+index 480a319b4c92..f4a49720767c 100644
+--- a/include/net/vxlan.h
++++ b/include/net/vxlan.h
+@@ -79,7 +79,7 @@ struct vxlanhdr {
+ };
+
+ /* VXLAN header flags. */
+-#define VXLAN_HF_RCO BIT(24)
++#define VXLAN_HF_RCO BIT(21)
+ #define VXLAN_HF_VNI BIT(27)
+ #define VXLAN_HF_GBP BIT(31)
+
+diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
+index f7b2db44eb4b..7fc5733f71db 100644
+--- a/include/uapi/linux/Kbuild
++++ b/include/uapi/linux/Kbuild
+@@ -186,6 +186,7 @@ header-y += if_tunnel.h
+ header-y += if_vlan.h
+ header-y += if_x25.h
+ header-y += igmp.h
++header-y += ila.h
+ header-y += in6.h
+ header-y += inet_diag.h
+ header-y += in.h
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index a54ff8949f91..aa388a741f4b 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -503,10 +503,11 @@ int rhashtable_walk_init(struct rhashtable *ht, struct rhashtable_iter *iter)
+ if (!iter->walker)
+ return -ENOMEM;
+
+- mutex_lock(&ht->mutex);
+- iter->walker->tbl = rht_dereference(ht->tbl, ht);
++ spin_lock(&ht->lock);
++ iter->walker->tbl =
++ rcu_dereference_protected(ht->tbl, lockdep_is_held(&ht->lock));
+ list_add(&iter->walker->list, &iter->walker->tbl->walkers);
+- mutex_unlock(&ht->mutex);
++ spin_unlock(&ht->lock);
+
+ return 0;
+ }
+@@ -520,10 +521,10 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_init);
+ */
+ void rhashtable_walk_exit(struct rhashtable_iter *iter)
+ {
+- mutex_lock(&iter->ht->mutex);
++ spin_lock(&iter->ht->lock);
+ if (iter->walker->tbl)
+ list_del(&iter->walker->list);
+- mutex_unlock(&iter->ht->mutex);
++ spin_unlock(&iter->ht->lock);
+ kfree(iter->walker);
+ }
+ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
+@@ -547,14 +548,12 @@ int rhashtable_walk_start(struct rhashtable_iter *iter)
+ {
+ struct rhashtable *ht = iter->ht;
+
+- mutex_lock(&ht->mutex);
++ rcu_read_lock();
+
++ spin_lock(&ht->lock);
+ if (iter->walker->tbl)
+ list_del(&iter->walker->list);
+-
+- rcu_read_lock();
+-
+- mutex_unlock(&ht->mutex);
++ spin_unlock(&ht->lock);
+
+ if (!iter->walker->tbl) {
+ iter->walker->tbl = rht_dereference_rcu(ht->tbl, ht);
+@@ -723,9 +722,6 @@ int rhashtable_init(struct rhashtable *ht,
+ if (params->nulls_base && params->nulls_base < (1U << RHT_BASE_SHIFT))
+ return -EINVAL;
+
+- if (params->nelem_hint)
+- size = rounded_hashtable_size(params);
+-
+ memset(ht, 0, sizeof(*ht));
+ mutex_init(&ht->mutex);
+ spin_lock_init(&ht->lock);
+@@ -745,6 +741,9 @@ int rhashtable_init(struct rhashtable *ht,
+
+ ht->p.min_size = max(ht->p.min_size, HASH_MIN_SIZE);
+
++ if (params->nelem_hint)
++ size = rounded_hashtable_size(&ht->p);
++
+ /* The maximum (not average) chain length grows with the
+ * size of the hash table, at a rate of (log N)/(log log N).
+ * The value of 16 is selected so that even if the hash
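+
+The rhashtable changes above move the walker list under the ht->lock spinlock
+so walkers can attach and detach without taking the resize mutex. For
+orientation, a walker loop against this 4.3-era API looks roughly like the
+following (error handling trimmed; struct my_obj is a placeholder):
+
+    struct rhashtable_iter iter;
+    struct my_obj *obj;
+
+    rhashtable_walk_init(&ht, &iter);
+    rhashtable_walk_start(&iter);       /* enters an RCU read side */
+
+    while ((obj = rhashtable_walk_next(&iter)) != NULL) {
+        if (IS_ERR(obj))
+            continue;                   /* -EAGAIN: a resize raced us */
+        /* inspect obj under RCU */
+    }
+
+    rhashtable_walk_stop(&iter);
+    rhashtable_walk_exit(&iter);
+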
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index ae3a47f9d1d5..fbd0acf80b13 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -805,6 +805,9 @@ static int ax25_create(struct net *net, struct socket *sock, int protocol,
+ struct sock *sk;
+ ax25_cb *ax25;
+
++ if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
++ return -EINVAL;
++
+ if (!net_eq(net, &init_net))
+ return -EAFNOSUPPORT;
+
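+The ax25, decnet and irda hunks all add the same range check, and the reason
+is visible in the sock.h hunk earlier: sk_protocol is an 8-bit bitfield, so an
+unchecked value such as 256 would be silently truncated to 0 on assignment and
+later misinterpreted. A toy demonstration of the guard:
+
+    struct proto_bits {
+        unsigned int sk_protocol : 8;   /* mirrors struct sock */
+    };
+
+    static int set_protocol(struct proto_bits *s, int protocol)
+    {
+        if (protocol < 0 || protocol > U8_MAX) /* SK_PROTOCOL_MAX */
+            return -EINVAL;
+        s->sk_protocol = protocol;      /* now guaranteed to fit */
+        return 0;
+    }
+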
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index f315c8d0e43b..15cb6c5508bc 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -519,6 +519,9 @@ static int sco_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_le
+ if (!addr || addr->sa_family != AF_BLUETOOTH)
+ return -EINVAL;
+
++ if (addr_len < sizeof(struct sockaddr_sco))
++ return -EINVAL;
++
+ lock_sock(sk);
+
+ if (sk->sk_state != BT_OPEN) {
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fab4599ba8b2..1c1f87ca447c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3643,7 +3643,8 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
+ serr->ee.ee_info = tstype;
+ if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) {
+ serr->ee.ee_data = skb_shinfo(skb)->tskey;
+- if (sk->sk_protocol == IPPROTO_TCP)
++ if (sk->sk_protocol == IPPROTO_TCP &&
++ sk->sk_type == SOCK_STREAM)
+ serr->ee.ee_data -= sk->sk_tskey;
+ }
+
+@@ -4268,7 +4269,8 @@ static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb)
+ return NULL;
+ }
+
+- memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN);
++ memmove(skb->data - ETH_HLEN, skb->data - skb->mac_len - VLAN_HLEN,
++ 2 * ETH_ALEN);
+ skb->mac_header += VLAN_HLEN;
+ return skb;
+ }
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3307c02244d3..dbbda999f33c 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -422,8 +422,6 @@ static void sock_warn_obsolete_bsdism(const char *name)
+ }
+ }
+
+-#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
+-
+ static void sock_disable_timestamp(struct sock *sk, unsigned long flags)
+ {
+ if (sk->sk_flags & flags) {
+@@ -862,7 +860,8 @@ set_rcvbuf:
+
+ if (val & SOF_TIMESTAMPING_OPT_ID &&
+ !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) {
+- if (sk->sk_protocol == IPPROTO_TCP) {
++ if (sk->sk_protocol == IPPROTO_TCP &&
++ sk->sk_type == SOCK_STREAM) {
+ if (sk->sk_state != TCP_ESTABLISHED) {
+ ret = -EINVAL;
+ break;
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 675cf94e04f8..6feddca286f3 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -678,6 +678,9 @@ static int dn_create(struct net *net, struct socket *sock, int protocol,
+ {
+ struct sock *sk;
+
++ if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
++ return -EINVAL;
++
+ if (!net_eq(net, &init_net))
+ return -EAFNOSUPPORT;
+
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 1d0c3adb6f34..4b16cf3df726 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -261,6 +261,9 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
+ int try_loading_module = 0;
+ int err;
+
++ if (protocol < 0 || protocol >= IPPROTO_MAX)
++ return -EINVAL;
++
+ sock->state = SS_UNCONNECTED;
+
+ /* Look for the requested type/protocol pair. */
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index e0fcbbbcfe54..bd903fe0f750 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -24,6 +24,7 @@ struct fou {
+ u16 type;
+ struct udp_offload udp_offloads;
+ struct list_head list;
++ struct rcu_head rcu;
+ };
+
+ #define FOU_F_REMCSUM_NOPARTIAL BIT(0)
+@@ -417,7 +418,7 @@ static void fou_release(struct fou *fou)
+ list_del(&fou->list);
+ udp_tunnel_sock_release(sock);
+
+- kfree(fou);
++ kfree_rcu(fou, rcu);
+ }
+
+ static int fou_encap_init(struct sock *sk, struct fou *fou, struct fou_cfg *cfg)
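+
+The fou hunk is the standard kfree_rcu() conversion: an rcu_head is embedded
+in the object and the free is deferred past a grace period, so lockless
+readers still traversing the list never touch freed memory. The general shape
+(sketch with generic names):
+
+    struct foo {
+        struct list_head list;
+        struct rcu_head rcu;
+    };
+
+    static void foo_release(struct foo *f)
+    {
+        list_del(&f->list);     /* unlink under the update-side lock */
+        kfree_rcu(f, rcu);      /* actual kfree() after a grace period */
+    }
+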
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a7739c83aa84..d77be28147eb 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1509,7 +1509,7 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
+ if (likely(sk->sk_rx_dst))
+ skb_dst_drop(skb);
+ else
+- skb_dst_force(skb);
++ skb_dst_force_safe(skb);
+
+ __skb_queue_tail(&tp->ucopy.prequeue, skb);
+ tp->ucopy.memory += skb->truesize;
+@@ -1710,8 +1710,7 @@ void inet_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+
+- if (dst) {
+- dst_hold(dst);
++ if (dst && dst_hold_safe(dst)) {
+ sk->sk_rx_dst = dst;
+ inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 3dbee0d83b15..c9585962374f 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3147,7 +3147,7 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_fastopen_request *fo = tp->fastopen_req;
+- int syn_loss = 0, space, err = 0, copied;
++ int syn_loss = 0, space, err = 0;
+ unsigned long last_syn_loss = 0;
+ struct sk_buff *syn_data;
+
+@@ -3185,17 +3185,18 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ goto fallback;
+ syn_data->ip_summed = CHECKSUM_PARTIAL;
+ memcpy(syn_data->cb, syn->cb, sizeof(syn->cb));
+- copied = copy_from_iter(skb_put(syn_data, space), space,
+- &fo->data->msg_iter);
+- if (unlikely(!copied)) {
+- kfree_skb(syn_data);
+- goto fallback;
+- }
+- if (copied != space) {
+- skb_trim(syn_data, copied);
+- space = copied;
++ if (space) {
++ int copied = copy_from_iter(skb_put(syn_data, space), space,
++ &fo->data->msg_iter);
++ if (unlikely(!copied)) {
++ kfree_skb(syn_data);
++ goto fallback;
++ }
++ if (copied != space) {
++ skb_trim(syn_data, copied);
++ space = copied;
++ }
+ }
+-
+ /* No more data pending in inet_wait_for_connect() */
+ if (space == fo->size)
+ fo->data = NULL;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3939dd290c44..ddd351145dea 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -349,6 +349,12 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
+ setup_timer(&ndev->rs_timer, addrconf_rs_timer,
+ (unsigned long)ndev);
+ memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf));
++
++ if (ndev->cnf.stable_secret.initialized)
++ ndev->addr_gen_mode = IN6_ADDR_GEN_MODE_STABLE_PRIVACY;
++ else
++ ndev->addr_gen_mode = IN6_ADDR_GEN_MODE_EUI64;
++
+ ndev->cnf.mtu6 = dev->mtu;
+ ndev->cnf.sysctl = NULL;
+ ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl);
+@@ -2453,7 +2459,7 @@ ok:
+ #ifdef CONFIG_IPV6_OPTIMISTIC_DAD
+ if (in6_dev->cnf.optimistic_dad &&
+ !net->ipv6.devconf_all->forwarding && sllao)
+- addr_flags = IFA_F_OPTIMISTIC;
++ addr_flags |= IFA_F_OPTIMISTIC;
+ #endif
+
+ /* Do not allow to create too much of autoconfigured
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 38d66ddfb937..df095eed885f 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -109,6 +109,9 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
+ int try_loading_module = 0;
+ int err;
+
++ if (protocol < 0 || protocol >= IPPROTO_MAX)
++ return -EINVAL;
++
+ /* Look for the requested type/protocol pair. */
+ lookup_protocol:
+ err = -ESOCKTNOSUPPORT;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 3c7b9310b33f..e5ea177d34c6 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1571,13 +1571,11 @@ static int ip6gre_changelink(struct net_device *dev, struct nlattr *tb[],
+ return -EEXIST;
+ } else {
+ t = nt;
+-
+- ip6gre_tunnel_unlink(ign, t);
+- ip6gre_tnl_change(t, &p, !tb[IFLA_MTU]);
+- ip6gre_tunnel_link(ign, t);
+- netdev_state_change(dev);
+ }
+
++ ip6gre_tunnel_unlink(ign, t);
++ ip6gre_tnl_change(t, &p, !tb[IFLA_MTU]);
++ ip6gre_tunnel_link(ign, t);
+ return 0;
+ }
+
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 9e9b77bd2d0a..8935dc173e65 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -93,10 +93,9 @@ static void inet6_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+
+- if (dst) {
++ if (dst && dst_hold_safe(dst)) {
+ const struct rt6_info *rt = (const struct rt6_info *)dst;
+
+- dst_hold(dst);
+ sk->sk_rx_dst = dst;
+ inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
+ inet6_sk(sk)->rx_dst_cookie = rt6_get_cookie(rt);
+diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
+index fae6822cc367..25f63a8ff402 100644
+--- a/net/irda/af_irda.c
++++ b/net/irda/af_irda.c
+@@ -1086,6 +1086,9 @@ static int irda_create(struct net *net, struct socket *sock, int protocol,
+ struct sock *sk;
+ struct irda_sock *self;
+
++ if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
++ return -EINVAL;
++
+ if (net != &init_net)
+ return -EAFNOSUPPORT;
+
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 50095820edb7..cad8c4b48ea6 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -53,6 +53,8 @@ struct ovs_conntrack_info {
+ struct md_labels labels;
+ };
+
++static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info);
++
+ static u16 key_to_nfproto(const struct sw_flow_key *key)
+ {
+ switch (ntohs(key->eth.type)) {
+@@ -141,6 +143,7 @@ static void __ovs_ct_update_key(struct sw_flow_key *key, u8 state,
+ * previously sent the packet to conntrack via the ct action.
+ */
+ static void ovs_ct_update_key(const struct sk_buff *skb,
++ const struct ovs_conntrack_info *info,
+ struct sw_flow_key *key, bool post_ct)
+ {
+ const struct nf_conntrack_zone *zone = &nf_ct_zone_dflt;
+@@ -158,13 +161,15 @@ static void ovs_ct_update_key(const struct sk_buff *skb,
+ zone = nf_ct_zone(ct);
+ } else if (post_ct) {
+ state = OVS_CS_F_TRACKED | OVS_CS_F_INVALID;
++ if (info)
++ zone = &info->zone;
+ }
+ __ovs_ct_update_key(key, state, zone, ct);
+ }
+
+ void ovs_ct_fill_key(const struct sk_buff *skb, struct sw_flow_key *key)
+ {
+- ovs_ct_update_key(skb, key, false);
++ ovs_ct_update_key(skb, NULL, key, false);
+ }
+
+ int ovs_ct_put_key(const struct sw_flow_key *key, struct sk_buff *skb)
+@@ -418,7 +423,7 @@ static int __ovs_ct_lookup(struct net *net, struct sw_flow_key *key,
+ }
+ }
+
+- ovs_ct_update_key(skb, key, true);
++ ovs_ct_update_key(skb, info, key, true);
+
+ return 0;
+ }
+@@ -708,7 +713,7 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr,
+ nf_conntrack_get(&ct_info.ct->ct_general);
+ return 0;
+ err_free_ct:
+- nf_conntrack_free(ct_info.ct);
++ __ovs_ct_free_action(&ct_info);
+ return err;
+ }
+
+@@ -750,6 +755,11 @@ void ovs_ct_free_action(const struct nlattr *a)
+ {
+ struct ovs_conntrack_info *ct_info = nla_data(a);
+
++ __ovs_ct_free_action(ct_info);
++}
++
++static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info)
++{
+ if (ct_info->helper)
+ module_put(ct_info->helper->me);
+ if (ct_info->ct)
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 7ec667dd4ce1..b5c2cf2aa6d4 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -950,7 +950,7 @@ qdisc_create(struct net_device *dev, struct netdev_queue *dev_queue,
+ }
+ lockdep_set_class(qdisc_lock(sch), &qdisc_tx_lock);
+ if (!netif_is_multiqueue(dev))
+- sch->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
++ sch->flags |= TCQ_F_ONETXQUEUE;
+ }
+
+ sch->handle = handle;
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index e917d27328ea..40677cfcd7aa 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -635,6 +635,7 @@ static struct sock *sctp_v6_create_accept_sk(struct sock *sk,
+ struct sock *newsk;
+ struct ipv6_pinfo *newnp, *np = inet6_sk(sk);
+ struct sctp6_sock *newsctp6sk;
++ struct ipv6_txoptions *opt;
+
+ newsk = sk_alloc(sock_net(sk), PF_INET6, GFP_KERNEL, sk->sk_prot, 0);
+ if (!newsk)
+@@ -654,6 +655,13 @@ static struct sock *sctp_v6_create_accept_sk(struct sock *sk,
+
+ memcpy(newnp, np, sizeof(struct ipv6_pinfo));
+
++ rcu_read_lock();
++ opt = rcu_dereference(np->opt);
++ if (opt)
++ opt = ipv6_dup_options(newsk, opt);
++ RCU_INIT_POINTER(newnp->opt, opt);
++ rcu_read_unlock();
++
+ /* Initialize sk's sport, dport, rcv_saddr and daddr for getsockname()
+ * and getpeername().
+ */
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 7954c52e1794..8d67d72e4858 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -1652,7 +1652,7 @@ static sctp_cookie_param_t *sctp_pack_cookie(const struct sctp_endpoint *ep,
+
+ /* Set an expiration time for the cookie. */
+ cookie->c.expiration = ktime_add(asoc->cookie_life,
+- ktime_get());
++ ktime_get_real());
+
+ /* Copy the peer's init packet. */
+ memcpy(&cookie->c.peer_init[0], init_chunk->chunk_hdr,
+@@ -1780,7 +1780,7 @@ no_hmac:
+ if (sock_flag(ep->base.sk, SOCK_TIMESTAMP))
+ kt = skb_get_ktime(skb);
+ else
+- kt = ktime_get();
++ kt = ktime_get_real();
+
+ if (!asoc && ktime_before(bear_cookie->expiration, kt)) {
+ /*
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 3ec88be0faec..84b1b504538a 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -7163,6 +7163,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ newsk->sk_type = sk->sk_type;
+ newsk->sk_bound_dev_if = sk->sk_bound_dev_if;
+ newsk->sk_flags = sk->sk_flags;
++ newsk->sk_tsflags = sk->sk_tsflags;
+ newsk->sk_no_check_tx = sk->sk_no_check_tx;
+ newsk->sk_no_check_rx = sk->sk_no_check_rx;
+ newsk->sk_reuse = sk->sk_reuse;
+@@ -7195,6 +7196,9 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ newinet->mc_ttl = 1;
+ newinet->mc_index = 0;
+ newinet->mc_list = NULL;
++
++ if (newsk->sk_flags & SK_FLAGS_TIMESTAMP)
++ net_enable_timestamp();
+ }
+
+ static inline void sctp_copy_descendant(struct sock *sk_to,
+diff --git a/net/socket.c b/net/socket.c
+index 9963a0b53a64..f3fbe17b5765 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1702,6 +1702,7 @@ SYSCALL_DEFINE6(recvfrom, int, fd, void __user *, ubuf, size_t, size,
+ msg.msg_name = addr ? (struct sockaddr *)&address : NULL;
+ /* We assume all kernel code knows the size of sockaddr_storage */
+ msg.msg_namelen = 0;
++ msg.msg_iocb = NULL;
+ if (sock->file->f_flags & O_NONBLOCK)
+ flags |= MSG_DONTWAIT;
+ err = sock_recvmsg(sock, &msg, iov_iter_count(&msg.msg_iter), flags);
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 86f2e7c44694..73bdf1baa266 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -162,7 +162,7 @@ static int tipc_udp_send_msg(struct net *net, struct sk_buff *skb,
+ if (skb_headroom(skb) < UDP_MIN_HEADROOM) {
+ err = pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC);
+ if (err)
+- goto tx_error;
++ return err;
+ }
+
+ clone = skb_clone(skb, GFP_ATOMIC);
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 128b0982c96b..0fc6dbaed39c 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2255,14 +2255,7 @@ static int unix_stream_read_generic(struct unix_stream_read_state *state)
+ /* Lock the socket to prevent queue disordering
+ * while sleeps in memcpy_tomsg
+ */
+- err = mutex_lock_interruptible(&u->readlock);
+- if (unlikely(err)) {
+- /* recvmsg() in non blocking mode is supposed to return -EAGAIN
+- * sk_rcvtimeo is not honored by mutex_lock_interruptible()
+- */
+- err = noblock ? -EAGAIN : -ERESTARTSYS;
+- goto out;
+- }
++ mutex_lock(&u->readlock);
+
+ if (flags & MSG_PEEK)
+ skip = sk_peek_offset(sk, flags);
+@@ -2306,12 +2299,12 @@ again:
+ timeo = unix_stream_data_wait(sk, timeo, last,
+ last_len);
+
+- if (signal_pending(current) ||
+- mutex_lock_interruptible(&u->readlock)) {
++ if (signal_pending(current)) {
+ err = sock_intr_errno(timeo);
+ goto out;
+ }
+
++ mutex_lock(&u->readlock);
+ continue;
+ unlock:
+ unix_state_unlock(sk);
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 0b9ec78a7a7a..26f0e0a11ed6 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -757,16 +757,16 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
+
+ /* the key is probably readable - now try to read it */
+ can_read_key:
+- ret = key_validate(key);
+- if (ret == 0) {
+- ret = -EOPNOTSUPP;
+- if (key->type->read) {
+- /* read the data with the semaphore held (since we
+- * might sleep) */
+- down_read(&key->sem);
++ ret = -EOPNOTSUPP;
++ if (key->type->read) {
++ /* Read the data with the semaphore held (since we might sleep)
++ * to protect against the key being updated or revoked.
++ */
++ down_read(&key->sem);
++ ret = key_validate(key);
++ if (ret == 0)
+ ret = key->type->read(key, buffer, buflen);
+- up_read(&key->sem);
+- }
++ up_read(&key->sem);
+ }
+
+ error2:
+diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
+index 43b4cddbf2b3..7877e5cd4e23 100644
+--- a/security/keys/process_keys.c
++++ b/security/keys/process_keys.c
+@@ -794,6 +794,7 @@ long join_session_keyring(const char *name)
+ ret = PTR_ERR(keyring);
+ goto error2;
+ } else if (keyring == new->session_keyring) {
++ key_put(keyring);
+ ret = 0;
+ goto error2;
+ }
diff --git a/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch b/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
deleted file mode 100644
index 49020d7..0000000
--- a/1520_keyring-refleak-in-join-session-CVE-2016-0728.patch
+++ /dev/null
@@ -1,81 +0,0 @@
-From 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2 Mon Sep 17 00:00:00 2001
-From: Yevgeny Pats <yevgeny@perception-point.io>
-Date: Tue, 19 Jan 2016 22:09:04 +0000
-Subject: KEYS: Fix keyring ref leak in join_session_keyring()
-
-This fixes CVE-2016-0728.
-
-If a thread is asked to join as a session keyring the keyring that's already
-set as its session, we leak a keyring reference.
-
-This can be tested with the following program:
-
- #include <stddef.h>
- #include <stdio.h>
- #include <sys/types.h>
- #include <keyutils.h>
-
- int main(int argc, const char *argv[])
- {
- int i = 0;
- key_serial_t serial;
-
- serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING,
- "leaked-keyring");
- if (serial < 0) {
- perror("keyctl");
- return -1;
- }
-
- if (keyctl(KEYCTL_SETPERM, serial,
- KEY_POS_ALL | KEY_USR_ALL) < 0) {
- perror("keyctl");
- return -1;
- }
-
- for (i = 0; i < 100; i++) {
- serial = keyctl(KEYCTL_JOIN_SESSION_KEYRING,
- "leaked-keyring");
- if (serial < 0) {
- perror("keyctl");
- return -1;
- }
- }
-
- return 0;
- }
-
-If, after the program has run, there something like the following line in
-/proc/keys:
-
-3f3d898f I--Q--- 100 perm 3f3f0000 0 0 keyring leaked-keyring: empty
-
-with a usage count of 100 * the number of times the program has been run,
-then the kernel is malfunctioning. If leaked-keyring has zero usages or
-has been garbage collected, then the problem is fixed.
-
-Reported-by: Yevgeny Pats <yevgeny@perception-point.io>
-Signed-off-by: David Howells <dhowells@redhat.com>
-Acked-by: Don Zickus <dzickus@redhat.com>
-Acked-by: Prarit Bhargava <prarit@redhat.com>
-Acked-by: Jarod Wilson <jarod@redhat.com>
-Signed-off-by: James Morris <james.l.morris@oracle.com>
----
- security/keys/process_keys.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
-index a3f85d2..e6d50172 100644
---- a/security/keys/process_keys.c
-+++ b/security/keys/process_keys.c
-@@ -794,6 +794,7 @@ long join_session_keyring(const char *name)
- ret = PTR_ERR(keyring);
- goto error2;
- } else if (keyring == new->session_keyring) {
-+ key_put(keyring);
- ret = 0;
- goto error2;
- }
---
-cgit v0.12
-
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2016-01-31 23:31 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2016-01-31 23:31 UTC (permalink / raw
To: gentoo-commits
commit: 5c6031723f80c0670aa5fe939f24cbcbbfc2cbcd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 31 23:31:11 2016 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 31 23:31:11 2016 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5c603172
Linux patch 4.3.5
0000_README | 4 +
1004_linux-4.3.5.patch | 5981 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5985 insertions(+)
diff --git a/0000_README b/0000_README
index 5f4c1bc..74a7d33 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-4.3.4.patch
From: http://www.kernel.org
Desc: Linux 4.3.4
+Patch: 1004_linux-4.3.5.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-4.3.5.patch b/1004_linux-4.3.5.patch
new file mode 100644
index 0000000..e04b2cb
--- /dev/null
+++ b/1004_linux-4.3.5.patch
@@ -0,0 +1,5981 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb
+index 864637f25bee..01c7a41c18ac 100644
+--- a/Documentation/ABI/testing/sysfs-bus-usb
++++ b/Documentation/ABI/testing/sysfs-bus-usb
+@@ -114,19 +114,21 @@ Description:
+ enabled for the device. Developer can write y/Y/1 or n/N/0 to
+ the file to enable/disable the feature.
+
+-What: /sys/bus/usb/devices/.../power/usb3_hardware_lpm
+-Date: June 2015
++What: /sys/bus/usb/devices/.../power/usb3_hardware_lpm_u1
++ /sys/bus/usb/devices/.../power/usb3_hardware_lpm_u2
++Date: November 2015
+ Contact: Kevin Strasser <kevin.strasser@linux.intel.com>
++ Lu Baolu <baolu.lu@linux.intel.com>
+ Description:
+ If CONFIG_PM is set and a USB 3.0 lpm-capable device is plugged
+ in to a xHCI host which supports link PM, it will check if U1
+ and U2 exit latencies have been set in the BOS descriptor; if
+- the check is is passed and the host supports USB3 hardware LPM,
++ the check is passed and the host supports USB3 hardware LPM,
+ USB3 hardware LPM will be enabled for the device and the USB
+- device directory will contain a file named
+- power/usb3_hardware_lpm. The file holds a string value (enable
+- or disable) indicating whether or not USB3 hardware LPM is
+- enabled for the device.
++ device directory will contain two files named
++ power/usb3_hardware_lpm_u1 and power/usb3_hardware_lpm_u2. These
++ files hold a string value (enable or disable) indicating whether
++ or not USB3 hardware LPM U1 or U2 is enabled for the device.
+
+ What: /sys/bus/usb/devices/.../removable
+ Date: February 2012
+diff --git a/Documentation/usb/power-management.txt b/Documentation/usb/power-management.txt
+index 4a15c90bc11d..0a94ffe17ab6 100644
+--- a/Documentation/usb/power-management.txt
++++ b/Documentation/usb/power-management.txt
+@@ -537,17 +537,18 @@ relevant attribute files are usb2_hardware_lpm and usb3_hardware_lpm.
+ can write y/Y/1 or n/N/0 to the file to enable/disable
+ USB2 hardware LPM manually. This is for test purpose mainly.
+
+- power/usb3_hardware_lpm
++ power/usb3_hardware_lpm_u1
++ power/usb3_hardware_lpm_u2
+
+ When a USB 3.0 lpm-capable device is plugged in to a
+ xHCI host which supports link PM, it will check if U1
+ and U2 exit latencies have been set in the BOS
+ descriptor; if the check is is passed and the host
+ supports USB3 hardware LPM, USB3 hardware LPM will be
+- enabled for the device and this file will be created.
+- The file holds a string value (enable or disable)
+- indicating whether or not USB3 hardware LPM is
+- enabled for the device.
++ enabled for the device and these files will be created.
++ The files hold a string value (enable or disable)
++ indicating whether or not USB3 hardware LPM U1 or U2
++ is enabled for the device.
+
+ USB Port Power Control
+ ----------------------
+diff --git a/Makefile b/Makefile
+index 69430ed64270..efc7a766c470 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
+index 6984342da13d..61d96a645ff3 100644
+--- a/arch/arm/kvm/mmu.c
++++ b/arch/arm/kvm/mmu.c
+@@ -98,6 +98,11 @@ static void kvm_flush_dcache_pud(pud_t pud)
+ __kvm_flush_dcache_pud(pud);
+ }
+
++static bool kvm_is_device_pfn(unsigned long pfn)
++{
++ return !pfn_valid(pfn);
++}
++
+ /**
+ * stage2_dissolve_pmd() - clear and flush huge PMD entry
+ * @kvm: pointer to kvm structure.
+@@ -213,7 +218,7 @@ static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+
+ /* No need to invalidate the cache for device mappings */
+- if ((pte_val(old_pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
++ if (!kvm_is_device_pfn(pte_pfn(old_pte)))
+ kvm_flush_dcache_pte(old_pte);
+
+ put_page(virt_to_page(pte));
+@@ -305,8 +310,7 @@ static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+
+ pte = pte_offset_kernel(pmd, addr);
+ do {
+- if (!pte_none(*pte) &&
+- (pte_val(*pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
++ if (!pte_none(*pte) && !kvm_is_device_pfn(pte_pfn(*pte)))
+ kvm_flush_dcache_pte(*pte);
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ }
+@@ -1037,11 +1041,6 @@ static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+ return kvm_vcpu_dabt_iswrite(vcpu);
+ }
+
+-static bool kvm_is_device_pfn(unsigned long pfn)
+-{
+- return !pfn_valid(pfn);
+-}
+-
+ /**
+ * stage2_wp_ptes - write protect PMD range
+ * @pmd: pointer to pmd entry
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index b8efb8cd1f73..4d25fd0fae10 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -182,19 +182,6 @@ static inline int mem_words_used(struct jit_ctx *ctx)
+ return fls(ctx->seen & SEEN_MEM);
+ }
+
+-static inline bool is_load_to_a(u16 inst)
+-{
+- switch (inst) {
+- case BPF_LD | BPF_W | BPF_LEN:
+- case BPF_LD | BPF_W | BPF_ABS:
+- case BPF_LD | BPF_H | BPF_ABS:
+- case BPF_LD | BPF_B | BPF_ABS:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+ static void jit_fill_hole(void *area, unsigned int size)
+ {
+ u32 *ptr;
+@@ -206,7 +193,6 @@ static void jit_fill_hole(void *area, unsigned int size)
+ static void build_prologue(struct jit_ctx *ctx)
+ {
+ u16 reg_set = saved_regs(ctx);
+- u16 first_inst = ctx->skf->insns[0].code;
+ u16 off;
+
+ #ifdef CONFIG_FRAME_POINTER
+@@ -236,7 +222,7 @@ static void build_prologue(struct jit_ctx *ctx)
+ emit(ARM_MOV_I(r_X, 0), ctx);
+
+ /* do not leak kernel data to userspace */
+- if ((first_inst != (BPF_RET | BPF_K)) && !(is_load_to_a(first_inst)))
++ if (bpf_needs_clear_a(&ctx->skf->insns[0]))
+ emit(ARM_MOV_I(r_A, 0), ctx);
+
+ /* stack space for the BPF_MEM words */
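
The prologue change above — repeated later in this patch for the MIPS, PowerPC and sparc classic-BPF JITs — drops each JIT's private is_load_to_a() switch in favour of a shared bpf_needs_clear_a() helper. Below is a minimal standalone sketch of the decision that helper encodes; the opcode constants and sock_filter layout are copied in only so it compiles on its own, and the real helper also special-cases one ancillary load that this sketch omits.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* classic-BPF opcode constants (uapi values) */
#define BPF_LD   0x00
#define BPF_RET  0x06
#define BPF_W    0x00
#define BPF_H    0x08
#define BPF_B    0x10
#define BPF_ABS  0x20
#define BPF_LEN  0x80
#define BPF_K    0x00

struct sock_filter {
	uint16_t code;
	uint8_t jt, jf;
	uint32_t k;
};

/* Clear A in the prologue unless the first instruction is guaranteed
 * to write it (or the program immediately returns a constant), so a
 * stale register can never leak kernel data into the filter. */
static bool bpf_needs_clear_a(const struct sock_filter *first)
{
	switch (first->code) {
	case BPF_RET | BPF_K:
	case BPF_LD | BPF_W | BPF_LEN:
	case BPF_LD | BPF_W | BPF_ABS:
	case BPF_LD | BPF_H | BPF_ABS:
	case BPF_LD | BPF_B | BPF_ABS:
		return false;	/* first insn sets A (or never reads it) */
	default:
		return true;	/* emit "A = 0" to be safe */
	}
}

int main(void)
{
	struct sock_filter ret_k = { BPF_RET | BPF_K, 0, 0, 0 };
	struct sock_filter other = { 0x04 /* an ALU op: A is read */, 0, 0, 0 };

	printf("ret_k needs clear: %d\n", bpf_needs_clear_a(&ret_k));
	printf("other needs clear: %d\n", bpf_needs_clear_a(&other));
	return 0;
}
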
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 07d1811aa03f..a92266e634cd 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -311,6 +311,27 @@ config ARM64_ERRATUM_832075
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_834220
++ bool "Cortex-A57: 834220: Stage 2 translation fault might be incorrectly reported in presence of a Stage 1 fault"
++ depends on KVM
++ default y
++ help
++ This option adds an alternative code sequence to work around ARM
++ erratum 834220 on Cortex-A57 parts up to r1p2.
++
++ Affected Cortex-A57 parts might report a Stage 2 translation
++ fault as the result of a Stage 1 fault for a load crossing
++ a page boundary when there is a Stage 1 permission or device
++ memory alignment fault and a Stage 2 translation fault.
++
++ The workaround is to verify that the Stage-1 translation
++ doesn't generate a fault before handling the Stage-2 fault.
++ Please note that this does not necessarily enable the workaround,
++ as it depends on the alternative framework, which will only patch
++ the kernel if an affected CPU is detected.
++
++ If unsure, say Y.
++
+ config ARM64_ERRATUM_845719
+ bool "Cortex-A53: 845719: a load might read incorrect data"
+ depends on COMPAT
+diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
+index b3b5c4ae3800..af5b9d5c5c23 100644
+--- a/arch/arm64/include/asm/atomic_ll_sc.h
++++ b/arch/arm64/include/asm/atomic_ll_sc.h
+@@ -211,7 +211,7 @@ __CMPXCHG_CASE( , , mb_8, dmb ish, l, "memory")
+ #undef __CMPXCHG_CASE
+
+ #define __CMPXCHG_DBL(name, mb, rel, cl) \
+-__LL_SC_INLINE int \
++__LL_SC_INLINE long \
+ __LL_SC_PREFIX(__cmpxchg_double##name(unsigned long old1, \
+ unsigned long old2, \
+ unsigned long new1, \
+diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
+index 55d740e63459..4d548aa54b21 100644
+--- a/arch/arm64/include/asm/atomic_lse.h
++++ b/arch/arm64/include/asm/atomic_lse.h
+@@ -348,7 +348,7 @@ __CMPXCHG_CASE(x, , mb_8, al, "memory")
+ #define __LL_SC_CMPXCHG_DBL(op) __LL_SC_CALL(__cmpxchg_double##op)
+
+ #define __CMPXCHG_DBL(name, mb, cl...) \
+-static inline int __cmpxchg_double##name(unsigned long old1, \
++static inline long __cmpxchg_double##name(unsigned long old1, \
+ unsigned long old2, \
+ unsigned long new1, \
+ unsigned long new2, \
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index 171570702bb8..a1a5981526fe 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -27,8 +27,9 @@
+ #define ARM64_HAS_SYSREG_GIC_CPUIF 3
+ #define ARM64_HAS_PAN 4
+ #define ARM64_HAS_LSE_ATOMICS 5
++#define ARM64_WORKAROUND_834220 6
+
+-#define ARM64_NCAPS 6
++#define ARM64_NCAPS 7
+
+ #ifndef __ASSEMBLY__
+
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index 17e92f05b1fe..3ca894ecf699 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -99,11 +99,13 @@ static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
+ *vcpu_cpsr(vcpu) |= COMPAT_PSR_T_BIT;
+ }
+
++/*
++ * vcpu_reg should always be passed a register number coming from a
++ * read of ESR_EL2. Otherwise, it may give the wrong result on AArch32
++ * with banked registers.
++ */
+ static inline unsigned long *vcpu_reg(const struct kvm_vcpu *vcpu, u8 reg_num)
+ {
+- if (vcpu_mode_is_32bit(vcpu))
+- return vcpu_reg32(vcpu, reg_num);
+-
+ return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.regs[reg_num];
+ }
+
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 6ffd91438560..dc0df822def3 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -74,6 +74,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ (1 << MIDR_VARIANT_SHIFT) | 2),
+ },
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_834220
++ {
++ /* Cortex-A57 r0p0 - r1p2 */
++ .desc = "ARM erratum 834220",
++ .capability = ARM64_WORKAROUND_834220,
++ MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
++ (1 << MIDR_VARIANT_SHIFT) | 2),
++ },
++#endif
+ #ifdef CONFIG_ARM64_ERRATUM_845719
+ {
+ /* Cortex-A53 r0p[01234] */
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 90d09eddd5b2..b84ef8376471 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -524,9 +524,14 @@ CPU_LE( movk x0, #0x30d0, lsl #16 ) // Clear EE and E0E on LE systems
+ #endif
+
+ /* EL2 debug */
++ mrs x0, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
++ sbfx x0, x0, #8, #4
++ cmp x0, #1
++ b.lt 4f // Skip if no PMU present
+ mrs x0, pmcr_el0 // Disable debug access traps
+ ubfx x0, x0, #11, #5 // to EL2 and allow access to
+ msr mdcr_el2, x0 // all PMU counters from EL1
++4:
+
+ /* Stage-2 translation */
+ msr vttbr_el2, xzr
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index f9a74d4fff3b..f325af5b38e3 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -1159,9 +1159,6 @@ static void armv8pmu_reset(void *info)
+
+ /* Initialize & Reset PMNC: C and P bits. */
+ armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C);
+-
+- /* Disable access from userspace. */
+- asm volatile("msr pmuserenr_el0, %0" :: "r" (0));
+ }
+
+ static int armv8_pmuv3_map_event(struct perf_event *event)
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 1971f491bb90..ff7f13239515 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -58,6 +58,12 @@
+ */
+ void ptrace_disable(struct task_struct *child)
+ {
++ /*
++ * This would be better off in core code, but PTRACE_DETACH has
++ * grown its fair share of arch-specific warts and changing it
++ * is likely to cause regressions on obscure architectures.
++ */
++ user_disable_single_step(child);
+ }
+
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 232247945b1c..9f17dc72645a 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -558,6 +558,10 @@ static int c_show(struct seq_file *m, void *v)
+ */
+ seq_printf(m, "processor\t: %d\n", i);
+
++ seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
++ loops_per_jiffy / (500000UL/HZ),
++ loops_per_jiffy / (5000UL/HZ) % 100);
++
+ /*
+ * Dump out the common processor features in a single line.
+ * Userspace should read the hwcaps with getauxval(AT_HWCAP)
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index 44ca4143b013..dd6ad81d53aa 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -1,3 +1,4 @@
++#include <linux/ftrace.h>
+ #include <linux/percpu.h>
+ #include <linux/slab.h>
+ #include <asm/cacheflush.h>
+@@ -71,6 +72,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ local_dbg_save(flags);
+
+ /*
++ * Function graph tracer state gets inconsistent when the kernel
++ * calls functions that never return (aka suspend finishers) hence
++ * disable graph tracing during their execution.
++ */
++ pause_graph_tracing();
++
++ /*
+ * mm context saved on the stack, it will be restored when
+ * the cpu comes out of reset through the identity mapped
+ * page tables, so that the thread address space is properly
+@@ -111,6 +119,8 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ hw_breakpoint_restore(NULL);
+ }
+
++ unpause_graph_tracing();
++
+ /*
+ * Restore pstate flags. OS lock and mdscr have been already
+ * restored, so from this point onwards, debugging is fully
+diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
+index e5836138ec42..3e840649b133 100644
+--- a/arch/arm64/kvm/hyp.S
++++ b/arch/arm64/kvm/hyp.S
+@@ -1007,9 +1007,15 @@ el1_trap:
+ b.ne 1f // Not an abort we care about
+
+ /* This is an abort. Check for permission fault */
++alternative_if_not ARM64_WORKAROUND_834220
+ and x2, x1, #ESR_ELx_FSC_TYPE
+ cmp x2, #FSC_PERM
+ b.ne 1f // Not a permission fault
++alternative_else
++ nop // Force a Stage-1 translation to occur
++ nop // and return to the guest if it failed
++ nop
++alternative_endif
+
+ /*
+ * Check for Stage-1 page table walk, which is guaranteed
+diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
+index 85c57158dcd9..648112e90ed5 100644
+--- a/arch/arm64/kvm/inject_fault.c
++++ b/arch/arm64/kvm/inject_fault.c
+@@ -48,7 +48,7 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+
+ /* Note: These now point to the banked copies */
+ *vcpu_spsr(vcpu) = new_spsr_value;
+- *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
++ *vcpu_reg32(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
+
+ /* Branch to exception vector */
+ if (sctlr & (1 << 13))
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 9211b8527f25..9317974c9b8e 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -451,6 +451,9 @@ void __init paging_init(void)
+
+ empty_zero_page = virt_to_page(zero_page);
+
++ /* Ensure the zero page is visible to the page table walker */
++ dsb(ishst);
++
+ /*
+ * TTBR0 is only used for the identity mapping at this stage. Make it
+ * point to zero page to avoid speculatively fetching new entries.
+diff --git a/arch/arm64/mm/proc-macros.S b/arch/arm64/mm/proc-macros.S
+index 4c4d93c4bf65..d69dffffaa89 100644
+--- a/arch/arm64/mm/proc-macros.S
++++ b/arch/arm64/mm/proc-macros.S
+@@ -62,3 +62,15 @@
+ bfi \valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
+ #endif
+ .endm
++
++/*
++ * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
++ */
++ .macro reset_pmuserenr_el0, tmpreg
++ mrs \tmpreg, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
++ sbfx \tmpreg, \tmpreg, #8, #4
++ cmp \tmpreg, #1 // Skip if no PMU present
++ b.lt 9000f
++ msr pmuserenr_el0, xzr // Disable PMU access from EL0
++9000:
++ .endm
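
Both the head.S hunk earlier and the reset_pmuserenr_el0 macro above gate their PMU register writes on ID_AA64DFR0_EL1.PMUVer, read with sbfx (signed bit-field extract) so that the 0xf "not implemented" encoding comes out as -1 and fails the cmp #1 / b.lt test just as 0 does. A standalone C model of that signed extract follows; the sample register values are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/* Model of `sbfx xN, xN, #lsb, #width`: shift the field up to the top
 * bit, then arithmetic-shift it back down (arithmetic right shift of a
 * negative value is assumed, as gcc/clang provide). */
static int64_t sbfx(uint64_t reg, unsigned lsb, unsigned width)
{
	return (int64_t)(reg << (64 - lsb - width)) >> (64 - width);
}

/* PMUVer lives in ID_AA64DFR0_EL1 bits [11:8]; 0 = no PMU, 0xf = not
 * implemented, which the signed extract turns into -1. */
static int pmu_present(uint64_t id_aa64dfr0)
{
	return sbfx(id_aa64dfr0, 8, 4) >= 1;	/* cmp #1; b.lt skips */
}

int main(void)
{
	uint64_t with_pmuv3 = 0x0000000000000106ULL;	/* PMUVer = 1 */
	uint64_t no_pmu     = 0x0000000000000f06ULL;	/* PMUVer = 0xf */

	printf("PMUVer=1:   present? %d\n", pmu_present(with_pmuv3));
	printf("PMUVer=0xf: present? %d\n", pmu_present(no_pmu));
	return 0;
}
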
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index e4ee7bd8830a..b722d3e26185 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -115,6 +115,7 @@ ENTRY(cpu_do_resume)
+ */
+ ubfx x11, x11, #1, #1
+ msr oslar_el1, x11
++ reset_pmuserenr_el0 x0 // Disable PMU access from EL0
+ mov x0, x12
+ dsb nsh // Make sure local tlb invalidation completed
+ isb
+@@ -153,6 +154,7 @@ ENTRY(__cpu_setup)
+ msr cpacr_el1, x0 // Enable FP/ASIMD
+ mov x0, #1 << 12 // Reset mdscr_el1 and disable
+ msr mdscr_el1, x0 // access to the DCC from EL0
++ reset_pmuserenr_el0 x0 // Disable PMU access from EL0
+ /*
+ * Memory region attributes for LPAE:
+ *
+diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
+index 98a26ce82d26..aee5637ea436 100644
+--- a/arch/arm64/net/bpf_jit.h
++++ b/arch/arm64/net/bpf_jit.h
+@@ -1,7 +1,7 @@
+ /*
+ * BPF JIT compiler for ARM64
+ *
+- * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com>
++ * Copyright (C) 2014-2015 Zi Shen Lim <zlim.lnx@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+@@ -35,6 +35,7 @@
+ aarch64_insn_gen_comp_branch_imm(0, offset, Rt, A64_VARIANT(sf), \
+ AARCH64_INSN_BRANCH_COMP_##type)
+ #define A64_CBZ(sf, Rt, imm19) A64_COMP_BRANCH(sf, Rt, (imm19) << 2, ZERO)
++#define A64_CBNZ(sf, Rt, imm19) A64_COMP_BRANCH(sf, Rt, (imm19) << 2, NONZERO)
+
+ /* Conditional branch (immediate) */
+ #define A64_COND_BRANCH(cond, offset) \
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index c047598b09e0..6217f80702d2 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1,7 +1,7 @@
+ /*
+ * BPF JIT compiler for ARM64
+ *
+- * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com>
++ * Copyright (C) 2014-2015 Zi Shen Lim <zlim.lnx@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+@@ -225,6 +225,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ u8 jmp_cond;
+ s32 jmp_offset;
+
++#define check_imm(bits, imm) do { \
++ if ((((imm) > 0) && ((imm) >> (bits))) || \
++ (((imm) < 0) && (~(imm) >> (bits)))) { \
++ pr_info("[%2d] imm=%d(0x%x) out of range\n", \
++ i, imm, imm); \
++ return -EINVAL; \
++ } \
++} while (0)
++#define check_imm19(imm) check_imm(19, imm)
++#define check_imm26(imm) check_imm(26, imm)
++
+ switch (code) {
+ /* dst = src */
+ case BPF_ALU | BPF_MOV | BPF_X:
+@@ -258,15 +269,33 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ break;
+ case BPF_ALU | BPF_DIV | BPF_X:
+ case BPF_ALU64 | BPF_DIV | BPF_X:
+- emit(A64_UDIV(is64, dst, dst, src), ctx);
+- break;
+ case BPF_ALU | BPF_MOD | BPF_X:
+ case BPF_ALU64 | BPF_MOD | BPF_X:
+- ctx->tmp_used = 1;
+- emit(A64_UDIV(is64, tmp, dst, src), ctx);
+- emit(A64_MUL(is64, tmp, tmp, src), ctx);
+- emit(A64_SUB(is64, dst, dst, tmp), ctx);
++ {
++ const u8 r0 = bpf2a64[BPF_REG_0];
++
++ /* if (src == 0) return 0 */
++ jmp_offset = 3; /* skip ahead to else path */
++ check_imm19(jmp_offset);
++ emit(A64_CBNZ(is64, src, jmp_offset), ctx);
++ emit(A64_MOVZ(1, r0, 0, 0), ctx);
++ jmp_offset = epilogue_offset(ctx);
++ check_imm26(jmp_offset);
++ emit(A64_B(jmp_offset), ctx);
++ /* else */
++ switch (BPF_OP(code)) {
++ case BPF_DIV:
++ emit(A64_UDIV(is64, dst, dst, src), ctx);
++ break;
++ case BPF_MOD:
++ ctx->tmp_used = 1;
++ emit(A64_UDIV(is64, tmp, dst, src), ctx);
++ emit(A64_MUL(is64, tmp, tmp, src), ctx);
++ emit(A64_SUB(is64, dst, dst, tmp), ctx);
++ break;
++ }
+ break;
++ }
+ case BPF_ALU | BPF_LSH | BPF_X:
+ case BPF_ALU64 | BPF_LSH | BPF_X:
+ emit(A64_LSLV(is64, dst, dst, src), ctx);
+@@ -393,17 +422,6 @@ emit_bswap_uxt:
+ emit(A64_ASR(is64, dst, dst, imm), ctx);
+ break;
+
+-#define check_imm(bits, imm) do { \
+- if ((((imm) > 0) && ((imm) >> (bits))) || \
+- (((imm) < 0) && (~(imm) >> (bits)))) { \
+- pr_info("[%2d] imm=%d(0x%x) out of range\n", \
+- i, imm, imm); \
+- return -EINVAL; \
+- } \
+-} while (0)
+-#define check_imm19(imm) check_imm(19, imm)
+-#define check_imm26(imm) check_imm(26, imm)
+-
+ /* JUMP off */
+ case BPF_JMP | BPF_JA:
+ jmp_offset = bpf2a64_offset(i + off, i, ctx);
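
Two things happen in the arm64 JIT hunk above: the check_imm()/check_imm19()/check_imm26() macros are hoisted to the top of build_insn() so the earlier cases can see them, and the BPF_DIV/BPF_MOD paths gain a zero-divisor guard that branches (CBNZ) over a stub which sets r0 to 0 and jumps to the epilogue, making the program return 0 instead of faulting on division by zero. A standalone version of the range test, with values chosen to sit on the boundary:

#include <stdio.h>

/* Same test as the kernel's check_imm(): a signed value fits in a
 * branch field with `bits` magnitude bits iff shifting out the low
 * `bits` bits leaves nothing (positive) or only sign copies (negative). */
static int imm_fits(int bits, int imm)
{
	if (((imm > 0) && (imm >> bits)) ||
	    ((imm < 0) && (~imm >> bits)))
		return 0;	/* kernel would pr_info() and return -EINVAL */
	return 1;
}

int main(void)
{
	/* imm19: conditional branches such as CBNZ; imm26: plain B */
	printf("524287  fits imm19: %d\n", imm_fits(19, 524287));	/* 1 */
	printf("524288  fits imm19: %d\n", imm_fits(19, 524288));	/* 0 */
	printf("-524288 fits imm19: %d\n", imm_fits(19, -524288));	/* 1 */
	return 0;
}
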
+diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
+index 0c4a133f6216..26e947d61040 100644
+--- a/arch/mips/net/bpf_jit.c
++++ b/arch/mips/net/bpf_jit.c
+@@ -521,19 +521,6 @@ static inline u16 align_sp(unsigned int num)
+ return num;
+ }
+
+-static bool is_load_to_a(u16 inst)
+-{
+- switch (inst) {
+- case BPF_LD | BPF_W | BPF_LEN:
+- case BPF_LD | BPF_W | BPF_ABS:
+- case BPF_LD | BPF_H | BPF_ABS:
+- case BPF_LD | BPF_B | BPF_ABS:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+ static void save_bpf_jit_regs(struct jit_ctx *ctx, unsigned offset)
+ {
+ int i = 0, real_off = 0;
+@@ -614,7 +601,6 @@ static unsigned int get_stack_depth(struct jit_ctx *ctx)
+
+ static void build_prologue(struct jit_ctx *ctx)
+ {
+- u16 first_inst = ctx->skf->insns[0].code;
+ int sp_off;
+
+ /* Calculate the total offset for the stack pointer */
+@@ -641,7 +627,7 @@ static void build_prologue(struct jit_ctx *ctx)
+ emit_jit_reg_move(r_X, r_zero, ctx);
+
+ /* Do not leak kernel data to userspace */
+- if ((first_inst != (BPF_RET | BPF_K)) && !(is_load_to_a(first_inst)))
++ if (bpf_needs_clear_a(&ctx->skf->insns[0]))
+ emit_jit_reg_move(r_A, r_zero, ctx);
+ }
+
+diff --git a/arch/mn10300/Kconfig b/arch/mn10300/Kconfig
+index 4434b54e1d87..78ae5552fdb8 100644
+--- a/arch/mn10300/Kconfig
++++ b/arch/mn10300/Kconfig
+@@ -1,6 +1,7 @@
+ config MN10300
+ def_bool y
+ select HAVE_OPROFILE
++ select HAVE_UID16
+ select GENERIC_IRQ_SHOW
+ select ARCH_WANT_IPC_PARSE_VERSION
+ select HAVE_ARCH_TRACEHOOK
+@@ -37,9 +38,6 @@ config HIGHMEM
+ config NUMA
+ def_bool n
+
+-config UID16
+- def_bool y
+-
+ config RWSEM_GENERIC_SPINLOCK
+ def_bool y
+
+diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
+index ad6263cffb0f..d1a8d93cccfd 100644
+--- a/arch/powerpc/include/asm/cmpxchg.h
++++ b/arch/powerpc/include/asm/cmpxchg.h
+@@ -18,12 +18,12 @@ __xchg_u32(volatile void *p, unsigned long val)
+ unsigned long prev;
+
+ __asm__ __volatile__(
+- PPC_RELEASE_BARRIER
++ PPC_ATOMIC_ENTRY_BARRIER
+ "1: lwarx %0,0,%2 \n"
+ PPC405_ERR77(0,%2)
+ " stwcx. %3,0,%2 \n\
+ bne- 1b"
+- PPC_ACQUIRE_BARRIER
++ PPC_ATOMIC_EXIT_BARRIER
+ : "=&r" (prev), "+m" (*(volatile unsigned int *)p)
+ : "r" (p), "r" (val)
+ : "cc", "memory");
+@@ -61,12 +61,12 @@ __xchg_u64(volatile void *p, unsigned long val)
+ unsigned long prev;
+
+ __asm__ __volatile__(
+- PPC_RELEASE_BARRIER
++ PPC_ATOMIC_ENTRY_BARRIER
+ "1: ldarx %0,0,%2 \n"
+ PPC405_ERR77(0,%2)
+ " stdcx. %3,0,%2 \n\
+ bne- 1b"
+- PPC_ACQUIRE_BARRIER
++ PPC_ATOMIC_EXIT_BARRIER
+ : "=&r" (prev), "+m" (*(volatile unsigned long *)p)
+ : "r" (p), "r" (val)
+ : "cc", "memory");
+@@ -151,14 +151,14 @@ __cmpxchg_u32(volatile unsigned int *p, unsigned long old, unsigned long new)
+ unsigned int prev;
+
+ __asm__ __volatile__ (
+- PPC_RELEASE_BARRIER
++ PPC_ATOMIC_ENTRY_BARRIER
+ "1: lwarx %0,0,%2 # __cmpxchg_u32\n\
+ cmpw 0,%0,%3\n\
+ bne- 2f\n"
+ PPC405_ERR77(0,%2)
+ " stwcx. %4,0,%2\n\
+ bne- 1b"
+- PPC_ACQUIRE_BARRIER
++ PPC_ATOMIC_EXIT_BARRIER
+ "\n\
+ 2:"
+ : "=&r" (prev), "+m" (*p)
+@@ -197,13 +197,13 @@ __cmpxchg_u64(volatile unsigned long *p, unsigned long old, unsigned long new)
+ unsigned long prev;
+
+ __asm__ __volatile__ (
+- PPC_RELEASE_BARRIER
++ PPC_ATOMIC_ENTRY_BARRIER
+ "1: ldarx %0,0,%2 # __cmpxchg_u64\n\
+ cmpd 0,%0,%3\n\
+ bne- 2f\n\
+ stdcx. %4,0,%2\n\
+ bne- 1b"
+- PPC_ACQUIRE_BARRIER
++ PPC_ATOMIC_EXIT_BARRIER
+ "\n\
+ 2:"
+ : "=&r" (prev), "+m" (*p)
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index a908ada8e0a5..2220f7a60def 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -108,6 +108,7 @@
+ #define MSR_TS_T __MASK(MSR_TS_T_LG) /* Transaction Transactional */
+ #define MSR_TS_MASK (MSR_TS_T | MSR_TS_S) /* Transaction State bits */
+ #define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */
++#define MSR_TM_RESV(x) (((x) & MSR_TS_MASK) == MSR_TS_MASK) /* Reserved */
+ #define MSR_TM_TRANSACTIONAL(x) (((x) & MSR_TS_MASK) == MSR_TS_T)
+ #define MSR_TM_SUSPENDED(x) (((x) & MSR_TS_MASK) == MSR_TS_S)
+
+diff --git a/arch/powerpc/include/asm/synch.h b/arch/powerpc/include/asm/synch.h
+index e682a7143edb..c50868681f9e 100644
+--- a/arch/powerpc/include/asm/synch.h
++++ b/arch/powerpc/include/asm/synch.h
+@@ -44,7 +44,7 @@ static inline void isync(void)
+ MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
+ #define PPC_ACQUIRE_BARRIER "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
+ #define PPC_RELEASE_BARRIER stringify_in_c(LWSYNC) "\n"
+-#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(LWSYNC) "\n"
++#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(sync) "\n"
+ #define PPC_ATOMIC_EXIT_BARRIER "\n" stringify_in_c(sync) "\n"
+ #else
+ #define PPC_ACQUIRE_BARRIER
+diff --git a/arch/powerpc/include/uapi/asm/elf.h b/arch/powerpc/include/uapi/asm/elf.h
+index 59dad113897b..c2d21d11c2d2 100644
+--- a/arch/powerpc/include/uapi/asm/elf.h
++++ b/arch/powerpc/include/uapi/asm/elf.h
+@@ -295,6 +295,8 @@ do { \
+ #define R_PPC64_TLSLD 108
+ #define R_PPC64_TOCSAVE 109
+
++#define R_PPC64_ENTRY 118
++
+ #define R_PPC64_REL16 249
+ #define R_PPC64_REL16_LO 250
+ #define R_PPC64_REL16_HI 251
+diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
+index 68384514506b..59663af9315f 100644
+--- a/arch/powerpc/kernel/module_64.c
++++ b/arch/powerpc/kernel/module_64.c
+@@ -635,6 +635,33 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
+ */
+ break;
+
++ case R_PPC64_ENTRY:
++ /*
++ * Optimize ELFv2 large code model entry point if
++ * the TOC is within 2GB range of current location.
++ */
++ value = my_r2(sechdrs, me) - (unsigned long)location;
++ if (value + 0x80008000 > 0xffffffff)
++ break;
++ /*
++ * Check for the large code model prolog sequence:
++ * ld r2, ...(r12)
++ * add r2, r2, r12
++ */
++ if ((((uint32_t *)location)[0] & ~0xfffc)
++ != 0xe84c0000)
++ break;
++ if (((uint32_t *)location)[1] != 0x7c426214)
++ break;
++ /*
++ * If found, replace it with:
++ * addis r2, r12, (.TOC.-func)@ha
++ * addi r2, r12, (.TOC.-func)@l
++ */
++ ((uint32_t *)location)[0] = 0x3c4c0000 + PPC_HA(value);
++ ((uint32_t *)location)[1] = 0x38420000 + PPC_LO(value);
++ break;
++
+ case R_PPC64_REL16_HA:
+ /* Subtract location pointer */
+ value -= (unsigned long)location;
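
The R_PPC64_ENTRY case above recognizes the ELFv2 large-code-model prolog (ld r2,...(r12); add r2,r2,r12) and, when .TOC. lies within ±2GB of the entry point, rewrites it as an addis/addi pair so the TOC pointer is built without a memory load. A standalone sketch of that check-and-patch, reusing the instruction encodings the hunk itself tests for; the PPC_HA/PPC_LO definitions and the sample addresses are assumptions for the demo.

#include <stdint.h>
#include <stdio.h>

#define PPC_HI(v) (((v) >> 16) & 0xffff)
#define PPC_HA(v) PPC_HI((v) + 0x8000)	/* high half, adjusted for signed lo */
#define PPC_LO(v) ((v) & 0xffff)

static int optimize_entry(uint32_t insn[2], uint64_t toc, uint64_t func)
{
	uint64_t value = toc - func;

	if (value + 0x80008000 > 0xffffffff)	/* .TOC. not within 2GB */
		return 0;
	if ((insn[0] & ~0xfffcU) != 0xe84c0000)	/* ld r2,...(r12) ? */
		return 0;
	if (insn[1] != 0x7c426214)		/* add r2,r2,r12 ? */
		return 0;

	insn[0] = 0x3c4c0000 + PPC_HA(value);	/* addis r2,r12,(.TOC.-func)@ha */
	insn[1] = 0x38420000 + PPC_LO(value);	/* addi  r2,r12,(.TOC.-func)@l */
	return 1;
}

int main(void)
{
	uint32_t prolog[2] = { 0xe84c0008, 0x7c426214 };

	if (optimize_entry(prolog, 0x10020000, 0x10000000))
		printf("patched: %08x %08x\n",
		       (unsigned)prolog[0], (unsigned)prolog[1]);
	return 0;
}
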
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 75b6676c1a0b..646bf4d222c1 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -551,6 +551,24 @@ static void tm_reclaim_thread(struct thread_struct *thr,
+ msr_diff &= MSR_FP | MSR_VEC | MSR_VSX | MSR_FE0 | MSR_FE1;
+ }
+
++ /*
++ * Use the current MSR TM suspended bit to track if we have
++ * checkpointed state outstanding.
++ * On signal delivery, we'd normally reclaim the checkpointed
++ * state to obtain stack pointer (see: get_tm_stackpointer()).
++ * This will then directly return to userspace without going
++ * through __switch_to(). However, if the stack frame is bad,
++ * we need to exit this thread which calls __switch_to() which
++ * will again attempt to reclaim the already saved tm state.
++ * Hence we need to check that we've not already reclaimed
++ * this state.
++ * We do this using the current MSR, rather than tracking it in
++ * some specific thread_struct bit, as it has the additional
++ * benefit of checking for a potential TM bad thing exception.
++ */
++ if (!MSR_TM_SUSPENDED(mfmsr()))
++ return;
++
+ tm_reclaim(thr, thr->regs->msr, cause);
+
+ /* Having done the reclaim, we now have the checkpointed
+diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
+index 0dbee465af7a..ef7c24e84a62 100644
+--- a/arch/powerpc/kernel/signal_32.c
++++ b/arch/powerpc/kernel/signal_32.c
+@@ -875,6 +875,15 @@ static long restore_tm_user_regs(struct pt_regs *regs,
+ return 1;
+ #endif /* CONFIG_SPE */
+
++ /* Get the top half of the MSR from the user context */
++ if (__get_user(msr_hi, &tm_sr->mc_gregs[PT_MSR]))
++ return 1;
++ msr_hi <<= 32;
++ /* If TM bits are set to the reserved value, it's an invalid context */
++ if (MSR_TM_RESV(msr_hi))
++ return 1;
++ /* Pull in the MSR TM bits from the user context */
++ regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr_hi & MSR_TS_MASK);
+ /* Now, recheckpoint. This loads up all of the checkpointed (older)
+ * registers, including FP and V[S]Rs. After recheckpointing, the
+ * transactional versions should be loaded.
+@@ -884,11 +893,6 @@ static long restore_tm_user_regs(struct pt_regs *regs,
+ current->thread.tm_texasr |= TEXASR_FS;
+ /* This loads the checkpointed FP/VEC state, if used */
+ tm_recheckpoint(¤t->thread, msr);
+- /* Get the top half of the MSR */
+- if (__get_user(msr_hi, &tm_sr->mc_gregs[PT_MSR]))
+- return 1;
+- /* Pull in MSR TM from user context */
+- regs->msr = (regs->msr & ~MSR_TS_MASK) | ((msr_hi<<32) & MSR_TS_MASK);
+
+ /* This loads the speculative FP/VEC state, if used */
+ if (msr & MSR_FP) {
+diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
+index 20756dfb9f34..c676ecec0869 100644
+--- a/arch/powerpc/kernel/signal_64.c
++++ b/arch/powerpc/kernel/signal_64.c
+@@ -438,6 +438,10 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
+
+ /* get MSR separately, transfer the LE bit if doing signal return */
+ err |= __get_user(msr, &sc->gp_regs[PT_MSR]);
++ /* Don't allow reserved mode. */
++ if (MSR_TM_RESV(msr))
++ return -EINVAL;
++
+ /* pull in MSR TM from user context */
+ regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr & MSR_TS_MASK);
+
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 9c26c5a96ea2..a7352b59e6f9 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -224,6 +224,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu)
+
+ static void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr)
+ {
++ /*
++ * Check for illegal transactional state bit combination
++ * and if we find it, force the TS field to a safe state.
++ */
++ if ((msr & MSR_TS_MASK) == MSR_TS_MASK)
++ msr &= ~MSR_TS_MASK;
+ vcpu->arch.shregs.msr = msr;
+ kvmppc_end_cede(vcpu);
+ }
+@@ -2019,7 +2025,7 @@ static bool can_split_piggybacked_subcores(struct core_info *cip)
+ return false;
+ n_subcores += (cip->subcore_threads[sub] - 1) >> 1;
+ }
+- if (n_subcores > 3 || large_sub < 0)
++ if (large_sub < 0 || !subcore_config_ok(n_subcores + 1, 2))
+ return false;
+
+ /*
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index 17cea18a09d3..264c473c1b3c 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -78,18 +78,9 @@ static void bpf_jit_build_prologue(struct bpf_prog *fp, u32 *image,
+ PPC_LI(r_X, 0);
+ }
+
+- switch (filter[0].code) {
+- case BPF_RET | BPF_K:
+- case BPF_LD | BPF_W | BPF_LEN:
+- case BPF_LD | BPF_W | BPF_ABS:
+- case BPF_LD | BPF_H | BPF_ABS:
+- case BPF_LD | BPF_B | BPF_ABS:
+- /* first instruction sets A register (or is RET 'constant') */
+- break;
+- default:
+- /* make sure we dont leak kernel information to user */
++ /* make sure we don't leak kernel information to user */
++ if (bpf_needs_clear_a(&filter[0]))
+ PPC_LI(r_A, 0);
+- }
+ }
+
+ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
+diff --git a/arch/powerpc/platforms/powernv/opal-irqchip.c b/arch/powerpc/platforms/powernv/opal-irqchip.c
+index 2c91ee7800b9..e96027d27151 100644
+--- a/arch/powerpc/platforms/powernv/opal-irqchip.c
++++ b/arch/powerpc/platforms/powernv/opal-irqchip.c
+@@ -43,11 +43,34 @@ static unsigned int opal_irq_count;
+ static unsigned int *opal_irqs;
+
+ static void opal_handle_irq_work(struct irq_work *work);
+-static __be64 last_outstanding_events;
++static u64 last_outstanding_events;
+ static struct irq_work opal_event_irq_work = {
+ .func = opal_handle_irq_work,
+ };
+
++void opal_handle_events(uint64_t events)
++{
++ int virq, hwirq = 0;
++ u64 mask = opal_event_irqchip.mask;
++
++ if (!in_irq() && (events & mask)) {
++ last_outstanding_events = events;
++ irq_work_queue(&opal_event_irq_work);
++ return;
++ }
++
++ while (events & mask) {
++ hwirq = fls64(events) - 1;
++ if (BIT_ULL(hwirq) & mask) {
++ virq = irq_find_mapping(opal_event_irqchip.domain,
++ hwirq);
++ if (virq)
++ generic_handle_irq(virq);
++ }
++ events &= ~BIT_ULL(hwirq);
++ }
++}
++
+ static void opal_event_mask(struct irq_data *d)
+ {
+ clear_bit(d->hwirq, &opal_event_irqchip.mask);
+@@ -55,9 +78,21 @@ static void opal_event_mask(struct irq_data *d)
+
+ static void opal_event_unmask(struct irq_data *d)
+ {
++ __be64 events;
++
+ set_bit(d->hwirq, &opal_event_irqchip.mask);
+
+- opal_poll_events(&last_outstanding_events);
++ opal_poll_events(&events);
++ last_outstanding_events = be64_to_cpu(events);
++
++ /*
++ * We can't just handle the events now with opal_handle_events().
++ * If we did we would deadlock when opal_event_unmask() is called from
++ * handle_level_irq() with the irq descriptor lock held, because
++ * calling opal_handle_events() would call generic_handle_irq() and
++ * then handle_level_irq() which would try to take the descriptor lock
++ * again. Instead queue the events for later.
++ */
+ if (last_outstanding_events & opal_event_irqchip.mask)
+ /* Need to retrigger the interrupt */
+ irq_work_queue(&opal_event_irq_work);
+@@ -96,29 +131,6 @@ static int opal_event_map(struct irq_domain *d, unsigned int irq,
+ return 0;
+ }
+
+-void opal_handle_events(uint64_t events)
+-{
+- int virq, hwirq = 0;
+- u64 mask = opal_event_irqchip.mask;
+-
+- if (!in_irq() && (events & mask)) {
+- last_outstanding_events = events;
+- irq_work_queue(&opal_event_irq_work);
+- return;
+- }
+-
+- while (events & mask) {
+- hwirq = fls64(events) - 1;
+- if (BIT_ULL(hwirq) & mask) {
+- virq = irq_find_mapping(opal_event_irqchip.domain,
+- hwirq);
+- if (virq)
+- generic_handle_irq(virq);
+- }
+- events &= ~BIT_ULL(hwirq);
+- }
+-}
+-
+ static irqreturn_t opal_interrupt(int irq, void *data)
+ {
+ __be64 events;
+@@ -131,7 +143,7 @@ static irqreturn_t opal_interrupt(int irq, void *data)
+
+ static void opal_handle_irq_work(struct irq_work *work)
+ {
+- opal_handle_events(be64_to_cpu(last_outstanding_events));
++ opal_handle_events(last_outstanding_events);
+ }
+
+ static int opal_event_match(struct irq_domain *h, struct device_node *node,
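
Moving opal_handle_events() above opal_event_unmask() lets the unmask path reuse it, but the dispatch loop itself is worth seeing in isolation: it walks the 64-bit pending word from the highest set bit down via fls64(), handling only bits that are also enabled in the irqchip mask. A standalone model, with the event numbers and handler made up:

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n) (1ULL << (n))

/* 1-based index of the highest set bit, via the gcc/clang builtin */
static int fls64(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

static void handle_event(int hwirq)
{
	printf("handling event %d\n", hwirq);
}

static void handle_events(uint64_t events, uint64_t mask)
{
	while (events & mask) {
		int hwirq = fls64(events) - 1;

		if (BIT_ULL(hwirq) & mask)
			handle_event(hwirq);
		events &= ~BIT_ULL(hwirq);	/* consume, masked or not */
	}
}

int main(void)
{
	/* event 9 is pending but masked, so only 40 and 3 are handled */
	handle_events(BIT_ULL(3) | BIT_ULL(9) | BIT_ULL(40),
		      BIT_ULL(3) | BIT_ULL(40));
	return 0;
}
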
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 4296d55e88f3..57cffb80bc36 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -278,7 +278,7 @@ static void opal_handle_message(void)
+
+ /* Sanity check */
+ if (type >= OPAL_MSG_TYPE_MAX) {
+- pr_warning("%s: Unknown message type: %u\n", __func__, type);
++ pr_warn_once("%s: Unknown message type: %u\n", __func__, type);
+ return;
+ }
+ opal_message_do_notify(type, (void *)&msg);
+diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c
+index f8b9f71b9a2b..17e71d2d96e5 100644
+--- a/arch/sparc/net/bpf_jit_comp.c
++++ b/arch/sparc/net/bpf_jit_comp.c
+@@ -420,22 +420,9 @@ void bpf_jit_compile(struct bpf_prog *fp)
+ }
+ emit_reg_move(O7, r_saved_O7);
+
+- switch (filter[0].code) {
+- case BPF_RET | BPF_K:
+- case BPF_LD | BPF_W | BPF_LEN:
+- case BPF_LD | BPF_W | BPF_ABS:
+- case BPF_LD | BPF_H | BPF_ABS:
+- case BPF_LD | BPF_B | BPF_ABS:
+- /* The first instruction sets the A register (or is
+- * a "RET 'constant'")
+- */
+- break;
+- default:
+- /* Make sure we dont leak kernel information to the
+- * user.
+- */
++ /* Make sure we don't leak kernel information to the user. */
++ if (bpf_needs_clear_a(&filter[0]))
+ emit_clear(r_A); /* A = 0 */
+- }
+
+ for (i = 0; i < flen; i++) {
+ unsigned int K = filter[i].k;
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 4fa687a47a62..6b8d6e8cd449 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -27,7 +27,7 @@
+ #define BOOT_HEAP_SIZE 0x400000
+ #else /* !CONFIG_KERNEL_BZIP2 */
+
+-#define BOOT_HEAP_SIZE 0x8000
++#define BOOT_HEAP_SIZE 0x10000
+
+ #endif /* !CONFIG_KERNEL_BZIP2 */
+
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index 379cd3658799..bfd9b2a35a0b 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -116,8 +116,36 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ #endif
+ cpumask_set_cpu(cpu, mm_cpumask(next));
+
+- /* Re-load page tables */
++ /*
++ * Re-load page tables.
++ *
++ * This logic has an ordering constraint:
++ *
++ * CPU 0: Write to a PTE for 'next'
++ * CPU 0: load bit 1 in mm_cpumask. if nonzero, send IPI.
++ * CPU 1: set bit 1 in next's mm_cpumask
++ * CPU 1: load from the PTE that CPU 0 writes (implicit)
++ *
++ * We need to prevent an outcome in which CPU 1 observes
++ * the new PTE value and CPU 0 observes bit 1 clear in
++ * mm_cpumask. (If that occurs, then the IPI will never
++ * be sent, and CPU 0's TLB will contain a stale entry.)
++ *
++ * The bad outcome can occur if either CPU's load is
++ * reordered before that CPU's store, so both CPUs must
++ * execute full barriers to prevent this from happening.
++ *
++ * Thus, switch_mm needs a full barrier between the
++ * store to mm_cpumask and any operation that could load
++ * from next->pgd. TLB fills are special and can happen
++ * due to instruction fetches or for no reason at all,
++ * and neither LOCK nor MFENCE orders them.
++ * Fortunately, load_cr3() is serializing and gives the
++ * ordering guarantee we need.
++ *
++ */
+ load_cr3(next->pgd);
++
+ trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+
+ /* Stop flush ipis for the previous mm */
+@@ -156,10 +184,14 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ * schedule, protecting us from simultaneous changes.
+ */
+ cpumask_set_cpu(cpu, mm_cpumask(next));
++
+ /*
+ * We were in lazy tlb mode and leave_mm disabled
+ * tlb flush IPI delivery. We must reload CR3
+ * to make sure to use no freed page tables.
++ *
++ * As above, load_cr3() is serializing and orders TLB
++ * fills with respect to the mm_cpumask write.
+ */
+ load_cr3(next->pgd);
+ trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 10d0596433f8..c759b3cca663 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -19,6 +19,12 @@ static inline int paravirt_enabled(void)
+ return pv_info.paravirt_enabled;
+ }
+
++static inline int paravirt_has_feature(unsigned int feature)
++{
++ WARN_ON_ONCE(!pv_info.paravirt_enabled);
++ return (pv_info.features & feature);
++}
++
+ static inline void load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+ {
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 31247b5bff7c..3d44191185f8 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -70,9 +70,14 @@ struct pv_info {
+ #endif
+
+ int paravirt_enabled;
++ unsigned int features; /* valid only if paravirt_enabled is set */
+ const char *name;
+ };
+
++#define paravirt_has(x) paravirt_has_feature(PV_SUPPORTED_##x)
++/* Supported features */
++#define PV_SUPPORTED_RTC (1<<0)
++
+ struct pv_init_ops {
+ /*
+ * Patch may replace one of the defined code sequences with
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 19577dd325fa..b7692daeaf92 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -472,6 +472,7 @@ static inline unsigned long current_top_of_stack(void)
+ #else
+ #define __cpuid native_cpuid
+ #define paravirt_enabled() 0
++#define paravirt_has(x) 0
+
+ static inline void load_sp0(struct tss_struct *tss,
+ struct thread_struct *thread)
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 9d014b82a124..6b2c8229f9da 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -999,6 +999,17 @@ void do_machine_check(struct pt_regs *regs, long error_code)
+ int flags = MF_ACTION_REQUIRED;
+ int lmce = 0;
+
++ /* If this CPU is offline, just bail out. */
++ if (cpu_is_offline(smp_processor_id())) {
++ u64 mcgstatus;
++
++ mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
++ if (mcgstatus & MCG_STATUS_RIPV) {
++ mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
++ return;
++ }
++ }
++
+ ist_enter(regs);
+
+ this_cpu_inc(mce_exception_count);
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 02693dd9a079..f660d63f40fe 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -182,6 +182,14 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "iMac9,1"),
+ },
+ },
++ { /* Handle problems with rebooting on the iMac10,1. */
++ .callback = set_pci_reboot,
++ .ident = "Apple iMac10,1",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "iMac10,1"),
++ },
++ },
+
+ /* ASRock */
+ { /* Handle problems with rebooting on ASRock Q1900DC-ITX */
+diff --git a/arch/x86/kernel/rtc.c b/arch/x86/kernel/rtc.c
+index cd9685235df9..4af8d063fb36 100644
+--- a/arch/x86/kernel/rtc.c
++++ b/arch/x86/kernel/rtc.c
+@@ -200,6 +200,9 @@ static __init int add_rtc_cmos(void)
+ }
+ #endif
+
++ if (paravirt_enabled() && !paravirt_has(RTC))
++ return -ENODEV;
++
+ platform_device_register(&rtc_device);
+ dev_info(&rtc_device.dev,
+ "registered platform RTC device (no PNP device found)\n");
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index da52e6bb5c7f..7d2b2ed33dee 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -688,12 +688,15 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ signal_setup_done(failed, ksig, stepping);
+ }
+
+-#ifdef CONFIG_X86_32
+-#define NR_restart_syscall __NR_restart_syscall
+-#else /* !CONFIG_X86_32 */
+-#define NR_restart_syscall \
+- test_thread_flag(TIF_IA32) ? __NR_ia32_restart_syscall : __NR_restart_syscall
+-#endif /* CONFIG_X86_32 */
++static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
++{
++#if defined(CONFIG_X86_32) || !defined(CONFIG_X86_64)
++ return __NR_restart_syscall;
++#else /* !CONFIG_X86_32 && CONFIG_X86_64 */
++ return test_thread_flag(TIF_IA32) ? __NR_ia32_restart_syscall :
++ __NR_restart_syscall | (regs->orig_ax & __X32_SYSCALL_BIT);
++#endif /* CONFIG_X86_32 || !CONFIG_X86_64 */
++}
+
+ /*
+ * Note that 'init' is a special process: it doesn't get signals it doesn't
+@@ -722,7 +725,7 @@ void do_signal(struct pt_regs *regs)
+ break;
+
+ case -ERESTART_RESTARTBLOCK:
+- regs->ax = NR_restart_syscall;
++ regs->ax = get_nr_restart_syscall(regs);
+ regs->ip -= 2;
+ break;
+ }
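
The rewrite above fixes syscall restart for x32 tasks: the old NR_restart_syscall macro picked between the 64-bit and ia32 numbers but discarded the __X32_SYSCALL_BIT marker carried in orig_ax, so an x32 task was restarted through the wrong table. A standalone model of the corrected selection, with the TIF_IA32 test reduced to a boolean parameter for the demo:

#include <stdbool.h>
#include <stdio.h>

#define __X32_SYSCALL_BIT		0x40000000UL
#define __NR_restart_syscall		219	/* x86_64 table */
#define __NR_ia32_restart_syscall	0	/* i386 table */

static unsigned long get_nr_restart_syscall(bool ia32_task,
					    unsigned long orig_ax)
{
	if (ia32_task)
		return __NR_ia32_restart_syscall;
	/* keep whatever x32 marker the interrupted syscall carried */
	return __NR_restart_syscall | (orig_ax & __X32_SYSCALL_BIT);
}

int main(void)
{
	printf("x32 task:    %#lx\n",
	       get_nr_restart_syscall(false, __X32_SYSCALL_BIT | 1));
	printf("64-bit task: %#lx\n", get_nr_restart_syscall(false, 1));
	return 0;
}
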
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 892ee2e5ecbc..fbabe4fcc7fb 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -509,7 +509,7 @@ void __inquire_remote_apic(int apicid)
+ */
+ #define UDELAY_10MS_DEFAULT 10000
+
+-static unsigned int init_udelay = INT_MAX;
++static unsigned int init_udelay = UINT_MAX;
+
+ static int __init cpu_init_udelay(char *str)
+ {
+@@ -522,14 +522,15 @@ early_param("cpu_init_udelay", cpu_init_udelay);
+ static void __init smp_quirk_init_udelay(void)
+ {
+ /* if cmdline changed it from default, leave it alone */
+- if (init_udelay != INT_MAX)
++ if (init_udelay != UINT_MAX)
+ return;
+
+ /* if modern processor, use no delay */
+ if (((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) && (boot_cpu_data.x86 == 6)) ||
+- ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) && (boot_cpu_data.x86 >= 0xF)))
++ ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) && (boot_cpu_data.x86 >= 0xF))) {
+ init_udelay = 0;
+-
++ return;
++ }
+ /* else, use legacy delay */
+ init_udelay = UDELAY_10MS_DEFAULT;
+ }
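
The smpboot hunks fix two related slips: the "not set yet" sentinel for the unsigned init_udelay becomes UINT_MAX instead of the mistyped INT_MAX, and the modern-CPU branch gains braces and an early return, without which the fall-through immediately clobbered the freshly assigned 0 with the 10ms legacy default. A standalone illustration, with the CPU check reduced to a boolean:

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define UDELAY_10MS_DEFAULT 10000

static unsigned int init_udelay = UINT_MAX;	/* UINT_MAX = "not set" */

static void smp_quirk_init_udelay(bool modern_cpu)
{
	if (init_udelay != UINT_MAX)	/* cmdline already changed it */
		return;

	if (modern_cpu) {
		init_udelay = 0;
		return;	/* without this, the default below clobbered the 0 */
	}
	/* else, use legacy delay */
	init_udelay = UDELAY_10MS_DEFAULT;
}

int main(void)
{
	smp_quirk_init_udelay(true);
	printf("init_udelay = %u\n", init_udelay);	/* 0, not 10000 */
	return 0;
}
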
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index d7f89387ba0c..22d181350ec9 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1108,6 +1108,7 @@ static void init_vmcb(struct vcpu_svm *svm)
+ set_exception_intercept(svm, UD_VECTOR);
+ set_exception_intercept(svm, MC_VECTOR);
+ set_exception_intercept(svm, AC_VECTOR);
++ set_exception_intercept(svm, DB_VECTOR);
+
+ set_intercept(svm, INTERCEPT_INTR);
+ set_intercept(svm, INTERCEPT_NMI);
+@@ -1642,20 +1643,13 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
+ mark_dirty(svm->vmcb, VMCB_SEG);
+ }
+
+-static void update_db_bp_intercept(struct kvm_vcpu *vcpu)
++static void update_bp_intercept(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+- clr_exception_intercept(svm, DB_VECTOR);
+ clr_exception_intercept(svm, BP_VECTOR);
+
+- if (svm->nmi_singlestep)
+- set_exception_intercept(svm, DB_VECTOR);
+-
+ if (vcpu->guest_debug & KVM_GUESTDBG_ENABLE) {
+- if (vcpu->guest_debug &
+- (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))
+- set_exception_intercept(svm, DB_VECTOR);
+ if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
+ set_exception_intercept(svm, BP_VECTOR);
+ } else
+@@ -1761,7 +1755,6 @@ static int db_interception(struct vcpu_svm *svm)
+ if (!(svm->vcpu.guest_debug & KVM_GUESTDBG_SINGLESTEP))
+ svm->vmcb->save.rflags &=
+ ~(X86_EFLAGS_TF | X86_EFLAGS_RF);
+- update_db_bp_intercept(&svm->vcpu);
+ }
+
+ if (svm->vcpu.guest_debug &
+@@ -3761,7 +3754,6 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
+ */
+ svm->nmi_singlestep = true;
+ svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
+- update_db_bp_intercept(vcpu);
+ }
+
+ static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
+@@ -4383,7 +4375,7 @@ static struct kvm_x86_ops svm_x86_ops = {
+ .vcpu_load = svm_vcpu_load,
+ .vcpu_put = svm_vcpu_put,
+
+- .update_db_bp_intercept = update_db_bp_intercept,
++ .update_db_bp_intercept = update_bp_intercept,
+ .get_msr = svm_get_msr,
+ .set_msr = svm_set_msr,
+ .get_segment_base = svm_get_segment_base,
+diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
+index 4eae7c35ddf5..08b668cb3462 100644
+--- a/arch/x86/kvm/trace.h
++++ b/arch/x86/kvm/trace.h
+@@ -250,7 +250,7 @@ TRACE_EVENT(kvm_inj_virq,
+ #define kvm_trace_sym_exc \
+ EXS(DE), EXS(DB), EXS(BP), EXS(OF), EXS(BR), EXS(UD), EXS(NM), \
+ EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), \
+- EXS(MF), EXS(MC)
++ EXS(MF), EXS(AC), EXS(MC)
+
+ /*
+ * Tracepoint for kvm interrupt injection:
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 343d3692dd65..2e0bd4884652 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3644,20 +3644,21 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+ if (!is_paging(vcpu)) {
+ hw_cr4 &= ~X86_CR4_PAE;
+ hw_cr4 |= X86_CR4_PSE;
+- /*
+- * SMEP/SMAP is disabled if CPU is in non-paging mode
+- * in hardware. However KVM always uses paging mode to
+- * emulate guest non-paging mode with TDP.
+- * To emulate this behavior, SMEP/SMAP needs to be
+- * manually disabled when guest switches to non-paging
+- * mode.
+- */
+- hw_cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
+ } else if (!(cr4 & X86_CR4_PAE)) {
+ hw_cr4 &= ~X86_CR4_PAE;
+ }
+ }
+
++ if (!enable_unrestricted_guest && !is_paging(vcpu))
++ /*
++ * SMEP/SMAP is disabled if CPU is in non-paging mode in
++ * hardware. However KVM always uses paging mode without
++ * unrestricted guest.
++ * To emulate this behavior, SMEP/SMAP needs to be manually
++ * disabled when guest switches to non-paging mode.
++ */
++ hw_cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
++
+ vmcs_writel(CR4_READ_SHADOW, cr4);
+ vmcs_writel(GUEST_CR4, hw_cr4);
+ return 0;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 43609af03283..37bbbf842350 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -942,7 +942,7 @@ static u32 msrs_to_save[] = {
+ MSR_CSTAR, MSR_KERNEL_GS_BASE, MSR_SYSCALL_MASK, MSR_LSTAR,
+ #endif
+ MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
+- MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS
++ MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
+ };
+
+ static unsigned num_msrs_to_save;
+@@ -3847,16 +3847,17 @@ static void kvm_init_msr_list(void)
+
+ /*
+ * Even MSRs that are valid in the host may not be exposed
+- * to the guests in some cases. We could work around this
+- * in VMX with the generic MSR save/load machinery, but it
+- * is not really worthwhile since it will really only
+- * happen with nested virtualization.
++ * to the guests in some cases.
+ */
+ switch (msrs_to_save[i]) {
+ case MSR_IA32_BNDCFGS:
+ if (!kvm_x86_ops->mpx_supported())
+ continue;
+ break;
++ case MSR_TSC_AUX:
++ if (!kvm_x86_ops->rdtscp_supported())
++ continue;
++ break;
+ default:
+ break;
+ }
+diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
+index a0d09f6c6533..a43b2eafc466 100644
+--- a/arch/x86/lguest/boot.c
++++ b/arch/x86/lguest/boot.c
+@@ -1414,6 +1414,7 @@ __init void lguest_init(void)
+ pv_info.kernel_rpl = 1;
+ /* Everyone except Xen runs with this set. */
+ pv_info.shared_kernel_pmd = 1;
++ pv_info.features = 0;
+
+ /*
+ * We set up all the lguest overrides for sensitive operations. These
+diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
+index 71fc79a58a15..78e47ff74f9d 100644
+--- a/arch/x86/mm/mpx.c
++++ b/arch/x86/mm/mpx.c
+@@ -101,19 +101,19 @@ static int get_reg_offset(struct insn *insn, struct pt_regs *regs,
+ switch (type) {
+ case REG_TYPE_RM:
+ regno = X86_MODRM_RM(insn->modrm.value);
+- if (X86_REX_B(insn->rex_prefix.value) == 1)
++ if (X86_REX_B(insn->rex_prefix.value))
+ regno += 8;
+ break;
+
+ case REG_TYPE_INDEX:
+ regno = X86_SIB_INDEX(insn->sib.value);
+- if (X86_REX_X(insn->rex_prefix.value) == 1)
++ if (X86_REX_X(insn->rex_prefix.value))
+ regno += 8;
+ break;
+
+ case REG_TYPE_BASE:
+ regno = X86_SIB_BASE(insn->sib.value);
+- if (X86_REX_B(insn->rex_prefix.value) == 1)
++ if (X86_REX_B(insn->rex_prefix.value))
+ regno += 8;
+ break;
+
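
The mpx.c change is a truthiness fix: the X86_REX_* helpers return the masked bit in its original position (1, 2, 4 or 8), not a normalized 0/1, so comparing X86_REX_X() against 1 was never true and the extended-register offset was silently dropped. Testing the result for non-zero works for every bit. A standalone demonstration, with the macro shaped as in arch/x86/include/asm/insn.h:

#include <stdio.h>

#define X86_REX_X(rex) ((rex) & 2)	/* the X bit stays at position 1 */

int main(void)
{
	unsigned char rex = 0x42;	/* REX prefix with the X bit set */

	printf("X86_REX_X == 1 ? %d\n", X86_REX_X(rex) == 1);	/* 0: the bug */
	printf("X86_REX_X set  ? %d\n", X86_REX_X(rex) != 0);	/* 1: the fix */
	return 0;
}
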
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 8ddb5d0d66fb..8f4cc3dfac32 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -161,7 +161,10 @@ void flush_tlb_current_task(void)
+ preempt_disable();
+
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
++
++ /* This is an implicit full barrier that synchronizes with switch_mm. */
+ local_flush_tlb();
++
+ trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
+ if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
+ flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
+@@ -188,17 +191,29 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+ unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
+
+ preempt_disable();
+- if (current->active_mm != mm)
++ if (current->active_mm != mm) {
++ /* Synchronize with switch_mm. */
++ smp_mb();
++
+ goto out;
++ }
+
+ if (!current->mm) {
+ leave_mm(smp_processor_id());
++
++ /* Synchronize with switch_mm. */
++ smp_mb();
++
+ goto out;
+ }
+
+ if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+ base_pages_to_flush = (end - start) >> PAGE_SHIFT;
+
++ /*
++ * Both branches below are implicit full barriers (MOV to CR or
++ * INVLPG) that synchronize with switch_mm.
++ */
+ if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
+ base_pages_to_flush = TLB_FLUSH_ALL;
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+@@ -228,10 +243,18 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
+ preempt_disable();
+
+ if (current->active_mm == mm) {
+- if (current->mm)
++ if (current->mm) {
++ /*
++ * Implicit full barrier (INVLPG) that synchronizes
++ * with switch_mm.
++ */
+ __flush_tlb_one(start);
+- else
++ } else {
+ leave_mm(smp_processor_id());
++
++ /* Synchronize with switch_mm. */
++ smp_mb();
++ }
+ }
+
+ if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 993b7a71386d..aeb385d86e95 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -1191,7 +1191,7 @@ static const struct pv_info xen_info __initconst = {
+ #ifdef CONFIG_X86_64
+ .extra_user_64bit_cs = FLAT_USER_CS64,
+ #endif
+-
++ .features = 0,
+ .name = "Xen",
+ };
+
+@@ -1534,6 +1534,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
+
+ /* Install Xen paravirt ops */
+ pv_info = xen_info;
++ if (xen_initial_domain())
++ pv_info.features |= PV_SUPPORTED_RTC;
+ pv_init_ops = xen_init_ops;
+ pv_apic_ops = xen_apic_ops;
+ if (!xen_pvh_domain()) {
+diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
+index feddabdab448..4299aa924b9f 100644
+--- a/arch/x86/xen/suspend.c
++++ b/arch/x86/xen/suspend.c
+@@ -33,7 +33,8 @@ static void xen_hvm_post_suspend(int suspend_cancelled)
+ {
+ #ifdef CONFIG_XEN_PVHVM
+ int cpu;
+- xen_hvm_init_shared_info();
++ if (!suspend_cancelled)
++ xen_hvm_init_shared_info();
+ xen_callback_vector();
+ xen_unplug_emulated_devices();
+ if (xen_feature(XENFEAT_hvm_safe_pvclock)) {
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 654f6f36a071..54bccf7db592 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -412,18 +412,42 @@ static enum si_sm_result start_next_msg(struct smi_info *smi_info)
+ return rv;
+ }
+
+-static void start_check_enables(struct smi_info *smi_info)
++static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
++{
++ smi_info->last_timeout_jiffies = jiffies;
++ mod_timer(&smi_info->si_timer, new_val);
++ smi_info->timer_running = true;
++}
++
++/*
++ * Start a new message and (re)start the timer and thread.
++ */
++static void start_new_msg(struct smi_info *smi_info, unsigned char *msg,
++ unsigned int size)
++{
++ smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES);
++
++ if (smi_info->thread)
++ wake_up_process(smi_info->thread);
++
++ smi_info->handlers->start_transaction(smi_info->si_sm, msg, size);
++}
++
++static void start_check_enables(struct smi_info *smi_info, bool start_timer)
+ {
+ unsigned char msg[2];
+
+ msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
+ msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
+
+- smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
++ if (start_timer)
++ start_new_msg(smi_info, msg, 2);
++ else
++ smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
+ smi_info->si_state = SI_CHECKING_ENABLES;
+ }
+
+-static void start_clear_flags(struct smi_info *smi_info)
++static void start_clear_flags(struct smi_info *smi_info, bool start_timer)
+ {
+ unsigned char msg[3];
+
+@@ -432,7 +456,10 @@ static void start_clear_flags(struct smi_info *smi_info)
+ msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD;
+ msg[2] = WDT_PRE_TIMEOUT_INT;
+
+- smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3);
++ if (start_timer)
++ start_new_msg(smi_info, msg, 3);
++ else
++ smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3);
+ smi_info->si_state = SI_CLEARING_FLAGS;
+ }
+
+@@ -442,10 +469,8 @@ static void start_getting_msg_queue(struct smi_info *smi_info)
+ smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
+ smi_info->curr_msg->data_size = 2;
+
+- smi_info->handlers->start_transaction(
+- smi_info->si_sm,
+- smi_info->curr_msg->data,
+- smi_info->curr_msg->data_size);
++ start_new_msg(smi_info, smi_info->curr_msg->data,
++ smi_info->curr_msg->data_size);
+ smi_info->si_state = SI_GETTING_MESSAGES;
+ }
+
+@@ -455,20 +480,11 @@ static void start_getting_events(struct smi_info *smi_info)
+ smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
+ smi_info->curr_msg->data_size = 2;
+
+- smi_info->handlers->start_transaction(
+- smi_info->si_sm,
+- smi_info->curr_msg->data,
+- smi_info->curr_msg->data_size);
++ start_new_msg(smi_info, smi_info->curr_msg->data,
++ smi_info->curr_msg->data_size);
+ smi_info->si_state = SI_GETTING_EVENTS;
+ }
+
+-static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
+-{
+- smi_info->last_timeout_jiffies = jiffies;
+- mod_timer(&smi_info->si_timer, new_val);
+- smi_info->timer_running = true;
+-}
+-
+ /*
+ * When we have a situation where we run out of memory and cannot
+ * allocate messages, we just leave them in the BMC and run the system
+@@ -478,11 +494,11 @@ static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
+ * Note that we cannot just use disable_irq(), since the interrupt may
+ * be shared.
+ */
+-static inline bool disable_si_irq(struct smi_info *smi_info)
++static inline bool disable_si_irq(struct smi_info *smi_info, bool start_timer)
+ {
+ if ((smi_info->irq) && (!smi_info->interrupt_disabled)) {
+ smi_info->interrupt_disabled = true;
+- start_check_enables(smi_info);
++ start_check_enables(smi_info, start_timer);
+ return true;
+ }
+ return false;
+@@ -492,7 +508,7 @@ static inline bool enable_si_irq(struct smi_info *smi_info)
+ {
+ if ((smi_info->irq) && (smi_info->interrupt_disabled)) {
+ smi_info->interrupt_disabled = false;
+- start_check_enables(smi_info);
++ start_check_enables(smi_info, true);
+ return true;
+ }
+ return false;
+@@ -510,7 +526,7 @@ static struct ipmi_smi_msg *alloc_msg_handle_irq(struct smi_info *smi_info)
+
+ msg = ipmi_alloc_smi_msg();
+ if (!msg) {
+- if (!disable_si_irq(smi_info))
++ if (!disable_si_irq(smi_info, true))
+ smi_info->si_state = SI_NORMAL;
+ } else if (enable_si_irq(smi_info)) {
+ ipmi_free_smi_msg(msg);
+@@ -526,7 +542,7 @@ static void handle_flags(struct smi_info *smi_info)
+ /* Watchdog pre-timeout */
+ smi_inc_stat(smi_info, watchdog_pretimeouts);
+
+- start_clear_flags(smi_info);
++ start_clear_flags(smi_info, true);
+ smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
+ if (smi_info->intf)
+ ipmi_smi_watchdog_pretimeout(smi_info->intf);
+@@ -879,8 +895,7 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
+ msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
+ msg[1] = IPMI_GET_MSG_FLAGS_CMD;
+
+- smi_info->handlers->start_transaction(
+- smi_info->si_sm, msg, 2);
++ start_new_msg(smi_info, msg, 2);
+ smi_info->si_state = SI_GETTING_FLAGS;
+ goto restart;
+ }
+@@ -910,7 +925,7 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
+ * disable and messages disabled.
+ */
+ if (smi_info->supports_event_msg_buff || smi_info->irq) {
+- start_check_enables(smi_info);
++ start_check_enables(smi_info, true);
+ } else {
+ smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
+ if (!smi_info->curr_msg)
+@@ -1208,14 +1223,14 @@ static int smi_start_processing(void *send_info,
+
+ new_smi->intf = intf;
+
+- /* Try to claim any interrupts. */
+- if (new_smi->irq_setup)
+- new_smi->irq_setup(new_smi);
+-
+ /* Set up the timer that drives the interface. */
+ setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);
+ smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES);
+
++ /* Try to claim any interrupts. */
++ if (new_smi->irq_setup)
++ new_smi->irq_setup(new_smi);
++
+ /*
+ * Check if the user forcefully enabled the daemon.
+ */
+@@ -3613,7 +3628,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ * Start clearing the flags before we enable interrupts or the
+ * timer to avoid racing with the timer.
+ */
+- start_clear_flags(new_smi);
++ start_clear_flags(new_smi, false);
+
+ /*
+ * IRQ is defined to be set when non-zero. req_events will
+@@ -3908,7 +3923,7 @@ static void cleanup_one_si(struct smi_info *to_clean)
+ poll(to_clean);
+ schedule_timeout_uninterruptible(1);
+ }
+- disable_si_irq(to_clean);
++ disable_si_irq(to_clean, false);
+ while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
+ poll(to_clean);
+ schedule_timeout_uninterruptible(1);
+diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
+index 30f522848c73..c19e7fc717c3 100644
+--- a/drivers/connector/connector.c
++++ b/drivers/connector/connector.c
+@@ -178,26 +178,21 @@ static int cn_call_callback(struct sk_buff *skb)
+ *
+ * It checks skb, netlink header and msg sizes, and calls callback helper.
+ */
+-static void cn_rx_skb(struct sk_buff *__skb)
++static void cn_rx_skb(struct sk_buff *skb)
+ {
+ struct nlmsghdr *nlh;
+- struct sk_buff *skb;
+ int len, err;
+
+- skb = skb_get(__skb);
+-
+ if (skb->len >= NLMSG_HDRLEN) {
+ nlh = nlmsg_hdr(skb);
+ len = nlmsg_len(nlh);
+
+ if (len < (int)sizeof(struct cn_msg) ||
+ skb->len < nlh->nlmsg_len ||
+- len > CONNECTOR_MAX_MSG_SIZE) {
+- kfree_skb(skb);
++ len > CONNECTOR_MAX_MSG_SIZE)
+ return;
+- }
+
+- err = cn_call_callback(skb);
++ err = cn_call_callback(skb_get(skb));
+ if (err < 0)
+ kfree_skb(skb);
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 70a11ac38119..c0fbf4ed58ec 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1611,7 +1611,7 @@ int hid_connect(struct hid_device *hdev, unsigned int connect_mask)
+ "Multi-Axis Controller"
+ };
+ const char *type, *bus;
+- char buf[64];
++ char buf[64] = "";
+ unsigned int i;
+ int len;
+ int ret;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 0215ab62bb93..cba008ac9cff 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1628,6 +1628,7 @@ static void wacom_wac_finger_usage_mapping(struct hid_device *hdev,
+ wacom_map_usage(input, usage, field, EV_KEY, BTN_TOUCH, 0);
+ break;
+ case HID_DG_CONTACTCOUNT:
++ wacom_wac->hid_data.cc_report = field->report->id;
+ wacom_wac->hid_data.cc_index = field->index;
+ wacom_wac->hid_data.cc_value_index = usage->usage_index;
+ break;
+@@ -1715,7 +1716,32 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct hid_data* hid_data = &wacom_wac->hid_data;
+
+- if (hid_data->cc_index >= 0) {
++ if (hid_data->cc_report != 0 &&
++ hid_data->cc_report != report->id) {
++ int i;
++
++ hid_data->cc_report = report->id;
++ hid_data->cc_index = -1;
++ hid_data->cc_value_index = -1;
++
++ for (i = 0; i < report->maxfield; i++) {
++ struct hid_field *field = report->field[i];
++ int j;
++
++ for (j = 0; j < field->maxusage; j++) {
++ if (field->usage[j].hid == HID_DG_CONTACTCOUNT) {
++ hid_data->cc_index = i;
++ hid_data->cc_value_index = j;
++
++ /* break */
++ i = report->maxfield;
++ j = field->maxusage;
++ }
++ }
++ }
++ }
++ if (hid_data->cc_report != 0 &&
++ hid_data->cc_index >= 0) {
+ struct hid_field *field = report->field[hid_data->cc_index];
+ int value = field->value[hid_data->cc_value_index];
+ if (value)
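The finger_pre_report hunk above rescans every field/usage pair of the report for HID_DG_CONTACTCOUNT and leaves the nested scan by clamping both loop indices once a match is found. A minimal userspace sketch of the same two-level search, using a goto for the exit instead of the index-clamping trick (the table contents below are invented for illustration):

#include <stdio.h>

/* Toy version of the nested field/usage scan: find the first (i, j) with
 * table[i][j] == target and stop both loops immediately. The kernel hunk
 * achieves the same exit by setting i and j to their bounds. */
static void find_usage(int rows, int cols, int target,
		       int table[rows][cols], int *ri, int *ci)
{
	*ri = *ci = -1;
	for (int i = 0; i < rows; i++) {
		for (int j = 0; j < cols; j++) {
			if (table[i][j] == target) {
				*ri = i;
				*ci = j;
				goto done; /* leave both loops at once */
			}
		}
	}
done:
	return;
}

int main(void)
{
	int t[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
	int i, j;

	find_usage(2, 3, 5, t, &i, &j);
	printf("%d %d\n", i, j); /* prints "1 1" */
	return 0;
}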
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 1e270d401e18..809c03e34f74 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -198,6 +198,7 @@ struct hid_data {
+ int width;
+ int height;
+ int id;
++ int cc_report;
+ int cc_index;
+ int cc_value_index;
+ int num_expected;
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 2d0dbbf38ceb..558c1e784613 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -756,7 +756,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
+ int uninitialized_var(index);
+ int uninitialized_var(inlen);
+ int cqe_size;
+- int irqn;
++ unsigned int irqn;
+ int eqn;
+ int err;
+
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 286e890e7d64..ef7862056978 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -1427,7 +1427,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+ {
+ int ret;
+- u16 asid;
++ int asid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+@@ -1439,10 +1439,11 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+ &cfg->cdptr_dma, GFP_KERNEL);
+ if (!cfg->cdptr) {
+ dev_warn(smmu->dev, "failed to allocate context descriptor\n");
++ ret = -ENOMEM;
+ goto out_free_asid;
+ }
+
+- cfg->cd.asid = asid;
++ cfg->cd.asid = (u16)asid;
+ cfg->cd.ttbr = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
+ cfg->cd.tcr = pgtbl_cfg->arm_lpae_s1_cfg.tcr;
+ cfg->cd.mair = pgtbl_cfg->arm_lpae_s1_cfg.mair[0];
+@@ -1456,7 +1457,7 @@ out_free_asid:
+ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+ {
+- u16 vmid;
++ int vmid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+
+@@ -1464,7 +1465,7 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+ if (IS_ERR_VALUE(vmid))
+ return vmid;
+
+- cfg->vmid = vmid;
++ cfg->vmid = (u16)vmid;
+ cfg->vttbr = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+ cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
+ return 0;
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index d65cf42399e8..dfc64662b386 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -4194,14 +4194,17 @@ int dmar_find_matched_atsr_unit(struct pci_dev *dev)
+ dev = pci_physfn(dev);
+ for (bus = dev->bus; bus; bus = bus->parent) {
+ bridge = bus->self;
+- if (!bridge || !pci_is_pcie(bridge) ||
++ /* If it's an integrated device, allow ATS */
++ if (!bridge)
++ return 1;
++ /* Connected via non-PCIe: no ATS */
++ if (!pci_is_pcie(bridge) ||
+ pci_pcie_type(bridge) == PCI_EXP_TYPE_PCI_BRIDGE)
+ return 0;
++ /* If we found the root port, look it up in the ATSR */
+ if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
+ break;
+ }
+- if (!bridge)
+- return 0;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(atsru, &dmar_atsr_units, list) {
+diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c
+index c4198fa490bf..9c1e8adaf4fc 100644
+--- a/drivers/isdn/i4l/isdn_ppp.c
++++ b/drivers/isdn/i4l/isdn_ppp.c
+@@ -301,6 +301,8 @@ isdn_ppp_open(int min, struct file *file)
+ is->compflags = 0;
+
+ is->reset = isdn_ppp_ccp_reset_alloc(is);
++ if (!is->reset)
++ return -ENOMEM;
+
+ is->lp = NULL;
+ is->mp_seqno = 0; /* MP sequence number */
+@@ -320,6 +322,10 @@ isdn_ppp_open(int min, struct file *file)
+ * VJ header compression init
+ */
+ is->slcomp = slhc_init(16, 16); /* not necessary for 2. link in bundle */
++ if (IS_ERR(is->slcomp)) {
++ isdn_ppp_ccp_reset_free(is);
++ return PTR_ERR(is->slcomp);
++ }
+ #endif
+ #ifdef CONFIG_IPPP_FILTER
+ is->pass_filter = NULL;
+@@ -567,10 +573,8 @@ isdn_ppp_ioctl(int min, struct file *file, unsigned int cmd, unsigned long arg)
+ is->maxcid = val;
+ #ifdef CONFIG_ISDN_PPP_VJ
+ sltmp = slhc_init(16, val);
+- if (!sltmp) {
+- printk(KERN_ERR "ippp, can't realloc slhc struct\n");
+- return -ENOMEM;
+- }
++ if (IS_ERR(sltmp))
++ return PTR_ERR(sltmp);
+ if (is->slcomp)
+ slhc_free(is->slcomp);
+ is->slcomp = sltmp;
+diff --git a/drivers/media/platform/vivid/vivid-osd.c b/drivers/media/platform/vivid/vivid-osd.c
+index 084d346fb4c4..e15eef6a94e5 100644
+--- a/drivers/media/platform/vivid/vivid-osd.c
++++ b/drivers/media/platform/vivid/vivid-osd.c
+@@ -85,6 +85,7 @@ static int vivid_fb_ioctl(struct fb_info *info, unsigned cmd, unsigned long arg)
+ case FBIOGET_VBLANK: {
+ struct fb_vblank vblank;
+
++ memset(&vblank, 0, sizeof(vblank));
+ vblank.flags = FB_VBLANK_HAVE_COUNT | FB_VBLANK_HAVE_VCOUNT |
+ FB_VBLANK_HAVE_VSYNC;
+ vblank.count = 0;
+diff --git a/drivers/media/usb/airspy/airspy.c b/drivers/media/usb/airspy/airspy.c
+index 8f2e1c277c5f..7b91327bd472 100644
+--- a/drivers/media/usb/airspy/airspy.c
++++ b/drivers/media/usb/airspy/airspy.c
+@@ -132,7 +132,7 @@ struct airspy {
+ int urbs_submitted;
+
+ /* USB control message buffer */
+- #define BUF_SIZE 24
++ #define BUF_SIZE 128
+ u8 buf[BUF_SIZE];
+
+ /* Current configuration */
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index bcd7bddbe312..509440cb6411 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1207,7 +1207,6 @@ static int bond_master_upper_dev_link(struct net_device *bond_dev,
+ err = netdev_master_upper_dev_link_private(slave_dev, bond_dev, slave);
+ if (err)
+ return err;
+- slave_dev->flags |= IFF_SLAVE;
+ rtmsg_ifinfo(RTM_NEWLINK, slave_dev, IFF_SLAVE, GFP_KERNEL);
+ return 0;
+ }
+@@ -1465,6 +1464,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
+ }
+ }
+
++ /* set slave flag before open to prevent IPv6 addrconf */
++ slave_dev->flags |= IFF_SLAVE;
++
+ /* open the slave since the application closed it */
+ res = dev_open(slave_dev);
+ if (res) {
+@@ -1725,6 +1727,7 @@ err_close:
+ dev_close(slave_dev);
+
+ err_restore_mac:
++ slave_dev->flags &= ~IFF_SLAVE;
+ if (!bond->params.fail_over_mac ||
+ BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
+ /* XXX TODO - fom follow mode needs to change master's
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 443632df2010..394744bfbf89 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -746,7 +746,7 @@ static int mlx5e_create_cq(struct mlx5e_channel *c,
+ struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5_core_cq *mcq = &cq->mcq;
+ int eqn_not_used;
+- int irqn;
++ unsigned int irqn;
+ int err;
+ u32 i;
+
+@@ -800,7 +800,7 @@ static int mlx5e_enable_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param)
+ void *in;
+ void *cqc;
+ int inlen;
+- int irqn_not_used;
++ unsigned int irqn_not_used;
+ int eqn;
+ int err;
+
+@@ -1498,7 +1498,7 @@ static int mlx5e_create_drop_cq(struct mlx5e_priv *priv,
+ struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5_core_cq *mcq = &cq->mcq;
+ int eqn_not_used;
+- int irqn;
++ unsigned int irqn;
+ int err;
+
+ err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 03aabdd79abe..af9593baf1bb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -520,7 +520,8 @@ static void mlx5_irq_clear_affinity_hints(struct mlx5_core_dev *mdev)
+ mlx5_irq_clear_affinity_hint(mdev, i);
+ }
+
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, int *irqn)
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
++ unsigned int *irqn)
+ {
+ struct mlx5_eq_table *table = &dev->priv.eq_table;
+ struct mlx5_eq *eq, *n;
+diff --git a/drivers/net/ethernet/synopsys/dwc_eth_qos.c b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
+index 85b3326775b8..37640e11afa6 100644
+--- a/drivers/net/ethernet/synopsys/dwc_eth_qos.c
++++ b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
+@@ -2107,7 +2107,7 @@ static int dwceqos_tx_frags(struct sk_buff *skb, struct net_local *lp,
+ dd = &lp->tx_descs[lp->tx_next];
+
+ /* Set DMA Descriptor fields */
+- dd->des0 = dma_handle;
++ dd->des0 = dma_handle + consumed_size;
+ dd->des1 = 0;
+ dd->des2 = dma_size;
+
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index ed00446759b2..9a863c6a6a33 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -721,10 +721,8 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ val &= 0xffff;
+ }
+ vj = slhc_init(val2+1, val+1);
+- if (!vj) {
+- netdev_err(ppp->dev,
+- "PPP: no memory (VJ compressor)\n");
+- err = -ENOMEM;
++ if (IS_ERR(vj)) {
++ err = PTR_ERR(vj);
+ break;
+ }
+ ppp_lock(ppp);
+diff --git a/drivers/net/slip/slhc.c b/drivers/net/slip/slhc.c
+index 079f7adfcde5..27ed25252aac 100644
+--- a/drivers/net/slip/slhc.c
++++ b/drivers/net/slip/slhc.c
+@@ -84,8 +84,9 @@ static long decode(unsigned char **cpp);
+ static unsigned char * put16(unsigned char *cp, unsigned short x);
+ static unsigned short pull16(unsigned char **cpp);
+
+-/* Initialize compression data structure
++/* Allocate compression data structure
+ * slots must be in range 0 to 255 (zero meaning no compression)
++ * Returns pointer to structure or ERR_PTR() on error.
+ */
+ struct slcompress *
+ slhc_init(int rslots, int tslots)
+@@ -94,11 +95,14 @@ slhc_init(int rslots, int tslots)
+ register struct cstate *ts;
+ struct slcompress *comp;
+
++ if (rslots < 0 || rslots > 255 || tslots < 0 || tslots > 255)
++ return ERR_PTR(-EINVAL);
++
+ comp = kzalloc(sizeof(struct slcompress), GFP_KERNEL);
+ if (! comp)
+ goto out_fail;
+
+- if ( rslots > 0 && rslots < 256 ) {
++ if (rslots > 0) {
+ size_t rsize = rslots * sizeof(struct cstate);
+ comp->rstate = kzalloc(rsize, GFP_KERNEL);
+ if (! comp->rstate)
+@@ -106,7 +110,7 @@ slhc_init(int rslots, int tslots)
+ comp->rslot_limit = rslots - 1;
+ }
+
+- if ( tslots > 0 && tslots < 256 ) {
++ if (tslots > 0) {
+ size_t tsize = tslots * sizeof(struct cstate);
+ comp->tstate = kzalloc(tsize, GFP_KERNEL);
+ if (! comp->tstate)
+@@ -141,7 +145,7 @@ out_free2:
+ out_free:
+ kfree(comp);
+ out_fail:
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+
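The slhc_init() conversion above, together with the matching callers in isdn_ppp, ppp_generic and slip, replaces bare NULL returns with the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() convention so callers can tell a rejected slot count (-EINVAL) from an allocation failure (-ENOMEM). A self-contained userspace re-implementation of that convention, for illustration only (the real macros live in include/linux/err.h):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO 4095

/* Encode a small negative errno in the pointer value itself, as the
 * kernel does: the top page of the address space is never a valid
 * allocation, so values in [-MAX_ERRNO, -1] are unambiguous. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Shape of the fixed slhc_init(): distinct errors for bad input vs OOM. */
static void *slots_alloc(int slots)
{
	if (slots < 0 || slots > 255)
		return ERR_PTR(-EINVAL);
	void *p = calloc(slots ? slots : 1, 16);
	return p ? p : ERR_PTR(-ENOMEM);
}

int main(void)
{
	void *p = slots_alloc(300);

	if (IS_ERR(p))
		printf("error %ld\n", PTR_ERR(p)); /* prints "error -22" */
	else
		free(p);
	return 0;
}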
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index 05387b1e2e95..a17d86a57734 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -164,7 +164,7 @@ static int sl_alloc_bufs(struct slip *sl, int mtu)
+ if (cbuff == NULL)
+ goto err_exit;
+ slcomp = slhc_init(16, 16);
+- if (slcomp == NULL)
++ if (IS_ERR(slcomp))
+ goto err_exit;
+ #endif
+ spin_lock_bh(&sl->lock);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 651d35ea22c5..59fefca74263 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1845,10 +1845,10 @@ static int team_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ struct team *team = netdev_priv(dev);
+ struct team_port *port;
+
+- rcu_read_lock();
+- list_for_each_entry_rcu(port, &team->port_list, list)
++ mutex_lock(&team->lock);
++ list_for_each_entry(port, &team->port_list, list)
+ vlan_vid_del(port->dev, proto, vid);
+- rcu_read_unlock();
++ mutex_unlock(&team->lock);
+
+ return 0;
+ }
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index b6ea6ff7fb7b..d87b4acdfa5b 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -100,7 +100,7 @@ static const struct net_device_ops cdc_mbim_netdev_ops = {
+ .ndo_stop = usbnet_stop,
+ .ndo_start_xmit = usbnet_start_xmit,
+ .ndo_tx_timeout = usbnet_tx_timeout,
+- .ndo_change_mtu = usbnet_change_mtu,
++ .ndo_change_mtu = cdc_ncm_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_vlan_rx_add_vid = cdc_mbim_rx_add_vid,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index fa41a6d2a3e5..e278a7a4956d 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -41,6 +41,7 @@
+ #include <linux/module.h>
+ #include <linux/netdevice.h>
+ #include <linux/ctype.h>
++#include <linux/etherdevice.h>
+ #include <linux/ethtool.h>
+ #include <linux/workqueue.h>
+ #include <linux/mii.h>
+@@ -689,6 +690,33 @@ static void cdc_ncm_free(struct cdc_ncm_ctx *ctx)
+ kfree(ctx);
+ }
+
++/* we need to override the usbnet change_mtu ndo for two reasons:
++ * - respect the negotiated maximum datagram size
++ * - avoid unwanted changes to rx and tx buffers
++ */
++int cdc_ncm_change_mtu(struct net_device *net, int new_mtu)
++{
++ struct usbnet *dev = netdev_priv(net);
++ struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
++ int maxmtu = ctx->max_datagram_size - cdc_ncm_eth_hlen(dev);
++
++ if (new_mtu <= 0 || new_mtu > maxmtu)
++ return -EINVAL;
++ net->mtu = new_mtu;
++ return 0;
++}
++EXPORT_SYMBOL_GPL(cdc_ncm_change_mtu);
++
++static const struct net_device_ops cdc_ncm_netdev_ops = {
++ .ndo_open = usbnet_open,
++ .ndo_stop = usbnet_stop,
++ .ndo_start_xmit = usbnet_start_xmit,
++ .ndo_tx_timeout = usbnet_tx_timeout,
++ .ndo_change_mtu = cdc_ncm_change_mtu,
++ .ndo_set_mac_address = eth_mac_addr,
++ .ndo_validate_addr = eth_validate_addr,
++};
++
+ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags)
+ {
+ const struct usb_cdc_union_desc *union_desc = NULL;
+@@ -874,6 +902,9 @@ advance:
+ /* add our sysfs attrs */
+ dev->net->sysfs_groups[0] = &cdc_ncm_sysfs_attr_group;
+
++ /* must handle MTU changes */
++ dev->net->netdev_ops = &cdc_ncm_netdev_ops;
++
+ return 0;
+
+ error2:
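The new cdc_ncm_change_mtu() above caps the MTU at the NCM-negotiated maximum datagram size minus the ethernet header, instead of the generic usbnet limit. A hedged userspace sketch of just that bounds check (the numbers below are made up; the kernel derives them from ctx->max_datagram_size and cdc_ncm_eth_hlen()):

#include <stdio.h>

/* Shape of the new cdc_ncm_change_mtu() check: the usable MTU is the
 * negotiated max datagram size minus the link header, so any request
 * outside (0, maxmtu] is rejected. */
static int change_mtu(int *mtu, int new_mtu, int max_datagram, int hdr_len)
{
	int maxmtu = max_datagram - hdr_len;

	if (new_mtu <= 0 || new_mtu > maxmtu)
		return -1; /* the kernel returns -EINVAL here */
	*mtu = new_mtu;
	return 0;
}

int main(void)
{
	int mtu = 1500;

	printf("%d\n", change_mtu(&mtu, 9000, 2048, 14)); /* -1: too large */
	if (change_mtu(&mtu, 2000, 2048, 14) == 0)
		printf("mtu now %d\n", mtu);              /* mtu now 2000 */
	return 0;
}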
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 0ef4a5ad5557..ba21d072be31 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -117,12 +117,6 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+ kfree_skb(skb);
+ goto drop;
+ }
+- /* don't change ip_summed == CHECKSUM_PARTIAL, as that
+- * will cause bad checksum on forwarded packets
+- */
+- if (skb->ip_summed == CHECKSUM_NONE &&
+- rcv->features & NETIF_F_RXCSUM)
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ if (likely(dev_forward_skb(rcv, skb) == NET_RX_SUCCESS)) {
+ struct pcpu_vstats *stats = this_cpu_ptr(dev->vstats);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index c1587ece28cf..40b5f8af47a3 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2660,7 +2660,7 @@ static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
+ struct vxlan_config *conf)
+ {
+ struct vxlan_net *vn = net_generic(src_net, vxlan_net_id);
+- struct vxlan_dev *vxlan = netdev_priv(dev);
++ struct vxlan_dev *vxlan = netdev_priv(dev), *tmp;
+ struct vxlan_rdst *dst = &vxlan->default_dst;
+ int err;
+ bool use_ipv6 = false;
+@@ -2725,9 +2725,15 @@ static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
+ if (!vxlan->cfg.age_interval)
+ vxlan->cfg.age_interval = FDB_AGE_DEFAULT;
+
+- if (vxlan_find_vni(src_net, conf->vni, use_ipv6 ? AF_INET6 : AF_INET,
+- vxlan->cfg.dst_port, vxlan->flags))
++ list_for_each_entry(tmp, &vn->vxlan_list, next) {
++ if (tmp->cfg.vni == conf->vni &&
++ (tmp->default_dst.remote_ip.sa.sa_family == AF_INET6 ||
++ tmp->cfg.saddr.sa.sa_family == AF_INET6) == use_ipv6 &&
++ tmp->cfg.dst_port == vxlan->cfg.dst_port &&
++ (tmp->flags & VXLAN_F_RCV_FLAGS) ==
++ (vxlan->flags & VXLAN_F_RCV_FLAGS))
+ return -EEXIST;
++ }
+
+ dev->ethtool_ops = &vxlan_ethtool_ops;
+
+diff --git a/drivers/parisc/iommu-helpers.h b/drivers/parisc/iommu-helpers.h
+index 761e77bfce5d..e56f1569f6c3 100644
+--- a/drivers/parisc/iommu-helpers.h
++++ b/drivers/parisc/iommu-helpers.h
+@@ -104,7 +104,11 @@ iommu_coalesce_chunks(struct ioc *ioc, struct device *dev,
+ struct scatterlist *contig_sg; /* contig chunk head */
+ unsigned long dma_offset, dma_len; /* start/len of DMA stream */
+ unsigned int n_mappings = 0;
+- unsigned int max_seg_size = dma_get_max_seg_size(dev);
++ unsigned int max_seg_size = min(dma_get_max_seg_size(dev),
++ (unsigned)DMA_CHUNK_SIZE);
++ unsigned int max_seg_boundary = dma_get_seg_boundary(dev) + 1;
++ if (max_seg_boundary) /* check if the addition above didn't overflow */
++ max_seg_size = min(max_seg_size, max_seg_boundary);
+
+ while (nents > 0) {
+
+@@ -138,14 +142,11 @@ iommu_coalesce_chunks(struct ioc *ioc, struct device *dev,
+
+ /*
+ ** First make sure current dma stream won't
+- ** exceed DMA_CHUNK_SIZE if we coalesce the
++ ** exceed max_seg_size if we coalesce the
+ ** next entry.
+ */
+- if(unlikely(ALIGN(dma_len + dma_offset + startsg->length,
+- IOVP_SIZE) > DMA_CHUNK_SIZE))
+- break;
+-
+- if (startsg->length + dma_len > max_seg_size)
++ if (unlikely(ALIGN(dma_len + dma_offset + startsg->length, IOVP_SIZE) >
++ max_seg_size))
+ break;
+
+ /*
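The iommu-helpers hunk above folds the device's segment boundary into max_seg_size (after first clamping to DMA_CHUNK_SIZE), guarding against dma_get_seg_boundary() returning an all-ones mask where the +1 wraps to 0. A small standalone sketch of that overflow-aware clamp (function name and values are illustrative):

#include <limits.h>
#include <stdio.h>

/* dma_get_seg_boundary() returns a mask such as 0xffffffff or ULONG_MAX;
 * adding 1 to turn it into a size can wrap to 0. The patch only honours
 * the boundary when the addition did not overflow. */
static unsigned int clamp_seg_size(unsigned int max_seg_size,
				   unsigned long boundary_mask)
{
	unsigned long max_seg_boundary = boundary_mask + 1;

	if (max_seg_boundary && max_seg_boundary < max_seg_size)
		max_seg_size = (unsigned int)max_seg_boundary;
	return max_seg_size;
}

int main(void)
{
	/* ULONG_MAX mask wraps to 0: boundary imposes no extra limit. */
	printf("%u\n", clamp_seg_size(65536, ULONG_MAX)); /* 65536 */
	printf("%u\n", clamp_seg_size(65536, 0xfff));     /* 4096 */
	return 0;
}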
+diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
+index 27bd170c3a28..ef2c5e032f10 100644
+--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
++++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
+@@ -1268,6 +1268,7 @@ static int
+ echo_copyout_lsm(struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
+ {
+ struct lov_stripe_md *ulsm = _ulsm;
++ struct lov_oinfo **p;
+ int nob, i;
+
+ nob = offsetof(struct lov_stripe_md, lsm_oinfo[lsm->lsm_stripe_count]);
+@@ -1277,9 +1278,10 @@ echo_copyout_lsm(struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
+ if (copy_to_user(ulsm, lsm, sizeof(*ulsm)))
+ return -EFAULT;
+
+- for (i = 0; i < lsm->lsm_stripe_count; i++) {
+- if (copy_to_user(ulsm->lsm_oinfo[i], lsm->lsm_oinfo[i],
+- sizeof(lsm->lsm_oinfo[0])))
++ for (i = 0, p = lsm->lsm_oinfo; i < lsm->lsm_stripe_count; i++, p++) {
++ struct lov_oinfo __user *up;
++ if (get_user(up, ulsm->lsm_oinfo + i) ||
++ copy_to_user(up, *p, sizeof(struct lov_oinfo)))
+ return -EFAULT;
+ }
+ return 0;
+@@ -1287,9 +1289,10 @@ echo_copyout_lsm(struct lov_stripe_md *lsm, void *_ulsm, int ulsm_nob)
+
+ static int
+ echo_copyin_lsm(struct echo_device *ed, struct lov_stripe_md *lsm,
+- void *ulsm, int ulsm_nob)
++ struct lov_stripe_md __user *ulsm, int ulsm_nob)
+ {
+ struct echo_client_obd *ec = ed->ed_ec;
++ struct lov_oinfo **p;
+ int i;
+
+ if (ulsm_nob < sizeof(*lsm))
+@@ -1305,11 +1308,10 @@ echo_copyin_lsm(struct echo_device *ed, struct lov_stripe_md *lsm,
+ return -EINVAL;
+
+
+- for (i = 0; i < lsm->lsm_stripe_count; i++) {
+- if (copy_from_user(lsm->lsm_oinfo[i],
+- ((struct lov_stripe_md *)ulsm)-> \
+- lsm_oinfo[i],
+- sizeof(lsm->lsm_oinfo[0])))
++ for (i = 0, p = lsm->lsm_oinfo; i < lsm->lsm_stripe_count; i++, p++) {
++ struct lov_oinfo __user *up;
++ if (get_user(up, ulsm->lsm_oinfo + i) ||
++ copy_from_user(*p, up, sizeof(struct lov_oinfo)))
+ return -EFAULT;
+ }
+ return 0;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 522f766a7d07..62084335a608 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1035,10 +1035,20 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ unsigned delay;
+
+ /* Continue a partial initialization */
+- if (type == HUB_INIT2)
+- goto init2;
+- if (type == HUB_INIT3)
++ if (type == HUB_INIT2 || type == HUB_INIT3) {
++ device_lock(hub->intfdev);
++
++ /* Was the hub disconnected while we were waiting? */
++ if (hub->disconnected) {
++ device_unlock(hub->intfdev);
++ kref_put(&hub->kref, hub_release);
++ return;
++ }
++ if (type == HUB_INIT2)
++ goto init2;
+ goto init3;
++ }
++ kref_get(&hub->kref);
+
+ /* The superspeed hub except for root hub has to use Hub Depth
+ * value as an offset into the route string to locate the bits
+@@ -1236,6 +1246,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ queue_delayed_work(system_power_efficient_wq,
+ &hub->init_work,
+ msecs_to_jiffies(delay));
++ device_unlock(hub->intfdev);
+ return; /* Continues at init3: below */
+ } else {
+ msleep(delay);
+@@ -1257,6 +1268,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ /* Allow autosuspend if it was suppressed */
+ if (type <= HUB_INIT3)
+ usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
++
++ if (type == HUB_INIT2 || type == HUB_INIT3)
++ device_unlock(hub->intfdev);
++
++ kref_put(&hub->kref, hub_release);
+ }
+
+ /* Implement the continuations for the delays above */
+@@ -3870,17 +3886,30 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
+ return;
+ }
+
+- if (usb_set_lpm_timeout(udev, state, timeout))
++ if (usb_set_lpm_timeout(udev, state, timeout)) {
+ /* If we can't set the parent hub U1/U2 timeout,
+ * device-initiated LPM won't be allowed either, so let the xHCI
+ * host know that this link state won't be enabled.
+ */
+ hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
++ } else {
++ /* Only a configured device will accept the Set Feature
++ * U1/U2_ENABLE
++ */
++ if (udev->actconfig)
++ usb_set_device_initiated_lpm(udev, state, true);
+
+- /* Only a configured device will accept the Set Feature U1/U2_ENABLE */
+- else if (udev->actconfig)
+- usb_set_device_initiated_lpm(udev, state, true);
+-
++ /* As soon as usb_set_lpm_timeout(timeout) returns 0,
++ * hub-initiated LPM is enabled. Thus, LPM is enabled no
++ * matter the result of usb_set_device_initiated_lpm().
++ * The only difference is whether the device is able to
++ * initiate LPM.
++ */
++ if (state == USB3_LPM_U1)
++ udev->usb3_lpm_u1_enabled = 1;
++ else if (state == USB3_LPM_U2)
++ udev->usb3_lpm_u2_enabled = 1;
++ }
+ }
+
+ /*
+@@ -3920,6 +3949,18 @@ static int usb_disable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
+ dev_warn(&udev->dev, "Could not disable xHCI %s timeout, "
+ "bus schedule bandwidth may be impacted.\n",
+ usb3_lpm_names[state]);
++
++ /* As soon as usb_set_lpm_timeout(0) returns 0, hub-initiated LPM
++ * is disabled. The hub will then also disallow the link from
++ * entering U1/U2, even if the device initiates LPM. Hence LPM is
++ * disabled whenever the hub LPM timeout is set to 0, regardless of
++ * whether device-initiated LPM is disabled or not.
++ */
++ if (state == USB3_LPM_U1)
++ udev->usb3_lpm_u1_enabled = 0;
++ else if (state == USB3_LPM_U2)
++ udev->usb3_lpm_u2_enabled = 0;
++
+ return 0;
+ }
+
+@@ -3954,8 +3995,6 @@ int usb_disable_lpm(struct usb_device *udev)
+ if (usb_disable_link_state(hcd, udev, USB3_LPM_U2))
+ goto enable_lpm;
+
+- udev->usb3_lpm_enabled = 0;
+-
+ return 0;
+
+ enable_lpm:
+@@ -4013,8 +4052,6 @@ void usb_enable_lpm(struct usb_device *udev)
+
+ usb_enable_link_state(hcd, udev, USB3_LPM_U1);
+ usb_enable_link_state(hcd, udev, USB3_LPM_U2);
+-
+- udev->usb3_lpm_enabled = 1;
+ }
+ EXPORT_SYMBOL_GPL(usb_enable_lpm);
+
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index cfc68c11c3f5..c54fd8b73966 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -531,7 +531,7 @@ static ssize_t usb2_lpm_besl_store(struct device *dev,
+ }
+ static DEVICE_ATTR_RW(usb2_lpm_besl);
+
+-static ssize_t usb3_hardware_lpm_show(struct device *dev,
++static ssize_t usb3_hardware_lpm_u1_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct usb_device *udev = to_usb_device(dev);
+@@ -539,7 +539,7 @@ static ssize_t usb3_hardware_lpm_show(struct device *dev,
+
+ usb_lock_device(udev);
+
+- if (udev->usb3_lpm_enabled)
++ if (udev->usb3_lpm_u1_enabled)
+ p = "enabled";
+ else
+ p = "disabled";
+@@ -548,7 +548,26 @@ static ssize_t usb3_hardware_lpm_show(struct device *dev,
+
+ return sprintf(buf, "%s\n", p);
+ }
+-static DEVICE_ATTR_RO(usb3_hardware_lpm);
++static DEVICE_ATTR_RO(usb3_hardware_lpm_u1);
++
++static ssize_t usb3_hardware_lpm_u2_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ struct usb_device *udev = to_usb_device(dev);
++ const char *p;
++
++ usb_lock_device(udev);
++
++ if (udev->usb3_lpm_u2_enabled)
++ p = "enabled";
++ else
++ p = "disabled";
++
++ usb_unlock_device(udev);
++
++ return sprintf(buf, "%s\n", p);
++}
++static DEVICE_ATTR_RO(usb3_hardware_lpm_u2);
+
+ static struct attribute *usb2_hardware_lpm_attr[] = {
+ &dev_attr_usb2_hardware_lpm.attr,
+@@ -562,7 +581,8 @@ static struct attribute_group usb2_hardware_lpm_attr_group = {
+ };
+
+ static struct attribute *usb3_hardware_lpm_attr[] = {
+- &dev_attr_usb3_hardware_lpm.attr,
++ &dev_attr_usb3_hardware_lpm_u1.attr,
++ &dev_attr_usb3_hardware_lpm_u2.attr,
+ NULL,
+ };
+ static struct attribute_group usb3_hardware_lpm_attr_group = {
+@@ -592,7 +612,8 @@ static int add_power_attributes(struct device *dev)
+ if (udev->usb2_hw_lpm_capable == 1)
+ rc = sysfs_merge_group(&dev->kobj,
+ &usb2_hardware_lpm_attr_group);
+- if (udev->lpm_capable == 1)
++ if (udev->speed == USB_SPEED_SUPER &&
++ udev->lpm_capable == 1)
+ rc = sysfs_merge_group(&dev->kobj,
+ &usb3_hardware_lpm_attr_group);
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 385f9f5d6715..e40c300ff8d6 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -4778,8 +4778,16 @@ int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
+ slot_ctx = xhci_get_slot_ctx(xhci, config_cmd->in_ctx);
+ slot_ctx->dev_info |= cpu_to_le32(DEV_HUB);
++ /*
++ * refer to section 6.2.2: MTT should be 0 for a full-speed hub,
++ * but it may already be set to 1 when setting up an xHCI virtual
++ * device, so clear it anyway.
++ */
+ if (tt->multi)
+ slot_ctx->dev_info |= cpu_to_le32(DEV_MTT);
++ else if (hdev->speed == USB_SPEED_FULL)
++ slot_ctx->dev_info &= cpu_to_le32(~DEV_MTT);
++
+ if (xhci->hci_version > 0x95) {
+ xhci_dbg(xhci, "xHCI version %x needs hub "
+ "TT think time and number of ports\n",
+@@ -5034,6 +5042,10 @@ static int __init xhci_hcd_init(void)
+ BUILD_BUG_ON(sizeof(struct xhci_intr_reg) != 8*32/8);
+ /* xhci_run_regs has eight fields and embeds 128 xhci_intr_regs */
+ BUILD_BUG_ON(sizeof(struct xhci_run_regs) != (8+8*128)*32/8);
++
++ if (usb_disabled())
++ return -ENODEV;
++
+ return 0;
+ }
+
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 7d4f51a32e66..59b2126b21a3 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -160,6 +160,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
+ { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
+ { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
++ { USB_DEVICE(0x18EF, 0xE025) }, /* ELV Marble Sound Board 1 */
+ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
+ { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */
+diff --git a/drivers/usb/serial/ipaq.c b/drivers/usb/serial/ipaq.c
+index f51a5d52c0ed..ec1b8f2c1183 100644
+--- a/drivers/usb/serial/ipaq.c
++++ b/drivers/usb/serial/ipaq.c
+@@ -531,7 +531,8 @@ static int ipaq_open(struct tty_struct *tty,
+ * through. Since this has a reasonably high failure rate, we retry
+ * several times.
+ */
+- while (retries--) {
++ while (retries) {
++ retries--;
+ result = usb_control_msg(serial->dev,
+ usb_sndctrlpipe(serial->dev, 0), 0x22, 0x21,
+ 0x1, 0, NULL, 0, 100);
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 2ea0b3b2a91d..1be5dd048622 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -804,7 +804,7 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+
+ vma->vm_ops = &gntdev_vmops;
+
+- vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
++ vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
+
+ if (use_ptemod)
+ vma->vm_flags |= VM_DONTCOPY;
+diff --git a/fs/direct-io.c b/fs/direct-io.c
+index 11256291642e..3e116320f01b 100644
+--- a/fs/direct-io.c
++++ b/fs/direct-io.c
+@@ -1161,6 +1161,16 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
+ }
+ }
+
++ /* Once we have sampled i_size, check for reads beyond EOF */
++ dio->i_size = i_size_read(inode);
++ if (iov_iter_rw(iter) == READ && offset >= dio->i_size) {
++ if (dio->flags & DIO_LOCKING)
++ mutex_unlock(&inode->i_mutex);
++ kmem_cache_free(dio_cache, dio);
++ retval = 0;
++ goto out;
++ }
++
+ /*
+ * For file extending writes updating i_size before data writeouts
+ * complete can expose uninitialized blocks in dumb filesystems.
+@@ -1214,7 +1224,6 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
+ sdio.next_block_for_io = -1;
+
+ dio->iocb = iocb;
+- dio->i_size = i_size_read(inode);
+
+ spin_lock_init(&dio->bio_lock);
+ dio->refcount = 1;
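The direct-io hunk above samples i_size once, early, and turns a read at or past EOF into an immediate return of 0 instead of submitting I/O. A toy model of that semantic (plain arithmetic, no locking or dio state):

#include <stdio.h>

/* Semantics the direct-io hunk enforces: a read whose offset is at or
 * past i_size returns 0 (EOF) right away; a read that straddles i_size
 * is shortened. */
static long dio_read(long i_size, long offset, long len)
{
	if (offset >= i_size)
		return 0;              /* beyond EOF: nothing to read */
	return (offset + len > i_size) ? i_size - offset : len;
}

int main(void)
{
	printf("%ld\n", dio_read(1000, 1200, 100)); /* 0 */
	printf("%ld\n", dio_read(1000, 950, 100));  /* 50: short read */
	return 0;
}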
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index fa2cab985e57..d42a5b832ad3 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -459,6 +459,25 @@ static inline void bpf_jit_free(struct bpf_prog *fp)
+
+ #define BPF_ANC BIT(15)
+
++static inline bool bpf_needs_clear_a(const struct sock_filter *first)
++{
++ switch (first->code) {
++ case BPF_RET | BPF_K:
++ case BPF_LD | BPF_W | BPF_LEN:
++ return false;
++
++ case BPF_LD | BPF_W | BPF_ABS:
++ case BPF_LD | BPF_H | BPF_ABS:
++ case BPF_LD | BPF_B | BPF_ABS:
++ if (first->k == SKF_AD_OFF + SKF_AD_ALU_XOR_X)
++ return true;
++ return false;
++
++ default:
++ return true;
++ }
++}
++
+ static inline u16 bpf_anc_helper(const struct sock_filter *ftest)
+ {
+ BUG_ON(ftest->code & BPF_ANC);
+diff --git a/include/linux/mlx5/cq.h b/include/linux/mlx5/cq.h
+index abc4767695e4..b2c9fada8eac 100644
+--- a/include/linux/mlx5/cq.h
++++ b/include/linux/mlx5/cq.h
+@@ -45,7 +45,7 @@ struct mlx5_core_cq {
+ atomic_t refcount;
+ struct completion free;
+ unsigned vector;
+- int irqn;
++ unsigned int irqn;
+ void (*comp) (struct mlx5_core_cq *);
+ void (*event) (struct mlx5_core_cq *, enum mlx5_event);
+ struct mlx5_uar *uar;
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 8b6d6f2154a4..2b013dcc1d7e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -303,7 +303,7 @@ struct mlx5_eq {
+ u32 cons_index;
+ struct mlx5_buf buf;
+ int size;
+- u8 irqn;
++ unsigned int irqn;
+ u8 eqn;
+ int nent;
+ u64 mask;
+@@ -738,7 +738,8 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx,
+ int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq);
+ int mlx5_start_eqs(struct mlx5_core_dev *dev);
+ int mlx5_stop_eqs(struct mlx5_core_dev *dev);
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, int *irqn);
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
++ unsigned int *irqn);
+ int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+ int mlx5_core_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index b7b9501b41af..f477e87ca46f 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -830,6 +830,7 @@ struct user_struct {
+ unsigned long mq_bytes; /* How many bytes can be allocated to mqueue? */
+ #endif
+ unsigned long locked_shm; /* How many pages of mlocked shm ? */
++ unsigned long unix_inflight; /* How many files in flight in unix sockets */
+
+ #ifdef CONFIG_KEYS
+ struct key *uid_keyring; /* UID specific keyring */
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 4398411236f1..23ce309bd93f 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3437,7 +3437,8 @@ struct skb_gso_cb {
+ int encap_level;
+ __u16 csum_start;
+ };
+-#define SKB_GSO_CB(skb) ((struct skb_gso_cb *)(skb)->cb)
++#define SKB_SGO_CB_OFFSET 32
++#define SKB_GSO_CB(skb) ((struct skb_gso_cb *)((skb)->cb + SKB_SGO_CB_OFFSET))
+
+ static inline int skb_tnl_header_len(const struct sk_buff *inner_skb)
+ {
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index a460e2ef2843..42c36bb6b74a 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -524,7 +524,7 @@ asmlinkage long sys_chown(const char __user *filename,
+ asmlinkage long sys_lchown(const char __user *filename,
+ uid_t user, gid_t group);
+ asmlinkage long sys_fchown(unsigned int fd, uid_t user, gid_t group);
+-#ifdef CONFIG_UID16
++#ifdef CONFIG_HAVE_UID16
+ asmlinkage long sys_chown16(const char __user *filename,
+ old_uid_t user, old_gid_t group);
+ asmlinkage long sys_lchown16(const char __user *filename,
+diff --git a/include/linux/types.h b/include/linux/types.h
+index c314989d9158..89f63da62be6 100644
+--- a/include/linux/types.h
++++ b/include/linux/types.h
+@@ -35,7 +35,7 @@ typedef __kernel_gid16_t gid16_t;
+
+ typedef unsigned long uintptr_t;
+
+-#ifdef CONFIG_UID16
++#ifdef CONFIG_HAVE_UID16
+ /* This is defined by include/asm-{arch}/posix_types.h */
+ typedef __kernel_old_uid_t old_uid_t;
+ typedef __kernel_old_gid_t old_gid_t;
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 447fe29b55b4..4aec2113107c 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -507,6 +507,8 @@ struct usb3_lpm_parameters {
+ * @usb2_hw_lpm_enabled: USB2 hardware LPM is enabled
+ * @usb2_hw_lpm_allowed: Userspace allows USB 2.0 LPM to be enabled
+ * @usb3_lpm_enabled: USB3 hardware LPM enabled
++ * @usb3_lpm_u1_enabled: USB3 hardware U1 LPM enabled
++ * @usb3_lpm_u2_enabled: USB3 hardware U2 LPM enabled
+ * @string_langid: language ID for strings
+ * @product: iProduct string, if present (static)
+ * @manufacturer: iManufacturer string, if present (static)
+@@ -580,6 +582,8 @@ struct usb_device {
+ unsigned usb2_hw_lpm_enabled:1;
+ unsigned usb2_hw_lpm_allowed:1;
+ unsigned usb3_lpm_enabled:1;
++ unsigned usb3_lpm_u1_enabled:1;
++ unsigned usb3_lpm_u2_enabled:1;
+ int string_langid;
+
+ /* static strings from the device */
+diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
+index 1f6526c76ee8..3a375d07d0dc 100644
+--- a/include/linux/usb/cdc_ncm.h
++++ b/include/linux/usb/cdc_ncm.h
+@@ -138,6 +138,7 @@ struct cdc_ncm_ctx {
+ };
+
+ u8 cdc_ncm_select_altsetting(struct usb_interface *intf);
++int cdc_ncm_change_mtu(struct net_device *net, int new_mtu);
+ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags);
+ void cdc_ncm_unbind(struct usbnet *dev, struct usb_interface *intf);
+ struct sk_buff *cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign);
+diff --git a/include/net/inet_ecn.h b/include/net/inet_ecn.h
+index 84b20835b736..0dc0a51da38f 100644
+--- a/include/net/inet_ecn.h
++++ b/include/net/inet_ecn.h
+@@ -111,11 +111,24 @@ static inline void ipv4_copy_dscp(unsigned int dscp, struct iphdr *inner)
+
+ struct ipv6hdr;
+
+-static inline int IP6_ECN_set_ce(struct ipv6hdr *iph)
++/* Note:
++ * IP_ECN_set_ce() has to tweak the IPv4 checksum when setting CE,
++ * so the two changes cancel out and skb->csum is unaffected if/when
++ * CHECKSUM_COMPLETE. In the IPv6 case, no checksum compensates for
++ * the change in the IPv6 header, so we have to update skb->csum.
++ */
++static inline int IP6_ECN_set_ce(struct sk_buff *skb, struct ipv6hdr *iph)
+ {
++ __be32 from, to;
++
+ if (INET_ECN_is_not_ect(ipv6_get_dsfield(iph)))
+ return 0;
+- *(__be32*)iph |= htonl(INET_ECN_CE << 20);
++
++ from = *(__be32 *)iph;
++ to = from | htonl(INET_ECN_CE << 20);
++ *(__be32 *)iph = to;
++ if (skb->ip_summed == CHECKSUM_COMPLETE)
++ skb->csum = csum_add(csum_sub(skb->csum, from), to);
+ return 1;
+ }
+
+@@ -142,7 +155,7 @@ static inline int INET_ECN_set_ce(struct sk_buff *skb)
+ case cpu_to_be16(ETH_P_IPV6):
+ if (skb_network_header(skb) + sizeof(struct ipv6hdr) <=
+ skb_tail_pointer(skb))
+- return IP6_ECN_set_ce(ipv6_hdr(skb));
++ return IP6_ECN_set_ce(skb, ipv6_hdr(skb));
+ break;
+ }
+
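The IPv6 CE hunk above works because the Internet checksum is a one's-complement sum: when a word in the summed data changes from `from` to `to`, skb->csum can be patched with csum_sub()/csum_add() rather than recomputed. A userspace demonstration of the same incremental update on 16-bit words (helper names are mine, not the kernel's; the RFC 1624 corner case around the two representations of zero is glossed over):

#include <stdint.h>
#include <stdio.h>

/* Fold a wide one's-complement accumulator down to 16 bits. */
static uint16_t csum_fold(uint64_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

static uint16_t csum_words(const uint16_t *w, int n)
{
	uint64_t sum = 0;

	for (int i = 0; i < n; i++)
		sum += w[i];
	return csum_fold(sum);
}

int main(void)
{
	uint16_t data[4] = { 0x1234, 0x5678, 0x9abc, 0xdef0 };
	uint16_t before = csum_words(data, 4);

	uint16_t from = data[1], to = 0x1111;
	data[1] = to;

	/* Incremental update: add the complement of the old word, then
	 * add the new word -- the one's-complement analogue of
	 * csum_add(csum_sub(sum, from), to). */
	uint16_t patched = csum_fold((uint64_t)before + (uint16_t)~from + to);

	printf("%04x %04x\n", patched, csum_words(data, 4)); /* identical */
	return 0;
}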
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b074b23000d6..36c6efeffdd5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1058,6 +1058,16 @@ static int check_alu_op(struct reg_state *regs, struct bpf_insn *insn)
+ return -EINVAL;
+ }
+
++ if ((opcode == BPF_LSH || opcode == BPF_RSH ||
++ opcode == BPF_ARSH) && BPF_SRC(insn->code) == BPF_K) {
++ int size = BPF_CLASS(insn->code) == BPF_ALU64 ? 64 : 32;
++
++ if (insn->imm < 0 || insn->imm >= size) {
++ verbose("invalid shift %d\n", insn->imm);
++ return -EINVAL;
++ }
++ }
++
+ /* pattern match 'bpf_add Rx, imm' instruction */
+ if (opcode == BPF_ADD && BPF_CLASS(insn->code) == BPF_ALU64 &&
+ regs[insn->dst_reg].type == FRAME_PTR &&
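The verifier hunk above (and the matching classic-BPF check added to net/core/filter.c later in this patch) rejects constant shifts of register width or more. The motivation: C leaves x << k undefined for k >= 32 on a 32-bit operand, while x86 silently masks the count to k & 31, so the interpreter and a JIT could otherwise disagree on the same program. A sketch of the added rule:

#include <stdio.h>

/* Mirrors the new verifier check: a BPF_K shift immediate must lie in
 * [0, width); BPF_ALU64 uses a 64-bit bound, BPF_ALU a 32-bit one. */
static int shift_imm_ok(int imm, int is_alu64)
{
	int size = is_alu64 ? 64 : 32;

	return imm >= 0 && imm < size;
}

int main(void)
{
	printf("%d %d %d\n",
	       shift_imm_ok(31, 0),  /* 1: fine for 32-bit */
	       shift_imm_ok(32, 0),  /* 0: rejected */
	       shift_imm_ok(32, 1)); /* 1: fine for 64-bit */
	return 0;
}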
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 84190f02b521..101240bfff1e 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -970,13 +970,29 @@ EXPORT_SYMBOL(add_timer);
+ */
+ void add_timer_on(struct timer_list *timer, int cpu)
+ {
+- struct tvec_base *base = per_cpu_ptr(&tvec_bases, cpu);
++ struct tvec_base *new_base = per_cpu_ptr(&tvec_bases, cpu);
++ struct tvec_base *base;
+ unsigned long flags;
+
+ timer_stats_timer_set_start_info(timer);
+ BUG_ON(timer_pending(timer) || !timer->function);
+- spin_lock_irqsave(&base->lock, flags);
+- timer->flags = (timer->flags & ~TIMER_BASEMASK) | cpu;
++
++ /*
++ * If @timer was on a different CPU, it should be migrated with the
++ * old base locked to prevent other operations proceeding with the
++ * wrong base locked. See lock_timer_base().
++ */
++ base = lock_timer_base(timer, &flags);
++ if (base != new_base) {
++ timer->flags |= TIMER_MIGRATING;
++
++ spin_unlock(&base->lock);
++ base = new_base;
++ spin_lock(&base->lock);
++ WRITE_ONCE(timer->flags,
++ (timer->flags & ~TIMER_BASEMASK) | cpu);
++ }
++
+ debug_activate(timer, timer->expires);
+ internal_add_timer(base, timer);
+ spin_unlock_irqrestore(&base->lock, flags);
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index 191a70290dca..f5d2fe5e31cc 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -127,21 +127,17 @@ batadv_backbone_gw_free_ref(struct batadv_bla_backbone_gw *backbone_gw)
+ }
+
+ /* finally deinitialize the claim */
+-static void batadv_claim_free_rcu(struct rcu_head *rcu)
++static void batadv_claim_release(struct batadv_bla_claim *claim)
+ {
+- struct batadv_bla_claim *claim;
+-
+- claim = container_of(rcu, struct batadv_bla_claim, rcu);
+-
+ batadv_backbone_gw_free_ref(claim->backbone_gw);
+- kfree(claim);
++ kfree_rcu(claim, rcu);
+ }
+
+ /* free a claim, call claim_free_rcu if its the last reference */
+ static void batadv_claim_free_ref(struct batadv_bla_claim *claim)
+ {
+ if (atomic_dec_and_test(&claim->refcount))
+- call_rcu(&claim->rcu, batadv_claim_free_rcu);
++ batadv_claim_release(claim);
+ }
+
+ /**
+diff --git a/net/batman-adv/hard-interface.h b/net/batman-adv/hard-interface.h
+index 5a31420513e1..7b12ea8ea29d 100644
+--- a/net/batman-adv/hard-interface.h
++++ b/net/batman-adv/hard-interface.h
+@@ -75,18 +75,6 @@ batadv_hardif_free_ref(struct batadv_hard_iface *hard_iface)
+ call_rcu(&hard_iface->rcu, batadv_hardif_free_rcu);
+ }
+
+-/**
+- * batadv_hardif_free_ref_now - decrement the hard interface refcounter and
+- * possibly free it (without rcu callback)
+- * @hard_iface: the hard interface to free
+- */
+-static inline void
+-batadv_hardif_free_ref_now(struct batadv_hard_iface *hard_iface)
+-{
+- if (atomic_dec_and_test(&hard_iface->refcount))
+- batadv_hardif_free_rcu(&hard_iface->rcu);
+-}
+-
+ static inline struct batadv_hard_iface *
+ batadv_primary_if_get_selected(struct batadv_priv *bat_priv)
+ {
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index f5276be2c77c..d0956f726547 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -203,28 +203,25 @@ void batadv_nc_init_orig(struct batadv_orig_node *orig_node)
+ }
+
+ /**
+- * batadv_nc_node_free_rcu - rcu callback to free an nc node and remove
+- * its refcount on the orig_node
+- * @rcu: rcu pointer of the nc node
++ * batadv_nc_node_release - release nc_node from lists and queue for free after
++ * rcu grace period
++ * @nc_node: the nc node to free
+ */
+-static void batadv_nc_node_free_rcu(struct rcu_head *rcu)
++static void batadv_nc_node_release(struct batadv_nc_node *nc_node)
+ {
+- struct batadv_nc_node *nc_node;
+-
+- nc_node = container_of(rcu, struct batadv_nc_node, rcu);
+ batadv_orig_node_free_ref(nc_node->orig_node);
+- kfree(nc_node);
++ kfree_rcu(nc_node, rcu);
+ }
+
+ /**
+- * batadv_nc_node_free_ref - decrements the nc node refcounter and possibly
+- * frees it
++ * batadv_nc_node_free_ref - decrement the nc node refcounter and possibly
++ * release it
+ * @nc_node: the nc node to free
+ */
+ static void batadv_nc_node_free_ref(struct batadv_nc_node *nc_node)
+ {
+ if (atomic_dec_and_test(&nc_node->refcount))
+- call_rcu(&nc_node->rcu, batadv_nc_node_free_rcu);
++ batadv_nc_node_release(nc_node);
+ }
+
+ /**
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 7486df9ed48d..17851d3aaf22 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -163,92 +163,66 @@ err:
+ }
+
+ /**
+- * batadv_neigh_ifinfo_free_rcu - free the neigh_ifinfo object
+- * @rcu: rcu pointer of the neigh_ifinfo object
+- */
+-static void batadv_neigh_ifinfo_free_rcu(struct rcu_head *rcu)
+-{
+- struct batadv_neigh_ifinfo *neigh_ifinfo;
+-
+- neigh_ifinfo = container_of(rcu, struct batadv_neigh_ifinfo, rcu);
+-
+- if (neigh_ifinfo->if_outgoing != BATADV_IF_DEFAULT)
+- batadv_hardif_free_ref_now(neigh_ifinfo->if_outgoing);
+-
+- kfree(neigh_ifinfo);
+-}
+-
+-/**
+- * batadv_neigh_ifinfo_free_now - decrement the refcounter and possibly free
+- * the neigh_ifinfo (without rcu callback)
++ * batadv_neigh_ifinfo_release - release neigh_ifinfo from lists and queue for
++ * free after rcu grace period
+ * @neigh_ifinfo: the neigh_ifinfo object to release
+ */
+ static void
+-batadv_neigh_ifinfo_free_ref_now(struct batadv_neigh_ifinfo *neigh_ifinfo)
++batadv_neigh_ifinfo_release(struct batadv_neigh_ifinfo *neigh_ifinfo)
+ {
+- if (atomic_dec_and_test(&neigh_ifinfo->refcount))
+- batadv_neigh_ifinfo_free_rcu(&neigh_ifinfo->rcu);
++ if (neigh_ifinfo->if_outgoing != BATADV_IF_DEFAULT)
++ batadv_hardif_free_ref(neigh_ifinfo->if_outgoing);
++
++ kfree_rcu(neigh_ifinfo, rcu);
+ }
+
+ /**
+- * batadv_neigh_ifinfo_free_ref - decrement the refcounter and possibly free
++ * batadv_neigh_ifinfo_free_ref - decrement the refcounter and possibly release
+ * the neigh_ifinfo
+ * @neigh_ifinfo: the neigh_ifinfo object to release
+ */
+ void batadv_neigh_ifinfo_free_ref(struct batadv_neigh_ifinfo *neigh_ifinfo)
+ {
+ if (atomic_dec_and_test(&neigh_ifinfo->refcount))
+- call_rcu(&neigh_ifinfo->rcu, batadv_neigh_ifinfo_free_rcu);
++ batadv_neigh_ifinfo_release(neigh_ifinfo);
+ }
+
+ /**
+- * batadv_neigh_node_free_rcu - free the neigh_node
+- * @rcu: rcu pointer of the neigh_node
++ * batadv_neigh_node_release - release neigh_node from lists and queue for
++ * free after rcu grace period
++ * @neigh_node: neigh neighbor to free
+ */
+-static void batadv_neigh_node_free_rcu(struct rcu_head *rcu)
++static void batadv_neigh_node_release(struct batadv_neigh_node *neigh_node)
+ {
+ struct hlist_node *node_tmp;
+- struct batadv_neigh_node *neigh_node;
+ struct batadv_neigh_ifinfo *neigh_ifinfo;
+ struct batadv_algo_ops *bao;
+
+- neigh_node = container_of(rcu, struct batadv_neigh_node, rcu);
+ bao = neigh_node->orig_node->bat_priv->bat_algo_ops;
+
+ hlist_for_each_entry_safe(neigh_ifinfo, node_tmp,
+ &neigh_node->ifinfo_list, list) {
+- batadv_neigh_ifinfo_free_ref_now(neigh_ifinfo);
++ batadv_neigh_ifinfo_free_ref(neigh_ifinfo);
+ }
+
+ if (bao->bat_neigh_free)
+ bao->bat_neigh_free(neigh_node);
+
+- batadv_hardif_free_ref_now(neigh_node->if_incoming);
++ batadv_hardif_free_ref(neigh_node->if_incoming);
+
+- kfree(neigh_node);
+-}
+-
+-/**
+- * batadv_neigh_node_free_ref_now - decrement the neighbors refcounter
+- * and possibly free it (without rcu callback)
+- * @neigh_node: neigh neighbor to free
+- */
+-static void
+-batadv_neigh_node_free_ref_now(struct batadv_neigh_node *neigh_node)
+-{
+- if (atomic_dec_and_test(&neigh_node->refcount))
+- batadv_neigh_node_free_rcu(&neigh_node->rcu);
++ kfree_rcu(neigh_node, rcu);
+ }
+
+ /**
+ * batadv_neigh_node_free_ref - decrement the neighbors refcounter
+- * and possibly free it
++ * and possibly release it
+ * @neigh_node: neigh neighbor to free
+ */
+ void batadv_neigh_node_free_ref(struct batadv_neigh_node *neigh_node)
+ {
+ if (atomic_dec_and_test(&neigh_node->refcount))
+- call_rcu(&neigh_node->rcu, batadv_neigh_node_free_rcu);
++ batadv_neigh_node_release(neigh_node);
+ }
+
+ /**
+@@ -532,108 +506,99 @@ out:
+ }
+
+ /**
+- * batadv_orig_ifinfo_free_rcu - free the orig_ifinfo object
+- * @rcu: rcu pointer of the orig_ifinfo object
++ * batadv_orig_ifinfo_release - release orig_ifinfo from lists and queue for
++ * free after rcu grace period
++ * @orig_ifinfo: the orig_ifinfo object to release
+ */
+-static void batadv_orig_ifinfo_free_rcu(struct rcu_head *rcu)
++static void batadv_orig_ifinfo_release(struct batadv_orig_ifinfo *orig_ifinfo)
+ {
+- struct batadv_orig_ifinfo *orig_ifinfo;
+ struct batadv_neigh_node *router;
+
+- orig_ifinfo = container_of(rcu, struct batadv_orig_ifinfo, rcu);
+-
+ if (orig_ifinfo->if_outgoing != BATADV_IF_DEFAULT)
+- batadv_hardif_free_ref_now(orig_ifinfo->if_outgoing);
++ batadv_hardif_free_ref(orig_ifinfo->if_outgoing);
+
+ /* this is the last reference to this object */
+ router = rcu_dereference_protected(orig_ifinfo->router, true);
+ if (router)
+- batadv_neigh_node_free_ref_now(router);
+- kfree(orig_ifinfo);
++ batadv_neigh_node_free_ref(router);
++
++ kfree_rcu(orig_ifinfo, rcu);
+ }
+
+ /**
+- * batadv_orig_ifinfo_free_ref - decrement the refcounter and possibly free
+- * the orig_ifinfo (without rcu callback)
++ * batadv_orig_ifinfo_free_ref - decrement the refcounter and possibly release
++ * the orig_ifinfo
+ * @orig_ifinfo: the orig_ifinfo object to release
+ */
+-static void
+-batadv_orig_ifinfo_free_ref_now(struct batadv_orig_ifinfo *orig_ifinfo)
++void batadv_orig_ifinfo_free_ref(struct batadv_orig_ifinfo *orig_ifinfo)
+ {
+ if (atomic_dec_and_test(&orig_ifinfo->refcount))
+- batadv_orig_ifinfo_free_rcu(&orig_ifinfo->rcu);
++ batadv_orig_ifinfo_release(orig_ifinfo);
+ }
+
+ /**
+- * batadv_orig_ifinfo_free_ref - decrement the refcounter and possibly free
+- * the orig_ifinfo
+- * @orig_ifinfo: the orig_ifinfo object to release
++ * batadv_orig_node_free_rcu - free the orig_node
++ * @rcu: rcu pointer of the orig_node
+ */
+-void batadv_orig_ifinfo_free_ref(struct batadv_orig_ifinfo *orig_ifinfo)
++static void batadv_orig_node_free_rcu(struct rcu_head *rcu)
+ {
+- if (atomic_dec_and_test(&orig_ifinfo->refcount))
+- call_rcu(&orig_ifinfo->rcu, batadv_orig_ifinfo_free_rcu);
++ struct batadv_orig_node *orig_node;
++
++ orig_node = container_of(rcu, struct batadv_orig_node, rcu);
++
++ batadv_mcast_purge_orig(orig_node);
++
++ batadv_frag_purge_orig(orig_node, NULL);
++
++ if (orig_node->bat_priv->bat_algo_ops->bat_orig_free)
++ orig_node->bat_priv->bat_algo_ops->bat_orig_free(orig_node);
++
++ kfree(orig_node->tt_buff);
++ kfree(orig_node);
+ }
+
+-static void batadv_orig_node_free_rcu(struct rcu_head *rcu)
++/**
++ * batadv_orig_node_release - release orig_node from lists and queue for
++ * free after rcu grace period
++ * @orig_node: the orig node to free
++ */
++static void batadv_orig_node_release(struct batadv_orig_node *orig_node)
+ {
+ struct hlist_node *node_tmp;
+ struct batadv_neigh_node *neigh_node;
+- struct batadv_orig_node *orig_node;
+ struct batadv_orig_ifinfo *orig_ifinfo;
+
+- orig_node = container_of(rcu, struct batadv_orig_node, rcu);
+-
+ spin_lock_bh(&orig_node->neigh_list_lock);
+
+ /* for all neighbors towards this originator ... */
+ hlist_for_each_entry_safe(neigh_node, node_tmp,
+ &orig_node->neigh_list, list) {
+ hlist_del_rcu(&neigh_node->list);
+- batadv_neigh_node_free_ref_now(neigh_node);
++ batadv_neigh_node_free_ref(neigh_node);
+ }
+
+ hlist_for_each_entry_safe(orig_ifinfo, node_tmp,
+ &orig_node->ifinfo_list, list) {
+ hlist_del_rcu(&orig_ifinfo->list);
+- batadv_orig_ifinfo_free_ref_now(orig_ifinfo);
++ batadv_orig_ifinfo_free_ref(orig_ifinfo);
+ }
+ spin_unlock_bh(&orig_node->neigh_list_lock);
+
+- batadv_mcast_purge_orig(orig_node);
+-
+ /* Free nc_nodes */
+ batadv_nc_purge_orig(orig_node->bat_priv, orig_node, NULL);
+
+- batadv_frag_purge_orig(orig_node, NULL);
+-
+- if (orig_node->bat_priv->bat_algo_ops->bat_orig_free)
+- orig_node->bat_priv->bat_algo_ops->bat_orig_free(orig_node);
+-
+- kfree(orig_node->tt_buff);
+- kfree(orig_node);
++ call_rcu(&orig_node->rcu, batadv_orig_node_free_rcu);
+ }
+
+ /**
+ * batadv_orig_node_free_ref - decrement the orig node refcounter and possibly
+- * schedule an rcu callback for freeing it
++ * release it
+ * @orig_node: the orig node to free
+ */
+ void batadv_orig_node_free_ref(struct batadv_orig_node *orig_node)
+ {
+ if (atomic_dec_and_test(&orig_node->refcount))
+- call_rcu(&orig_node->rcu, batadv_orig_node_free_rcu);
+-}
+-
+-/**
+- * batadv_orig_node_free_ref_now - decrement the orig node refcounter and
+- * possibly free it (without rcu callback)
+- * @orig_node: the orig node to free
+- */
+-void batadv_orig_node_free_ref_now(struct batadv_orig_node *orig_node)
+-{
+- if (atomic_dec_and_test(&orig_node->refcount))
+- batadv_orig_node_free_rcu(&orig_node->rcu);
++ batadv_orig_node_release(orig_node);
+ }
+
+ void batadv_originator_free(struct batadv_priv *bat_priv)
+diff --git a/net/batman-adv/originator.h b/net/batman-adv/originator.h
+index fa18f9bf266b..a5c37882b409 100644
+--- a/net/batman-adv/originator.h
++++ b/net/batman-adv/originator.h
+@@ -38,7 +38,6 @@ int batadv_originator_init(struct batadv_priv *bat_priv);
+ void batadv_originator_free(struct batadv_priv *bat_priv);
+ void batadv_purge_orig_ref(struct batadv_priv *bat_priv);
+ void batadv_orig_node_free_ref(struct batadv_orig_node *orig_node);
+-void batadv_orig_node_free_ref_now(struct batadv_orig_node *orig_node);
+ struct batadv_orig_node *batadv_orig_node_new(struct batadv_priv *bat_priv,
+ const u8 *addr);
+ struct batadv_neigh_node *
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 4228b10c47ea..900e94be4393 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -238,20 +238,6 @@ int batadv_tt_global_hash_count(struct batadv_priv *bat_priv,
+ return count;
+ }
+
+-static void batadv_tt_orig_list_entry_free_rcu(struct rcu_head *rcu)
+-{
+- struct batadv_tt_orig_list_entry *orig_entry;
+-
+- orig_entry = container_of(rcu, struct batadv_tt_orig_list_entry, rcu);
+-
+- /* We are in an rcu callback here, therefore we cannot use
+- * batadv_orig_node_free_ref() and its call_rcu():
+- * An rcu_barrier() wouldn't wait for that to finish
+- */
+- batadv_orig_node_free_ref_now(orig_entry->orig_node);
+- kfree(orig_entry);
+-}
+-
+ /**
+ * batadv_tt_local_size_mod - change the size by v of the local table identified
+ * by vid
+@@ -347,13 +333,25 @@ static void batadv_tt_global_size_dec(struct batadv_orig_node *orig_node,
+ batadv_tt_global_size_mod(orig_node, vid, -1);
+ }
+
++/**
++ * batadv_tt_orig_list_entry_release - release tt orig entry from lists and
++ * queue for free after rcu grace period
++ * @orig_entry: tt orig entry to be free'd
++ */
++static void
++batadv_tt_orig_list_entry_release(struct batadv_tt_orig_list_entry *orig_entry)
++{
++ batadv_orig_node_free_ref(orig_entry->orig_node);
++ kfree_rcu(orig_entry, rcu);
++}
++
+ static void
+ batadv_tt_orig_list_entry_free_ref(struct batadv_tt_orig_list_entry *orig_entry)
+ {
+ if (!atomic_dec_and_test(&orig_entry->refcount))
+ return;
+
+- call_rcu(&orig_entry->rcu, batadv_tt_orig_list_entry_free_rcu);
++ batadv_tt_orig_list_entry_release(orig_entry);
+ }
+
+ /**
+diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
+index 6ed2feb51e3c..9780603ba411 100644
+--- a/net/bridge/br_device.c
++++ b/net/bridge/br_device.c
+@@ -28,6 +28,8 @@
+ const struct nf_br_ops __rcu *nf_br_ops __read_mostly;
+ EXPORT_SYMBOL_GPL(nf_br_ops);
+
++static struct lock_class_key bridge_netdev_addr_lock_key;
++
+ /* net device transmit always called with BH disabled */
+ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+@@ -87,6 +89,11 @@ out:
+ return NETDEV_TX_OK;
+ }
+
++static void br_set_lockdep_class(struct net_device *dev)
++{
++ lockdep_set_class(&dev->addr_list_lock, &bridge_netdev_addr_lock_key);
++}
++
+ static int br_dev_init(struct net_device *dev)
+ {
+ struct net_bridge *br = netdev_priv(dev);
+@@ -99,6 +106,7 @@ static int br_dev_init(struct net_device *dev)
+ err = br_vlan_init(br);
+ if (err)
+ free_percpu(br->stats);
++ br_set_lockdep_class(dev);
+
+ return err;
+ }
+diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c
+index 4ca449a16132..49d8d28222d8 100644
+--- a/net/bridge/br_stp_if.c
++++ b/net/bridge/br_stp_if.c
+@@ -130,7 +130,10 @@ static void br_stp_start(struct net_bridge *br)
+ char *envp[] = { NULL };
+ struct net_bridge_port *p;
+
+- r = call_usermodehelper(BR_STP_PROG, argv, envp, UMH_WAIT_PROC);
++ if (net_eq(dev_net(br->dev), &init_net))
++ r = call_usermodehelper(BR_STP_PROG, argv, envp, UMH_WAIT_PROC);
++ else
++ r = -ENOENT;
+
+ spin_lock_bh(&br->lock);
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c14748d051e7..6369c456e326 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2539,6 +2539,8 @@ static inline bool skb_needs_check(struct sk_buff *skb, bool tx_path)
+ *
+ * It may return NULL if the skb requires no segmentation. This is
+ * only possible when GSO is used for verifying header integrity.
++ *
++ * Segmentation preserves SKB_SGO_CB_OFFSET bytes of previous skb cb.
+ */
+ struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
+ netdev_features_t features, bool tx_path)
+@@ -2553,6 +2555,9 @@ struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
+ return ERR_PTR(err);
+ }
+
++ BUILD_BUG_ON(SKB_SGO_CB_OFFSET +
++ sizeof(*SKB_GSO_CB(skb)) > sizeof(skb->cb));
++
+ SKB_GSO_CB(skb)->mac_offset = skb_headroom(skb);
+ SKB_GSO_CB(skb)->encap_level = 0;
+
+diff --git a/net/core/dst.c b/net/core/dst.c
+index d6a5a0bc7df5..8852021a7093 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -301,12 +301,13 @@ void dst_release(struct dst_entry *dst)
+ {
+ if (dst) {
+ int newrefcnt;
++ unsigned short nocache = dst->flags & DST_NOCACHE;
+
+ newrefcnt = atomic_dec_return(&dst->__refcnt);
+ if (unlikely(newrefcnt < 0))
+ net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
+ __func__, dst, newrefcnt);
+- if (!newrefcnt && unlikely(dst->flags & DST_NOCACHE))
++ if (!newrefcnt && unlikely(nocache))
+ call_rcu(&dst->rcu_head, dst_destroy_rcu);
+ }
+ }
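The dst_release() fix above is a sample-before-release pattern: DST_NOCACHE is copied out of dst->flags before the refcount drops, because once the count can reach zero another CPU may free the entry, making the old post-decrement dst->flags read a potential use-after-free. A minimal sketch of the pattern (plain C11 atomics and free() standing in for the kernel's refcounting and call_rcu()):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_int refcnt;
	unsigned short flags;
};

#define OBJ_NOCACHE 0x1

/* Copy any field you still need before dropping your reference: after
 * the decrement the object may already be gone. */
static void obj_put(struct obj *obj)
{
	unsigned short nocache = obj->flags & OBJ_NOCACHE; /* sample first */
	int newref = atomic_fetch_sub(&obj->refcnt, 1) - 1;

	if (newref == 0 && nocache)
		free(obj); /* stands in for call_rcu() in the kernel */
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (!o)
		return 1;
	atomic_store(&o->refcnt, 1);
	o->flags = OBJ_NOCACHE;
	obj_put(o); /* frees o without touching it after the decrement */
	return 0;
}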
+diff --git a/net/core/filter.c b/net/core/filter.c
+index bb18c3680001..49b44879dc7f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -781,6 +781,11 @@ static int bpf_check_classic(const struct sock_filter *filter,
+ if (ftest->k == 0)
+ return -EINVAL;
+ break;
++ case BPF_ALU | BPF_LSH | BPF_K:
++ case BPF_ALU | BPF_RSH | BPF_K:
++ if (ftest->k >= 32)
++ return -EINVAL;
++ break;
+ case BPF_LD | BPF_MEM:
+ case BPF_LDX | BPF_MEM:
+ case BPF_ST:
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index de8d5cc5eb24..4da4d51a2ccf 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -2787,7 +2787,9 @@ static struct sk_buff *pktgen_alloc_skb(struct net_device *dev,
+ } else {
+ skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
+ }
+- skb_reserve(skb, LL_RESERVED_SPACE(dev));
++
++ if (likely(skb))
++ skb_reserve(skb, LL_RESERVED_SPACE(dev));
+
+ return skb;
+ }
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 0138fada0951..b945f1e9d7ba 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -240,6 +240,7 @@ static int ip_finish_output_gso(struct sock *sk, struct sk_buff *skb,
+ * from host network stack.
+ */
+ features = netif_skb_features(skb);
++ BUILD_BUG_ON(sizeof(*IPCB(skb)) > SKB_SGO_CB_OFFSET);
+ segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+ if (IS_ERR_OR_NULL(segs)) {
+ kfree_skb(skb);
+@@ -918,7 +919,7 @@ static int __ip_append_data(struct sock *sk,
+ if (((length > mtu) || (skb && skb_is_gso(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len &&
+- (sk->sk_type == SOCK_DGRAM)) {
++ (sk->sk_type == SOCK_DGRAM) && !sk->sk_no_check_tx) {
+ err = ip_ufo_append_data(sk, queue, getfrag, from, length,
+ hh_len, fragheaderlen, transhdrlen,
+ maxfraglen, flags);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0a2b61dbcd4e..064f1a0bef6d 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2525,6 +2525,9 @@ static void tcp_cwnd_reduction(struct sock *sk, const int prior_unsacked,
+ int newly_acked_sacked = prior_unsacked -
+ (tp->packets_out - tp->sacked_out);
+
++ if (newly_acked_sacked <= 0 || WARN_ON_ONCE(!tp->prior_cwnd))
++ return;
++
+ tp->prr_delivered += newly_acked_sacked;
+ if (delta < 0) {
+ u64 dividend = (u64)tp->snd_ssthresh * tp->prr_delivered +
+diff --git a/net/ipv4/tcp_yeah.c b/net/ipv4/tcp_yeah.c
+index 17d35662930d..3e6a472e6b88 100644
+--- a/net/ipv4/tcp_yeah.c
++++ b/net/ipv4/tcp_yeah.c
+@@ -219,7 +219,7 @@ static u32 tcp_yeah_ssthresh(struct sock *sk)
+ yeah->fast_count = 0;
+ yeah->reno_count = max(yeah->reno_count>>1, 2U);
+
+- return tp->snd_cwnd - reduction;
++ return max_t(int, tp->snd_cwnd - reduction, 2);
+ }
+
+ static struct tcp_congestion_ops tcp_yeah __read_mostly = {
+diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c
+index c10a9ee68433..126ff9020bad 100644
+--- a/net/ipv4/xfrm4_policy.c
++++ b/net/ipv4/xfrm4_policy.c
+@@ -236,7 +236,7 @@ static void xfrm4_dst_ifdown(struct dst_entry *dst, struct net_device *dev,
+ xfrm_dst_ifdown(dst, dev);
+ }
+
+-static struct dst_ops xfrm4_dst_ops = {
++static struct dst_ops xfrm4_dst_ops_template = {
+ .family = AF_INET,
+ .gc = xfrm4_garbage_collect,
+ .update_pmtu = xfrm4_update_pmtu,
+@@ -250,7 +250,7 @@ static struct dst_ops xfrm4_dst_ops = {
+
+ static struct xfrm_policy_afinfo xfrm4_policy_afinfo = {
+ .family = AF_INET,
+- .dst_ops = &xfrm4_dst_ops,
++ .dst_ops = &xfrm4_dst_ops_template,
+ .dst_lookup = xfrm4_dst_lookup,
+ .get_saddr = xfrm4_get_saddr,
+ .decode_session = _decode_session4,
+@@ -272,7 +272,7 @@ static struct ctl_table xfrm4_policy_table[] = {
+ { }
+ };
+
+-static int __net_init xfrm4_net_init(struct net *net)
++static int __net_init xfrm4_net_sysctl_init(struct net *net)
+ {
+ struct ctl_table *table;
+ struct ctl_table_header *hdr;
+@@ -300,7 +300,7 @@ err_alloc:
+ return -ENOMEM;
+ }
+
+-static void __net_exit xfrm4_net_exit(struct net *net)
++static void __net_exit xfrm4_net_sysctl_exit(struct net *net)
+ {
+ struct ctl_table *table;
+
+@@ -312,12 +312,44 @@ static void __net_exit xfrm4_net_exit(struct net *net)
+ if (!net_eq(net, &init_net))
+ kfree(table);
+ }
++#else /* CONFIG_SYSCTL */
++static int inline xfrm4_net_sysctl_init(struct net *net)
++{
++ return 0;
++}
++
++static void inline xfrm4_net_sysctl_exit(struct net *net)
++{
++}
++#endif
++
++static int __net_init xfrm4_net_init(struct net *net)
++{
++ int ret;
++
++ memcpy(&net->xfrm.xfrm4_dst_ops, &xfrm4_dst_ops_template,
++ sizeof(xfrm4_dst_ops_template));
++ ret = dst_entries_init(&net->xfrm.xfrm4_dst_ops);
++ if (ret)
++ return ret;
++
++ ret = xfrm4_net_sysctl_init(net);
++ if (ret)
++ dst_entries_destroy(&net->xfrm.xfrm4_dst_ops);
++
++ return ret;
++}
++
++static void __net_exit xfrm4_net_exit(struct net *net)
++{
++ xfrm4_net_sysctl_exit(net);
++ dst_entries_destroy(&net->xfrm.xfrm4_dst_ops);
++}
+
+ static struct pernet_operations __net_initdata xfrm4_net_ops = {
+ .init = xfrm4_net_init,
+ .exit = xfrm4_net_exit,
+ };
+-#endif
+
+ static void __init xfrm4_policy_init(void)
+ {
+@@ -326,13 +358,9 @@ static void __init xfrm4_policy_init(void)
+
+ void __init xfrm4_init(void)
+ {
+- dst_entries_init(&xfrm4_dst_ops);
+-
+ xfrm4_state_init();
+ xfrm4_policy_init();
+ xfrm4_protocol_init();
+-#ifdef CONFIG_SYSCTL
+ register_pernet_subsys(&xfrm4_net_ops);
+-#endif
+ }
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index ddd351145dea..5462bfdbd2e7 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5349,13 +5349,10 @@ static int addrconf_sysctl_stable_secret(struct ctl_table *ctl, int write,
+ goto out;
+ }
+
+- if (!write) {
+- err = snprintf(str, sizeof(str), "%pI6",
+- &secret->secret);
+- if (err >= sizeof(str)) {
+- err = -EIO;
+- goto out;
+- }
++ err = snprintf(str, sizeof(str), "%pI6", &secret->secret);
++ if (err >= sizeof(str)) {
++ err = -EIO;
++ goto out;
+ }
+
+ err = proc_dostring(&lctl, write, buffer, lenp, ppos);
+diff --git a/net/ipv6/addrlabel.c b/net/ipv6/addrlabel.c
+index 882124ebb438..a8f6986dcbe5 100644
+--- a/net/ipv6/addrlabel.c
++++ b/net/ipv6/addrlabel.c
+@@ -552,7 +552,7 @@ static int ip6addrlbl_get(struct sk_buff *in_skb, struct nlmsghdr *nlh)
+
+ rcu_read_lock();
+ p = __ipv6_addr_label(net, addr, ipv6_addr_type(addr), ifal->ifal_index);
+- if (p && ip6addrlbl_hold(p))
++ if (p && !ip6addrlbl_hold(p))
+ p = NULL;
+ lseq = ip6addrlbl_table.seq;
+ rcu_read_unlock();
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index f84ec4e9b2de..fb7973a6e9c1 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1345,7 +1345,7 @@ emsgsize:
+ (skb && skb_is_gso(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->dst.dev->features & NETIF_F_UFO) &&
+- (sk->sk_type == SOCK_DGRAM)) {
++ (sk->sk_type == SOCK_DGRAM) && !udp_get_no_check6_tx(sk)) {
+ err = ip6_ufo_append_data(sk, queue, getfrag, from, length,
+ hh_len, fragheaderlen,
+ transhdrlen, mtu, flags, fl6);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 8935dc173e65..a71fb262ea3f 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -462,8 +462,10 @@ static int tcp_v6_send_synack(struct sock *sk, struct dst_entry *dst,
+ fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
+
+ skb_set_queue_mapping(skb, queue_mapping);
++ rcu_read_lock();
+ err = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt),
+ np->tclass);
++ rcu_read_unlock();
+ err = net_xmit_eval(err);
+ }
+
+diff --git a/net/ipv6/xfrm6_mode_tunnel.c b/net/ipv6/xfrm6_mode_tunnel.c
+index f7fbdbabe50e..372855eeaf42 100644
+--- a/net/ipv6/xfrm6_mode_tunnel.c
++++ b/net/ipv6/xfrm6_mode_tunnel.c
+@@ -23,7 +23,7 @@ static inline void ipip6_ecn_decapsulate(struct sk_buff *skb)
+ struct ipv6hdr *inner_iph = ipipv6_hdr(skb);
+
+ if (INET_ECN_is_ce(XFRM_MODE_SKB_CB(skb)->tos))
+- IP6_ECN_set_ce(inner_iph);
++ IP6_ECN_set_ce(skb, inner_iph);
+ }
+
+ /* Add encapsulation header.
+diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
+index da55e0c85bb8..d51a18d607ac 100644
+--- a/net/ipv6/xfrm6_policy.c
++++ b/net/ipv6/xfrm6_policy.c
+@@ -281,7 +281,7 @@ static void xfrm6_dst_ifdown(struct dst_entry *dst, struct net_device *dev,
+ xfrm_dst_ifdown(dst, dev);
+ }
+
+-static struct dst_ops xfrm6_dst_ops = {
++static struct dst_ops xfrm6_dst_ops_template = {
+ .family = AF_INET6,
+ .gc = xfrm6_garbage_collect,
+ .update_pmtu = xfrm6_update_pmtu,
+@@ -295,7 +295,7 @@ static struct dst_ops xfrm6_dst_ops = {
+
+ static struct xfrm_policy_afinfo xfrm6_policy_afinfo = {
+ .family = AF_INET6,
+- .dst_ops = &xfrm6_dst_ops,
++ .dst_ops = &xfrm6_dst_ops_template,
+ .dst_lookup = xfrm6_dst_lookup,
+ .get_saddr = xfrm6_get_saddr,
+ .decode_session = _decode_session6,
+@@ -327,7 +327,7 @@ static struct ctl_table xfrm6_policy_table[] = {
+ { }
+ };
+
+-static int __net_init xfrm6_net_init(struct net *net)
++static int __net_init xfrm6_net_sysctl_init(struct net *net)
+ {
+ struct ctl_table *table;
+ struct ctl_table_header *hdr;
+@@ -355,7 +355,7 @@ err_alloc:
+ return -ENOMEM;
+ }
+
+-static void __net_exit xfrm6_net_exit(struct net *net)
++static void __net_exit xfrm6_net_sysctl_exit(struct net *net)
+ {
+ struct ctl_table *table;
+
+@@ -367,24 +367,52 @@ static void __net_exit xfrm6_net_exit(struct net *net)
+ if (!net_eq(net, &init_net))
+ kfree(table);
+ }
++#else /* CONFIG_SYSCTL */
++static int inline xfrm6_net_sysctl_init(struct net *net)
++{
++ return 0;
++}
++
++static void inline xfrm6_net_sysctl_exit(struct net *net)
++{
++}
++#endif
++
++static int __net_init xfrm6_net_init(struct net *net)
++{
++ int ret;
++
++ memcpy(&net->xfrm.xfrm6_dst_ops, &xfrm6_dst_ops_template,
++ sizeof(xfrm6_dst_ops_template));
++ ret = dst_entries_init(&net->xfrm.xfrm6_dst_ops);
++ if (ret)
++ return ret;
++
++ ret = xfrm6_net_sysctl_init(net);
++ if (ret)
++ dst_entries_destroy(&net->xfrm.xfrm6_dst_ops);
++
++ return ret;
++}
++
++static void __net_exit xfrm6_net_exit(struct net *net)
++{
++ xfrm6_net_sysctl_exit(net);
++ dst_entries_destroy(&net->xfrm.xfrm6_dst_ops);
++}
+
+ static struct pernet_operations xfrm6_net_ops = {
+ .init = xfrm6_net_init,
+ .exit = xfrm6_net_exit,
+ };
+-#endif
+
+ int __init xfrm6_init(void)
+ {
+ int ret;
+
+- dst_entries_init(&xfrm6_dst_ops);
+-
+ ret = xfrm6_policy_init();
+- if (ret) {
+- dst_entries_destroy(&xfrm6_dst_ops);
++ if (ret)
+ goto out;
+- }
+ ret = xfrm6_state_init();
+ if (ret)
+ goto out_policy;
+@@ -393,9 +421,7 @@ int __init xfrm6_init(void)
+ if (ret)
+ goto out_state;
+
+-#ifdef CONFIG_SYSCTL
+ register_pernet_subsys(&xfrm6_net_ops);
+-#endif
+ out:
+ return ret;
+ out_state:
+@@ -407,11 +433,8 @@ out_policy:
+
+ void xfrm6_fini(void)
+ {
+-#ifdef CONFIG_SYSCTL
+ unregister_pernet_subsys(&xfrm6_net_ops);
+-#endif
+ xfrm6_protocol_fini();
+ xfrm6_policy_fini();
+ xfrm6_state_fini();
+- dst_entries_destroy(&xfrm6_dst_ops);
+ }
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index c5d08ee37730..6e9a2220939d 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -337,12 +337,10 @@ static int queue_gso_packets(struct datapath *dp, struct sk_buff *skb,
+ unsigned short gso_type = skb_shinfo(skb)->gso_type;
+ struct sw_flow_key later_key;
+ struct sk_buff *segs, *nskb;
+- struct ovs_skb_cb ovs_cb;
+ int err;
+
+- ovs_cb = *OVS_CB(skb);
++ BUILD_BUG_ON(sizeof(*OVS_CB(skb)) > SKB_SGO_CB_OFFSET);
+ segs = __skb_gso_segment(skb, NETIF_F_SG, false);
+- *OVS_CB(skb) = ovs_cb;
+ if (IS_ERR(segs))
+ return PTR_ERR(segs);
+ if (segs == NULL)
+@@ -360,7 +358,6 @@ static int queue_gso_packets(struct datapath *dp, struct sk_buff *skb,
+ /* Queue all of the segments. */
+ skb = segs;
+ do {
+- *OVS_CB(skb) = ovs_cb;
+ if (gso_type & SKB_GSO_UDP && skb != segs)
+ key = &later_key;
+
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 38536c137c54..45635118cc86 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2382,7 +2382,9 @@ static int set_action_to_attr(const struct nlattr *a, struct sk_buff *skb)
+ if (!start)
+ return -EMSGSIZE;
+
+- err = ovs_nla_put_tunnel_info(skb, tun_info);
++ err = ipv4_tun_to_nlattr(skb, &tun_info->key,
++ ip_tunnel_info_opts(tun_info),
++ tun_info->options_len);
+ if (err)
+ return err;
+ nla_nest_end(skb, start);
+diff --git a/net/phonet/af_phonet.c b/net/phonet/af_phonet.c
+index 10d42f3220ab..f925753668a7 100644
+--- a/net/phonet/af_phonet.c
++++ b/net/phonet/af_phonet.c
+@@ -377,6 +377,10 @@ static int phonet_rcv(struct sk_buff *skb, struct net_device *dev,
+ struct sockaddr_pn sa;
+ u16 len;
+
++ skb = skb_share_check(skb, GFP_ATOMIC);
++ if (!skb)
++ return NET_RX_DROP;
++
+ /* check we have at least a full Phonet header */
+ if (!pskb_pull(skb, sizeof(struct phonethdr)))
+ goto out;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 57692947ebbe..95b021243233 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -252,23 +252,28 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+ fl_set_key_val(tb, key->eth.src, TCA_FLOWER_KEY_ETH_SRC,
+ mask->eth.src, TCA_FLOWER_KEY_ETH_SRC_MASK,
+ sizeof(key->eth.src));
++
+ fl_set_key_val(tb, &key->basic.n_proto, TCA_FLOWER_KEY_ETH_TYPE,
+ &mask->basic.n_proto, TCA_FLOWER_UNSPEC,
+ sizeof(key->basic.n_proto));
++
+ if (key->basic.n_proto == htons(ETH_P_IP) ||
+ key->basic.n_proto == htons(ETH_P_IPV6)) {
+ fl_set_key_val(tb, &key->basic.ip_proto, TCA_FLOWER_KEY_IP_PROTO,
+ &mask->basic.ip_proto, TCA_FLOWER_UNSPEC,
+ sizeof(key->basic.ip_proto));
+ }
+- if (key->control.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
++
++ if (tb[TCA_FLOWER_KEY_IPV4_SRC] || tb[TCA_FLOWER_KEY_IPV4_DST]) {
++ key->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ fl_set_key_val(tb, &key->ipv4.src, TCA_FLOWER_KEY_IPV4_SRC,
+ &mask->ipv4.src, TCA_FLOWER_KEY_IPV4_SRC_MASK,
+ sizeof(key->ipv4.src));
+ fl_set_key_val(tb, &key->ipv4.dst, TCA_FLOWER_KEY_IPV4_DST,
+ &mask->ipv4.dst, TCA_FLOWER_KEY_IPV4_DST_MASK,
+ sizeof(key->ipv4.dst));
+- } else if (key->control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
++ } else if (tb[TCA_FLOWER_KEY_IPV6_SRC] || tb[TCA_FLOWER_KEY_IPV6_DST]) {
++ key->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ fl_set_key_val(tb, &key->ipv6.src, TCA_FLOWER_KEY_IPV6_SRC,
+ &mask->ipv6.src, TCA_FLOWER_KEY_IPV6_SRC_MASK,
+ sizeof(key->ipv6.src));
+@@ -276,6 +281,7 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+ &mask->ipv6.dst, TCA_FLOWER_KEY_IPV6_DST_MASK,
+ sizeof(key->ipv6.dst));
+ }
++
+ if (key->basic.ip_proto == IPPROTO_TCP) {
+ fl_set_key_val(tb, &key->tp.src, TCA_FLOWER_KEY_TCP_SRC,
+ &mask->tp.src, TCA_FLOWER_UNSPEC,
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index e82a1ad80aa5..16bc83b2842a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -658,8 +658,10 @@ static void qdisc_rcu_free(struct rcu_head *head)
+ {
+ struct Qdisc *qdisc = container_of(head, struct Qdisc, rcu_head);
+
+- if (qdisc_is_percpu_stats(qdisc))
++ if (qdisc_is_percpu_stats(qdisc)) {
+ free_percpu(qdisc->cpu_bstats);
++ free_percpu(qdisc->cpu_qstats);
++ }
+
+ kfree((char *) qdisc - qdisc->padded);
+ }
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index d7eaa7354cf7..c89586e2bacb 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -4829,7 +4829,8 @@ sctp_disposition_t sctp_sf_do_9_1_prm_abort(
+
+ retval = SCTP_DISPOSITION_CONSUME;
+
+- sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
++ if (abort)
++ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+
+ /* Even if we can't send the ABORT due to low memory delete the
+ * TCB. This is a departure from our typical NOMEM handling.
+@@ -4966,7 +4967,8 @@ sctp_disposition_t sctp_sf_cookie_wait_prm_abort(
+ SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
+ retval = SCTP_DISPOSITION_CONSUME;
+
+- sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
++ if (abort)
++ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+
+ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
+ SCTP_STATE(SCTP_STATE_CLOSED));
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 84b1b504538a..9dee804b35cd 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1513,8 +1513,7 @@ static void sctp_close(struct sock *sk, long timeout)
+ struct sctp_chunk *chunk;
+
+ chunk = sctp_make_abort_user(asoc, NULL, 0);
+- if (chunk)
+- sctp_primitive_ABORT(net, asoc, chunk);
++ sctp_primitive_ABORT(net, asoc, chunk);
+ } else
+ sctp_primitive_SHUTDOWN(net, asoc, NULL);
+ }
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index 26d50c565f54..3e0fc5127225 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -320,7 +320,7 @@ static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
+ struct ctl_table tbl;
+ bool changed = false;
+ char *none = "none";
+- char tmp[8];
++ char tmp[8] = {0};
+ int ret;
+
+ memset(&tbl, 0, sizeof(struct ctl_table));
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 0fc6dbaed39c..7926de14e930 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -952,32 +952,20 @@ fail:
+ return NULL;
+ }
+
+-static int unix_mknod(const char *sun_path, umode_t mode, struct path *res)
++static int unix_mknod(struct dentry *dentry, struct path *path, umode_t mode,
++ struct path *res)
+ {
+- struct dentry *dentry;
+- struct path path;
+- int err = 0;
+- /*
+- * Get the parent directory, calculate the hash for last
+- * component.
+- */
+- dentry = kern_path_create(AT_FDCWD, sun_path, &path, 0);
+- err = PTR_ERR(dentry);
+- if (IS_ERR(dentry))
+- return err;
++ int err;
+
+- /*
+- * All right, let's create it.
+- */
+- err = security_path_mknod(&path, dentry, mode, 0);
++ err = security_path_mknod(path, dentry, mode, 0);
+ if (!err) {
+- err = vfs_mknod(d_inode(path.dentry), dentry, mode, 0);
++ err = vfs_mknod(d_inode(path->dentry), dentry, mode, 0);
+ if (!err) {
+- res->mnt = mntget(path.mnt);
++ res->mnt = mntget(path->mnt);
+ res->dentry = dget(dentry);
+ }
+ }
+- done_path_create(&path, dentry);
++
+ return err;
+ }
+
+@@ -988,10 +976,12 @@ static int unix_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ struct unix_sock *u = unix_sk(sk);
+ struct sockaddr_un *sunaddr = (struct sockaddr_un *)uaddr;
+ char *sun_path = sunaddr->sun_path;
+- int err;
++ int err, name_err;
+ unsigned int hash;
+ struct unix_address *addr;
+ struct hlist_head *list;
++ struct path path;
++ struct dentry *dentry;
+
+ err = -EINVAL;
+ if (sunaddr->sun_family != AF_UNIX)
+@@ -1007,14 +997,34 @@ static int unix_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ goto out;
+ addr_len = err;
+
++ name_err = 0;
++ dentry = NULL;
++ if (sun_path[0]) {
++ /* Get the parent directory, calculate the hash for last
++ * component.
++ */
++ dentry = kern_path_create(AT_FDCWD, sun_path, &path, 0);
++
++ if (IS_ERR(dentry)) {
++ /* delay report until after 'already bound' check */
++ name_err = PTR_ERR(dentry);
++ dentry = NULL;
++ }
++ }
++
+ err = mutex_lock_interruptible(&u->readlock);
+ if (err)
+- goto out;
++ goto out_path;
+
+ err = -EINVAL;
+ if (u->addr)
+ goto out_up;
+
++ if (name_err) {
++ err = name_err == -EEXIST ? -EADDRINUSE : name_err;
++ goto out_up;
++ }
++
+ err = -ENOMEM;
+ addr = kmalloc(sizeof(*addr)+addr_len, GFP_KERNEL);
+ if (!addr)
+@@ -1025,11 +1035,11 @@ static int unix_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ addr->hash = hash ^ sk->sk_type;
+ atomic_set(&addr->refcnt, 1);
+
+- if (sun_path[0]) {
+- struct path path;
++ if (dentry) {
++ struct path u_path;
+ umode_t mode = S_IFSOCK |
+ (SOCK_INODE(sock)->i_mode & ~current_umask());
+- err = unix_mknod(sun_path, mode, &path);
++ err = unix_mknod(dentry, &path, mode, &u_path);
+ if (err) {
+ if (err == -EEXIST)
+ err = -EADDRINUSE;
+@@ -1037,9 +1047,9 @@ static int unix_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ goto out_up;
+ }
+ addr->hash = UNIX_HASH_SIZE;
+- hash = d_backing_inode(path.dentry)->i_ino & (UNIX_HASH_SIZE-1);
++ hash = d_backing_inode(dentry)->i_ino & (UNIX_HASH_SIZE - 1);
+ spin_lock(&unix_table_lock);
+- u->path = path;
++ u->path = u_path;
+ list = &unix_socket_table[hash];
+ } else {
+ spin_lock(&unix_table_lock);
+@@ -1062,6 +1072,10 @@ out_unlock:
+ spin_unlock(&unix_table_lock);
+ out_up:
+ mutex_unlock(&u->readlock);
++out_path:
++ if (dentry)
++ done_path_create(&path, dentry);
++
+ out:
+ return err;
+ }
+@@ -1498,6 +1512,21 @@ static void unix_destruct_scm(struct sk_buff *skb)
+ sock_wfree(skb);
+ }
+
++/*
++ * The "user->unix_inflight" variable is protected by the garbage
++ * collection lock, and we just read it locklessly here. If you go
++ * over the limit, there might be a tiny race in actually noticing
++ * it across threads. Tough.
++ */
++static inline bool too_many_unix_fds(struct task_struct *p)
++{
++ struct user_struct *user = current_user();
++
++ if (unlikely(user->unix_inflight > task_rlimit(p, RLIMIT_NOFILE)))
++ return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
++ return false;
++}
++
+ #define MAX_RECURSION_LEVEL 4
+
+ static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb)
+@@ -1506,6 +1535,9 @@ static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb)
+ unsigned char max_level = 0;
+ int unix_sock_count = 0;
+
++ if (too_many_unix_fds(current))
++ return -ETOOMANYREFS;
++
+ for (i = scm->fp->count - 1; i >= 0; i--) {
+ struct sock *sk = unix_get_socket(scm->fp->fp[i]);
+
+@@ -1527,10 +1559,8 @@ static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb)
+ if (!UNIXCB(skb).fp)
+ return -ENOMEM;
+
+- if (unix_sock_count) {
+- for (i = scm->fp->count - 1; i >= 0; i--)
+- unix_inflight(scm->fp->fp[i]);
+- }
++ for (i = scm->fp->count - 1; i >= 0; i--)
++ unix_inflight(scm->fp->fp[i]);
+ return max_level;
+ }
+
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index a73a226f2d33..8fcdc2283af5 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -120,11 +120,11 @@ void unix_inflight(struct file *fp)
+ {
+ struct sock *s = unix_get_socket(fp);
+
++ spin_lock(&unix_gc_lock);
++
+ if (s) {
+ struct unix_sock *u = unix_sk(s);
+
+- spin_lock(&unix_gc_lock);
+-
+ if (atomic_long_inc_return(&u->inflight) == 1) {
+ BUG_ON(!list_empty(&u->link));
+ list_add_tail(&u->link, &gc_inflight_list);
+@@ -132,25 +132,28 @@ void unix_inflight(struct file *fp)
+ BUG_ON(list_empty(&u->link));
+ }
+ unix_tot_inflight++;
+- spin_unlock(&unix_gc_lock);
+ }
++ fp->f_cred->user->unix_inflight++;
++ spin_unlock(&unix_gc_lock);
+ }
+
+ void unix_notinflight(struct file *fp)
+ {
+ struct sock *s = unix_get_socket(fp);
+
++ spin_lock(&unix_gc_lock);
++
+ if (s) {
+ struct unix_sock *u = unix_sk(s);
+
+- spin_lock(&unix_gc_lock);
+ BUG_ON(list_empty(&u->link));
+
+ if (atomic_long_dec_and_test(&u->inflight))
+ list_del_init(&u->link);
+ unix_tot_inflight--;
+- spin_unlock(&unix_gc_lock);
+ }
++ fp->f_cred->user->unix_inflight--;
++ spin_unlock(&unix_gc_lock);
+ }
+
+ static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *),
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 68ada2ca4b60..443f78c33de2 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -165,6 +165,8 @@ static int xfrm_output_gso(struct sock *sk, struct sk_buff *skb)
+ {
+ struct sk_buff *segs;
+
++ BUILD_BUG_ON(sizeof(*IPCB(skb)) > SKB_SGO_CB_OFFSET);
++ BUILD_BUG_ON(sizeof(*IP6CB(skb)) > SKB_SGO_CB_OFFSET);
+ segs = skb_gso_segment(skb, 0);
+ kfree_skb(skb);
+ if (IS_ERR(segs))
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 94af3d065785..bacd30bda10d 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2807,7 +2807,6 @@ static struct neighbour *xfrm_neigh_lookup(const struct dst_entry *dst,
+
+ int xfrm_policy_register_afinfo(struct xfrm_policy_afinfo *afinfo)
+ {
+- struct net *net;
+ int err = 0;
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+@@ -2838,26 +2837,6 @@ int xfrm_policy_register_afinfo(struct xfrm_policy_afinfo *afinfo)
+ }
+ spin_unlock(&xfrm_policy_afinfo_lock);
+
+- rtnl_lock();
+- for_each_net(net) {
+- struct dst_ops *xfrm_dst_ops;
+-
+- switch (afinfo->family) {
+- case AF_INET:
+- xfrm_dst_ops = &net->xfrm.xfrm4_dst_ops;
+- break;
+-#if IS_ENABLED(CONFIG_IPV6)
+- case AF_INET6:
+- xfrm_dst_ops = &net->xfrm.xfrm6_dst_ops;
+- break;
+-#endif
+- default:
+- BUG();
+- }
+- *xfrm_dst_ops = *afinfo->dst_ops;
+- }
+- rtnl_unlock();
+-
+ return err;
+ }
+ EXPORT_SYMBOL(xfrm_policy_register_afinfo);
+@@ -2893,22 +2872,6 @@ int xfrm_policy_unregister_afinfo(struct xfrm_policy_afinfo *afinfo)
+ }
+ EXPORT_SYMBOL(xfrm_policy_unregister_afinfo);
+
+-static void __net_init xfrm_dst_ops_init(struct net *net)
+-{
+- struct xfrm_policy_afinfo *afinfo;
+-
+- rcu_read_lock();
+- afinfo = rcu_dereference(xfrm_policy_afinfo[AF_INET]);
+- if (afinfo)
+- net->xfrm.xfrm4_dst_ops = *afinfo->dst_ops;
+-#if IS_ENABLED(CONFIG_IPV6)
+- afinfo = rcu_dereference(xfrm_policy_afinfo[AF_INET6]);
+- if (afinfo)
+- net->xfrm.xfrm6_dst_ops = *afinfo->dst_ops;
+-#endif
+- rcu_read_unlock();
+-}
+-
+ static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr)
+ {
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+@@ -3057,7 +3020,6 @@ static int __net_init xfrm_net_init(struct net *net)
+ rv = xfrm_policy_init(net);
+ if (rv < 0)
+ goto out_policy;
+- xfrm_dst_ops_init(net);
+ rv = xfrm_sysctl_init(net);
+ if (rv < 0)
+ goto out_sysctl;
+diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
+index 3d1984e59a30..e00bcd129336 100644
+--- a/scripts/recordmcount.c
++++ b/scripts/recordmcount.c
+@@ -42,6 +42,7 @@
+
+ #ifndef EM_AARCH64
+ #define EM_AARCH64 183
++#define R_AARCH64_NONE 0
+ #define R_AARCH64_ABS64 257
+ #endif
+
+@@ -160,6 +161,22 @@ static int make_nop_x86(void *map, size_t const offset)
+ return 0;
+ }
+
++static unsigned char ideal_nop4_arm64[4] = {0x1f, 0x20, 0x03, 0xd5};
++static int make_nop_arm64(void *map, size_t const offset)
++{
++ uint32_t *ptr;
++
++ ptr = map + offset;
++ /* bl <_mcount> is 0x94000000 before relocation */
++ if (*ptr != 0x94000000)
++ return -1;
++
++ /* Convert to nop */
++ ulseek(fd_map, offset, SEEK_SET);
++ uwrite(fd_map, ideal_nop, 4);
++ return 0;
++}
++
+ /*
+ * Get the whole file as a programming convenience in order to avoid
+ * malloc+lseek+read+free of many pieces. If successful, then mmap
+@@ -353,7 +370,12 @@ do_file(char const *const fname)
+ altmcount = "__gnu_mcount_nc";
+ break;
+ case EM_AARCH64:
+- reltype = R_AARCH64_ABS64; gpfx = '_'; break;
++ reltype = R_AARCH64_ABS64;
++ make_nop = make_nop_arm64;
++ rel_type_nop = R_AARCH64_NONE;
++ ideal_nop = ideal_nop4_arm64;
++ gpfx = '_';
++ break;
+ case EM_IA_64: reltype = R_IA64_IMM64; gpfx = '_'; break;
+ case EM_METAG: reltype = R_METAG_ADDR32;
+ altmcount = "_mcount_wrapper";
+diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
+index 49b582a225b0..b9897e2be404 100644
+--- a/scripts/recordmcount.h
++++ b/scripts/recordmcount.h
+@@ -377,7 +377,7 @@ static void nop_mcount(Elf_Shdr const *const relhdr,
+
+ if (mcountsym == Elf_r_sym(relp) && !is_fake_mcount(relp)) {
+ if (make_nop)
+- ret = make_nop((void *)ehdr, shdr->sh_offset + relp->r_offset);
++ ret = make_nop((void *)ehdr, _w(shdr->sh_offset) + _w(relp->r_offset));
+ if (warn_on_notrace_sect && !once) {
+ printf("Section %s has mcount callers being ignored\n",
+ txtname);
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index 826470d7f000..96e2486a6fc4 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -263,7 +263,8 @@ if ($arch eq "x86_64") {
+
+ } elsif ($arch eq "powerpc") {
+ $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\.?\\S+)";
+- $function_regex = "^([0-9a-fA-F]+)\\s+<(\\.?.*?)>:";
++ # See comment in the sparc64 section for why we use '\w'.
++ $function_regex = "^([0-9a-fA-F]+)\\s+<(\\.?\\w*?)>:";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s\\.?_mcount\$";
+
+ if ($bits == 64) {
+diff --git a/sound/core/control.c b/sound/core/control.c
+index 196a6fe100ca..a85d45595d02 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -1405,6 +1405,8 @@ static int snd_ctl_tlv_ioctl(struct snd_ctl_file *file,
+ return -EFAULT;
+ if (tlv.length < sizeof(unsigned int) * 2)
+ return -EINVAL;
++ if (!tlv.numid)
++ return -EINVAL;
+ down_read(&card->controls_rwsem);
+ kctl = snd_ctl_find_numid(card, tlv.numid);
+ if (kctl == NULL) {
+diff --git a/sound/core/hrtimer.c b/sound/core/hrtimer.c
+index f845ecf7e172..656d9a9032dc 100644
+--- a/sound/core/hrtimer.c
++++ b/sound/core/hrtimer.c
+@@ -90,7 +90,7 @@ static int snd_hrtimer_start(struct snd_timer *t)
+ struct snd_hrtimer *stime = t->private_data;
+
+ atomic_set(&stime->running, 0);
+- hrtimer_cancel(&stime->hrt);
++ hrtimer_try_to_cancel(&stime->hrt);
+ hrtimer_start(&stime->hrt, ns_to_ktime(t->sticks * resolution),
+ HRTIMER_MODE_REL);
+ atomic_set(&stime->running, 1);
+@@ -101,6 +101,7 @@ static int snd_hrtimer_stop(struct snd_timer *t)
+ {
+ struct snd_hrtimer *stime = t->private_data;
+ atomic_set(&stime->running, 0);
++ hrtimer_try_to_cancel(&stime->hrt);
+ return 0;
+ }
+
+diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
+index b48b434444ed..9630e9f72b7b 100644
+--- a/sound/core/pcm_compat.c
++++ b/sound/core/pcm_compat.c
+@@ -255,10 +255,15 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream,
+ if (! (runtime = substream->runtime))
+ return -ENOTTY;
+
+- /* only fifo_size is different, so just copy all */
+- data = memdup_user(data32, sizeof(*data32));
+- if (IS_ERR(data))
+- return PTR_ERR(data);
++ data = kmalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ /* only fifo_size (RO from userspace) is different, so just copy all */
++ if (copy_from_user(data, data32, sizeof(*data32))) {
++ err = -EFAULT;
++ goto error;
++ }
+
+ if (refine)
+ err = snd_pcm_hw_refine(substream, data);
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index b64f20deba90..13cfa815732d 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1962,7 +1962,7 @@ static int snd_seq_ioctl_remove_events(struct snd_seq_client *client,
+ * No restrictions so for a user client we can clear
+ * the whole fifo
+ */
+- if (client->type == USER_CLIENT)
++ if (client->type == USER_CLIENT && client->data.user.fifo)
+ snd_seq_fifo_clear(client->data.user.fifo);
+ }
+
+diff --git a/sound/core/seq/seq_compat.c b/sound/core/seq/seq_compat.c
+index 81f7c109dc46..65175902a68a 100644
+--- a/sound/core/seq/seq_compat.c
++++ b/sound/core/seq/seq_compat.c
+@@ -49,11 +49,12 @@ static int snd_seq_call_port_info_ioctl(struct snd_seq_client *client, unsigned
+ struct snd_seq_port_info *data;
+ mm_segment_t fs;
+
+- data = memdup_user(data32, sizeof(*data32));
+- if (IS_ERR(data))
+- return PTR_ERR(data);
++ data = kmalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
+
+- if (get_user(data->flags, &data32->flags) ||
++ if (copy_from_user(data, data32, sizeof(*data32)) ||
++ get_user(data->flags, &data32->flags) ||
+ get_user(data->time_queue, &data32->time_queue))
+ goto error;
+ data->kernel = NULL;
+diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c
+index 7dfd0f429410..0bec02e89d51 100644
+--- a/sound/core/seq/seq_queue.c
++++ b/sound/core/seq/seq_queue.c
+@@ -142,8 +142,10 @@ static struct snd_seq_queue *queue_new(int owner, int locked)
+ static void queue_delete(struct snd_seq_queue *q)
+ {
+ /* stop and release the timer */
++ mutex_lock(&q->timer_mutex);
+ snd_seq_timer_stop(q->timer);
+ snd_seq_timer_close(q);
++ mutex_unlock(&q->timer_mutex);
+ /* wait until access free */
+ snd_use_lock_sync(&q->use_lock);
+ /* release resources... */
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 31f40f03e5b7..0a049c4578f1 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -65,6 +65,7 @@ struct snd_timer_user {
+ int qtail;
+ int qused;
+ int queue_size;
++ bool disconnected;
+ struct snd_timer_read *queue;
+ struct snd_timer_tread *tqueue;
+ spinlock_t qlock;
+@@ -73,7 +74,7 @@ struct snd_timer_user {
+ struct timespec tstamp; /* trigger tstamp */
+ wait_queue_head_t qchange_sleep;
+ struct fasync_struct *fasync;
+- struct mutex tread_sem;
++ struct mutex ioctl_lock;
+ };
+
+ /* list of timers */
+@@ -215,11 +216,13 @@ static void snd_timer_check_master(struct snd_timer_instance *master)
+ slave->slave_id == master->slave_id) {
+ list_move_tail(&slave->open_list, &master->slave_list_head);
+ spin_lock_irq(&slave_active_lock);
++ spin_lock(&master->timer->lock);
+ slave->master = master;
+ slave->timer = master->timer;
+ if (slave->flags & SNDRV_TIMER_IFLG_RUNNING)
+ list_add_tail(&slave->active_list,
+ &master->slave_active_head);
++ spin_unlock(&master->timer->lock);
+ spin_unlock_irq(&slave_active_lock);
+ }
+ }
+@@ -288,6 +291,9 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ mutex_unlock(&register_mutex);
+ return -ENOMEM;
+ }
++ /* take a card refcount for safe disconnection */
++ if (timer->card)
++ get_device(&timer->card->card_dev);
+ timeri->slave_class = tid->dev_sclass;
+ timeri->slave_id = slave_id;
+ if (list_empty(&timer->open_list_head) && timer->hw.open)
+@@ -346,15 +352,21 @@ int snd_timer_close(struct snd_timer_instance *timeri)
+ timer->hw.close)
+ timer->hw.close(timer);
+ /* remove slave links */
++ spin_lock_irq(&slave_active_lock);
++ spin_lock(&timer->lock);
+ list_for_each_entry_safe(slave, tmp, &timeri->slave_list_head,
+ open_list) {
+- spin_lock_irq(&slave_active_lock);
+- _snd_timer_stop(slave, 1, SNDRV_TIMER_EVENT_RESOLUTION);
+ list_move_tail(&slave->open_list, &snd_timer_slave_list);
+ slave->master = NULL;
+ slave->timer = NULL;
+- spin_unlock_irq(&slave_active_lock);
++ list_del_init(&slave->ack_list);
++ list_del_init(&slave->active_list);
+ }
++ spin_unlock(&timer->lock);
++ spin_unlock_irq(&slave_active_lock);
++ /* release a card refcount for safe disconnection */
++ if (timer->card)
++ put_device(&timer->card->card_dev);
+ mutex_unlock(&register_mutex);
+ }
+ out:
+@@ -441,9 +453,12 @@ static int snd_timer_start_slave(struct snd_timer_instance *timeri)
+
+ spin_lock_irqsave(&slave_active_lock, flags);
+ timeri->flags |= SNDRV_TIMER_IFLG_RUNNING;
+- if (timeri->master)
++ if (timeri->master && timeri->timer) {
++ spin_lock(&timeri->timer->lock);
+ list_add_tail(&timeri->active_list,
+ &timeri->master->slave_active_head);
++ spin_unlock(&timeri->timer->lock);
++ }
+ spin_unlock_irqrestore(&slave_active_lock, flags);
+ return 1; /* delayed start */
+ }
+@@ -467,6 +482,8 @@ int snd_timer_start(struct snd_timer_instance *timeri, unsigned int ticks)
+ timer = timeri->timer;
+ if (timer == NULL)
+ return -EINVAL;
++ if (timer->card && timer->card->shutdown)
++ return -ENODEV;
+ spin_lock_irqsave(&timer->lock, flags);
+ timeri->ticks = timeri->cticks = ticks;
+ timeri->pticks = 0;
+@@ -489,6 +506,8 @@ static int _snd_timer_stop(struct snd_timer_instance * timeri,
+ if (!keep_flag) {
+ spin_lock_irqsave(&slave_active_lock, flags);
+ timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
++ list_del_init(&timeri->ack_list);
++ list_del_init(&timeri->active_list);
+ spin_unlock_irqrestore(&slave_active_lock, flags);
+ }
+ goto __end;
+@@ -499,6 +518,10 @@ static int _snd_timer_stop(struct snd_timer_instance * timeri,
+ spin_lock_irqsave(&timer->lock, flags);
+ list_del_init(&timeri->ack_list);
+ list_del_init(&timeri->active_list);
++ if (timer->card && timer->card->shutdown) {
++ spin_unlock_irqrestore(&timer->lock, flags);
++ return 0;
++ }
+ if ((timeri->flags & SNDRV_TIMER_IFLG_RUNNING) &&
+ !(--timer->running)) {
+ timer->hw.stop(timer);
+@@ -561,6 +584,8 @@ int snd_timer_continue(struct snd_timer_instance *timeri)
+ timer = timeri->timer;
+ if (! timer)
+ return -EINVAL;
++ if (timer->card && timer->card->shutdown)
++ return -ENODEV;
+ spin_lock_irqsave(&timer->lock, flags);
+ if (!timeri->cticks)
+ timeri->cticks = 1;
+@@ -624,6 +649,9 @@ static void snd_timer_tasklet(unsigned long arg)
+ unsigned long resolution, ticks;
+ unsigned long flags;
+
++ if (timer->card && timer->card->shutdown)
++ return;
++
+ spin_lock_irqsave(&timer->lock, flags);
+ /* now process all callbacks */
+ while (!list_empty(&timer->sack_list_head)) {
+@@ -664,6 +692,9 @@ void snd_timer_interrupt(struct snd_timer * timer, unsigned long ticks_left)
+ if (timer == NULL)
+ return;
+
++ if (timer->card && timer->card->shutdown)
++ return;
++
+ spin_lock_irqsave(&timer->lock, flags);
+
+ /* remember the current resolution */
+@@ -694,7 +725,7 @@ void snd_timer_interrupt(struct snd_timer * timer, unsigned long ticks_left)
+ } else {
+ ti->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
+ if (--timer->running)
+- list_del(&ti->active_list);
++ list_del_init(&ti->active_list);
+ }
+ if ((timer->hw.flags & SNDRV_TIMER_HW_TASKLET) ||
+ (ti->flags & SNDRV_TIMER_IFLG_FAST))
+@@ -874,11 +905,28 @@ static int snd_timer_dev_register(struct snd_device *dev)
+ return 0;
+ }
+
++/* just for reference in snd_timer_dev_disconnect() below */
++static void snd_timer_user_ccallback(struct snd_timer_instance *timeri,
++ int event, struct timespec *tstamp,
++ unsigned long resolution);
++
+ static int snd_timer_dev_disconnect(struct snd_device *device)
+ {
+ struct snd_timer *timer = device->device_data;
++ struct snd_timer_instance *ti;
++
+ mutex_lock(&register_mutex);
+ list_del_init(&timer->device_list);
++ /* wake up pending sleepers */
++ list_for_each_entry(ti, &timer->open_list_head, open_list) {
++ /* FIXME: better to have a ti.disconnect() op */
++ if (ti->ccallback == snd_timer_user_ccallback) {
++ struct snd_timer_user *tu = ti->callback_data;
++
++ tu->disconnected = true;
++ wake_up(&tu->qchange_sleep);
++ }
++ }
+ mutex_unlock(&register_mutex);
+ return 0;
+ }
+@@ -889,6 +937,8 @@ void snd_timer_notify(struct snd_timer *timer, int event, struct timespec *tstam
+ unsigned long resolution = 0;
+ struct snd_timer_instance *ti, *ts;
+
++ if (timer->card && timer->card->shutdown)
++ return;
+ if (! (timer->hw.flags & SNDRV_TIMER_HW_SLAVE))
+ return;
+ if (snd_BUG_ON(event < SNDRV_TIMER_EVENT_MSTART ||
+@@ -1047,6 +1097,8 @@ static void snd_timer_proc_read(struct snd_info_entry *entry,
+
+ mutex_lock(&register_mutex);
+ list_for_each_entry(timer, &snd_timer_list, device_list) {
++ if (timer->card && timer->card->shutdown)
++ continue;
+ switch (timer->tmr_class) {
+ case SNDRV_TIMER_CLASS_GLOBAL:
+ snd_iprintf(buffer, "G%i: ", timer->tmr_device);
+@@ -1253,7 +1305,7 @@ static int snd_timer_user_open(struct inode *inode, struct file *file)
+ return -ENOMEM;
+ spin_lock_init(&tu->qlock);
+ init_waitqueue_head(&tu->qchange_sleep);
+- mutex_init(&tu->tread_sem);
++ mutex_init(&tu->ioctl_lock);
+ tu->ticks = 1;
+ tu->queue_size = 128;
+ tu->queue = kmalloc(tu->queue_size * sizeof(struct snd_timer_read),
+@@ -1273,8 +1325,10 @@ static int snd_timer_user_release(struct inode *inode, struct file *file)
+ if (file->private_data) {
+ tu = file->private_data;
+ file->private_data = NULL;
++ mutex_lock(&tu->ioctl_lock);
+ if (tu->timeri)
+ snd_timer_close(tu->timeri);
++ mutex_unlock(&tu->ioctl_lock);
+ kfree(tu->queue);
+ kfree(tu->tqueue);
+ kfree(tu);
+@@ -1512,7 +1566,6 @@ static int snd_timer_user_tselect(struct file *file,
+ int err = 0;
+
+ tu = file->private_data;
+- mutex_lock(&tu->tread_sem);
+ if (tu->timeri) {
+ snd_timer_close(tu->timeri);
+ tu->timeri = NULL;
+@@ -1556,7 +1609,6 @@ static int snd_timer_user_tselect(struct file *file,
+ }
+
+ __err:
+- mutex_unlock(&tu->tread_sem);
+ return err;
+ }
+
+@@ -1769,7 +1821,7 @@ enum {
+ SNDRV_TIMER_IOCTL_PAUSE_OLD = _IO('T', 0x23),
+ };
+
+-static long snd_timer_user_ioctl(struct file *file, unsigned int cmd,
++static long __snd_timer_user_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+ {
+ struct snd_timer_user *tu;
+@@ -1786,17 +1838,11 @@ static long snd_timer_user_ioctl(struct file *file, unsigned int cmd,
+ {
+ int xarg;
+
+- mutex_lock(&tu->tread_sem);
+- if (tu->timeri) { /* too late */
+- mutex_unlock(&tu->tread_sem);
++ if (tu->timeri) /* too late */
+ return -EBUSY;
+- }
+- if (get_user(xarg, p)) {
+- mutex_unlock(&tu->tread_sem);
++ if (get_user(xarg, p))
+ return -EFAULT;
+- }
+ tu->tread = xarg ? 1 : 0;
+- mutex_unlock(&tu->tread_sem);
+ return 0;
+ }
+ case SNDRV_TIMER_IOCTL_GINFO:
+@@ -1829,6 +1875,18 @@ static long snd_timer_user_ioctl(struct file *file, unsigned int cmd,
+ return -ENOTTY;
+ }
+
++static long snd_timer_user_ioctl(struct file *file, unsigned int cmd,
++ unsigned long arg)
++{
++ struct snd_timer_user *tu = file->private_data;
++ long ret;
++
++ mutex_lock(&tu->ioctl_lock);
++ ret = __snd_timer_user_ioctl(file, cmd, arg);
++ mutex_unlock(&tu->ioctl_lock);
++ return ret;
++}
++
+ static int snd_timer_user_fasync(int fd, struct file * file, int on)
+ {
+ struct snd_timer_user *tu;
+@@ -1866,6 +1924,10 @@ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+
+ remove_wait_queue(&tu->qchange_sleep, &wait);
+
++ if (tu->disconnected) {
++ err = -ENODEV;
++ break;
++ }
+ if (signal_pending(current)) {
+ err = -ERESTARTSYS;
+ break;
+@@ -1915,6 +1977,8 @@ static unsigned int snd_timer_user_poll(struct file *file, poll_table * wait)
+ mask = 0;
+ if (tu->qused)
+ mask |= POLLIN | POLLRDNORM;
++ if (tu->disconnected)
++ mask |= POLLERR;
+
+ return mask;
+ }
+diff --git a/sound/firewire/bebob/Makefile b/sound/firewire/bebob/Makefile
+index 6cf470c80d1f..af7ed6643266 100644
+--- a/sound/firewire/bebob/Makefile
++++ b/sound/firewire/bebob/Makefile
+@@ -1,4 +1,4 @@
+ snd-bebob-objs := bebob_command.o bebob_stream.o bebob_proc.o bebob_midi.o \
+ bebob_pcm.o bebob_hwdep.o bebob_terratec.o bebob_yamaha.o \
+ bebob_focusrite.o bebob_maudio.o bebob.o
+-obj-m += snd-bebob.o
++obj-$(CONFIG_SND_BEBOB) += snd-bebob.o
+diff --git a/sound/firewire/dice/Makefile b/sound/firewire/dice/Makefile
+index 9ef228ef7baf..55b4be9b0034 100644
+--- a/sound/firewire/dice/Makefile
++++ b/sound/firewire/dice/Makefile
+@@ -1,3 +1,3 @@
+ snd-dice-objs := dice-transaction.o dice-stream.o dice-proc.o dice-midi.o \
+ dice-pcm.o dice-hwdep.o dice.o
+-obj-m += snd-dice.o
++obj-$(CONFIG_SND_DICE) += snd-dice.o
+diff --git a/sound/firewire/fireworks/Makefile b/sound/firewire/fireworks/Makefile
+index 0c7440826db8..15ef7f75a8ef 100644
+--- a/sound/firewire/fireworks/Makefile
++++ b/sound/firewire/fireworks/Makefile
+@@ -1,4 +1,4 @@
+ snd-fireworks-objs := fireworks_transaction.o fireworks_command.o \
+ fireworks_stream.o fireworks_proc.o fireworks_midi.o \
+ fireworks_pcm.o fireworks_hwdep.o fireworks.o
+-obj-m += snd-fireworks.o
++obj-$(CONFIG_SND_FIREWORKS) += snd-fireworks.o
+diff --git a/sound/firewire/oxfw/Makefile b/sound/firewire/oxfw/Makefile
+index a926850864f6..06ff50f4e6c0 100644
+--- a/sound/firewire/oxfw/Makefile
++++ b/sound/firewire/oxfw/Makefile
+@@ -1,3 +1,3 @@
+ snd-oxfw-objs := oxfw-command.o oxfw-stream.o oxfw-control.o oxfw-pcm.o \
+ oxfw-proc.o oxfw-midi.o oxfw-hwdep.o oxfw.o
+-obj-m += snd-oxfw.o
++obj-$(CONFIG_SND_OXFW) += snd-oxfw.o
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 944455997fdc..4013af376327 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -1059,6 +1059,9 @@ int azx_bus_init(struct azx *chip, const char *model,
+ bus->needs_damn_long_delay = 1;
+ }
+
++ if (chip->driver_caps & AZX_DCAPS_4K_BDLE_BOUNDARY)
++ bus->core.align_bdle_4k = true;
++
+ /* AMD chipsets often cause the communication stalls upon certain
+ * sequence like the pin-detection. It seems that forcing the synced
+ * access works around the stall. Grrr...
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index c38c68f57938..e61fbf4270e1 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -334,6 +334,7 @@ enum {
+
+ #define AZX_DCAPS_PRESET_CTHDA \
+ (AZX_DCAPS_NO_MSI | AZX_DCAPS_POSFIX_LPIB |\
++ AZX_DCAPS_NO_64BIT |\
+ AZX_DCAPS_4K_BDLE_BOUNDARY | AZX_DCAPS_SNOOP_OFF)
+
+ /*
+@@ -926,6 +927,36 @@ static int azx_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM_SLEEP || SUPPORT_VGA_SWITCHEROO */
+
++#ifdef CONFIG_PM_SLEEP
++/* put codec down to D3 at hibernation for Intel SKL+;
++ * otherwise BIOS may still access the codec and screw up the driver
++ */
++#define IS_SKL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa170)
++#define IS_SKL_LP(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9d70)
++#define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
++#define IS_SKL_PLUS(pci) (IS_SKL(pci) || IS_SKL_LP(pci) || IS_BXT(pci))
++
++static int azx_freeze_noirq(struct device *dev)
++{
++ struct pci_dev *pci = to_pci_dev(dev);
++
++ if (IS_SKL_PLUS(pci))
++ pci_set_power_state(pci, PCI_D3hot);
++
++ return 0;
++}
++
++static int azx_thaw_noirq(struct device *dev)
++{
++ struct pci_dev *pci = to_pci_dev(dev);
++
++ if (IS_SKL_PLUS(pci))
++ pci_set_power_state(pci, PCI_D0);
++
++ return 0;
++}
++#endif /* CONFIG_PM_SLEEP */
++
+ #ifdef CONFIG_PM
+ static int azx_runtime_suspend(struct device *dev)
+ {
+@@ -1035,6 +1066,10 @@ static int azx_runtime_idle(struct device *dev)
+
+ static const struct dev_pm_ops azx_pm = {
+ SET_SYSTEM_SLEEP_PM_OPS(azx_suspend, azx_resume)
++#ifdef CONFIG_PM_SLEEP
++ .freeze_noirq = azx_freeze_noirq,
++ .thaw_noirq = azx_thaw_noirq,
++#endif
+ SET_RUNTIME_PM_OPS(azx_runtime_suspend, azx_runtime_resume, azx_runtime_idle)
+ };
+
+@@ -2065,9 +2100,17 @@ i915_power_fail:
+ static void azx_remove(struct pci_dev *pci)
+ {
+ struct snd_card *card = pci_get_drvdata(pci);
++ struct azx *chip;
++ struct hda_intel *hda;
++
++ if (card) {
++ /* flush the pending probing work */
++ chip = card->private_data;
++ hda = container_of(chip, struct hda_intel, chip);
++ flush_work(&hda->probe_work);
+
+- if (card)
+ snd_card_free(card);
++ }
+ }
+
+ static void azx_shutdown(struct pci_dev *pci)
+@@ -2104,6 +2147,11 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+ { PCI_DEVICE(0x8086, 0x8d21),
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
++ /* Lewisburg */
++ { PCI_DEVICE(0x8086, 0xa1f0),
++ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
++ { PCI_DEVICE(0x8086, 0xa270),
++ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+ /* Lynx Point-LP */
+ { PCI_DEVICE(0x8086, 0x9c20),
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+@@ -2284,11 +2332,13 @@ static const struct pci_device_id azx_ids[] = {
+ .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
+ .class_mask = 0xffffff,
+ .driver_data = AZX_DRIVER_CTX | AZX_DCAPS_CTX_WORKAROUND |
++ AZX_DCAPS_NO_64BIT |
+ AZX_DCAPS_RIRB_PRE_DELAY | AZX_DCAPS_POSFIX_LPIB },
+ #else
+ /* this entry seems still valid -- i.e. without emu20kx chip */
+ { PCI_DEVICE(0x1102, 0x0009),
+ .driver_data = AZX_DRIVER_CTX | AZX_DCAPS_CTX_WORKAROUND |
++ AZX_DCAPS_NO_64BIT |
+ AZX_DCAPS_RIRB_PRE_DELAY | AZX_DCAPS_POSFIX_LPIB },
+ #endif
+ /* CM8888 */
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 186792fe226e..5b8a5b84a03c 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -778,7 +778,8 @@ static const struct hda_pintbl alienware_pincfgs[] = {
+ };
+
+ static const struct snd_pci_quirk ca0132_quirks[] = {
+- SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15", QUIRK_ALIENWARE),
++ SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE),
++ SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE),
+ {}
+ };
+
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index f22f5c409447..d1c74295a362 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2330,6 +2330,12 @@ static void intel_pin_eld_notify(void *audio_ptr, int port)
+ struct hda_codec *codec = audio_ptr;
+ int pin_nid = port + 0x04;
+
++ /* skip notification during system suspend (but not in runtime PM);
++ * the state will be updated at resume
++ */
++ if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
++ return;
++
+ check_presence_and_report(codec, pin_nid);
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 16b8dcba5c12..887f37761f18 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -67,6 +67,10 @@ enum {
+ ALC_HEADSET_TYPE_OMTP,
+ };
+
++enum {
++ ALC_KEY_MICMUTE_INDEX,
++};
++
+ struct alc_customize_define {
+ unsigned int sku_cfg;
+ unsigned char port_connectivity;
+@@ -111,6 +115,7 @@ struct alc_spec {
+ void (*power_hook)(struct hda_codec *codec);
+ #endif
+ void (*shutup)(struct hda_codec *codec);
++ void (*reboot_notify)(struct hda_codec *codec);
+
+ int init_amp;
+ int codec_variant; /* flag for other variants */
+@@ -122,6 +127,7 @@ struct alc_spec {
+ unsigned int pll_coef_idx, pll_coef_bit;
+ unsigned int coef0;
+ struct input_dev *kb_dev;
++ u8 alc_mute_keycode_map[1];
+ };
+
+ /*
+@@ -773,6 +779,25 @@ static inline void alc_shutup(struct hda_codec *codec)
+ snd_hda_shutup_pins(codec);
+ }
+
++static void alc_reboot_notify(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (spec && spec->reboot_notify)
++ spec->reboot_notify(codec);
++ else
++ alc_shutup(codec);
++}
++
++/* power down codec to D3 at reboot/shutdown; set as reboot_notify ops */
++static void alc_d3_at_reboot(struct hda_codec *codec)
++{
++ snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
++ snd_hda_codec_write(codec, codec->core.afg, 0,
++ AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
++ msleep(10);
++}
++
+ #define alc_free snd_hda_gen_free
+
+ #ifdef CONFIG_PM
+@@ -818,7 +843,7 @@ static const struct hda_codec_ops alc_patch_ops = {
+ .suspend = alc_suspend,
+ .check_power_status = snd_hda_gen_check_power_status,
+ #endif
+- .reboot_notify = alc_shutup,
++ .reboot_notify = alc_reboot_notify,
+ };
+
+
+@@ -1765,10 +1790,12 @@ enum {
+ ALC889_FIXUP_MBA11_VREF,
+ ALC889_FIXUP_MBA21_VREF,
+ ALC889_FIXUP_MP11_VREF,
++ ALC889_FIXUP_MP41_VREF,
+ ALC882_FIXUP_INV_DMIC,
+ ALC882_FIXUP_NO_PRIMARY_HP,
+ ALC887_FIXUP_ASUS_BASS,
+ ALC887_FIXUP_BASS_CHMAP,
++ ALC882_FIXUP_DISABLE_AAMIX,
+ };
+
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -1852,7 +1879,7 @@ static void alc889_fixup_mbp_vref(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+ struct alc_spec *spec = codec->spec;
+- static hda_nid_t nids[2] = { 0x14, 0x15 };
++ static hda_nid_t nids[3] = { 0x14, 0x15, 0x19 };
+ int i;
+
+ if (action != HDA_FIXUP_ACT_INIT)
+@@ -1930,6 +1957,8 @@ static void alc882_fixup_no_primary_hp(struct hda_codec *codec,
+
+ static void alc_fixup_bass_chmap(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action);
++static void alc_fixup_disable_aamix(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action);
+
+ static const struct hda_fixup alc882_fixups[] = {
+ [ALC882_FIXUP_ABIT_AW9D_MAX] = {
+@@ -2140,6 +2169,12 @@ static const struct hda_fixup alc882_fixups[] = {
+ .chained = true,
+ .chain_id = ALC885_FIXUP_MACPRO_GPIO,
+ },
++ [ALC889_FIXUP_MP41_VREF] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc889_fixup_mbp_vref,
++ .chained = true,
++ .chain_id = ALC885_FIXUP_MACPRO_GPIO,
++ },
+ [ALC882_FIXUP_INV_DMIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_inv_dmic,
+@@ -2161,6 +2196,10 @@ static const struct hda_fixup alc882_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_bass_chmap,
+ },
++ [ALC882_FIXUP_DISABLE_AAMIX] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_disable_aamix,
++ },
+ };
+
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2218,7 +2257,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x106b, 0x3f00, "Macbook 5,1", ALC889_FIXUP_IMAC91_VREF),
+ SND_PCI_QUIRK(0x106b, 0x4000, "MacbookPro 5,1", ALC889_FIXUP_IMAC91_VREF),
+ SND_PCI_QUIRK(0x106b, 0x4100, "Macmini 3,1", ALC889_FIXUP_IMAC91_VREF),
+- SND_PCI_QUIRK(0x106b, 0x4200, "Mac Pro 5,1", ALC885_FIXUP_MACPRO_GPIO),
++ SND_PCI_QUIRK(0x106b, 0x4200, "Mac Pro 4,1/5,1", ALC889_FIXUP_MP41_VREF),
+ SND_PCI_QUIRK(0x106b, 0x4300, "iMac 9,1", ALC889_FIXUP_IMAC91_VREF),
+ SND_PCI_QUIRK(0x106b, 0x4600, "MacbookPro 5,2", ALC889_FIXUP_IMAC91_VREF),
+ SND_PCI_QUIRK(0x106b, 0x4900, "iMac 9,1 Aluminum", ALC889_FIXUP_IMAC91_VREF),
+@@ -2228,6 +2267,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
+ SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1458, 0xa182, "Gigabyte Z170X-UD3", ALC882_FIXUP_DISABLE_AAMIX),
+ SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+@@ -3437,12 +3477,43 @@ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+
+ /* GPIO2 just toggles on a keypress/keyrelease cycle. Therefore
+ send both key on and key off event for every interrupt. */
+- input_report_key(spec->kb_dev, KEY_MICMUTE, 1);
++ input_report_key(spec->kb_dev, spec->alc_mute_keycode_map[ALC_KEY_MICMUTE_INDEX], 1);
+ input_sync(spec->kb_dev);
+- input_report_key(spec->kb_dev, KEY_MICMUTE, 0);
++ input_report_key(spec->kb_dev, spec->alc_mute_keycode_map[ALC_KEY_MICMUTE_INDEX], 0);
+ input_sync(spec->kb_dev);
+ }
+
++static int alc_register_micmute_input_device(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ int i;
++
++ spec->kb_dev = input_allocate_device();
++ if (!spec->kb_dev) {
++ codec_err(codec, "Out of memory (input_allocate_device)\n");
++ return -ENOMEM;
++ }
++
++ spec->alc_mute_keycode_map[ALC_KEY_MICMUTE_INDEX] = KEY_MICMUTE;
++
++ spec->kb_dev->name = "Microphone Mute Button";
++ spec->kb_dev->evbit[0] = BIT_MASK(EV_KEY);
++ spec->kb_dev->keycodesize = sizeof(spec->alc_mute_keycode_map[0]);
++ spec->kb_dev->keycodemax = ARRAY_SIZE(spec->alc_mute_keycode_map);
++ spec->kb_dev->keycode = spec->alc_mute_keycode_map;
++ for (i = 0; i < ARRAY_SIZE(spec->alc_mute_keycode_map); i++)
++ set_bit(spec->alc_mute_keycode_map[i], spec->kb_dev->keybit);
++
++ if (input_register_device(spec->kb_dev)) {
++ codec_err(codec, "input_register_device failed\n");
++ input_free_device(spec->kb_dev);
++ spec->kb_dev = NULL;
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
+ static void alc280_fixup_hp_gpio2_mic_hotkey(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -3460,20 +3531,8 @@ static void alc280_fixup_hp_gpio2_mic_hotkey(struct hda_codec *codec,
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+- spec->kb_dev = input_allocate_device();
+- if (!spec->kb_dev) {
+- codec_err(codec, "Out of memory (input_allocate_device)\n");
++ if (alc_register_micmute_input_device(codec) != 0)
+ return;
+- }
+- spec->kb_dev->name = "Microphone Mute Button";
+- spec->kb_dev->evbit[0] = BIT_MASK(EV_KEY);
+- spec->kb_dev->keybit[BIT_WORD(KEY_MICMUTE)] = BIT_MASK(KEY_MICMUTE);
+- if (input_register_device(spec->kb_dev)) {
+- codec_err(codec, "input_register_device failed\n");
+- input_free_device(spec->kb_dev);
+- spec->kb_dev = NULL;
+- return;
+- }
+
+ snd_hda_add_verbs(codec, gpio_init);
+ snd_hda_codec_write_cache(codec, codec->core.afg, 0,
+@@ -3503,6 +3562,47 @@ static void alc280_fixup_hp_gpio2_mic_hotkey(struct hda_codec *codec,
+ }
+ }
+
++static void alc233_fixup_lenovo_line2_mic_hotkey(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ /* Line2 = mic mute hotkey
++ GPIO2 = mic mute LED */
++ static const struct hda_verb gpio_init[] = {
++ { 0x01, AC_VERB_SET_GPIO_MASK, 0x04 },
++ { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x04 },
++ {}
++ };
++
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ if (alc_register_micmute_input_device(codec) != 0)
++ return;
++
++ snd_hda_add_verbs(codec, gpio_init);
++ snd_hda_jack_detect_enable_callback(codec, 0x1b,
++ gpio2_mic_hotkey_event);
++
++ spec->gen.cap_sync_hook = alc_fixup_gpio_mic_mute_hook;
++ spec->gpio_led = 0;
++ spec->mute_led_polarity = 0;
++ spec->gpio_mic_led_mask = 0x04;
++ return;
++ }
++
++ if (!spec->kb_dev)
++ return;
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PROBE:
++ spec->init_amp = ALC_INIT_DEFAULT;
++ break;
++ case HDA_FIXUP_ACT_FREE:
++ input_unregister_device(spec->kb_dev);
++ spec->kb_dev = NULL;
++ }
++}
++
+ static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -4200,6 +4300,8 @@ static void alc_fixup_tpt440_dock(struct hda_codec *codec,
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ spec->shutup = alc_no_shutup; /* reduce click noise */
++ spec->reboot_notify = alc_d3_at_reboot; /* reduce noise */
+ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
+ codec->power_save_node = 0; /* avoid click noises */
+ snd_hda_apply_pincfgs(codec, pincfgs);
+@@ -4574,12 +4676,14 @@ enum {
+ ALC290_FIXUP_SUBWOOFER,
+ ALC290_FIXUP_SUBWOOFER_HSJACK,
+ ALC269_FIXUP_THINKPAD_ACPI,
++ ALC269_FIXUP_DMIC_THINKPAD_ACPI,
+ ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
+ ALC255_FIXUP_HEADSET_MODE,
+ ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC292_FIXUP_TPT440_DOCK,
++ ALC292_FIXUP_TPT440,
+ ALC283_FIXUP_BXBT2807_MIC,
+ ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
+ ALC282_FIXUP_ASPIRE_V5_PINS,
+@@ -4595,7 +4699,12 @@ enum {
+ ALC288_FIXUP_DISABLE_AAMIX,
+ ALC292_FIXUP_DELL_E7X,
+ ALC292_FIXUP_DISABLE_AAMIX,
++ ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK,
+ ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC275_FIXUP_DELL_XPS,
++ ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE,
++ ALC293_FIXUP_LENOVO_SPK_NOISE,
++ ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -5005,6 +5114,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = hda_fixup_thinkpad_acpi,
+ },
++ [ALC269_FIXUP_DMIC_THINKPAD_ACPI] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_inv_dmic,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++ },
+ [ALC255_FIXUP_DELL1_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -5050,6 +5165,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
++ [ALC292_FIXUP_TPT440] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_disable_aamix,
++ .chained = true,
++ .chain_id = ALC292_FIXUP_TPT440_DOCK,
++ },
+ [ALC283_FIXUP_BXBT2807_MIC] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -5149,6 +5270,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE
+ },
++ [ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_disable_aamix,
++ .chained = true,
++ .chain_id = ALC293_FIXUP_DELL1_MIC_NO_PRESENCE
++ },
+ [ALC292_FIXUP_DELL_E7X] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_dell_xps13,
+@@ -5165,6 +5292,38 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE
+ },
++ [ALC275_FIXUP_DELL_XPS] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ /* Enables internal speaker */
++ {0x20, AC_VERB_SET_COEF_INDEX, 0x1f},
++ {0x20, AC_VERB_SET_PROC_COEF, 0x00c0},
++ {0x20, AC_VERB_SET_COEF_INDEX, 0x30},
++ {0x20, AC_VERB_SET_PROC_COEF, 0x00b1},
++ {}
++ }
++ },
++ [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ /* Disable pass-through path for FRONT 14h */
++ {0x20, AC_VERB_SET_COEF_INDEX, 0x36},
++ {0x20, AC_VERB_SET_PROC_COEF, 0x1737},
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
++ },
++ [ALC293_FIXUP_LENOVO_SPK_NOISE] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_disable_aamix,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI
++ },
++ [ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc233_fixup_lenovo_line2_mic_hotkey,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -5178,7 +5337,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
++ SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+ SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
++ SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
++ SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
++ SND_PCI_QUIRK(0x1028, 0x05be, "Dell Latitude E6540", ALC292_FIXUP_DELL_E7X),
+ SND_PCI_QUIRK(0x1028, 0x05ca, "Dell Latitude E7240", ALC292_FIXUP_DELL_E7X),
+ SND_PCI_QUIRK(0x1028, 0x05cb, "Dell Latitude E7440", ALC292_FIXUP_DELL_E7X),
+ SND_PCI_QUIRK(0x1028, 0x05da, "Dell Vostro 5460", ALC290_FIXUP_SUBWOOFER),
+@@ -5187,6 +5350,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
+ SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
++ SND_PCI_QUIRK(0x1028, 0x062c, "Dell Latitude E5550", ALC292_FIXUP_DELL_E7X),
+ SND_PCI_QUIRK(0x1028, 0x062e, "Dell Latitude E7450", ALC292_FIXUP_DELL_E7X),
+ SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
+ SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -5196,11 +5360,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x06c7, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x06da, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+- SND_PCI_QUIRK(0x1028, 0x06db, "Dell", ALC292_FIXUP_DISABLE_AAMIX),
+- SND_PCI_QUIRK(0x1028, 0x06dd, "Dell", ALC292_FIXUP_DISABLE_AAMIX),
+- SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC292_FIXUP_DISABLE_AAMIX),
+- SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC292_FIXUP_DISABLE_AAMIX),
+- SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC292_FIXUP_DISABLE_AAMIX),
++ SND_PCI_QUIRK(0x1028, 0x06db, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
++ SND_PCI_QUIRK(0x1028, 0x06dd, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
++ SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
++ SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
++ SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
++ SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -5299,15 +5464,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK),
+- SND_PCI_QUIRK(0x17aa, 0x220c, "Thinkpad T440s", ALC292_FIXUP_TPT440_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x220c, "Thinkpad T440s", ALC292_FIXUP_TPT440),
+ SND_PCI_QUIRK(0x17aa, 0x220e, "Thinkpad T440p", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2210, "Thinkpad T540p", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2211, "Thinkpad W541", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x17aa, 0x2218, "Thinkpad X1 Carbon 2nd", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE),
++ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
++ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -5317,6 +5486,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x504b, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE),
+ SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+@@ -5397,6 +5567,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-dac-wcaps"},
+ {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"},
+ {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"},
++ {.id = ALC292_FIXUP_TPT440, .name = "tpt440"},
+ {}
+ };
+
+@@ -5466,6 +5637,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x21, 0x02211040}),
+ SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ {0x12, 0x90a60170},
++ {0x14, 0x90171130},
++ {0x21, 0x02211040}),
++ SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++ {0x12, 0x90a60170},
+ {0x14, 0x90170140},
+ {0x21, 0x02211050}),
+ SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell Inspiron 5548", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+@@ -6383,6 +6558,7 @@ static const struct hda_fixup alc662_fixups[] = {
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1019, 0x9087, "ECS", ALC662_FIXUP_ASUS_MODE2),
+ SND_PCI_QUIRK(0x1025, 0x022f, "Acer Aspire One", ALC662_FIXUP_INV_DMIC),
++ SND_PCI_QUIRK(0x1025, 0x0241, "Packard Bell DOTS", ALC662_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0308, "Acer Aspire 8942G", ALC662_FIXUP_ASPIRE),
+ SND_PCI_QUIRK(0x1025, 0x031c, "Gateway NV79", ALC662_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC),
+@@ -6400,6 +6576,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A),
++ SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index def5cc8dff02..14a62b8117fd 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -702,6 +702,7 @@ static bool hp_bnb2011_with_dock(struct hda_codec *codec)
+ static bool hp_blike_system(u32 subsystem_id)
+ {
+ switch (subsystem_id) {
++ case 0x103c1473: /* HP ProBook 6550b */
+ case 0x103c1520:
+ case 0x103c1521:
+ case 0x103c1523:
+@@ -3109,6 +3110,29 @@ static void stac92hd71bxx_fixup_hp_hdx(struct hda_codec *codec,
+ spec->gpio_led = 0x08;
+ }
+
++static bool is_hp_output(struct hda_codec *codec, hda_nid_t pin)
++{
++ unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, pin);
++
++ /* count line-out, too, as BIOS sets often so */
++ return get_defcfg_connect(pin_cfg) != AC_JACK_PORT_NONE &&
++ (get_defcfg_device(pin_cfg) == AC_JACK_LINE_OUT ||
++ get_defcfg_device(pin_cfg) == AC_JACK_HP_OUT);
++}
++
++static void fixup_hp_headphone(struct hda_codec *codec, hda_nid_t pin)
++{
++ unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, pin);
++
++ /* It was changed in the BIOS to just satisfy MS DTM.
++ * Lets turn it back into slaved HP
++ */
++ pin_cfg = (pin_cfg & (~AC_DEFCFG_DEVICE)) |
++ (AC_JACK_HP_OUT << AC_DEFCFG_DEVICE_SHIFT);
++ pin_cfg = (pin_cfg & (~(AC_DEFCFG_DEF_ASSOC | AC_DEFCFG_SEQUENCE))) |
++ 0x1f;
++ snd_hda_codec_set_pincfg(codec, pin, pin_cfg);
++}
+
+ static void stac92hd71bxx_fixup_hp(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+@@ -3118,22 +3142,12 @@ static void stac92hd71bxx_fixup_hp(struct hda_codec *codec,
+ if (action != HDA_FIXUP_ACT_PRE_PROBE)
+ return;
+
+- if (hp_blike_system(codec->core.subsystem_id)) {
+- unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, 0x0f);
+- if (get_defcfg_device(pin_cfg) == AC_JACK_LINE_OUT ||
+- get_defcfg_device(pin_cfg) == AC_JACK_SPEAKER ||
+- get_defcfg_device(pin_cfg) == AC_JACK_HP_OUT) {
+- /* It was changed in the BIOS to just satisfy MS DTM.
+- * Lets turn it back into slaved HP
+- */
+- pin_cfg = (pin_cfg & (~AC_DEFCFG_DEVICE))
+- | (AC_JACK_HP_OUT <<
+- AC_DEFCFG_DEVICE_SHIFT);
+- pin_cfg = (pin_cfg & (~(AC_DEFCFG_DEF_ASSOC
+- | AC_DEFCFG_SEQUENCE)))
+- | 0x1f;
+- snd_hda_codec_set_pincfg(codec, 0x0f, pin_cfg);
+- }
++ /* when both output A and F are assigned, these are supposedly
++ * dock and built-in headphones; fix both pin configs
++ */
++ if (is_hp_output(codec, 0x0a) && is_hp_output(codec, 0x0f)) {
++ fixup_hp_headphone(codec, 0x0a);
++ fixup_hp_headphone(codec, 0x0f);
+ }
+
+ if (find_mute_led_cfg(codec, 1))
+diff --git a/sound/pci/rme96.c b/sound/pci/rme96.c
+index 2306ccf7281e..77c963ced67a 100644
+--- a/sound/pci/rme96.c
++++ b/sound/pci/rme96.c
+@@ -741,10 +741,11 @@ snd_rme96_playback_setrate(struct rme96 *rme96,
+ {
+ /* change to/from double-speed: reset the DAC (if available) */
+ snd_rme96_reset_dac(rme96);
++ return 1; /* need to restore volume */
+ } else {
+ writel(rme96->wcreg, rme96->iobase + RME96_IO_CONTROL_REGISTER);
++ return 0;
+ }
+- return 0;
+ }
+
+ static int
+@@ -980,6 +981,7 @@ snd_rme96_playback_hw_params(struct snd_pcm_substream *substream,
+ struct rme96 *rme96 = snd_pcm_substream_chip(substream);
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ int err, rate, dummy;
++ bool apply_dac_volume = false;
+
+ runtime->dma_area = (void __force *)(rme96->iobase +
+ RME96_IO_PLAY_BUFFER);
+@@ -993,24 +995,26 @@ snd_rme96_playback_hw_params(struct snd_pcm_substream *substream,
+ {
+ /* slave clock */
+ if ((int)params_rate(params) != rate) {
+- spin_unlock_irq(&rme96->lock);
+- return -EIO;
+- }
+- } else if ((err = snd_rme96_playback_setrate(rme96, params_rate(params))) < 0) {
+- spin_unlock_irq(&rme96->lock);
+- return err;
+- }
+- if ((err = snd_rme96_playback_setformat(rme96, params_format(params))) < 0) {
+- spin_unlock_irq(&rme96->lock);
+- return err;
++ err = -EIO;
++ goto error;
++ }
++ } else {
++ err = snd_rme96_playback_setrate(rme96, params_rate(params));
++ if (err < 0)
++ goto error;
++ apply_dac_volume = err > 0; /* need to restore volume later? */
+ }
++
++ err = snd_rme96_playback_setformat(rme96, params_format(params));
++ if (err < 0)
++ goto error;
+ snd_rme96_setframelog(rme96, params_channels(params), 1);
+ if (rme96->capture_periodsize != 0) {
+ if (params_period_size(params) << rme96->playback_frlog !=
+ rme96->capture_periodsize)
+ {
+- spin_unlock_irq(&rme96->lock);
+- return -EBUSY;
++ err = -EBUSY;
++ goto error;
+ }
+ }
+ rme96->playback_periodsize =
+@@ -1021,9 +1025,16 @@ snd_rme96_playback_hw_params(struct snd_pcm_substream *substream,
+ rme96->wcreg &= ~(RME96_WCR_PRO | RME96_WCR_DOLBY | RME96_WCR_EMP);
+ writel(rme96->wcreg |= rme96->wcreg_spdif_stream, rme96->iobase + RME96_IO_CONTROL_REGISTER);
+ }
++
++ err = 0;
++ error:
+ spin_unlock_irq(&rme96->lock);
+-
+- return 0;
++ if (apply_dac_volume) {
++ usleep_range(3000, 10000);
++ snd_rme96_apply_dac_volume(rme96);
++ }
++
++ return err;
+ }
+
+ static int
+diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c
+index 8a2221ab3d10..a3c8e734ff2f 100644
+--- a/sound/soc/codecs/arizona.c
++++ b/sound/soc/codecs/arizona.c
+@@ -1499,7 +1499,7 @@ static int arizona_hw_params(struct snd_pcm_substream *substream,
+ bool reconfig;
+ unsigned int aif_tx_state, aif_rx_state;
+
+- if (params_rate(params) % 8000)
++ if (params_rate(params) % 4000)
+ rates = &arizona_44k1_bclk_rates[0];
+ else
+ rates = &arizona_48k_bclk_rates[0];
+diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
+index 6a091016e0fc..fb7b61faeac5 100644
+--- a/sound/soc/codecs/es8328.c
++++ b/sound/soc/codecs/es8328.c
+@@ -85,7 +85,15 @@ static const DECLARE_TLV_DB_SCALE(pga_tlv, 0, 300, 0);
+ static const DECLARE_TLV_DB_SCALE(bypass_tlv, -1500, 300, 0);
+ static const DECLARE_TLV_DB_SCALE(mic_tlv, 0, 300, 0);
+
+-static const int deemph_settings[] = { 0, 32000, 44100, 48000 };
++static const struct {
++ int rate;
++ unsigned int val;
++} deemph_settings[] = {
++ { 0, ES8328_DACCONTROL6_DEEMPH_OFF },
++ { 32000, ES8328_DACCONTROL6_DEEMPH_32k },
++ { 44100, ES8328_DACCONTROL6_DEEMPH_44_1k },
++ { 48000, ES8328_DACCONTROL6_DEEMPH_48k },
++};
+
+ static int es8328_set_deemph(struct snd_soc_codec *codec)
+ {
+@@ -97,21 +105,22 @@ static int es8328_set_deemph(struct snd_soc_codec *codec)
+ * rate.
+ */
+ if (es8328->deemph) {
+- best = 1;
+- for (i = 2; i < ARRAY_SIZE(deemph_settings); i++) {
+- if (abs(deemph_settings[i] - es8328->playback_fs) <
+- abs(deemph_settings[best] - es8328->playback_fs))
++ best = 0;
++ for (i = 1; i < ARRAY_SIZE(deemph_settings); i++) {
++ if (abs(deemph_settings[i].rate - es8328->playback_fs) <
++ abs(deemph_settings[best].rate - es8328->playback_fs))
+ best = i;
+ }
+
+- val = best << 1;
++ val = deemph_settings[best].val;
+ } else {
+- val = 0;
++ val = ES8328_DACCONTROL6_DEEMPH_OFF;
+ }
+
+ dev_dbg(codec->dev, "Set deemphasis %d\n", val);
+
+- return snd_soc_update_bits(codec, ES8328_DACCONTROL6, 0x6, val);
++ return snd_soc_update_bits(codec, ES8328_DACCONTROL6,
++ ES8328_DACCONTROL6_DEEMPH_MASK, val);
+ }
+
+ static int es8328_get_deemph(struct snd_kcontrol *kcontrol,
+diff --git a/sound/soc/codecs/es8328.h b/sound/soc/codecs/es8328.h
+index cb36afe10c0e..156c748c89c7 100644
+--- a/sound/soc/codecs/es8328.h
++++ b/sound/soc/codecs/es8328.h
+@@ -153,6 +153,7 @@ int es8328_probe(struct device *dev, struct regmap *regmap);
+ #define ES8328_DACCONTROL6_CLICKFREE (1 << 3)
+ #define ES8328_DACCONTROL6_DAC_INVR (1 << 4)
+ #define ES8328_DACCONTROL6_DAC_INVL (1 << 5)
++#define ES8328_DACCONTROL6_DEEMPH_MASK (3 << 6)
+ #define ES8328_DACCONTROL6_DEEMPH_OFF (0 << 6)
+ #define ES8328_DACCONTROL6_DEEMPH_32k (1 << 6)
+ #define ES8328_DACCONTROL6_DEEMPH_44_1k (2 << 6)
+diff --git a/sound/soc/codecs/rt286.c b/sound/soc/codecs/rt286.c
+index bd9365885f73..2088dfa0612d 100644
+--- a/sound/soc/codecs/rt286.c
++++ b/sound/soc/codecs/rt286.c
+@@ -38,7 +38,7 @@
+ #define RT288_VENDOR_ID 0x10ec0288
+
+ struct rt286_priv {
+- const struct reg_default *index_cache;
++ struct reg_default *index_cache;
+ int index_cache_size;
+ struct regmap *regmap;
+ struct snd_soc_codec *codec;
+@@ -1161,7 +1161,11 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ return -ENODEV;
+ }
+
+- rt286->index_cache = rt286_index_def;
++ rt286->index_cache = devm_kmemdup(&i2c->dev, rt286_index_def,
++ sizeof(rt286_index_def), GFP_KERNEL);
++ if (!rt286->index_cache)
++ return -ENOMEM;
++
+ rt286->index_cache_size = INDEX_CACHE_SIZE;
+ rt286->i2c = i2c;
+ i2c_set_clientdata(i2c, rt286);
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index 9756578fc752..8b4a56f538ea 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -354,15 +354,13 @@ static int wm5110_hp_ev(struct snd_soc_dapm_widget *w,
+
+ static int wm5110_clear_pga_volume(struct arizona *arizona, int output)
+ {
+- struct reg_sequence clear_pga = {
+- ARIZONA_OUTPUT_PATH_CONFIG_1L + output * 4, 0x80
+- };
++ unsigned int reg = ARIZONA_OUTPUT_PATH_CONFIG_1L + output * 4;
+ int ret;
+
+- ret = regmap_multi_reg_write_bypassed(arizona->regmap, &clear_pga, 1);
++ ret = regmap_write(arizona->regmap, reg, 0x80);
+ if (ret)
+ dev_err(arizona->dev, "Failed to clear PGA (0x%x): %d\n",
+- clear_pga.reg, ret);
++ reg, ret);
+
+ return ret;
+ }
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 39ebd7bf4f53..a7e79784fc16 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -365,8 +365,8 @@ static const struct reg_default wm8962_reg[] = {
+ { 16924, 0x0059 }, /* R16924 - HDBASS_PG_1 */
+ { 16925, 0x999A }, /* R16925 - HDBASS_PG_0 */
+
+- { 17048, 0x0083 }, /* R17408 - HPF_C_1 */
+- { 17049, 0x98AD }, /* R17409 - HPF_C_0 */
++ { 17408, 0x0083 }, /* R17408 - HPF_C_1 */
++ { 17409, 0x98AD }, /* R17409 - HPF_C_0 */
+
+ { 17920, 0x007F }, /* R17920 - ADCL_RETUNE_C1_1 */
+ { 17921, 0xFFFF }, /* R17921 - ADCL_RETUNE_C1_0 */
+diff --git a/sound/soc/codecs/wm8974.c b/sound/soc/codecs/wm8974.c
+index 0a60677397b3..4c29bd2ae75c 100644
+--- a/sound/soc/codecs/wm8974.c
++++ b/sound/soc/codecs/wm8974.c
+@@ -574,6 +574,7 @@ static const struct regmap_config wm8974_regmap = {
+ .max_register = WM8974_MONOMIX,
+ .reg_defaults = wm8974_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(wm8974_reg_defaults),
++ .cache_type = REGCACHE_FLAT,
+ };
+
+ static int wm8974_probe(struct snd_soc_codec *codec)
+diff --git a/sound/soc/davinci/davinci-mcasp.c b/sound/soc/davinci/davinci-mcasp.c
+index 7d45d98a861f..226b3606e8a4 100644
+--- a/sound/soc/davinci/davinci-mcasp.c
++++ b/sound/soc/davinci/davinci-mcasp.c
+@@ -222,8 +222,8 @@ static void mcasp_start_tx(struct davinci_mcasp *mcasp)
+
+ /* wait for XDATA to be cleared */
+ cnt = 0;
+- while (!(mcasp_get_reg(mcasp, DAVINCI_MCASP_TXSTAT_REG) &
+- ~XRDATA) && (cnt < 100000))
++ while ((mcasp_get_reg(mcasp, DAVINCI_MCASP_TXSTAT_REG) & XRDATA) &&
++ (cnt < 100000))
+ cnt++;
+
+ /* Release TX state machine */
+diff --git a/sound/soc/sh/rcar/gen.c b/sound/soc/sh/rcar/gen.c
+index f04d17bc6e3d..916b38d54fda 100644
+--- a/sound/soc/sh/rcar/gen.c
++++ b/sound/soc/sh/rcar/gen.c
+@@ -231,7 +231,7 @@ static int rsnd_gen2_probe(struct platform_device *pdev,
+ RSND_GEN_S_REG(SCU_SYS_STATUS0, 0x1c8),
+ RSND_GEN_S_REG(SCU_SYS_INT_EN0, 0x1cc),
+ RSND_GEN_S_REG(SCU_SYS_STATUS1, 0x1d0),
+- RSND_GEN_S_REG(SCU_SYS_INT_EN1, 0x1c4),
++ RSND_GEN_S_REG(SCU_SYS_INT_EN1, 0x1d4),
+ RSND_GEN_M_REG(SRC_SWRSR, 0x200, 0x40),
+ RSND_GEN_M_REG(SRC_SRCIR, 0x204, 0x40),
+ RSND_GEN_M_REG(SRC_ADINR, 0x214, 0x40),
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 025c38fbe3c0..1874cf0e6cab 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -623,6 +623,7 @@ int soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ struct snd_pcm *be_pcm;
+ char new_name[64];
+ int ret = 0, direction = 0;
++ int playback = 0, capture = 0;
+
+ if (rtd->num_codecs > 1) {
+ dev_err(rtd->card->dev, "Multicodec not supported for compressed stream\n");
+@@ -634,11 +635,27 @@ int soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ rtd->dai_link->stream_name, codec_dai->name, num);
+
+ if (codec_dai->driver->playback.channels_min)
++ playback = 1;
++ if (codec_dai->driver->capture.channels_min)
++ capture = 1;
++
++ capture = capture && cpu_dai->driver->capture.channels_min;
++ playback = playback && cpu_dai->driver->playback.channels_min;
++
++ /*
++ * Compress devices are unidirectional so only one of the directions
++ * should be set, check for that (xor)
++ */
++ if (playback + capture != 1) {
++ dev_err(rtd->card->dev, "Invalid direction for compress P %d, C %d\n",
++ playback, capture);
++ return -EINVAL;
++ }
++
++ if(playback)
+ direction = SND_COMPRESS_PLAYBACK;
+- else if (codec_dai->driver->capture.channels_min)
+- direction = SND_COMPRESS_CAPTURE;
+ else
+- return -EINVAL;
++ direction = SND_COMPRESS_CAPTURE;
+
+ compr = kzalloc(sizeof(*compr), GFP_KERNEL);
+ if (compr == NULL) {
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 18f56646ce86..1f09d9591276 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -675,6 +675,8 @@ int snd_usb_autoresume(struct snd_usb_audio *chip)
+
+ void snd_usb_autosuspend(struct snd_usb_audio *chip)
+ {
++ if (atomic_read(&chip->shutdown))
++ return;
+ if (atomic_dec_and_test(&chip->active))
+ usb_autopm_put_interface(chip->pm_intf);
+ }
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index f494dced3c11..4f85757009b3 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1354,6 +1354,8 @@ static void build_feature_ctl(struct mixer_build *state, void *raw_desc,
+ }
+ }
+
++ snd_usb_mixer_fu_apply_quirk(state->mixer, cval, unitid, kctl);
++
+ range = (cval->max - cval->min) / cval->res;
+ /*
+ * Are there devices with volume range more than 255? I use a bit more
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 6a803eff87f7..ddca6547399b 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -348,13 +348,6 @@ static struct usbmix_name_map bose_companion5_map[] = {
+ { 0 } /* terminator */
+ };
+
+-/* Dragonfly DAC 1.2, the dB conversion factor is 1 instead of 256 */
+-static struct usbmix_dB_map dragonfly_1_2_dB = {0, 5000};
+-static struct usbmix_name_map dragonfly_1_2_map[] = {
+- { 7, NULL, .dB = &dragonfly_1_2_dB },
+- { 0 } /* terminator */
+-};
+-
+ /*
+ * Control map entries
+ */
+@@ -470,11 +463,6 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .id = USB_ID(0x05a7, 0x1020),
+ .map = bose_companion5_map,
+ },
+- {
+- /* Dragonfly DAC 1.2 */
+- .id = USB_ID(0x21b4, 0x0081),
+- .map = dragonfly_1_2_map,
+- },
+ { 0 } /* terminator */
+ };
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index d3608c0a29f3..4aeccd78e5dc 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -37,6 +37,7 @@
+ #include <sound/control.h>
+ #include <sound/hwdep.h>
+ #include <sound/info.h>
++#include <sound/tlv.h>
+
+ #include "usbaudio.h"
+ #include "mixer.h"
+@@ -792,7 +793,7 @@ static int snd_nativeinstruments_control_put(struct snd_kcontrol *kcontrol,
+ return 0;
+
+ kcontrol->private_value &= ~(0xff << 24);
+- kcontrol->private_value |= newval;
++ kcontrol->private_value |= (unsigned int)newval << 24;
+ err = snd_ni_update_cur_val(list);
+ return err < 0 ? err : 1;
+ }
+@@ -1825,3 +1826,39 @@ void snd_usb_mixer_rc_memory_change(struct usb_mixer_interface *mixer,
+ }
+ }
+
++static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
++ struct snd_kcontrol *kctl)
++{
++ /* Approximation using 10 ranges based on output measurement on hw v1.2.
++ * This seems close to the cubic mapping e.g. alsamixer uses. */
++ static const DECLARE_TLV_DB_RANGE(scale,
++ 0, 1, TLV_DB_MINMAX_ITEM(-5300, -4970),
++ 2, 5, TLV_DB_MINMAX_ITEM(-4710, -4160),
++ 6, 7, TLV_DB_MINMAX_ITEM(-3884, -3710),
++ 8, 14, TLV_DB_MINMAX_ITEM(-3443, -2560),
++ 15, 16, TLV_DB_MINMAX_ITEM(-2475, -2324),
++ 17, 19, TLV_DB_MINMAX_ITEM(-2228, -2031),
++ 20, 26, TLV_DB_MINMAX_ITEM(-1910, -1393),
++ 27, 31, TLV_DB_MINMAX_ITEM(-1322, -1032),
++ 32, 40, TLV_DB_MINMAX_ITEM(-968, -490),
++ 41, 50, TLV_DB_MINMAX_ITEM(-441, 0),
++ );
++
++ usb_audio_info(mixer->chip, "applying DragonFly dB scale quirk\n");
++ kctl->tlv.p = scale;
++ kctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_TLV_READ;
++ kctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK;
++}
++
++void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
++ struct usb_mixer_elem_info *cval, int unitid,
++ struct snd_kcontrol *kctl)
++{
++ switch (mixer->chip->usb_id) {
++ case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
++ if (unitid == 7 && cval->min == 0 && cval->max == 50)
++ snd_dragonfly_quirk_db_scale(mixer, kctl);
++ break;
++ }
++}
++
+diff --git a/sound/usb/mixer_quirks.h b/sound/usb/mixer_quirks.h
+index bdbfab093816..177c329cd4dd 100644
+--- a/sound/usb/mixer_quirks.h
++++ b/sound/usb/mixer_quirks.h
+@@ -9,5 +9,9 @@ void snd_emuusb_set_samplerate(struct snd_usb_audio *chip,
+ void snd_usb_mixer_rc_memory_change(struct usb_mixer_interface *mixer,
+ int unitid);
+
++void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
++ struct usb_mixer_elem_info *cval, int unitid,
++ struct snd_kcontrol *kctl);
++
+ #endif /* SND_USB_MIXER_QUIRKS_H */
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index eef9b8e4b949..fb9a8a5787a6 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1122,6 +1122,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
++ case USB_ID(0x21B4, 0x0081): /* AudioQuest DragonFly */
+ return true;
+ }
+ return false;
+@@ -1265,6 +1266,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case USB_ID(0x20b1, 0x3008): /* iFi Audio micro/nano iDSD */
+ case USB_ID(0x20b1, 0x2008): /* Matrix Audio X-Sabre */
+ case USB_ID(0x20b1, 0x300a): /* Matrix Audio Mini-i Pro */
++ case USB_ID(0x22d8, 0x0416): /* OPPO HA-1*/
+ if (fp->altsetting == 2)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
* [gentoo-commits] proj/linux-patches:4.3 commit in: /
@ 2016-02-19 23:30 Mike Pagano
0 siblings, 0 replies; 9+ messages in thread
From: Mike Pagano @ 2016-02-19 23:30 UTC (permalink / raw
To: gentoo-commits
commit: afdf80395cd18368043156124343e742f3cb572c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 19 23:30:13 2016 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 19 23:30:13 2016 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=afdf8039
Linux patch 4.3.6
0000_README | 4 +
1005_linux-4.3.6.patch | 8015 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8019 insertions(+)
diff --git a/0000_README b/0000_README
index 74a7d33..72b3e45 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-4.3.5.patch
From: http://www.kernel.org
Desc: Linux 4.3.5
+Patch: 1005_linux-4.3.6.patch
+From: http://www.kernel.org
+Desc: Linux 4.3.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-4.3.6.patch b/1005_linux-4.3.6.patch
new file mode 100644
index 0000000..1be422d
--- /dev/null
+++ b/1005_linux-4.3.6.patch
@@ -0,0 +1,8015 @@
+diff --git a/Makefile b/Makefile
+index efc7a766c470..95568cb1f5c6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 3
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 9317974c9b8e..326564386cfa 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -301,6 +301,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
+ }
+
+ #ifdef CONFIG_DEBUG_RODATA
++#define SWAPPER_BLOCK_SIZE (PAGE_SHIFT == 12 ? SECTION_SIZE : PAGE_SIZE)
+ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
+ {
+ /*
+@@ -308,8 +309,8 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
+ * for now. This will get more fine grained later once all memory
+ * is mapped
+ */
+- unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
+- unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
++ unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
++ unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);
+
+ if (end < kernel_x_start) {
+ create_mapping(start, __phys_to_virt(start),
+@@ -397,18 +398,18 @@ void __init fixup_executable(void)
+ {
+ #ifdef CONFIG_DEBUG_RODATA
+ /* now that we are actually fully mapped, make the start/end more fine grained */
+- if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) {
++ if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
+ unsigned long aligned_start = round_down(__pa(_stext),
+- SECTION_SIZE);
++ SWAPPER_BLOCK_SIZE);
+
+ create_mapping(aligned_start, __phys_to_virt(aligned_start),
+ __pa(_stext) - aligned_start,
+ PAGE_KERNEL);
+ }
+
+- if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) {
++ if (!IS_ALIGNED((unsigned long)__init_end, SWAPPER_BLOCK_SIZE)) {
+ unsigned long aligned_end = round_up(__pa(__init_end),
+- SECTION_SIZE);
++ SWAPPER_BLOCK_SIZE);
+ create_mapping(__pa(__init_end), (unsigned long)__init_end,
+ aligned_end - __pa(__init_end),
+ PAGE_KERNEL);
+diff --git a/arch/parisc/include/asm/compat.h b/arch/parisc/include/asm/compat.h
+index 94710cfc1ce8..0448a2c8eafb 100644
+--- a/arch/parisc/include/asm/compat.h
++++ b/arch/parisc/include/asm/compat.h
+@@ -206,10 +206,10 @@ struct compat_ipc64_perm {
+
+ struct compat_semid64_ds {
+ struct compat_ipc64_perm sem_perm;
+- compat_time_t sem_otime;
+ unsigned int __unused1;
+- compat_time_t sem_ctime;
++ compat_time_t sem_otime;
+ unsigned int __unused2;
++ compat_time_t sem_ctime;
+ compat_ulong_t sem_nsems;
+ compat_ulong_t __unused3;
+ compat_ulong_t __unused4;
+diff --git a/arch/parisc/include/uapi/asm/ipcbuf.h b/arch/parisc/include/uapi/asm/ipcbuf.h
+index bd956c425785..790c4119f647 100644
+--- a/arch/parisc/include/uapi/asm/ipcbuf.h
++++ b/arch/parisc/include/uapi/asm/ipcbuf.h
+@@ -1,6 +1,9 @@
+ #ifndef __PARISC_IPCBUF_H__
+ #define __PARISC_IPCBUF_H__
+
++#include <asm/bitsperlong.h>
++#include <linux/posix_types.h>
++
+ /*
+ * The ipc64_perm structure for PA-RISC is almost identical to
+ * kern_ipc_perm as we have always had 32-bit UIDs and GIDs in the kernel.
+@@ -10,16 +13,18 @@
+
+ struct ipc64_perm
+ {
+- key_t key;
+- uid_t uid;
+- gid_t gid;
+- uid_t cuid;
+- gid_t cgid;
++ __kernel_key_t key;
++ __kernel_uid_t uid;
++ __kernel_gid_t gid;
++ __kernel_uid_t cuid;
++ __kernel_gid_t cgid;
++#if __BITS_PER_LONG != 64
+ unsigned short int __pad1;
+- mode_t mode;
++#endif
++ __kernel_mode_t mode;
+ unsigned short int __pad2;
+ unsigned short int seq;
+- unsigned int __pad3;
++ unsigned int __pad3;
+ unsigned long long int __unused1;
+ unsigned long long int __unused2;
+ };
+diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
+index 294d251ca7b2..2ae13ce592e8 100644
+--- a/arch/parisc/include/uapi/asm/mman.h
++++ b/arch/parisc/include/uapi/asm/mman.h
+@@ -46,16 +46,6 @@
+ #define MADV_DONTFORK 10 /* don't inherit across fork */
+ #define MADV_DOFORK 11 /* do inherit across fork */
+
+-/* The range 12-64 is reserved for page size specification. */
+-#define MADV_4K_PAGES 12 /* Use 4K pages */
+-#define MADV_16K_PAGES 14 /* Use 16K pages */
+-#define MADV_64K_PAGES 16 /* Use 64K pages */
+-#define MADV_256K_PAGES 18 /* Use 256K pages */
+-#define MADV_1M_PAGES 20 /* Use 1 Megabyte pages */
+-#define MADV_4M_PAGES 22 /* Use 4 Megabyte pages */
+-#define MADV_16M_PAGES 24 /* Use 16 Megabyte pages */
+-#define MADV_64M_PAGES 26 /* Use 64 Megabyte pages */
+-
+ #define MADV_MERGEABLE 65 /* KSM may merge identical pages */
+ #define MADV_UNMERGEABLE 66 /* KSM may not merge identical pages */
+
+diff --git a/arch/parisc/include/uapi/asm/msgbuf.h b/arch/parisc/include/uapi/asm/msgbuf.h
+index 342138983914..2e83ac758e19 100644
+--- a/arch/parisc/include/uapi/asm/msgbuf.h
++++ b/arch/parisc/include/uapi/asm/msgbuf.h
+@@ -27,13 +27,13 @@ struct msqid64_ds {
+ unsigned int __pad3;
+ #endif
+ __kernel_time_t msg_ctime; /* last change time */
+- unsigned int msg_cbytes; /* current number of bytes on queue */
+- unsigned int msg_qnum; /* number of messages in queue */
+- unsigned int msg_qbytes; /* max number of bytes on queue */
++ unsigned long msg_cbytes; /* current number of bytes on queue */
++ unsigned long msg_qnum; /* number of messages in queue */
++ unsigned long msg_qbytes; /* max number of bytes on queue */
+ __kernel_pid_t msg_lspid; /* pid of last msgsnd */
+ __kernel_pid_t msg_lrpid; /* last receive pid */
+- unsigned int __unused1;
+- unsigned int __unused2;
++ unsigned long __unused1;
++ unsigned long __unused2;
+ };
+
+ #endif /* _PARISC_MSGBUF_H */
+diff --git a/arch/parisc/include/uapi/asm/posix_types.h b/arch/parisc/include/uapi/asm/posix_types.h
+index b9344256f76b..f3b5f70b9a5f 100644
+--- a/arch/parisc/include/uapi/asm/posix_types.h
++++ b/arch/parisc/include/uapi/asm/posix_types.h
+@@ -7,8 +7,10 @@
+ * assume GCC is being used.
+ */
+
++#ifndef __LP64__
+ typedef unsigned short __kernel_mode_t;
+ #define __kernel_mode_t __kernel_mode_t
++#endif
+
+ typedef unsigned short __kernel_ipc_pid_t;
+ #define __kernel_ipc_pid_t __kernel_ipc_pid_t
+diff --git a/arch/parisc/include/uapi/asm/sembuf.h b/arch/parisc/include/uapi/asm/sembuf.h
+index f01d89e30d73..c20971bf520f 100644
+--- a/arch/parisc/include/uapi/asm/sembuf.h
++++ b/arch/parisc/include/uapi/asm/sembuf.h
+@@ -23,9 +23,9 @@ struct semid64_ds {
+ unsigned int __pad2;
+ #endif
+ __kernel_time_t sem_ctime; /* last change time */
+- unsigned int sem_nsems; /* no. of semaphores in array */
+- unsigned int __unused1;
+- unsigned int __unused2;
++ unsigned long sem_nsems; /* no. of semaphores in array */
++ unsigned long __unused1;
++ unsigned long __unused2;
+ };
+
+ #endif /* _PARISC_SEMBUF_H */
+diff --git a/arch/parisc/include/uapi/asm/shmbuf.h b/arch/parisc/include/uapi/asm/shmbuf.h
+index 8496c38560c6..750e13e77991 100644
+--- a/arch/parisc/include/uapi/asm/shmbuf.h
++++ b/arch/parisc/include/uapi/asm/shmbuf.h
+@@ -30,12 +30,12 @@ struct shmid64_ds {
+ #if __BITS_PER_LONG != 64
+ unsigned int __pad4;
+ #endif
+- size_t shm_segsz; /* size of segment (bytes) */
++ __kernel_size_t shm_segsz; /* size of segment (bytes) */
+ __kernel_pid_t shm_cpid; /* pid of creator */
+ __kernel_pid_t shm_lpid; /* pid of last operator */
+- unsigned int shm_nattch; /* no. of current attaches */
+- unsigned int __unused1;
+- unsigned int __unused2;
++ unsigned long shm_nattch; /* no. of current attaches */
++ unsigned long __unused1;
++ unsigned long __unused2;
+ };
+
+ struct shminfo64 {
+diff --git a/arch/parisc/include/uapi/asm/siginfo.h b/arch/parisc/include/uapi/asm/siginfo.h
+index d7034728f377..1c75565d984b 100644
+--- a/arch/parisc/include/uapi/asm/siginfo.h
++++ b/arch/parisc/include/uapi/asm/siginfo.h
+@@ -1,6 +1,10 @@
+ #ifndef _PARISC_SIGINFO_H
+ #define _PARISC_SIGINFO_H
+
++#if defined(__LP64__)
++#define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int))
++#endif
++
+ #include <asm-generic/siginfo.h>
+
+ #undef NSIGTRAP
+diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c
+index dc1ea796fd60..2264f68f3c2f 100644
+--- a/arch/parisc/kernel/signal.c
++++ b/arch/parisc/kernel/signal.c
+@@ -435,6 +435,55 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs, int in_syscall)
+ regs->gr[28]);
+ }
+
++/*
++ * Check how the syscall number gets loaded into %r20 within
++ * the delay branch in userspace and adjust as needed.
++ */
++
++static void check_syscallno_in_delay_branch(struct pt_regs *regs)
++{
++ u32 opcode, source_reg;
++ u32 __user *uaddr;
++ int err;
++
++ /* Usually we don't have to restore %r20 (the system call number)
++ * because it gets loaded in the delay slot of the branch external
++ * instruction via the ldi instruction.
++ * In some cases a register-to-register copy instruction might have
++ * been used instead, in which case we need to copy the syscall
++ * number into the source register before returning to userspace.
++ */
++
++ /* A syscall is just a branch, so all we have to do is fiddle the
++ * return pointer so that the ble instruction gets executed again.
++ */
++ regs->gr[31] -= 8; /* delayed branching */
++
++ /* Get assembler opcode of code in delay branch */
++ uaddr = (unsigned int *) ((regs->gr[31] & ~3) + 4);
++ err = get_user(opcode, uaddr);
++ if (err)
++ return;
++
++ /* Check if delay branch uses "ldi int,%r20" */
++ if ((opcode & 0xffff0000) == 0x34140000)
++ return; /* everything ok, just return */
++
++ /* Check if delay branch uses "nop" */
++ if (opcode == INSN_NOP)
++ return;
++
++ /* Check if delay branch uses "copy %rX,%r20" */
++ if ((opcode & 0xffe0ffff) == 0x08000254) {
++ source_reg = (opcode >> 16) & 31;
++ regs->gr[source_reg] = regs->gr[20];
++ return;
++ }
++
++ pr_warn("syscall restart: %s (pid %d): unexpected opcode 0x%08x\n",
++ current->comm, task_pid_nr(current), opcode);
++}
++
+ static inline void
+ syscall_restart(struct pt_regs *regs, struct k_sigaction *ka)
+ {
+@@ -457,10 +506,7 @@ syscall_restart(struct pt_regs *regs, struct k_sigaction *ka)
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+- /* A syscall is just a branch, so all
+- * we have to do is fiddle the return pointer.
+- */
+- regs->gr[31] -= 8; /* delayed branching */
++ check_syscallno_in_delay_branch(regs);
+ break;
+ }
+ }
+@@ -510,15 +556,9 @@ insert_restart_trampoline(struct pt_regs *regs)
+ }
+ case -ERESTARTNOHAND:
+ case -ERESTARTSYS:
+- case -ERESTARTNOINTR: {
+- /* Hooray for delayed branching. We don't
+- * have to restore %r20 (the system call
+- * number) because it gets loaded in the delay
+- * slot of the branch external instruction.
+- */
+- regs->gr[31] -= 8;
++ case -ERESTARTNOINTR:
++ check_syscallno_in_delay_branch(regs);
+ return;
+- }
+ default:
+ break;
+ }
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index c229427fa546..c5fec4890fdf 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -23,6 +23,7 @@
+ #include <linux/unistd.h>
+ #include <linux/nodemask.h> /* for node_online_map */
+ #include <linux/pagemap.h> /* for release_pages and page_cache_release */
++#include <linux/compat.h>
+
+ #include <asm/pgalloc.h>
+ #include <asm/pgtable.h>
+@@ -30,6 +31,7 @@
+ #include <asm/pdc_chassis.h>
+ #include <asm/mmzone.h>
+ #include <asm/sections.h>
++#include <asm/msgbuf.h>
+
+ extern int data_start;
+ extern void parisc_kernel_start(void); /* Kernel entry point in head.S */
+@@ -590,6 +592,20 @@ unsigned long pcxl_dma_start __read_mostly;
+
+ void __init mem_init(void)
+ {
++ /* Do sanity checks on IPC (compat) structures */
++ BUILD_BUG_ON(sizeof(struct ipc64_perm) != 48);
++#ifndef CONFIG_64BIT
++ BUILD_BUG_ON(sizeof(struct semid64_ds) != 80);
++ BUILD_BUG_ON(sizeof(struct msqid64_ds) != 104);
++ BUILD_BUG_ON(sizeof(struct shmid64_ds) != 104);
++#endif
++#ifdef CONFIG_COMPAT
++ BUILD_BUG_ON(sizeof(struct compat_ipc64_perm) != sizeof(struct ipc64_perm));
++ BUILD_BUG_ON(sizeof(struct compat_semid64_ds) != 80);
++ BUILD_BUG_ON(sizeof(struct compat_msqid64_ds) != 104);
++ BUILD_BUG_ON(sizeof(struct compat_shmid64_ds) != 104);
++#endif
++
+ /* Do sanity checks on page table constants */
+ BUILD_BUG_ON(PTE_ENTRY_SIZE != sizeof(pte_t));
+ BUILD_BUG_ON(PMD_ENTRY_SIZE != sizeof(pmd_t));
+diff --git a/arch/sh/include/uapi/asm/unistd_64.h b/arch/sh/include/uapi/asm/unistd_64.h
+index e6820c86e8c7..47ebd5b5ed55 100644
+--- a/arch/sh/include/uapi/asm/unistd_64.h
++++ b/arch/sh/include/uapi/asm/unistd_64.h
+@@ -278,7 +278,7 @@
+ #define __NR_fsetxattr 256
+ #define __NR_getxattr 257
+ #define __NR_lgetxattr 258
+-#define __NR_fgetxattr 269
++#define __NR_fgetxattr 259
+ #define __NR_listxattr 260
+ #define __NR_llistxattr 261
+ #define __NR_flistxattr 262
+diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha20-ssse3-x86_64.S
+index 712b13047b41..3a33124e9112 100644
+--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
++++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
+@@ -157,7 +157,9 @@ ENTRY(chacha20_4block_xor_ssse3)
+ # done with the slightly better performing SSSE3 byte shuffling,
+ # 7/12-bit word rotation uses traditional shift+OR.
+
+- sub $0x40,%rsp
++ mov %rsp,%r11
++ sub $0x80,%rsp
++ and $~63,%rsp
+
+ # x0..15[0-3] = s0..3[0..3]
+ movq 0x00(%rdi),%xmm1
+@@ -620,6 +622,6 @@ ENTRY(chacha20_4block_xor_ssse3)
+ pxor %xmm1,%xmm15
+ movdqu %xmm15,0xf0(%rsi)
+
+- add $0x40,%rsp
++ mov %r11,%rsp
+ ret
+ ENDPROC(chacha20_4block_xor_ssse3)
+diff --git a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+index 225be06edc80..4fe27e074194 100644
+--- a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
++++ b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+@@ -330,7 +330,7 @@ ENDPROC(crc_pcl)
+ ## PCLMULQDQ tables
+ ## Table is 128 entries x 2 words (8 bytes) each
+ ################################################################
+-.section .rotata, "a", %progbits
++.section .rodata, "a", %progbits
+ .align 8
+ K_table:
+ .long 0x493c7d27, 0x00000001
+diff --git a/arch/xtensa/include/asm/asmmacro.h b/arch/xtensa/include/asm/asmmacro.h
+index 755320f6e0bc..746dcc8b5abc 100644
+--- a/arch/xtensa/include/asm/asmmacro.h
++++ b/arch/xtensa/include/asm/asmmacro.h
+@@ -35,9 +35,10 @@
+ * __loop as
+ * restart loop. 'as' register must not have been modified!
+ *
+- * __endla ar, at, incr
++ * __endla ar, as, incr
+ * ar start address (modified)
+- * as scratch register used by macro
++ * as scratch register used by __loops/__loopi macros or
++ * end address used by __loopt macro
+ * inc increment
+ */
+
+@@ -97,7 +98,7 @@
+ .endm
+
+ /*
+- * loop from ar to ax
++ * loop from ar to as
+ */
+
+ .macro __loopt ar, as, at, incr_log2
+diff --git a/arch/xtensa/include/asm/vectors.h b/arch/xtensa/include/asm/vectors.h
+index a46c53f36113..986b5d0cb9e0 100644
+--- a/arch/xtensa/include/asm/vectors.h
++++ b/arch/xtensa/include/asm/vectors.h
+@@ -48,6 +48,9 @@
+ #define LOAD_MEMORY_ADDRESS 0xD0003000
+ #endif
+
++#define RESET_VECTOR1_VADDR (VIRTUAL_MEMORY_ADDRESS + \
++ XCHAL_RESET_VECTOR1_PADDR)
++
+ #else /* !defined(CONFIG_MMU) */
+ /* MMU Not being used - Virtual == Physical */
+
+@@ -60,6 +63,8 @@
+ /* Loaded just above possibly live vectors */
+ #define LOAD_MEMORY_ADDRESS (PLATFORM_DEFAULT_MEM_START + 0x3000)
+
++#define RESET_VECTOR1_VADDR (XCHAL_RESET_VECTOR1_VADDR)
++
+ #endif /* CONFIG_MMU */
+
+ #define XC_VADDR(offset) (VIRTUAL_MEMORY_ADDRESS + offset)
+@@ -71,10 +76,6 @@
+ VECBASE_RESET_VADDR)
+ #define RESET_VECTOR_VADDR XC_VADDR(RESET_VECTOR_VECOFS)
+
+-#define RESET_VECTOR1_VECOFS (XCHAL_RESET_VECTOR1_VADDR - \
+- VECBASE_RESET_VADDR)
+-#define RESET_VECTOR1_VADDR XC_VADDR(RESET_VECTOR1_VECOFS)
+-
+ #if defined(XCHAL_HAVE_VECBASE) && XCHAL_HAVE_VECBASE
+
+ #define USER_VECTOR_VADDR XC_VADDR(XCHAL_USER_VECOFS)
+diff --git a/arch/xtensa/kernel/Makefile b/arch/xtensa/kernel/Makefile
+index 50137bc9e150..4db730290d2d 100644
+--- a/arch/xtensa/kernel/Makefile
++++ b/arch/xtensa/kernel/Makefile
+@@ -16,6 +16,7 @@ obj-$(CONFIG_SMP) += smp.o mxhead.o
+ obj-$(CONFIG_XTENSA_VARIANT_HAVE_PERF_EVENTS) += perf_event.o
+
+ AFLAGS_head.o += -mtext-section-literals
++AFLAGS_mxhead.o += -mtext-section-literals
+
+ # In the Xtensa architecture, assembly generates literals which must always
+ # precede the L32R instruction with a relative offset less than 256 kB.
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 504130357597..db5c1765b413 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -367,8 +367,10 @@ common_exception:
+ s32i a2, a1, PT_SYSCALL
+ movi a2, 0
+ s32i a3, a1, PT_EXCVADDR
++#if XCHAL_HAVE_LOOPS
+ xsr a2, lcount
+ s32i a2, a1, PT_LCOUNT
++#endif
+
+ /* It is now save to restore the EXC_TABLE_FIXUP variable. */
+
+@@ -429,11 +431,12 @@ common_exception:
+ rsync # PS.WOE => rsync => overflow
+
+ /* Save lbeg, lend */
+-
++#if XCHAL_HAVE_LOOPS
+ rsr a4, lbeg
+ rsr a3, lend
+ s32i a4, a1, PT_LBEG
+ s32i a3, a1, PT_LEND
++#endif
+
+ /* Save SCOMPARE1 */
+
+@@ -724,13 +727,14 @@ common_exception_exit:
+ wsr a3, sar
+
+ /* Restore LBEG, LEND, LCOUNT */
+-
++#if XCHAL_HAVE_LOOPS
+ l32i a2, a1, PT_LBEG
+ l32i a3, a1, PT_LEND
+ wsr a2, lbeg
+ l32i a2, a1, PT_LCOUNT
+ wsr a3, lend
+ wsr a2, lcount
++#endif
+
+ /* We control single stepping through the ICOUNTLEVEL register. */
+
+diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
+index 15a461e2a0ed..9ed55649ac8e 100644
+--- a/arch/xtensa/kernel/head.S
++++ b/arch/xtensa/kernel/head.S
+@@ -249,7 +249,7 @@ ENTRY(_startup)
+
+ __loopt a2, a3, a4, 2
+ s32i a0, a2, 0
+- __endla a2, a4, 4
++ __endla a2, a3, 4
+
+ #if XCHAL_DCACHE_IS_WRITEBACK
+
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index 28fc57ef5b86..4e06ec9769d1 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -334,7 +334,10 @@ extern char _Level5InterruptVector_text_end;
+ extern char _Level6InterruptVector_text_start;
+ extern char _Level6InterruptVector_text_end;
+ #endif
+-
++#ifdef CONFIG_SMP
++extern char _SecondaryResetVector_text_start;
++extern char _SecondaryResetVector_text_end;
++#endif
+
+
+ #ifdef CONFIG_S32C1I_SELFTEST
+@@ -506,6 +509,10 @@ void __init setup_arch(char **cmdline_p)
+ __pa(&_Level6InterruptVector_text_end), 0);
+ #endif
+
++#ifdef CONFIG_SMP
++ mem_reserve(__pa(&_SecondaryResetVector_text_start),
++ __pa(&_SecondaryResetVector_text_end), 0);
++#endif
+ parse_early_param();
+ bootmem_init();
+
+diff --git a/arch/xtensa/kernel/vectors.S b/arch/xtensa/kernel/vectors.S
+index abcdb527f18a..fc25318e75ad 100644
+--- a/arch/xtensa/kernel/vectors.S
++++ b/arch/xtensa/kernel/vectors.S
+@@ -478,6 +478,9 @@ _DoubleExceptionVector_handle_exception:
+
+ ENDPROC(_DoubleExceptionVector)
+
++ .end literal_prefix
++
++ .text
+ /*
+ * Fixup handler for TLB miss in double exception handler for window owerflow.
+ * We get here with windowbase set to the window that was being spilled and
+@@ -587,7 +590,6 @@ ENTRY(window_overflow_restore_a0_fixup)
+
+ ENDPROC(window_overflow_restore_a0_fixup)
+
+- .end literal_prefix
+ /*
+ * Debug interrupt vector
+ *
+diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
+index fc1bc2ba8d5d..d66cd408be13 100644
+--- a/arch/xtensa/kernel/vmlinux.lds.S
++++ b/arch/xtensa/kernel/vmlinux.lds.S
+@@ -166,8 +166,6 @@ SECTIONS
+ RELOCATE_ENTRY(_DebugInterruptVector_text,
+ .DebugInterruptVector.text);
+ #if defined(CONFIG_SMP)
+- RELOCATE_ENTRY(_SecondaryResetVector_literal,
+- .SecondaryResetVector.literal);
+ RELOCATE_ENTRY(_SecondaryResetVector_text,
+ .SecondaryResetVector.text);
+ #endif
+@@ -282,17 +280,11 @@ SECTIONS
+
+ #if defined(CONFIG_SMP)
+
+- SECTION_VECTOR (_SecondaryResetVector_literal,
+- .SecondaryResetVector.literal,
+- RESET_VECTOR1_VADDR - 4,
+- SIZEOF(.DoubleExceptionVector.text),
+- .DoubleExceptionVector.text)
+-
+ SECTION_VECTOR (_SecondaryResetVector_text,
+ .SecondaryResetVector.text,
+ RESET_VECTOR1_VADDR,
+- 4,
+- .SecondaryResetVector.literal)
++ SIZEOF(.DoubleExceptionVector.text),
++ .DoubleExceptionVector.text)
+
+ . = LOADADDR(.SecondaryResetVector.text)+SIZEOF(.SecondaryResetVector.text);
+
+diff --git a/arch/xtensa/lib/usercopy.S b/arch/xtensa/lib/usercopy.S
+index ace1892a875e..7ea4dd68893e 100644
+--- a/arch/xtensa/lib/usercopy.S
++++ b/arch/xtensa/lib/usercopy.S
+@@ -222,8 +222,8 @@ __xtensa_copy_user:
+ loopnez a7, .Loop2done
+ #else /* !XCHAL_HAVE_LOOPS */
+ beqz a7, .Loop2done
+- slli a10, a7, 4
+- add a10, a10, a3 # a10 = end of last 16B source chunk
++ slli a12, a7, 4
++ add a12, a12, a3 # a12 = end of last 16B source chunk
+ #endif /* !XCHAL_HAVE_LOOPS */
+ .Loop2:
+ EX(l32i, a7, a3, 4, l_fixup)
+@@ -241,7 +241,7 @@ __xtensa_copy_user:
+ EX(s32i, a9, a5, 12, s_fixup)
+ addi a5, a5, 16
+ #if !XCHAL_HAVE_LOOPS
+- blt a3, a10, .Loop2
++ blt a3, a12, .Loop2
+ #endif /* !XCHAL_HAVE_LOOPS */
+ .Loop2done:
+ bbci.l a4, 3, .L12
+diff --git a/arch/xtensa/platforms/iss/setup.c b/arch/xtensa/platforms/iss/setup.c
+index da7d18240866..391820539f0a 100644
+--- a/arch/xtensa/platforms/iss/setup.c
++++ b/arch/xtensa/platforms/iss/setup.c
+@@ -61,7 +61,9 @@ void platform_restart(void)
+ #if XCHAL_NUM_IBREAK > 0
+ "wsr a2, ibreakenable\n\t"
+ #endif
++#if XCHAL_HAVE_LOOPS
+ "wsr a2, lcount\n\t"
++#endif
+ "movi a2, 0x1f\n\t"
+ "wsr a2, ps\n\t"
+ "isync\n\t"
+diff --git a/arch/xtensa/platforms/xt2000/setup.c b/arch/xtensa/platforms/xt2000/setup.c
+index b90555cb8089..87678961a8c8 100644
+--- a/arch/xtensa/platforms/xt2000/setup.c
++++ b/arch/xtensa/platforms/xt2000/setup.c
+@@ -72,7 +72,9 @@ void platform_restart(void)
+ #if XCHAL_NUM_IBREAK > 0
+ "wsr a2, ibreakenable\n\t"
+ #endif
++#if XCHAL_HAVE_LOOPS
+ "wsr a2, lcount\n\t"
++#endif
+ "movi a2, 0x1f\n\t"
+ "wsr a2, ps\n\t"
+ "isync\n\t"
+diff --git a/arch/xtensa/platforms/xtfpga/setup.c b/arch/xtensa/platforms/xtfpga/setup.c
+index b4cf70e535ab..e9f65f79cf2e 100644
+--- a/arch/xtensa/platforms/xtfpga/setup.c
++++ b/arch/xtensa/platforms/xtfpga/setup.c
+@@ -63,7 +63,9 @@ void platform_restart(void)
+ #if XCHAL_NUM_IBREAK > 0
+ "wsr a2, ibreakenable\n\t"
+ #endif
++#if XCHAL_HAVE_LOOPS
+ "wsr a2, lcount\n\t"
++#endif
+ "movi a2, 0x1f\n\t"
+ "wsr a2, ps\n\t"
+ "isync\n\t"
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 18e92a6645e2..b128c1609347 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1616,8 +1616,6 @@ static void blk_queue_bio(struct request_queue *q, struct bio *bio)
+ struct request *req;
+ unsigned int request_count = 0;
+
+- blk_queue_split(q, &bio, q->bio_split);
+-
+ /*
+ * low level driver can indicate that it wants pages above a
+ * certain limit bounced to low memory (ie for highmem, or even
+@@ -1625,6 +1623,8 @@ static void blk_queue_bio(struct request_queue *q, struct bio *bio)
+ */
+ blk_queue_bounce(q, &bio);
+
++ blk_queue_split(q, &bio, q->bio_split);
++
+ if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+ bio->bi_error = -EIO;
+ bio_endio(bio);
+@@ -2023,7 +2023,8 @@ void submit_bio(int rw, struct bio *bio)
+ EXPORT_SYMBOL(submit_bio);
+
+ /**
+- * blk_rq_check_limits - Helper function to check a request for the queue limit
++ * blk_cloned_rq_check_limits - Helper function to check a cloned request
++ * for new the queue limits
+ * @q: the queue
+ * @rq: the request being checked
+ *
+@@ -2034,20 +2035,13 @@ EXPORT_SYMBOL(submit_bio);
+ * after it is inserted to @q, it should be checked against @q before
+ * the insertion using this generic function.
+ *
+- * This function should also be useful for request stacking drivers
+- * in some cases below, so export this function.
+ * Request stacking drivers like request-based dm may change the queue
+- * limits while requests are in the queue (e.g. dm's table swapping).
+- * Such request stacking drivers should check those requests against
+- * the new queue limits again when they dispatch those requests,
+- * although such checkings are also done against the old queue limits
+- * when submitting requests.
++ * limits when retrying requests on other queues. Those requests need
++ * to be checked against the new queue limits again during dispatch.
+ */
+-int blk_rq_check_limits(struct request_queue *q, struct request *rq)
++static int blk_cloned_rq_check_limits(struct request_queue *q,
++ struct request *rq)
+ {
+- if (!rq_mergeable(rq))
+- return 0;
+-
+ if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, rq->cmd_flags)) {
+ printk(KERN_ERR "%s: over max size limit.\n", __func__);
+ return -EIO;
+@@ -2067,7 +2061,6 @@ int blk_rq_check_limits(struct request_queue *q, struct request *rq)
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(blk_rq_check_limits);
+
+ /**
+ * blk_insert_cloned_request - Helper for stacking drivers to submit a request
+@@ -2079,7 +2072,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
+ unsigned long flags;
+ int where = ELEVATOR_INSERT_BACK;
+
+- if (blk_rq_check_limits(q, rq))
++ if (blk_cloned_rq_check_limits(q, rq))
+ return -EIO;
+
+ if (rq->rq_disk &&
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index b4ffc5be1a93..e5b5721809e2 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -277,12 +277,12 @@ static int ablkcipher_walk_first(struct ablkcipher_request *req,
+ if (WARN_ON_ONCE(in_irq()))
+ return -EDEADLK;
+
++ walk->iv = req->info;
+ walk->nbytes = walk->total;
+ if (unlikely(!walk->total))
+ return 0;
+
+ walk->iv_buffer = NULL;
+- walk->iv = req->info;
+ if (unlikely(((unsigned long)walk->iv & alignmask))) {
+ int err = ablkcipher_copy_iv(walk, tfm, alignmask);
+
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index a8e7aa3e257b..f5e18c2a4852 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -76,6 +76,8 @@ int af_alg_register_type(const struct af_alg_type *type)
+ goto unlock;
+
+ type->ops->owner = THIS_MODULE;
++ if (type->ops_nokey)
++ type->ops_nokey->owner = THIS_MODULE;
+ node->type = type;
+ list_add(&node->list, &alg_types);
+ err = 0;
+@@ -125,6 +127,26 @@ int af_alg_release(struct socket *sock)
+ }
+ EXPORT_SYMBOL_GPL(af_alg_release);
+
++void af_alg_release_parent(struct sock *sk)
++{
++ struct alg_sock *ask = alg_sk(sk);
++ unsigned int nokey = ask->nokey_refcnt;
++ bool last = nokey && !ask->refcnt;
++
++ sk = ask->parent;
++ ask = alg_sk(sk);
++
++ lock_sock(sk);
++ ask->nokey_refcnt -= nokey;
++ if (!last)
++ last = !--ask->refcnt;
++ release_sock(sk);
++
++ if (last)
++ sock_put(sk);
++}
++EXPORT_SYMBOL_GPL(af_alg_release_parent);
++
+ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ {
+ const u32 forbidden = CRYPTO_ALG_INTERNAL;
+@@ -133,6 +155,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ struct sockaddr_alg *sa = (void *)uaddr;
+ const struct af_alg_type *type;
+ void *private;
++ int err;
+
+ if (sock->state == SS_CONNECTED)
+ return -EINVAL;
+@@ -160,16 +183,22 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ return PTR_ERR(private);
+ }
+
++ err = -EBUSY;
+ lock_sock(sk);
++ if (ask->refcnt | ask->nokey_refcnt)
++ goto unlock;
+
+ swap(ask->type, type);
+ swap(ask->private, private);
+
++ err = 0;
++
++unlock:
+ release_sock(sk);
+
+ alg_do_release(type, private);
+
+- return 0;
++ return err;
+ }
+
+ static int alg_setkey(struct sock *sk, char __user *ukey,
+@@ -202,11 +231,15 @@ static int alg_setsockopt(struct socket *sock, int level, int optname,
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
+ const struct af_alg_type *type;
+- int err = -ENOPROTOOPT;
++ int err = -EBUSY;
+
+ lock_sock(sk);
++ if (ask->refcnt)
++ goto unlock;
++
+ type = ask->type;
+
++ err = -ENOPROTOOPT;
+ if (level != SOL_ALG || !type)
+ goto unlock;
+
+@@ -238,6 +271,7 @@ int af_alg_accept(struct sock *sk, struct socket *newsock)
+ struct alg_sock *ask = alg_sk(sk);
+ const struct af_alg_type *type;
+ struct sock *sk2;
++ unsigned int nokey;
+ int err;
+
+ lock_sock(sk);
+@@ -257,20 +291,29 @@ int af_alg_accept(struct sock *sk, struct socket *newsock)
+ security_sk_clone(sk, sk2);
+
+ err = type->accept(ask->private, sk2);
+- if (err) {
+- sk_free(sk2);
++
++ nokey = err == -ENOKEY;
++ if (nokey && type->accept_nokey)
++ err = type->accept_nokey(ask->private, sk2);
++
++ if (err)
+ goto unlock;
+- }
+
+ sk2->sk_family = PF_ALG;
+
+- sock_hold(sk);
++ if (nokey || !ask->refcnt++)
++ sock_hold(sk);
++ ask->nokey_refcnt += nokey;
+ alg_sk(sk2)->parent = sk;
+ alg_sk(sk2)->type = type;
++ alg_sk(sk2)->nokey_refcnt = nokey;
+
+ newsock->ops = type->ops;
+ newsock->state = SS_CONNECTED;
+
++ if (nokey)
++ newsock->ops = type->ops_nokey;
++
+ err = 0;
+
+ unlock:
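af_alg now tracks two classes of references on the parent socket: keyed accepts bump ask->refcnt, keyless ("nokey") accepts bump ask->nokey_refcnt, and alg_bind()/alg_setsockopt() refuse with -EBUSY while either is nonzero, so the transform cannot be swapped out from under live request sockets. A deliberately simplified toy model of the release accounting (the real af_alg_release_parent() also folds the child's own nokey count into the decision):

#include <stdbool.h>

/* toy model: two classes of references pin the parent object */
struct parent_sketch {
	unsigned int keyed;	/* plays the role of ask->refcnt */
	unsigned int keyless;	/* plays the role of ask->nokey_refcnt */
};

/* returns true when the caller should drop the final socket ref */
static bool parent_release_sketch(struct parent_sketch *p, bool keyless_ref)
{
	if (keyless_ref)
		p->keyless--;
	else
		p->keyed--;

	return p->keyed + p->keyless == 0;
}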
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 9c1dc8d6106a..d19b52324cf5 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -451,6 +451,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ struct ahash_alg *alg = crypto_ahash_alg(hash);
+
+ hash->setkey = ahash_nosetkey;
++ hash->has_setkey = false;
+ hash->export = ahash_no_export;
+ hash->import = ahash_no_import;
+
+@@ -463,8 +464,10 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ hash->finup = alg->finup ?: ahash_def_finup;
+ hash->digest = alg->digest;
+
+- if (alg->setkey)
++ if (alg->setkey) {
+ hash->setkey = alg->setkey;
++ hash->has_setkey = true;
++ }
+ if (alg->export)
+ hash->export = alg->export;
+ if (alg->import)
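The has_setkey flag recorded here at tfm-init time is what later lets the algif layer tell "no key needed" apart from "key required but not yet set" (shash.c below derives the same flag by comparing against its no-op setkey stub). A condensed sketch of the init-time pattern:

#include <stdbool.h>
#include <errno.h>

typedef int (*setkey_fn)(const unsigned char *key, unsigned int keylen);

struct hash_tfm_sketch {
	setkey_fn setkey;
	bool has_setkey;
};

static int no_setkey(const unsigned char *key, unsigned int keylen)
{
	return -ENOSYS;
}

/* record once whether the algorithm really implements setkey, so
 * user-facing code can demand a key only when one is meaningful */
static void hash_tfm_init_sketch(struct hash_tfm_sketch *t,
				 setkey_fn alg_setkey)
{
	t->setkey = no_setkey;
	t->has_setkey = false;

	if (alg_setkey) {
		t->setkey = alg_setkey;
		t->has_setkey = true;
	}
}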
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index 1396ad0787fc..68a5ceaa04c8 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -34,6 +34,11 @@ struct hash_ctx {
+ struct ahash_request req;
+ };
+
++struct algif_hash_tfm {
++ struct crypto_ahash *hash;
++ bool has_key;
++};
++
+ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
+ size_t ignored)
+ {
+@@ -49,7 +54,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
+
+ lock_sock(sk);
+ if (!ctx->more) {
+- err = crypto_ahash_init(&ctx->req);
++ err = af_alg_wait_for_completion(crypto_ahash_init(&ctx->req),
++ &ctx->completion);
+ if (err)
+ goto unlock;
+ }
+@@ -120,6 +126,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
+ } else {
+ if (!ctx->more) {
+ err = crypto_ahash_init(&ctx->req);
++ err = af_alg_wait_for_completion(err, &ctx->completion);
+ if (err)
+ goto unlock;
+ }
+@@ -181,9 +188,14 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags)
+ struct sock *sk2;
+ struct alg_sock *ask2;
+ struct hash_ctx *ctx2;
++ bool more;
+ int err;
+
+- err = crypto_ahash_export(req, state);
++ lock_sock(sk);
++ more = ctx->more;
++ err = more ? crypto_ahash_export(req, state) : 0;
++ release_sock(sk);
++
+ if (err)
+ return err;
+
+@@ -194,7 +206,10 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags)
+ sk2 = newsock->sk;
+ ask2 = alg_sk(sk2);
+ ctx2 = ask2->private;
+- ctx2->more = 1;
++ ctx2->more = more;
++
++ if (!more)
++ return err;
+
+ err = crypto_ahash_import(&ctx2->req, state);
+ if (err) {
+@@ -227,19 +242,151 @@ static struct proto_ops algif_hash_ops = {
+ .accept = hash_accept,
+ };
+
++static int hash_check_key(struct socket *sock)
++{
++ int err = 0;
++ struct sock *psk;
++ struct alg_sock *pask;
++ struct algif_hash_tfm *tfm;
++ struct sock *sk = sock->sk;
++ struct alg_sock *ask = alg_sk(sk);
++
++ lock_sock(sk);
++ if (ask->refcnt)
++ goto unlock_child;
++
++ psk = ask->parent;
++ pask = alg_sk(ask->parent);
++ tfm = pask->private;
++
++ err = -ENOKEY;
++ lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
++ if (!tfm->has_key)
++ goto unlock;
++
++ if (!pask->refcnt++)
++ sock_hold(psk);
++
++ ask->refcnt = 1;
++ sock_put(psk);
++
++ err = 0;
++
++unlock:
++ release_sock(psk);
++unlock_child:
++ release_sock(sk);
++
++ return err;
++}
++
++static int hash_sendmsg_nokey(struct socket *sock, struct msghdr *msg,
++ size_t size)
++{
++ int err;
++
++ err = hash_check_key(sock);
++ if (err)
++ return err;
++
++ return hash_sendmsg(sock, msg, size);
++}
++
++static ssize_t hash_sendpage_nokey(struct socket *sock, struct page *page,
++ int offset, size_t size, int flags)
++{
++ int err;
++
++ err = hash_check_key(sock);
++ if (err)
++ return err;
++
++ return hash_sendpage(sock, page, offset, size, flags);
++}
++
++static int hash_recvmsg_nokey(struct socket *sock, struct msghdr *msg,
++ size_t ignored, int flags)
++{
++ int err;
++
++ err = hash_check_key(sock);
++ if (err)
++ return err;
++
++ return hash_recvmsg(sock, msg, ignored, flags);
++}
++
++static int hash_accept_nokey(struct socket *sock, struct socket *newsock,
++ int flags)
++{
++ int err;
++
++ err = hash_check_key(sock);
++ if (err)
++ return err;
++
++ return hash_accept(sock, newsock, flags);
++}
++
++static struct proto_ops algif_hash_ops_nokey = {
++ .family = PF_ALG,
++
++ .connect = sock_no_connect,
++ .socketpair = sock_no_socketpair,
++ .getname = sock_no_getname,
++ .ioctl = sock_no_ioctl,
++ .listen = sock_no_listen,
++ .shutdown = sock_no_shutdown,
++ .getsockopt = sock_no_getsockopt,
++ .mmap = sock_no_mmap,
++ .bind = sock_no_bind,
++ .setsockopt = sock_no_setsockopt,
++ .poll = sock_no_poll,
++
++ .release = af_alg_release,
++ .sendmsg = hash_sendmsg_nokey,
++ .sendpage = hash_sendpage_nokey,
++ .recvmsg = hash_recvmsg_nokey,
++ .accept = hash_accept_nokey,
++};
++
+ static void *hash_bind(const char *name, u32 type, u32 mask)
+ {
+- return crypto_alloc_ahash(name, type, mask);
++ struct algif_hash_tfm *tfm;
++ struct crypto_ahash *hash;
++
++ tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
++ if (!tfm)
++ return ERR_PTR(-ENOMEM);
++
++ hash = crypto_alloc_ahash(name, type, mask);
++ if (IS_ERR(hash)) {
++ kfree(tfm);
++ return ERR_CAST(hash);
++ }
++
++ tfm->hash = hash;
++
++ return tfm;
+ }
+
+ static void hash_release(void *private)
+ {
+- crypto_free_ahash(private);
++ struct algif_hash_tfm *tfm = private;
++
++ crypto_free_ahash(tfm->hash);
++ kfree(tfm);
+ }
+
+ static int hash_setkey(void *private, const u8 *key, unsigned int keylen)
+ {
+- return crypto_ahash_setkey(private, key, keylen);
++ struct algif_hash_tfm *tfm = private;
++ int err;
++
++ err = crypto_ahash_setkey(tfm->hash, key, keylen);
++ tfm->has_key = !err;
++
++ return err;
+ }
+
+ static void hash_sock_destruct(struct sock *sk)
+@@ -253,12 +400,14 @@ static void hash_sock_destruct(struct sock *sk)
+ af_alg_release_parent(sk);
+ }
+
+-static int hash_accept_parent(void *private, struct sock *sk)
++static int hash_accept_parent_nokey(void *private, struct sock *sk)
+ {
+ struct hash_ctx *ctx;
+ struct alg_sock *ask = alg_sk(sk);
+- unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(private);
+- unsigned ds = crypto_ahash_digestsize(private);
++ struct algif_hash_tfm *tfm = private;
++ struct crypto_ahash *hash = tfm->hash;
++ unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(hash);
++ unsigned ds = crypto_ahash_digestsize(hash);
+
+ ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ if (!ctx)
+@@ -278,7 +427,7 @@ static int hash_accept_parent(void *private, struct sock *sk)
+
+ ask->private = ctx;
+
+- ahash_request_set_tfm(&ctx->req, private);
++ ahash_request_set_tfm(&ctx->req, hash);
+ ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ af_alg_complete, &ctx->completion);
+
+@@ -287,12 +436,24 @@ static int hash_accept_parent(void *private, struct sock *sk)
+ return 0;
+ }
+
++static int hash_accept_parent(void *private, struct sock *sk)
++{
++ struct algif_hash_tfm *tfm = private;
++
++ if (!tfm->has_key && crypto_ahash_has_setkey(tfm->hash))
++ return -ENOKEY;
++
++ return hash_accept_parent_nokey(private, sk);
++}
++
+ static const struct af_alg_type algif_type_hash = {
+ .bind = hash_bind,
+ .release = hash_release,
+ .setkey = hash_setkey,
+ .accept = hash_accept_parent,
++ .accept_nokey = hash_accept_parent_nokey,
+ .ops = &algif_hash_ops,
++ .ops_nokey = &algif_hash_ops_nokey,
+ .name = "hash",
+ .owner = THIS_MODULE
+ };
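The ops_nokey table above is plain wrapper dispatch: every entry point first runs hash_check_key(), which promotes the socket to a keyed reference on the parent once the tfm actually has a key, and only then tails into the real handler. The same shape reappears in algif_skcipher below. A toy version of the gate:

#include <stdbool.h>
#include <stddef.h>
#include <errno.h>

/* toy stand-in for the alg_sock/tfm relationship */
struct nokey_sock_sketch {
	bool promoted;		/* roughly ask->refcnt != 0 */
	bool parent_has_key;	/* roughly tfm->has_key */
};

static int check_key_sketch(struct nokey_sock_sketch *s)
{
	if (s->promoted)
		return 0;		/* validated by an earlier call */
	if (!s->parent_has_key)
		return -ENOKEY;
	s->promoted = true;		/* key seen: switch to keyed state */
	return 0;
}

static int do_sendmsg_sketch(struct nokey_sock_sketch *s, const void *buf,
			     size_t len)
{
	return 0;			/* stub for the real handler */
}

static int sendmsg_nokey_sketch(struct nokey_sock_sketch *s, const void *buf,
				size_t len)
{
	int err = check_key_sketch(s);

	return err ? err : do_sendmsg_sketch(s, buf, len);
}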
+diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
+index af31a0ee4057..8ea54ac958ac 100644
+--- a/crypto/algif_skcipher.c
++++ b/crypto/algif_skcipher.c
+@@ -31,6 +31,11 @@ struct skcipher_sg_list {
+ struct scatterlist sg[0];
+ };
+
++struct skcipher_tfm {
++ struct crypto_skcipher *skcipher;
++ bool has_key;
++};
++
+ struct skcipher_ctx {
+ struct list_head tsgl;
+ struct af_alg_sgl rsgl;
+@@ -47,7 +52,7 @@ struct skcipher_ctx {
+ bool merge;
+ bool enc;
+
+- struct ablkcipher_request req;
++ struct skcipher_request req;
+ };
+
+ struct skcipher_async_rsgl {
+@@ -60,18 +65,10 @@ struct skcipher_async_req {
+ struct skcipher_async_rsgl first_sgl;
+ struct list_head list;
+ struct scatterlist *tsg;
+- char iv[];
++ atomic_t *inflight;
++ struct skcipher_request req;
+ };
+
+-#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
+- crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req)))
+-
+-#define GET_REQ_SIZE(ctx) \
+- crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req))
+-
+-#define GET_IV_SIZE(ctx) \
+- crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(&ctx->req))
+-
+ #define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
+ sizeof(struct scatterlist) - 1)
+
+@@ -97,15 +94,12 @@ static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
+
+ static void skcipher_async_cb(struct crypto_async_request *req, int err)
+ {
+- struct sock *sk = req->data;
+- struct alg_sock *ask = alg_sk(sk);
+- struct skcipher_ctx *ctx = ask->private;
+- struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
++ struct skcipher_async_req *sreq = req->data;
+ struct kiocb *iocb = sreq->iocb;
+
+- atomic_dec(&ctx->inflight);
++ atomic_dec(sreq->inflight);
+ skcipher_free_async_sgls(sreq);
+- kfree(req);
++ kzfree(sreq);
+ iocb->ki_complete(iocb, err, err);
+ }
+
+@@ -301,9 +295,12 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
+ {
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
++ struct sock *psk = ask->parent;
++ struct alg_sock *pask = alg_sk(psk);
+ struct skcipher_ctx *ctx = ask->private;
+- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(&ctx->req);
+- unsigned ivsize = crypto_ablkcipher_ivsize(tfm);
++ struct skcipher_tfm *skc = pask->private;
++ struct crypto_skcipher *tfm = skc->skcipher;
++ unsigned ivsize = crypto_skcipher_ivsize(tfm);
+ struct skcipher_sg_list *sgl;
+ struct af_alg_control con = {};
+ long copied = 0;
+@@ -387,7 +384,8 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
+
+ sgl = list_entry(ctx->tsgl.prev, struct skcipher_sg_list, list);
+ sg = sgl->sg;
+- sg_unmark_end(sg + sgl->cur);
++ if (sgl->cur)
++ sg_unmark_end(sg + sgl->cur - 1);
+ do {
+ i = sgl->cur;
+ plen = min_t(int, len, PAGE_SIZE);
+@@ -503,37 +501,43 @@ static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
+ {
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
++ struct sock *psk = ask->parent;
++ struct alg_sock *pask = alg_sk(psk);
+ struct skcipher_ctx *ctx = ask->private;
++ struct skcipher_tfm *skc = pask->private;
++ struct crypto_skcipher *tfm = skc->skcipher;
+ struct skcipher_sg_list *sgl;
+ struct scatterlist *sg;
+ struct skcipher_async_req *sreq;
+- struct ablkcipher_request *req;
++ struct skcipher_request *req;
+ struct skcipher_async_rsgl *last_rsgl = NULL;
+- unsigned int txbufs = 0, len = 0, tx_nents = skcipher_all_sg_nents(ctx);
+- unsigned int reqlen = sizeof(struct skcipher_async_req) +
+- GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
++ unsigned int txbufs = 0, len = 0, tx_nents;
++ unsigned int reqsize = crypto_skcipher_reqsize(tfm);
++ unsigned int ivsize = crypto_skcipher_ivsize(tfm);
+ int err = -ENOMEM;
+ bool mark = false;
++ char *iv;
+
+- lock_sock(sk);
+- req = kmalloc(reqlen, GFP_KERNEL);
+- if (unlikely(!req))
+- goto unlock;
++ sreq = kzalloc(sizeof(*sreq) + reqsize + ivsize, GFP_KERNEL);
++ if (unlikely(!sreq))
++ goto out;
+
+- sreq = GET_SREQ(req, ctx);
++ req = &sreq->req;
++ iv = (char *)(req + 1) + reqsize;
+ sreq->iocb = msg->msg_iocb;
+- memset(&sreq->first_sgl, '\0', sizeof(struct skcipher_async_rsgl));
+ INIT_LIST_HEAD(&sreq->list);
++ sreq->inflight = &ctx->inflight;
++
++ lock_sock(sk);
++ tx_nents = skcipher_all_sg_nents(ctx);
+ sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
+- if (unlikely(!sreq->tsg)) {
+- kfree(req);
++ if (unlikely(!sreq->tsg))
+ goto unlock;
+- }
+ sg_init_table(sreq->tsg, tx_nents);
+- memcpy(sreq->iv, ctx->iv, GET_IV_SIZE(ctx));
+- ablkcipher_request_set_tfm(req, crypto_ablkcipher_reqtfm(&ctx->req));
+- ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+- skcipher_async_cb, sk);
++ memcpy(iv, ctx->iv, ivsize);
++ skcipher_request_set_tfm(req, tfm);
++ skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
++ skcipher_async_cb, sreq);
+
+ while (iov_iter_count(&msg->msg_iter)) {
+ struct skcipher_async_rsgl *rsgl;
+@@ -608,21 +612,23 @@ static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
+ if (mark)
+ sg_mark_end(sreq->tsg + txbufs - 1);
+
+- ablkcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
+- len, sreq->iv);
+- err = ctx->enc ? crypto_ablkcipher_encrypt(req) :
+- crypto_ablkcipher_decrypt(req);
++ skcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
++ len, iv);
++ err = ctx->enc ? crypto_skcipher_encrypt(req) :
++ crypto_skcipher_decrypt(req);
+ if (err == -EINPROGRESS) {
+ atomic_inc(&ctx->inflight);
+ err = -EIOCBQUEUED;
++ sreq = NULL;
+ goto unlock;
+ }
+ free:
+ skcipher_free_async_sgls(sreq);
+- kfree(req);
+ unlock:
+ skcipher_wmem_wakeup(sk);
+ release_sock(sk);
++ kzfree(sreq);
++out:
+ return err;
+ }
+
+@@ -631,9 +637,12 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
+ {
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
++ struct sock *psk = ask->parent;
++ struct alg_sock *pask = alg_sk(psk);
+ struct skcipher_ctx *ctx = ask->private;
+- unsigned bs = crypto_ablkcipher_blocksize(crypto_ablkcipher_reqtfm(
+- &ctx->req));
++ struct skcipher_tfm *skc = pask->private;
++ struct crypto_skcipher *tfm = skc->skcipher;
++ unsigned bs = crypto_skcipher_blocksize(tfm);
+ struct skcipher_sg_list *sgl;
+ struct scatterlist *sg;
+ int err = -EAGAIN;
+@@ -642,13 +651,6 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
+
+ lock_sock(sk);
+ while (msg_data_left(msg)) {
+- sgl = list_first_entry(&ctx->tsgl,
+- struct skcipher_sg_list, list);
+- sg = sgl->sg;
+-
+- while (!sg->length)
+- sg++;
+-
+ if (!ctx->used) {
+ err = skcipher_wait_for_data(sk, flags);
+ if (err)
+@@ -669,14 +671,20 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
+ if (!used)
+ goto free;
+
+- ablkcipher_request_set_crypt(&ctx->req, sg,
+- ctx->rsgl.sg, used,
+- ctx->iv);
++ sgl = list_first_entry(&ctx->tsgl,
++ struct skcipher_sg_list, list);
++ sg = sgl->sg;
++
++ while (!sg->length)
++ sg++;
++
++ skcipher_request_set_crypt(&ctx->req, sg, ctx->rsgl.sg, used,
++ ctx->iv);
+
+ err = af_alg_wait_for_completion(
+ ctx->enc ?
+- crypto_ablkcipher_encrypt(&ctx->req) :
+- crypto_ablkcipher_decrypt(&ctx->req),
++ crypto_skcipher_encrypt(&ctx->req) :
++ crypto_skcipher_decrypt(&ctx->req),
+ &ctx->completion);
+
+ free:
+@@ -749,19 +757,139 @@ static struct proto_ops algif_skcipher_ops = {
+ .poll = skcipher_poll,
+ };
+
++static int skcipher_check_key(struct socket *sock)
++{
++ int err = 0;
++ struct sock *psk;
++ struct alg_sock *pask;
++ struct skcipher_tfm *tfm;
++ struct sock *sk = sock->sk;
++ struct alg_sock *ask = alg_sk(sk);
++
++ lock_sock(sk);
++ if (ask->refcnt)
++ goto unlock_child;
++
++ psk = ask->parent;
++ pask = alg_sk(ask->parent);
++ tfm = pask->private;
++
++ err = -ENOKEY;
++ lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
++ if (!tfm->has_key)
++ goto unlock;
++
++ if (!pask->refcnt++)
++ sock_hold(psk);
++
++ ask->refcnt = 1;
++ sock_put(psk);
++
++ err = 0;
++
++unlock:
++ release_sock(psk);
++unlock_child:
++ release_sock(sk);
++
++ return err;
++}
++
++static int skcipher_sendmsg_nokey(struct socket *sock, struct msghdr *msg,
++ size_t size)
++{
++ int err;
++
++ err = skcipher_check_key(sock);
++ if (err)
++ return err;
++
++ return skcipher_sendmsg(sock, msg, size);
++}
++
++static ssize_t skcipher_sendpage_nokey(struct socket *sock, struct page *page,
++ int offset, size_t size, int flags)
++{
++ int err;
++
++ err = skcipher_check_key(sock);
++ if (err)
++ return err;
++
++ return skcipher_sendpage(sock, page, offset, size, flags);
++}
++
++static int skcipher_recvmsg_nokey(struct socket *sock, struct msghdr *msg,
++ size_t ignored, int flags)
++{
++ int err;
++
++ err = skcipher_check_key(sock);
++ if (err)
++ return err;
++
++ return skcipher_recvmsg(sock, msg, ignored, flags);
++}
++
++static struct proto_ops algif_skcipher_ops_nokey = {
++ .family = PF_ALG,
++
++ .connect = sock_no_connect,
++ .socketpair = sock_no_socketpair,
++ .getname = sock_no_getname,
++ .ioctl = sock_no_ioctl,
++ .listen = sock_no_listen,
++ .shutdown = sock_no_shutdown,
++ .getsockopt = sock_no_getsockopt,
++ .mmap = sock_no_mmap,
++ .bind = sock_no_bind,
++ .accept = sock_no_accept,
++ .setsockopt = sock_no_setsockopt,
++
++ .release = af_alg_release,
++ .sendmsg = skcipher_sendmsg_nokey,
++ .sendpage = skcipher_sendpage_nokey,
++ .recvmsg = skcipher_recvmsg_nokey,
++ .poll = skcipher_poll,
++};
++
+ static void *skcipher_bind(const char *name, u32 type, u32 mask)
+ {
+- return crypto_alloc_ablkcipher(name, type, mask);
++ struct skcipher_tfm *tfm;
++ struct crypto_skcipher *skcipher;
++
++ tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
++ if (!tfm)
++ return ERR_PTR(-ENOMEM);
++
++ skcipher = crypto_alloc_skcipher(name, type, mask);
++ if (IS_ERR(skcipher)) {
++ kfree(tfm);
++ return ERR_CAST(skcipher);
++ }
++
++ tfm->skcipher = skcipher;
++
++ return tfm;
+ }
+
+ static void skcipher_release(void *private)
+ {
+- crypto_free_ablkcipher(private);
++ struct skcipher_tfm *tfm = private;
++
++ crypto_free_skcipher(tfm->skcipher);
++ kfree(tfm);
+ }
+
+ static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen)
+ {
+- return crypto_ablkcipher_setkey(private, key, keylen);
++ struct skcipher_tfm *tfm = private;
++ int err;
++
++ err = crypto_skcipher_setkey(tfm->skcipher, key, keylen);
++ tfm->has_key = !err;
++
++ return err;
+ }
+
+ static void skcipher_wait(struct sock *sk)
+@@ -778,35 +906,37 @@ static void skcipher_sock_destruct(struct sock *sk)
+ {
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(&ctx->req);
++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(&ctx->req);
+
+ if (atomic_read(&ctx->inflight))
+ skcipher_wait(sk);
+
+ skcipher_free_sgl(sk);
+- sock_kzfree_s(sk, ctx->iv, crypto_ablkcipher_ivsize(tfm));
++ sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
+ sock_kfree_s(sk, ctx, ctx->len);
+ af_alg_release_parent(sk);
+ }
+
+-static int skcipher_accept_parent(void *private, struct sock *sk)
++static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
+ {
+ struct skcipher_ctx *ctx;
+ struct alg_sock *ask = alg_sk(sk);
+- unsigned int len = sizeof(*ctx) + crypto_ablkcipher_reqsize(private);
++ struct skcipher_tfm *tfm = private;
++ struct crypto_skcipher *skcipher = tfm->skcipher;
++ unsigned int len = sizeof(*ctx) + crypto_skcipher_reqsize(skcipher);
+
+ ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+- ctx->iv = sock_kmalloc(sk, crypto_ablkcipher_ivsize(private),
++ ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(skcipher),
+ GFP_KERNEL);
+ if (!ctx->iv) {
+ sock_kfree_s(sk, ctx, len);
+ return -ENOMEM;
+ }
+
+- memset(ctx->iv, 0, crypto_ablkcipher_ivsize(private));
++ memset(ctx->iv, 0, crypto_skcipher_ivsize(skcipher));
+
+ INIT_LIST_HEAD(&ctx->tsgl);
+ ctx->len = len;
+@@ -819,21 +949,34 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
+
+ ask->private = ctx;
+
+- ablkcipher_request_set_tfm(&ctx->req, private);
+- ablkcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+- af_alg_complete, &ctx->completion);
++ skcipher_request_set_tfm(&ctx->req, skcipher);
++ skcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_SLEEP |
++ CRYPTO_TFM_REQ_MAY_BACKLOG,
++ af_alg_complete, &ctx->completion);
+
+ sk->sk_destruct = skcipher_sock_destruct;
+
+ return 0;
+ }
+
++static int skcipher_accept_parent(void *private, struct sock *sk)
++{
++ struct skcipher_tfm *tfm = private;
++
++ if (!tfm->has_key && crypto_skcipher_has_setkey(tfm->skcipher))
++ return -ENOKEY;
++
++ return skcipher_accept_parent_nokey(private, sk);
++}
++
+ static const struct af_alg_type algif_type_skcipher = {
+ .bind = skcipher_bind,
+ .release = skcipher_release,
+ .setkey = skcipher_setkey,
+ .accept = skcipher_accept_parent,
++ .accept_nokey = skcipher_accept_parent_nokey,
+ .ops = &algif_skcipher_ops,
++ .ops_nokey = &algif_skcipher_ops_nokey,
+ .name = "skcipher",
+ .owner = THIS_MODULE
+ };
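One detail of the async receive path worth calling out: the wrapper, the transform's request context (reqsize) and the IV copy (ivsize) now come from a single kzalloc, addressed by offset, so the completion callback can release everything with one kzfree(). A userspace sketch of the trailing-allocation layout:

#include <stdlib.h>
#include <string.h>

struct request_sketch { int placeholder; };	/* fixed-size header */

struct async_req_sketch {
	/* ... bookkeeping fields would live here ... */
	struct request_sketch req;	/* must stay last: tail follows */
};

/* layout: [async_req_sketch][reqsize bytes of ctx][ivsize bytes of IV] */
static struct async_req_sketch *alloc_async_sketch(size_t reqsize,
						   size_t ivsize,
						   const char *iv_src)
{
	struct async_req_sketch *sreq;
	char *iv;

	sreq = calloc(1, sizeof(*sreq) + reqsize + ivsize);
	if (!sreq)
		return NULL;

	iv = (char *)(&sreq->req + 1) + reqsize;	/* skip the ctx area */
	memcpy(iv, iv_src, ivsize);
	return sreq;					/* freed as one unit */
}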
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 11b981492031..8cc1622b2ee0 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -326,12 +326,12 @@ static int blkcipher_walk_first(struct blkcipher_desc *desc,
+ if (WARN_ON_ONCE(in_irq()))
+ return -EDEADLK;
+
++ walk->iv = desc->info;
+ walk->nbytes = walk->total;
+ if (unlikely(!walk->total))
+ return 0;
+
+ walk->buffer = NULL;
+- walk->iv = desc->info;
+ if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
+ int err = blkcipher_copy_iv(walk);
+ if (err)
+diff --git a/crypto/crc32c_generic.c b/crypto/crc32c_generic.c
+index 06f1b60f02b2..4c0a0e271876 100644
+--- a/crypto/crc32c_generic.c
++++ b/crypto/crc32c_generic.c
+@@ -172,4 +172,3 @@ MODULE_DESCRIPTION("CRC32c (Castagnoli) calculations wrapper for lib/crc32c");
+ MODULE_LICENSE("GPL");
+ MODULE_ALIAS_CRYPTO("crc32c");
+ MODULE_ALIAS_CRYPTO("crc32c-generic");
+-MODULE_SOFTDEP("pre: crc32c");
+diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
+index 237f3795cfaa..43fe85f20d57 100644
+--- a/crypto/crypto_user.c
++++ b/crypto/crypto_user.c
+@@ -499,6 +499,7 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ if (link->dump == NULL)
+ return -EINVAL;
+
++ down_read(&crypto_alg_sem);
+ list_for_each_entry(alg, &crypto_alg_list, cra_list)
+ dump_alloc += CRYPTO_REPORT_MAXSIZE;
+
+@@ -508,8 +509,11 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ .done = link->done,
+ .min_dump_alloc = dump_alloc,
+ };
+- return netlink_dump_start(crypto_nlsk, skb, nlh, &c);
++ err = netlink_dump_start(crypto_nlsk, skb, nlh, &c);
+ }
++ up_read(&crypto_alg_sem);
++
++ return err;
+ }
+
+ err = nlmsg_parse(nlh, crypto_msg_min[type], attrs, CRYPTOCFGA_MAX,
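The crypto_user fix widens the locked region: crypto_alg_sem is now held across both the sizing walk of crypto_alg_list and the netlink_dump_start() that consumes the computed size, so a registration racing in between cannot invalidate dump_alloc. The shape of the fix, as a userspace sketch:

#include <pthread.h>
#include <stddef.h>

#define REPORT_MAXSIZE 160	/* stand-in for CRYPTO_REPORT_MAXSIZE */

struct alg_sketch { struct alg_sketch *next; };

static struct alg_sketch *alg_list;
static pthread_rwlock_t alg_sem = PTHREAD_RWLOCK_INITIALIZER;

static size_t start_dump_sketch(void)
{
	size_t dump_alloc = 0;
	struct alg_sketch *a;

	pthread_rwlock_rdlock(&alg_sem);
	for (a = alg_list; a; a = a->next)
		dump_alloc += REPORT_MAXSIZE;
	/* ... start the dump with dump_alloc while still locked ... */
	pthread_rwlock_unlock(&alg_sem);

	return dump_alloc;
}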
+diff --git a/crypto/shash.c b/crypto/shash.c
+index ecb1e3d39bf0..359754591653 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -354,9 +354,10 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ crt->final = shash_async_final;
+ crt->finup = shash_async_finup;
+ crt->digest = shash_async_digest;
++ crt->setkey = shash_async_setkey;
++
++ crt->has_setkey = alg->setkey != shash_no_setkey;
+
+- if (alg->setkey)
+- crt->setkey = shash_async_setkey;
+ if (alg->export)
+ crt->export = shash_async_export;
+ if (alg->import)
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index dd5fc1bf6447..bb7d44d2c843 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -118,6 +118,7 @@ int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
+ skcipher->decrypt = skcipher_decrypt_blkcipher;
+
+ skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
++ skcipher->has_setkey = calg->cra_blkcipher.max_keysize;
+
+ return 0;
+ }
+@@ -210,6 +211,7 @@ int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
+ skcipher->ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+ skcipher->reqsize = crypto_ablkcipher_reqsize(ablkcipher) +
+ sizeof(struct ablkcipher_request);
++ skcipher->has_setkey = calg->cra_ablkcipher.max_keysize;
+
+ return 0;
+ }
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index 64b8a8082645..450f30e2c8e4 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -270,7 +270,7 @@ static struct akcipher_testvec rsa_tv_template[] = {
+ .c_size = 256,
+ }, {
+ .key =
+- "\x30\x82\x01\x09" /* sequence of 265 bytes */
++ "\x30\x82\x01\x0C" /* sequence of 268 bytes */
+ "\x02\x82\x01\x00" /* modulus - integer of 256 bytes */
+ "\xDB\x10\x1A\xC2\xA3\xF1\xDC\xFF\x13\x6B\xED\x44\xDF\xF0\x02\x6D"
+ "\x13\xC7\x88\xDA\x70\x6B\x54\xF1\xE8\x27\xDC\xC3\x0F\x99\x6A\xFA"
+@@ -288,8 +288,9 @@ static struct akcipher_testvec rsa_tv_template[] = {
+ "\x55\xE6\x29\x69\xD1\xC2\xE8\xB9\x78\x59\xF6\x79\x10\xC6\x4E\xEB"
+ "\x6A\x5E\xB9\x9A\xC7\xC4\x5B\x63\xDA\xA3\x3F\x5E\x92\x7A\x81\x5E"
+ "\xD6\xB0\xE2\x62\x8F\x74\x26\xC2\x0C\xD3\x9A\x17\x47\xE6\x8E\xAB"
+- "\x02\x03\x01\x00\x01", /* public key - integer of 3 bytes */
+- .key_len = 269,
++ "\x02\x03\x01\x00\x01" /* public key - integer of 3 bytes */
++ "\x02\x01\x00", /* private key - integer of 1 byte */
++ .key_len = 272,
+ .m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a",
+ .c =
+ "\xb2\x97\x76\xb4\xae\x3e\x38\x3c\x7e\x64\x1f\xcc\xa2\x7f\xf6\xbe"
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index a46660204e3a..bbd472cadc98 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -264,6 +264,26 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b2), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b3), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b4), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b5), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b6), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19b7), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19bE), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19bF), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c0), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c1), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c2), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c3), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c4), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c5), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c6), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19c7), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
++ { PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index d256a66158be..317f85dfd39a 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -495,8 +495,8 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
+ }
+ }
+
+- /* fabricate port_map from cap.nr_ports */
+- if (!port_map) {
++ /* fabricate port_map from cap.nr_ports for < AHCI 1.3 */
++ if (!port_map && vers < 0x10300) {
+ port_map = (1 << ahci_nr_ports(cap)) - 1;
+ dev_warn(dev, "forcing PORTS_IMPL to 0x%x\n", port_map);
+
+@@ -1266,6 +1266,15 @@ static int ahci_exec_polled_cmd(struct ata_port *ap, int pmp,
+ ata_tf_to_fis(tf, pmp, is_cmd, fis);
+ ahci_fill_cmd_slot(pp, 0, cmd_fis_len | flags | (pmp << 12));
+
++ /* set port value for softreset of Port Multiplier */
++ if (pp->fbs_enabled && pp->fbs_last_dev != pmp) {
++ tmp = readl(port_mmio + PORT_FBS);
++ tmp &= ~(PORT_FBS_DEV_MASK | PORT_FBS_DEC);
++ tmp |= pmp << PORT_FBS_DEV_OFFSET;
++ writel(tmp, port_mmio + PORT_FBS);
++ pp->fbs_last_dev = pmp;
++ }
++
+ /* issue & wait */
+ writel(1, port_mmio + PORT_CMD_ISSUE);
+
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 2804aed3f416..25425d3f2575 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -303,6 +303,10 @@ static int memory_subsys_offline(struct device *dev)
+ if (mem->state == MEM_OFFLINE)
+ return 0;
+
++ /* Can't offline block with non-present sections */
++ if (mem->section_count != sections_per_block)
++ return -EINVAL;
++
+ return memory_block_change_state(mem, MEM_OFFLINE, MEM_ONLINE);
+ }
+
+diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
+index 5cb13ca3a3ac..c53617752b93 100644
+--- a/drivers/block/zram/zcomp.c
++++ b/drivers/block/zram/zcomp.c
+@@ -76,7 +76,7 @@ static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm)
+ */
+ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
+ {
+- struct zcomp_strm *zstrm = kmalloc(sizeof(*zstrm), GFP_KERNEL);
++ struct zcomp_strm *zstrm = kmalloc(sizeof(*zstrm), GFP_NOIO);
+ if (!zstrm)
+ return NULL;
+
+@@ -85,7 +85,7 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
+ * allocate 2 pages. 1 for compressed data, plus 1 extra for the
+ * case when compressed size is larger than the original one
+ */
+- zstrm->buffer = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
++ zstrm->buffer = (void *)__get_free_pages(GFP_NOIO | __GFP_ZERO, 1);
+ if (!zstrm->private || !zstrm->buffer) {
+ zcomp_strm_free(comp, zstrm);
+ zstrm = NULL;
+diff --git a/drivers/block/zram/zcomp_lz4.c b/drivers/block/zram/zcomp_lz4.c
+index f2afb7e988c3..dd6083124276 100644
+--- a/drivers/block/zram/zcomp_lz4.c
++++ b/drivers/block/zram/zcomp_lz4.c
+@@ -10,17 +10,36 @@
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+ #include <linux/lz4.h>
++#include <linux/vmalloc.h>
++#include <linux/mm.h>
+
+ #include "zcomp_lz4.h"
+
+ static void *zcomp_lz4_create(void)
+ {
+- return kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);
++ void *ret;
++
++ /*
++ * This function can be called from the swapout/fs write path,
++ * so we can't use GFP_FS|IO. It also assumes zram initialization
++ * already allocated at least one stream, so we make no heroic
++ * effort to allocate more streams here: a single default stream
++ * works well enough without additional ones. That's why we use
++ * NORETRY | NOWARN.
++ */
++ ret = kzalloc(LZ4_MEM_COMPRESS, GFP_NOIO | __GFP_NORETRY |
++ __GFP_NOWARN);
++ if (!ret)
++ ret = __vmalloc(LZ4_MEM_COMPRESS,
++ GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN |
++ __GFP_ZERO | __GFP_HIGHMEM,
++ PAGE_KERNEL);
++ return ret;
+ }
+
+ static void zcomp_lz4_destroy(void *private)
+ {
+- kfree(private);
++ kvfree(private);
+ }
+
+ static int zcomp_lz4_compress(const unsigned char *src, unsigned char *dst,
+diff --git a/drivers/block/zram/zcomp_lzo.c b/drivers/block/zram/zcomp_lzo.c
+index da1bc47d588e..edc549920fa0 100644
+--- a/drivers/block/zram/zcomp_lzo.c
++++ b/drivers/block/zram/zcomp_lzo.c
+@@ -10,17 +10,36 @@
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+ #include <linux/lzo.h>
++#include <linux/vmalloc.h>
++#include <linux/mm.h>
+
+ #include "zcomp_lzo.h"
+
+ static void *lzo_create(void)
+ {
+- return kzalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL);
++ void *ret;
++
++ /*
++ * This function can be called from the swapout/fs write path,
++ * so we can't use GFP_FS|IO. It also assumes zram initialization
++ * already allocated at least one stream, so we make no heroic
++ * effort to allocate more streams here: a single default stream
++ * works well enough without additional ones. That's why we use
++ * NORETRY | NOWARN.
++ */
++ ret = kzalloc(LZO1X_MEM_COMPRESS, GFP_NOIO | __GFP_NORETRY |
++ __GFP_NOWARN);
++ if (!ret)
++ ret = __vmalloc(LZO1X_MEM_COMPRESS,
++ GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN |
++ __GFP_ZERO | __GFP_HIGHMEM,
++ PAGE_KERNEL);
++ return ret;
+ }
+
+ static void lzo_destroy(void *private)
+ {
+- kfree(private);
++ kvfree(private);
+ }
+
+ static int lzo_compress(const unsigned char *src, unsigned char *dst,
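The lz4 and lzo create paths now share one allocation shape: GFP_NOIO keeps reclaim from re-entering the I/O path these functions are called from, NORETRY|NOWARN makes the kmalloc attempt cheap to fail, and __vmalloc with the same flags handles fragmented memory. Condensed into a single helper (a sketch, not the literal patch code):

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *zcomp_workmem_alloc(size_t size)
{
	gfp_t flags = GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN;
	void *mem = kzalloc(size, flags);

	if (!mem)
		mem = __vmalloc(size, flags | __GFP_ZERO | __GFP_HIGHMEM,
				PAGE_KERNEL);
	return mem;	/* pairs with kvfree() in the destroy path */
}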
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 9fa15bb9d118..8022c0a96b36 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1324,7 +1324,6 @@ static int zram_remove(struct zram *zram)
+
+ pr_info("Removed device: %s\n", zram->disk->disk_name);
+
+- idr_remove(&zram_index_idr, zram->disk->first_minor);
+ blk_cleanup_queue(zram->disk->queue);
+ del_gendisk(zram->disk);
+ put_disk(zram->disk);
+@@ -1366,10 +1365,12 @@ static ssize_t hot_remove_store(struct class *class,
+ mutex_lock(&zram_index_mutex);
+
+ zram = idr_find(&zram_index_idr, dev_id);
+- if (zram)
++ if (zram) {
+ ret = zram_remove(zram);
+- else
++ idr_remove(&zram_index_idr, dev_id);
++ } else {
+ ret = -ENODEV;
++ }
+
+ mutex_unlock(&zram_index_mutex);
+ return ret ? ret : count;
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 1082d4bb016a..0f8623d88b84 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -231,7 +231,7 @@ int tpm_chip_register(struct tpm_chip *chip)
+
+ /* Make the chip available. */
+ spin_lock(&driver_lock);
+- list_add_rcu(&chip->list, &tpm_chip_list);
++ list_add_tail_rcu(&chip->list, &tpm_chip_list);
+ spin_unlock(&driver_lock);
+
+ chip->flags |= TPM_CHIP_FLAG_REGISTERED;
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 2b971b3e5c1c..4bb9727c1047 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -68,7 +68,8 @@ struct crb_control_area {
+ u32 int_enable;
+ u32 int_sts;
+ u32 cmd_size;
+- u64 cmd_pa;
++ u32 cmd_pa_low;
++ u32 cmd_pa_high;
+ u32 rsp_size;
+ u64 rsp_pa;
+ } __packed;
+@@ -263,8 +264,8 @@ static int crb_acpi_add(struct acpi_device *device)
+ return -ENOMEM;
+ }
+
+- memcpy_fromio(&pa, &priv->cca->cmd_pa, 8);
+- pa = le64_to_cpu(pa);
++ pa = ((u64) le32_to_cpu(ioread32(&priv->cca->cmd_pa_high)) << 32) |
++ (u64) le32_to_cpu(ioread32(&priv->cca->cmd_pa_low));
+ priv->cmd = devm_ioremap_nocache(dev, pa,
+ ioread32(&priv->cca->cmd_size));
+ if (!priv->cmd) {
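The control-area change splits the 64-bit cmd_pa field into explicit low/high halves because some chipsets do not decode a single 8-byte MMIO access to that register; the address is reassembled from two 32-bit reads instead. The general pattern, roughly:

#include <linux/io.h>
#include <linux/types.h>

/* read a 64-bit register exposed as two 32-bit halves, never
 * issuing the single 8-byte access the device may not decode */
static u64 read_split_u64(void __iomem *lo, void __iomem *hi)
{
	return ((u64)ioread32(hi) << 32) | ioread32(lo);
}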
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 27ebf9511cb4..3e6a22658b63 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -491,7 +491,7 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
+ }
+ ibmvtpm->rtce_size = be16_to_cpu(crq->len);
+ ibmvtpm->rtce_buf = kmalloc(ibmvtpm->rtce_size,
+- GFP_KERNEL);
++ GFP_ATOMIC);
+ if (!ibmvtpm->rtce_buf) {
+ dev_err(ibmvtpm->dev, "Failed to allocate memory for rtce buffer\n");
+ return;
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 696ef1d56b4f..19f9c7dc7bc0 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -805,6 +805,8 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
+ iowrite32(intmask,
+ chip->vendor.iobase +
+ TPM_INT_ENABLE(chip->vendor.locality));
++
++ devm_free_irq(dev, i, chip);
+ }
+ }
+ if (chip->vendor.irq) {
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index 94433b9fc200..3b70354b0d4b 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -829,7 +829,7 @@ static int ahash_update_ctx(struct ahash_request *req)
+ state->buf_dma = try_buf_map_to_sec4_sg(jrdev,
+ edesc->sec4_sg + 1,
+ buf, state->buf_dma,
+- *next_buflen, *buflen);
++ *buflen, last_buflen);
+
+ if (src_nents) {
+ src_map_to_sec4_sg(jrdev, req->src, src_nents,
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 8abb4bc548cc..69d4a1326fee 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -534,8 +534,8 @@ static int caam_probe(struct platform_device *pdev)
+ * long pointers in master configuration register
+ */
+ clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK, MCFGR_AWCACHE_CACH |
+- MCFGR_WDENABLE | (sizeof(dma_addr_t) == sizeof(u64) ?
+- MCFGR_LONG_PTR : 0));
++ MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE |
++ (sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0));
+
+ /*
+ * Read the Compile Time paramters and SCFGR to determine
+diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
+index 0643e3366e33..c0656e7f37b5 100644
+--- a/drivers/crypto/marvell/cesa.c
++++ b/drivers/crypto/marvell/cesa.c
+@@ -306,7 +306,7 @@ static int mv_cesa_dev_dma_init(struct mv_cesa_dev *cesa)
+ return -ENOMEM;
+
+ dma->padding_pool = dmam_pool_create("cesa_padding", dev, 72, 1, 0);
+- if (!dma->cache_pool)
++ if (!dma->padding_pool)
+ return -ENOMEM;
+
+ cesa->dma = dma;
+diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c
+index 73ef49922788..7038f364acb5 100644
+--- a/drivers/crypto/nx/nx-aes-ccm.c
++++ b/drivers/crypto/nx/nx-aes-ccm.c
+@@ -409,7 +409,7 @@ static int ccm_nx_decrypt(struct aead_request *req,
+ processed += to_process;
+ } while (processed < nbytes);
+
+- rc = memcmp(csbcpb->cpb.aes_ccm.out_pat_or_mac, priv->oauth_tag,
++ rc = crypto_memneq(csbcpb->cpb.aes_ccm.out_pat_or_mac, priv->oauth_tag,
+ authsize) ? -EBADMSG : 0;
+ out:
+ spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
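This memcmp() to crypto_memneq() conversion, repeated in the nx-gcm and talitos hunks below, matters because memcmp() returns at the first differing byte: its timing leaks how long a correct prefix of a forged authentication tag is. A constant-time comparison touches every byte regardless; conceptually (not the kernel's exact implementation):

#include <stddef.h>

static int memneq_sketch(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;

	while (n--)
		diff |= *pa++ ^ *pb++;	/* accumulate, never branch early */

	return diff;	/* 0 iff equal; runtime independent of contents */
}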
+diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
+index eee624f589b6..abd465f479c4 100644
+--- a/drivers/crypto/nx/nx-aes-gcm.c
++++ b/drivers/crypto/nx/nx-aes-gcm.c
+@@ -21,6 +21,7 @@
+
+ #include <crypto/internal/aead.h>
+ #include <crypto/aes.h>
++#include <crypto/algapi.h>
+ #include <crypto/scatterwalk.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+@@ -418,7 +419,7 @@ mac:
+ itag, req->src, req->assoclen + nbytes,
+ crypto_aead_authsize(crypto_aead_reqtfm(req)),
+ SCATTERWALK_FROM_SG);
+- rc = memcmp(itag, otag,
++ rc = crypto_memneq(itag, otag,
+ crypto_aead_authsize(crypto_aead_reqtfm(req))) ?
+ -EBADMSG : 0;
+ }
+diff --git a/drivers/crypto/qat/qat_common/adf_ctl_drv.c b/drivers/crypto/qat/qat_common/adf_ctl_drv.c
+index cd8a12af8ec5..35bada05608a 100644
+--- a/drivers/crypto/qat/qat_common/adf_ctl_drv.c
++++ b/drivers/crypto/qat/qat_common/adf_ctl_drv.c
+@@ -198,7 +198,7 @@ static int adf_copy_key_value_data(struct adf_accel_dev *accel_dev,
+ goto out_err;
+ }
+
+- params_head = section_head->params;
++ params_head = section.params;
+
+ while (params_head) {
+ if (copy_from_user(&key_val, (void __user *)params_head,
+diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-core.c b/drivers/crypto/sunxi-ss/sun4i-ss-core.c
+index eab6fe227fa0..107cd2a41cae 100644
+--- a/drivers/crypto/sunxi-ss/sun4i-ss-core.c
++++ b/drivers/crypto/sunxi-ss/sun4i-ss-core.c
+@@ -39,6 +39,7 @@ static struct sun4i_ss_alg_template ss_algs[] = {
+ .import = sun4i_hash_import_md5,
+ .halg = {
+ .digestsize = MD5_DIGEST_SIZE,
++ .statesize = sizeof(struct md5_state),
+ .base = {
+ .cra_name = "md5",
+ .cra_driver_name = "md5-sun4i-ss",
+@@ -66,6 +67,7 @@ static struct sun4i_ss_alg_template ss_algs[] = {
+ .import = sun4i_hash_import_sha1,
+ .halg = {
+ .digestsize = SHA1_DIGEST_SIZE,
++ .statesize = sizeof(struct sha1_state),
+ .base = {
+ .cra_name = "sha1",
+ .cra_driver_name = "sha1-sun4i-ss",
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 3b20a1bce703..8b327d89a8fc 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -1015,7 +1015,7 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev,
+ } else
+ oicv = (char *)&edesc->link_tbl[0];
+
+- err = memcmp(oicv, icv, authsize) ? -EBADMSG : 0;
++ err = crypto_memneq(oicv, icv, authsize) ? -EBADMSG : 0;
+ }
+
+ kfree(edesc);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+index 27a79c0c3888..d95eb8659d1b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+@@ -28,7 +28,7 @@
+ void
+ nvkm_pmu_pgob(struct nvkm_pmu *pmu, bool enable)
+ {
+- if (pmu->func->pgob)
++ if (pmu && pmu->func->pgob)
+ pmu->func->pgob(pmu, enable);
+ }
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 426b2f1a3450..33dfcea5fbc9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -309,6 +309,41 @@ static struct attribute_group mt_attribute_group = {
+ .attrs = sysfs_attrs
+ };
+
++static void mt_get_feature(struct hid_device *hdev, struct hid_report *report)
++{
++ struct mt_device *td = hid_get_drvdata(hdev);
++ int ret, size = hid_report_len(report);
++ u8 *buf;
++
++ /*
++ * Only fetch the feature report if initial reports have not already
++ * been retrieved. Currently this is only done for Windows 8 touch
++ * devices.
++ */
++ if (!(hdev->quirks & HID_QUIRK_NO_INIT_REPORTS))
++ return;
++ if (td->mtclass.name != MT_CLS_WIN_8)
++ return;
++
++ buf = hid_alloc_report_buf(report, GFP_KERNEL);
++ if (!buf)
++ return;
++
++ ret = hid_hw_raw_request(hdev, report->id, buf, size,
++ HID_FEATURE_REPORT, HID_REQ_GET_REPORT);
++ if (ret < 0) {
++ dev_warn(&hdev->dev, "failed to fetch feature %d\n",
++ report->id);
++ } else {
++ ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf,
++ size, 0);
++ if (ret)
++ dev_warn(&hdev->dev, "failed to report feature\n");
++ }
++
++ kfree(buf);
++}
++
+ static void mt_feature_mapping(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage)
+ {
+@@ -322,11 +357,24 @@ static void mt_feature_mapping(struct hid_device *hdev,
+ break;
+ }
+
+- td->inputmode = field->report->id;
+- td->inputmode_index = usage->usage_index;
++ if (td->inputmode < 0) {
++ td->inputmode = field->report->id;
++ td->inputmode_index = usage->usage_index;
++ } else {
++ /*
++ * Some elan panels wrongly declare 2 input mode
++ * features, and silently ignore when we set the
++ * value in the second field. Skip the second feature
++ * and hope for the best.
++ */
++ dev_info(&hdev->dev,
++ "Ignoring the extra HID_DG_INPUTMODE\n");
++ }
+
+ break;
+ case HID_DG_CONTACTMAX:
++ mt_get_feature(hdev, field->report);
++
+ td->maxcontact_report_id = field->report->id;
+ td->maxcontacts = field->value[0];
+ if (!td->maxcontacts &&
+@@ -343,6 +391,7 @@ static void mt_feature_mapping(struct hid_device *hdev,
+ break;
+ }
+
++ mt_get_feature(hdev, field->report);
+ if (field->value[usage->usage_index] == MT_BUTTONTYPE_CLICKPAD)
+ td->is_buttonpad = true;
+
+@@ -1026,8 +1075,13 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ * reports. Fortunately, the Win8 spec says that all touches
+ * should be sent during each report, making the initialization
+ * of input reports unnecessary.
++ *
++ * In addition, some touchpads do not behave well if we read
++ * all feature reports from them. Instead we prevent
++ * initial report fetching and then selectively fetch each
++ * report we are interested in.
+ */
+- hdev->quirks |= HID_QUIRK_NO_INIT_INPUT_REPORTS;
++ hdev->quirks |= HID_QUIRK_NO_INIT_REPORTS;
+
+ td = devm_kzalloc(&hdev->dev, sizeof(struct mt_device), GFP_KERNEL);
+ if (!td) {
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 36712e9f56c2..5dd426fee8cc 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -477,8 +477,6 @@ static void hid_ctrl(struct urb *urb)
+ struct usbhid_device *usbhid = hid->driver_data;
+ int unplug = 0, status = urb->status;
+
+- spin_lock(&usbhid->lock);
+-
+ switch (status) {
+ case 0: /* success */
+ if (usbhid->ctrl[usbhid->ctrltail].dir == USB_DIR_IN)
+@@ -498,6 +496,8 @@ static void hid_ctrl(struct urb *urb)
+ hid_warn(urb->dev, "ctrl urb status %d received\n", status);
+ }
+
++ spin_lock(&usbhid->lock);
++
+ if (unplug) {
+ usbhid->ctrltail = usbhid->ctrlhead;
+ } else {
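The hid_ctrl() change is lock-scope narrowing: the URB status switch (which for IN transfers hands the data on for report processing) now runs before the spinlock is taken, and the lock only covers the control-queue bookkeeping it actually protects. A userspace sketch of the shape:

#include <pthread.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int ctrl_head, ctrl_tail;

static void on_ctrl_complete(int status)
{
	int unplug = 0;

	switch (status) {	/* classification: no lock needed */
	case 0:
		break;		/* success; data handling happens here */
	case -19:		/* device went away (-ENODEV) */
		unplug = 1;
		break;
	default:
		break;
	}

	pthread_mutex_lock(&queue_lock);	/* lock only the queue state */
	if (unplug)
		ctrl_tail = ctrl_head;
	pthread_mutex_unlock(&queue_lock);
}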
+diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c
+index 1c758cd1e1ba..10835d1f559b 100644
+--- a/drivers/i2c/busses/i2c-at91.c
++++ b/drivers/i2c/busses/i2c-at91.c
+@@ -347,8 +347,14 @@ error:
+
+ static void at91_twi_read_next_byte(struct at91_twi_dev *dev)
+ {
+- if (!dev->buf_len)
++ /*
++ * If we get here, it means there is garbage data in the RHR, so
++ * discard it.
++ */
++ if (!dev->buf_len) {
++ at91_twi_read(dev, AT91_TWI_RHR);
+ return;
++ }
+
+ /* 8bit read works with and without FIFO */
+ *dev->buf = readb_relaxed(dev->base + AT91_TWI_RHR);
+@@ -465,19 +471,73 @@ static irqreturn_t atmel_twi_interrupt(int irq, void *dev_id)
+
+ if (!irqstatus)
+ return IRQ_NONE;
+- else if (irqstatus & AT91_TWI_RXRDY)
++ /*
++ * In reception, the behavior of the twi device (before sama5d2) is
++ * weird. There is some magic about the RXRDY flag! When a byte has
++ * been almost received, the reception of a new one is anticipated if
++ * there is no stop command to send. That is why we ask to send the
++ * stop command not on the last byte but on the second-to-last one.
++ *
++ * Unfortunately, we could still have the RXRDY flag set even if the
++ * transfer is done and we have read the last byte. It might happen
++ * when the i2c slave device sends data too quickly after receiving
++ * the ack from the master. The data has been almost received before
++ * the order to send stop arrives. In this case, sending the stop
++ * command could cause a RXRDY interrupt along with a TXCOMP one. It
++ * is better to handle the RXRDY interrupt first so as not to keep
++ * garbage data in the Receive Holding Register for the next transfer.
++ */
++ if (irqstatus & AT91_TWI_RXRDY)
+ at91_twi_read_next_byte(dev);
+- else if (irqstatus & AT91_TWI_TXRDY)
+- at91_twi_write_next_byte(dev);
+-
+- /* catch error flags */
+- dev->transfer_status |= status;
+
++ /*
++ * When a NACK condition is detected, the I2C controller sets the NACK,
++ * TXCOMP and TXRDY bits all together in the Status Register (SR).
++ *
++ * 1 - Handling NACK errors with CPU write transfer.
++ *
++ * In such case, we should not write the next byte into the Transmit
++ * Holding Register (THR) otherwise the I2C controller would start a new
++ * transfer and the I2C slave is likely to reply by another NACK.
++ *
++ * 2 - Handling NACK errors with DMA write transfer.
++ *
++ * By setting the TXRDY bit in the SR, the I2C controller also triggers
++ * the DMA controller to write the next data into the THR. Then the
++ * result depends on the hardware version of the I2C controller.
++ *
++ * 2a - Without support of the Alternative Command mode.
++ *
++ * This is the worst case: the DMA controller is triggered to write the
++ * next data into the THR, hence starting a new transfer: the I2C slave
++ * is likely to reply by another NACK.
++ * Concurrently, this interrupt handler is likely to be called to manage
++ * the first NACK before the I2C controller detects the second NACK and
++ * sets once again the NACK bit into the SR.
++ * When handling the first NACK, this interrupt handler disables the I2C
++ * controller interruptions, especially the NACK interrupt.
++ * Hence, the NACK bit is left pending in the SR. This is why we should
++ * read the SR to clear all pending interrupts at the beginning of
++ * at91_do_twi_transfer() before actually starting a new transfer.
++ *
++ * 2b - With support of the Alternative Command mode.
++ *
++ * When a NACK condition is detected, the I2C controller also locks the
++ * THR (and sets the LOCK bit in the SR): even though the DMA controller
++ * is triggered by the TXRDY bit to write the next data into the THR,
++ * this data actually won't go on the I2C bus hence a second NACK is not
++ * generated.
++ */
+ if (irqstatus & (AT91_TWI_TXCOMP | AT91_TWI_NACK)) {
+ at91_disable_twi_interrupts(dev);
+ complete(&dev->cmd_complete);
++ } else if (irqstatus & AT91_TWI_TXRDY) {
++ at91_twi_write_next_byte(dev);
+ }
+
++ /* catch error flags */
++ dev->transfer_status |= status;
++
+ return IRQ_HANDLED;
+ }
+
+@@ -537,6 +597,9 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
+ reinit_completion(&dev->cmd_complete);
+ dev->transfer_status = 0;
+
++ /* Clear pending interrupts, such as NACK. */
++ at91_twi_read(dev, AT91_TWI_SR);
++
+ if (dev->fifo_size) {
+ unsigned fifo_mr = at91_twi_read(dev, AT91_TWI_FMR);
+
+@@ -558,11 +621,6 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
+ } else if (dev->msg->flags & I2C_M_RD) {
+ unsigned start_flags = AT91_TWI_START;
+
+- if (at91_twi_read(dev, AT91_TWI_SR) & AT91_TWI_RXRDY) {
+- dev_err(dev->dev, "RXRDY still set!");
+- at91_twi_read(dev, AT91_TWI_RHR);
+- }
+-
+ /* if only one byte is to be read, immediately stop transfer */
+ if (!has_alt_cmd && dev->buf_len <= 1 &&
+ !(dev->msg->flags & I2C_M_RECV_LEN))
+diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c
+index 5801227b97ab..43207f52e5a3 100644
+--- a/drivers/i2c/busses/i2c-mv64xxx.c
++++ b/drivers/i2c/busses/i2c-mv64xxx.c
+@@ -146,6 +146,8 @@ struct mv64xxx_i2c_data {
+ bool errata_delay;
+ struct reset_control *rstc;
+ bool irq_clear_inverted;
++ /* Clk div is 2 to the power n, not 2 to the power n + 1 */
++ bool clk_n_base_0;
+ };
+
+ static struct mv64xxx_i2c_regs mv64xxx_i2c_regs_mv64xxx = {
+@@ -757,25 +759,29 @@ MODULE_DEVICE_TABLE(of, mv64xxx_i2c_of_match_table);
+ #ifdef CONFIG_OF
+ #ifdef CONFIG_HAVE_CLK
+ static int
+-mv64xxx_calc_freq(const int tclk, const int n, const int m)
++mv64xxx_calc_freq(struct mv64xxx_i2c_data *drv_data,
++ const int tclk, const int n, const int m)
+ {
+- return tclk / (10 * (m + 1) * (2 << n));
++ if (drv_data->clk_n_base_0)
++ return tclk / (10 * (m + 1) * (1 << n));
++ else
++ return tclk / (10 * (m + 1) * (2 << n));
+ }
+
+ static bool
+-mv64xxx_find_baud_factors(const u32 req_freq, const u32 tclk, u32 *best_n,
+- u32 *best_m)
++mv64xxx_find_baud_factors(struct mv64xxx_i2c_data *drv_data,
++ const u32 req_freq, const u32 tclk)
+ {
+ int freq, delta, best_delta = INT_MAX;
+ int m, n;
+
+ for (n = 0; n <= 7; n++)
+ for (m = 0; m <= 15; m++) {
+- freq = mv64xxx_calc_freq(tclk, n, m);
++ freq = mv64xxx_calc_freq(drv_data, tclk, n, m);
+ delta = req_freq - freq;
+ if (delta >= 0 && delta < best_delta) {
+- *best_m = m;
+- *best_n = n;
++ drv_data->freq_m = m;
++ drv_data->freq_n = n;
+ best_delta = delta;
+ }
+ if (best_delta == 0)
+@@ -813,8 +819,11 @@ mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data,
+ if (of_property_read_u32(np, "clock-frequency", &bus_freq))
+ bus_freq = 100000; /* 100kHz by default */
+
+- if (!mv64xxx_find_baud_factors(bus_freq, tclk,
+- &drv_data->freq_n, &drv_data->freq_m)) {
++ if (of_device_is_compatible(np, "allwinner,sun4i-a10-i2c") ||
++ of_device_is_compatible(np, "allwinner,sun6i-a31-i2c"))
++ drv_data->clk_n_base_0 = true;
++
++ if (!mv64xxx_find_baud_factors(drv_data, bus_freq, tclk)) {
+ rc = -EINVAL;
+ goto out;
+ }
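The mv64xxx change boils down to one formula: the Marvell IP divides tclk by 10*(m+1)*2^(n+1), while the Allwinner variants (clk_n_base_0) divide by 10*(m+1)*2^n. A self-contained version of the divisor search:

#include <limits.h>

static int calc_freq_sketch(int tclk, int n, int m, int n_base_0)
{
	/* Allwinner: divisor 2^n, i.e. (1 << n); Marvell: 2^(n+1) */
	return n_base_0 ? tclk / (10 * (m + 1) * (1 << n))
			: tclk / (10 * (m + 1) * (2 << n));
}

/* exhaustive search of the 3-bit n and 4-bit m register fields for
 * the closest frequency that does not exceed the requested one */
static int find_baud_factors_sketch(int tclk, int req, int n_base_0,
				    int *best_n, int *best_m)
{
	int n, m, best_delta = INT_MAX;

	for (n = 0; n <= 7; n++)
		for (m = 0; m <= 15; m++) {
			int delta = req - calc_freq_sketch(tclk, n, m,
							   n_base_0);

			if (delta >= 0 && delta < best_delta) {
				best_delta = delta;
				*best_n = n;
				*best_m = m;
			}
		}

	return best_delta != INT_MAX;
}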
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index d8b5a8fee1e6..3191dd9984dc 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -575,7 +575,7 @@ static int rcar_reg_slave(struct i2c_client *slave)
+ if (slave->flags & I2C_CLIENT_TEN)
+ return -EAFNOSUPPORT;
+
+- pm_runtime_forbid(rcar_i2c_priv_to_dev(priv));
++ pm_runtime_get_sync(rcar_i2c_priv_to_dev(priv));
+
+ priv->slave = slave;
+ rcar_i2c_write(priv, ICSAR, slave->addr);
+@@ -597,7 +597,7 @@ static int rcar_unreg_slave(struct i2c_client *slave)
+
+ priv->slave = NULL;
+
+- pm_runtime_allow(rcar_i2c_priv_to_dev(priv));
++ pm_runtime_put(rcar_i2c_priv_to_dev(priv));
+
+ return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 72e97e306bd9..9c4efd308f10 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -907,7 +907,7 @@ static int rk3x_i2c_probe(struct platform_device *pdev)
+ &i2c->scl_fall_ns))
+ i2c->scl_fall_ns = 300;
+ if (of_property_read_u32(pdev->dev.of_node, "i2c-sda-falling-time-ns",
+- &i2c->scl_fall_ns))
++ &i2c->sda_fall_ns))
+ i2c->sda_fall_ns = i2c->scl_fall_ns;
+
+ strlcpy(i2c->adap.name, "rk3x-i2c", sizeof(i2c->adap.name));
+diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
+index a59c3111f7fb..a4347ba78a51 100644
+--- a/drivers/i2c/i2c-core.c
++++ b/drivers/i2c/i2c-core.c
+@@ -679,7 +679,7 @@ static int i2c_device_probe(struct device *dev)
+ if (wakeirq > 0 && wakeirq != client->irq)
+ status = dev_pm_set_dedicated_wake_irq(dev, wakeirq);
+ else if (client->irq > 0)
+- status = dev_pm_set_wake_irq(dev, wakeirq);
++ status = dev_pm_set_wake_irq(dev, client->irq);
+ else
+ status = 0;
+
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 7df97777662d..dad768caa9c5 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -405,17 +405,18 @@ static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
+ arm_lpae_iopte *start, *end;
+ unsigned long table_size;
+
+- /* Only leaf entries at the last level */
+- if (lvl == ARM_LPAE_MAX_LEVELS - 1)
+- return;
+-
+ if (lvl == ARM_LPAE_START_LVL(data))
+ table_size = data->pgd_size;
+ else
+ table_size = 1UL << data->pg_shift;
+
+ start = ptep;
+- end = (void *)ptep + table_size;
++
++ /* Only leaf entries at the last level */
++ if (lvl == ARM_LPAE_MAX_LEVELS - 1)
++ end = ptep;
++ else
++ end = (void *)ptep + table_size;
+
+ while (ptep != end) {
+ arm_lpae_iopte pte = *ptep++;
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 5a67671a3973..bdc96cd838b8 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1569,11 +1569,8 @@ static int multipath_ioctl(struct dm_target *ti, unsigned int cmd,
+ /*
+ * Only pass ioctls through if the device sizes match exactly.
+ */
+- if (!bdev || ti->len != i_size_read(bdev->bd_inode) >> SECTOR_SHIFT) {
+- int err = scsi_verify_blk_ioctl(NULL, cmd);
+- if (err)
+- r = err;
+- }
++ if (!r && ti->len != i_size_read(bdev->bd_inode) >> SECTOR_SHIFT)
++ r = scsi_verify_blk_ioctl(NULL, cmd);
+
+ if (r == -ENOTCONN && !fatal_signal_pending(current)) {
+ spin_lock_irqsave(&m->lock, flags);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 1b5c6047e4f1..8af4750f20bf 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2198,6 +2198,13 @@ static void dm_init_md_queue(struct mapped_device *md)
+ * This queue is new, so no concurrency on the queue_flags.
+ */
+ queue_flag_clear_unlocked(QUEUE_FLAG_STACKABLE, md->queue);
++
++ /*
++ * Initialize data that will only be used by a non-blk-mq DM queue
++ * - must do so here (in alloc_dev callchain) before queue is used
++ */
++ md->queue->queuedata = md;
++ md->queue->backing_dev_info.congested_data = md;
+ }
+
+ static void dm_init_old_md_queue(struct mapped_device *md)
+@@ -2208,10 +2215,7 @@ static void dm_init_old_md_queue(struct mapped_device *md)
+ /*
+ * Initialize aspects of queue that aren't relevant for blk-mq
+ */
+- md->queue->queuedata = md;
+ md->queue->backing_dev_info.congested_fn = dm_any_congested;
+- md->queue->backing_dev_info.congested_data = md;
+-
+ blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
+ }
+
+diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
+index 0e09aef43998..88c287db3bde 100644
+--- a/drivers/md/persistent-data/dm-btree.c
++++ b/drivers/md/persistent-data/dm-btree.c
+@@ -471,8 +471,10 @@ static int btree_split_sibling(struct shadow_spine *s, unsigned parent_index,
+
+ r = insert_at(sizeof(__le64), pn, parent_index + 1,
+ le64_to_cpu(rn->keys[0]), &location);
+- if (r)
++ if (r) {
++ unlock_block(s->info, right);
+ return r;
++ }
+
+ if (key < le64_to_cpu(rn->keys[0])) {
+ unlock_block(s->info, right);
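The btree fix is the classic missing-unlock-on-error-path bug: once the sibling block is locked, every return, including the newly handled insert_at() failure, has to drop that lock. Reduced to its shape (userspace sketch with a stubbed insert):

#include <pthread.h>

static pthread_mutex_t right_lock = PTHREAD_MUTEX_INITIALIZER;

static int insert_at_sketch(void)
{
	return 0;	/* stub standing in for the fallible insert */
}

static int split_sibling_sketch(void)
{
	int r;

	pthread_mutex_lock(&right_lock);

	r = insert_at_sketch();
	if (r) {
		pthread_mutex_unlock(&right_lock);	/* the missing unlock */
		return r;
	}

	pthread_mutex_unlock(&right_lock);
	return 0;
}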
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 96f365968306..23bbe61f9ac0 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1944,6 +1944,8 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+
+ first = i;
+ fbio = r10_bio->devs[i].bio;
++ fbio->bi_iter.bi_size = r10_bio->sectors << 9;
++ fbio->bi_iter.bi_idx = 0;
+
+ vcnt = (r10_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9);
+ /* now find blocks with errors */
+@@ -1987,7 +1989,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ bio_reset(tbio);
+
+ tbio->bi_vcnt = vcnt;
+- tbio->bi_iter.bi_size = r10_bio->sectors << 9;
++ tbio->bi_iter.bi_size = fbio->bi_iter.bi_size;
+ tbio->bi_rw = WRITE;
+ tbio->bi_private = r10_bio;
+ tbio->bi_iter.bi_sector = r10_bio->devs[i].addr;
+diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
+index 728d2cc8a3e7..175a76114953 100644
+--- a/drivers/media/i2c/ir-kbd-i2c.c
++++ b/drivers/media/i2c/ir-kbd-i2c.c
+@@ -478,7 +478,6 @@ static const struct i2c_device_id ir_kbd_id[] = {
+ { "ir_rx_z8f0811_hdpvr", 0 },
+ { }
+ };
+-MODULE_DEVICE_TABLE(i2c, ir_kbd_id);
+
+ static struct i2c_driver ir_kbd_driver = {
+ .driver = {
+diff --git a/drivers/media/pci/ivtv/ivtv-driver.c b/drivers/media/pci/ivtv/ivtv-driver.c
+index 8616fa8193bc..c2e60b4f292d 100644
+--- a/drivers/media/pci/ivtv/ivtv-driver.c
++++ b/drivers/media/pci/ivtv/ivtv-driver.c
+@@ -805,11 +805,11 @@ static void ivtv_init_struct2(struct ivtv *itv)
+ {
+ int i;
+
+- for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS - 1; i++)
++ for (i = 0; i < IVTV_CARD_MAX_VIDEO_INPUTS; i++)
+ if (itv->card->video_inputs[i].video_type == 0)
+ break;
+ itv->nof_inputs = i;
+- for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS - 1; i++)
++ for (i = 0; i < IVTV_CARD_MAX_AUDIO_INPUTS; i++)
+ if (itv->card->audio_inputs[i].audio_type == 0)
+ break;
+ itv->nof_audio_inputs = i;
+diff --git a/drivers/media/pci/saa7134/saa7134-alsa.c b/drivers/media/pci/saa7134/saa7134-alsa.c
+index 1d2c310ce838..94f816244407 100644
+--- a/drivers/media/pci/saa7134/saa7134-alsa.c
++++ b/drivers/media/pci/saa7134/saa7134-alsa.c
+@@ -1211,6 +1211,8 @@ static int alsa_device_init(struct saa7134_dev *dev)
+
+ static int alsa_device_exit(struct saa7134_dev *dev)
+ {
++ if (!snd_saa7134_cards[dev->nr])
++ return 1;
+
+ snd_card_free(snd_saa7134_cards[dev->nr]);
+ snd_saa7134_cards[dev->nr] = NULL;
+@@ -1260,7 +1262,8 @@ static void saa7134_alsa_exit(void)
+ int idx;
+
+ for (idx = 0; idx < SNDRV_CARDS; idx++) {
+- snd_card_free(snd_saa7134_cards[idx]);
++ if (snd_saa7134_cards[idx])
++ snd_card_free(snd_saa7134_cards[idx]);
+ }
+
+ saa7134_dmasound_init = NULL;
+diff --git a/drivers/media/platform/sti/c8sectpfe/Kconfig b/drivers/media/platform/sti/c8sectpfe/Kconfig
+index 641ad8f34956..7420a50572d3 100644
+--- a/drivers/media/platform/sti/c8sectpfe/Kconfig
++++ b/drivers/media/platform/sti/c8sectpfe/Kconfig
+@@ -3,7 +3,6 @@ config DVB_C8SECTPFE
+ depends on PINCTRL && DVB_CORE && I2C
+ depends on ARCH_STI || ARCH_MULTIPLATFORM || COMPILE_TEST
+ select FW_LOADER
+- select FW_LOADER_USER_HELPER_FALLBACK
+ select DEBUG_FS
+ select DVB_LNBP21 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_STV090x if MEDIA_SUBDRV_AUTOSELECT
+diff --git a/drivers/media/platform/vivid/vivid-core.c b/drivers/media/platform/vivid/vivid-core.c
+index a047b4716741..0f5e9143cc7e 100644
+--- a/drivers/media/platform/vivid/vivid-core.c
++++ b/drivers/media/platform/vivid/vivid-core.c
+@@ -1341,8 +1341,11 @@ static int vivid_remove(struct platform_device *pdev)
+ struct vivid_dev *dev;
+ unsigned i;
+
+- for (i = 0; vivid_devs[i]; i++) {
++
++ for (i = 0; i < n_devs; i++) {
+ dev = vivid_devs[i];
++ if (!dev)
++ continue;
+
+ if (dev->has_vid_cap) {
+ v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index af635430524e..788b31c91330 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -266,7 +266,7 @@ static int put_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_
+
+ struct v4l2_standard32 {
+ __u32 index;
+- __u32 id[2]; /* __u64 would get the alignment wrong */
++ compat_u64 id;
+ __u8 name[24];
+ struct v4l2_fract frameperiod; /* Frames, not fields */
+ __u32 framelines;
+@@ -286,7 +286,7 @@ static int put_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32
+ {
+ if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_standard32)) ||
+ put_user(kp->index, &up->index) ||
+- copy_to_user(up->id, &kp->id, sizeof(__u64)) ||
++ put_user(kp->id, &up->id) ||
+ copy_to_user(up->name, kp->name, 24) ||
+ copy_to_user(&up->frameperiod, &kp->frameperiod, sizeof(kp->frameperiod)) ||
+ put_user(kp->framelines, &up->framelines) ||
+@@ -587,10 +587,10 @@ struct v4l2_input32 {
+ __u32 type; /* Type of input */
+ __u32 audioset; /* Associated audios (bitfield) */
+ __u32 tuner; /* Associated tuner */
+- v4l2_std_id std;
++ compat_u64 std;
+ __u32 status;
+ __u32 reserved[4];
+-} __attribute__ ((packed));
++};
+
+ /* The 64-bit v4l2_input struct has extra padding at the end of the struct.
+ Otherwise it is identical to the 32-bit version. */
+@@ -738,6 +738,7 @@ static int put_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext
+ struct v4l2_event32 {
+ __u32 type;
+ union {
++ compat_s64 value64;
+ __u8 data[64];
+ } u;
+ __u32 pending;
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index b6b7dcc1b77d..e1e70b8694a8 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2498,7 +2498,7 @@ int v4l2_query_ext_ctrl(struct v4l2_ctrl_handler *hdl, struct v4l2_query_ext_ctr
+ /* We found a control with the given ID, so just get
+ the next valid one in the list. */
+ list_for_each_entry_continue(ref, &hdl->ctrl_refs, node) {
+- is_compound =
++ is_compound = ref->ctrl->is_array ||
+ ref->ctrl->type >= V4L2_CTRL_COMPOUND_TYPES;
+ if (id < ref->ctrl->id &&
+ (is_compound & mask) == match)
+@@ -2512,7 +2512,7 @@ int v4l2_query_ext_ctrl(struct v4l2_ctrl_handler *hdl, struct v4l2_query_ext_ctr
+ is one, otherwise the first 'if' above would have
+ been true. */
+ list_for_each_entry(ref, &hdl->ctrl_refs, node) {
+- is_compound =
++ is_compound = ref->ctrl->is_array ||
+ ref->ctrl->type >= V4L2_CTRL_COMPOUND_TYPES;
+ if (id < ref->ctrl->id &&
+ (is_compound & mask) == match)
+@@ -2884,7 +2884,7 @@ static int get_ctrl(struct v4l2_ctrl *ctrl, struct v4l2_ext_control *c)
+ * cur_to_user() calls below would need to be modified not to access
+ * userspace memory when called from get_ctrl().
+ */
+- if (!ctrl->is_int)
++ if (!ctrl->is_int && ctrl->type != V4L2_CTRL_TYPE_INTEGER64)
+ return -EINVAL;
+
+ if (ctrl->flags & V4L2_CTRL_FLAG_WRITE_ONLY)
+@@ -2942,9 +2942,9 @@ s64 v4l2_ctrl_g_ctrl_int64(struct v4l2_ctrl *ctrl)
+
+ /* It's a driver bug if this happens. */
+ WARN_ON(ctrl->is_ptr || ctrl->type != V4L2_CTRL_TYPE_INTEGER64);
+- c.value = 0;
++ c.value64 = 0;
+ get_ctrl(ctrl, &c);
+- return c.value;
++ return c.value64;
+ }
+ EXPORT_SYMBOL(v4l2_ctrl_g_ctrl_int64);
+
+@@ -3043,7 +3043,7 @@ static void update_from_auto_cluster(struct v4l2_ctrl *master)
+ {
+ int i;
+
+- for (i = 0; i < master->ncontrols; i++)
++ for (i = 1; i < master->ncontrols; i++)
+ cur_to_new(master->cluster[i]);
+ if (!call_op(master, g_volatile_ctrl))
+ for (i = 1; i < master->ncontrols; i++)
+diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c b/drivers/media/v4l2-core/videobuf2-dma-contig.c
+index 2397ceb1dc6b..f42e66624734 100644
+--- a/drivers/media/v4l2-core/videobuf2-dma-contig.c
++++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c
+@@ -100,7 +100,8 @@ static void vb2_dc_prepare(void *buf_priv)
+ if (!sgt || buf->db_attach)
+ return;
+
+- dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++ dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->orig_nents,
++ buf->dma_dir);
+ }
+
+ static void vb2_dc_finish(void *buf_priv)
+@@ -112,7 +113,7 @@ static void vb2_dc_finish(void *buf_priv)
+ if (!sgt || buf->db_attach)
+ return;
+
+- dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++ dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+ }
+
+ /*********************************************/
+diff --git a/drivers/media/v4l2-core/videobuf2-dma-sg.c b/drivers/media/v4l2-core/videobuf2-dma-sg.c
+index be7bd6535c9d..07bd2605c2b1 100644
+--- a/drivers/media/v4l2-core/videobuf2-dma-sg.c
++++ b/drivers/media/v4l2-core/videobuf2-dma-sg.c
+@@ -210,7 +210,8 @@ static void vb2_dma_sg_prepare(void *buf_priv)
+ if (buf->db_attach)
+ return;
+
+- dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++ dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->orig_nents,
++ buf->dma_dir);
+ }
+
+ static void vb2_dma_sg_finish(void *buf_priv)
+@@ -222,7 +223,7 @@ static void vb2_dma_sg_finish(void *buf_priv)
+ if (buf->db_attach)
+ return;
+
+- dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
++ dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+ }
+
+ static void *vb2_dma_sg_get_userptr(void *alloc_ctx, unsigned long vaddr,
+diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
+index 44dc965a2f7c..e7a02ed9fba8 100644
+--- a/drivers/mtd/mtd_blkdevs.c
++++ b/drivers/mtd/mtd_blkdevs.c
+@@ -192,8 +192,8 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+ if (!dev)
+ return -ERESTARTSYS; /* FIXME: busy loop! -arnd*/
+
+- mutex_lock(&dev->lock);
+ mutex_lock(&mtd_table_mutex);
++ mutex_lock(&dev->lock);
+
+ if (dev->open)
+ goto unlock;
+@@ -217,8 +217,8 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+
+ unlock:
+ dev->open++;
+- mutex_unlock(&mtd_table_mutex);
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&mtd_table_mutex);
+ blktrans_dev_put(dev);
+ return ret;
+
+@@ -228,8 +228,8 @@ error_release:
+ error_put:
+ module_put(dev->tr->owner);
+ kref_put(&dev->ref, blktrans_dev_release);
+- mutex_unlock(&mtd_table_mutex);
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&mtd_table_mutex);
+ blktrans_dev_put(dev);
+ return ret;
+ }
+@@ -241,8 +241,8 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ if (!dev)
+ return;
+
+- mutex_lock(&dev->lock);
+ mutex_lock(&mtd_table_mutex);
++ mutex_lock(&dev->lock);
+
+ if (--dev->open)
+ goto unlock;
+@@ -256,8 +256,8 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ __put_mtd_device(dev->mtd);
+ }
+ unlock:
+- mutex_unlock(&mtd_table_mutex);
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&mtd_table_mutex);
+ blktrans_dev_put(dev);
+ }
+
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index cafdb8855a79..919a936abc42 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -664,8 +664,10 @@ int add_mtd_partitions(struct mtd_info *master,
+
+ for (i = 0; i < nbparts; i++) {
+ slave = allocate_partition(master, parts + i, i, cur_offset);
+- if (IS_ERR(slave))
++ if (IS_ERR(slave)) {
++ del_mtd_partitions(master);
+ return PTR_ERR(slave);
++ }
+
+ mutex_lock(&mtd_partitions_mutex);
+ list_add(&slave->list, &mtd_partitions);
+diff --git a/drivers/mtd/nand/jz4740_nand.c b/drivers/mtd/nand/jz4740_nand.c
+index ebf2cce04cba..ca3270b1299c 100644
+--- a/drivers/mtd/nand/jz4740_nand.c
++++ b/drivers/mtd/nand/jz4740_nand.c
+@@ -25,6 +25,7 @@
+
+ #include <linux/gpio.h>
+
++#include <asm/mach-jz4740/gpio.h>
+ #include <asm/mach-jz4740/jz4740_nand.h>
+
+ #define JZ_REG_NAND_CTRL 0x50
+diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
+index ceb68ca8277a..066f967e03b1 100644
+--- a/drivers/mtd/nand/nand_base.c
++++ b/drivers/mtd/nand/nand_base.c
+@@ -2964,7 +2964,7 @@ static void nand_resume(struct mtd_info *mtd)
+ */
+ static void nand_shutdown(struct mtd_info *mtd)
+ {
+- nand_get_device(mtd, FL_SHUTDOWN);
++ nand_get_device(mtd, FL_PM_SUSPENDED);
+ }
+
+ /* Set default functions */
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index eb4489f9082f..56065632a5b8 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -603,6 +603,7 @@ static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
+ return 0;
+ }
+
++static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk);
+ /**
+ * do_sync_erase - run the erase worker synchronously.
+ * @ubi: UBI device description object
+@@ -615,20 +616,16 @@ static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
+ static int do_sync_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
+ int vol_id, int lnum, int torture)
+ {
+- struct ubi_work *wl_wrk;
++ struct ubi_work wl_wrk;
+
+ dbg_wl("sync erase of PEB %i", e->pnum);
+
+- wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
+- if (!wl_wrk)
+- return -ENOMEM;
+-
+- wl_wrk->e = e;
+- wl_wrk->vol_id = vol_id;
+- wl_wrk->lnum = lnum;
+- wl_wrk->torture = torture;
++ wl_wrk.e = e;
++ wl_wrk.vol_id = vol_id;
++ wl_wrk.lnum = lnum;
++ wl_wrk.torture = torture;
+
+- return erase_worker(ubi, wl_wrk, 0);
++ return __erase_worker(ubi, &wl_wrk);
+ }
+
+ /**
+@@ -1014,7 +1011,7 @@ out_unlock:
+ }
+
+ /**
+- * erase_worker - physical eraseblock erase worker function.
++ * __erase_worker - physical eraseblock erase worker function.
+ * @ubi: UBI device description object
+ * @wl_wrk: the work object
+ * @shutdown: non-zero if the worker has to free memory and exit
+@@ -1025,8 +1022,7 @@ out_unlock:
+ * needed. Returns zero in case of success and a negative error code in case of
+ * failure.
+ */
+-static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
+- int shutdown)
++static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ {
+ struct ubi_wl_entry *e = wl_wrk->e;
+ int pnum = e->pnum;
+@@ -1034,21 +1030,11 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
+ int lnum = wl_wrk->lnum;
+ int err, available_consumed = 0;
+
+- if (shutdown) {
+- dbg_wl("cancel erasure of PEB %d EC %d", pnum, e->ec);
+- kfree(wl_wrk);
+- wl_entry_destroy(ubi, e);
+- return 0;
+- }
+-
+ dbg_wl("erase PEB %d EC %d LEB %d:%d",
+ pnum, e->ec, wl_wrk->vol_id, wl_wrk->lnum);
+
+ err = sync_erase(ubi, e, wl_wrk->torture);
+ if (!err) {
+- /* Fine, we've erased it successfully */
+- kfree(wl_wrk);
+-
+ spin_lock(&ubi->wl_lock);
+ wl_tree_add(e, &ubi->free);
+ ubi->free_count++;
+@@ -1066,7 +1052,6 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
+ }
+
+ ubi_err(ubi, "failed to erase PEB %d, error %d", pnum, err);
+- kfree(wl_wrk);
+
+ if (err == -EINTR || err == -ENOMEM || err == -EAGAIN ||
+ err == -EBUSY) {
+@@ -1075,6 +1060,7 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
+ /* Re-schedule the LEB for erasure */
+ err1 = schedule_erase(ubi, e, vol_id, lnum, 0);
+ if (err1) {
++ wl_entry_destroy(ubi, e);
+ err = err1;
+ goto out_ro;
+ }
+@@ -1150,6 +1136,25 @@ out_ro:
+ return err;
+ }
+
++static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
++ int shutdown)
++{
++ int ret;
++
++ if (shutdown) {
++ struct ubi_wl_entry *e = wl_wrk->e;
++
++ dbg_wl("cancel erasure of PEB %d EC %d", e->pnum, e->ec);
++ kfree(wl_wrk);
++ wl_entry_destroy(ubi, e);
++ return 0;
++ }
++
++ ret = __erase_worker(ubi, wl_wrk);
++ kfree(wl_wrk);
++ return ret;
++}
++
+ /**
+ * ubi_wl_put_peb - return a PEB to the wear-leveling sub-system.
+ * @ubi: UBI device description object
+diff --git a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
+index 6e9418ed90c2..bbb789f8990b 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
+@@ -2272,7 +2272,7 @@ void rtl8821ae_enable_interrupt(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+
+- if (!rtlpci->int_clear)
++ if (rtlpci->int_clear)
+ rtl8821ae_clear_interrupt(hw);/*clear it here first*/
+
+ rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+diff --git a/drivers/net/wireless/rtlwifi/rtl8821ae/sw.c b/drivers/net/wireless/rtlwifi/rtl8821ae/sw.c
+index 8ee141a55bc5..142bdff4ed60 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8821ae/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8821ae/sw.c
+@@ -448,7 +448,7 @@ MODULE_PARM_DESC(fwlps, "Set to 1 to use FW control power save (default 1)\n");
+ MODULE_PARM_DESC(msi, "Set to 1 to use MSI interrupts mode (default 1)\n");
+ MODULE_PARM_DESC(debug, "Set debug level (0-5) (default 0)");
+ MODULE_PARM_DESC(disable_watchdog, "Set to 1 to disable the watchdog (default 0)\n");
+-MODULE_PARM_DESC(int_clear, "Set to 1 to disable interrupt clear before set (default 0)\n");
++MODULE_PARM_DESC(int_clear, "Set to 0 to disable interrupt clear before set (default 1)\n");
+
+ static SIMPLE_DEV_PM_OPS(rtlwifi_pm_ops, rtl_pci_suspend, rtl_pci_resume);
+
+diff --git a/drivers/net/wireless/ti/wlcore/io.h b/drivers/net/wireless/ti/wlcore/io.h
+index 0305729d0986..10cf3747694d 100644
+--- a/drivers/net/wireless/ti/wlcore/io.h
++++ b/drivers/net/wireless/ti/wlcore/io.h
+@@ -207,19 +207,23 @@ static inline int __must_check wlcore_write_reg(struct wl1271 *wl, int reg,
+
+ static inline void wl1271_power_off(struct wl1271 *wl)
+ {
+- int ret;
++ int ret = 0;
+
+ if (!test_bit(WL1271_FLAG_GPIO_POWER, &wl->flags))
+ return;
+
+- ret = wl->if_ops->power(wl->dev, false);
++ if (wl->if_ops->power)
++ ret = wl->if_ops->power(wl->dev, false);
+ if (!ret)
+ clear_bit(WL1271_FLAG_GPIO_POWER, &wl->flags);
+ }
+
+ static inline int wl1271_power_on(struct wl1271 *wl)
+ {
+- int ret = wl->if_ops->power(wl->dev, true);
++ int ret = 0;
++
++ if (wl->if_ops->power)
++ ret = wl->if_ops->power(wl->dev, true);
+ if (ret == 0)
+ set_bit(WL1271_FLAG_GPIO_POWER, &wl->flags);
+
+diff --git a/drivers/net/wireless/ti/wlcore/spi.c b/drivers/net/wireless/ti/wlcore/spi.c
+index f1ac2839d97c..720e4e4b5a3c 100644
+--- a/drivers/net/wireless/ti/wlcore/spi.c
++++ b/drivers/net/wireless/ti/wlcore/spi.c
+@@ -73,7 +73,10 @@
+ */
+ #define SPI_AGGR_BUFFER_SIZE (4 * PAGE_SIZE)
+
+-#define WSPI_MAX_NUM_OF_CHUNKS (SPI_AGGR_BUFFER_SIZE / WSPI_MAX_CHUNK_SIZE)
++/* Maximum number of SPI write chunks */
++#define WSPI_MAX_NUM_OF_CHUNKS \
++ ((SPI_AGGR_BUFFER_SIZE / WSPI_MAX_CHUNK_SIZE) + 1)
++
+
+ struct wl12xx_spi_glue {
+ struct device *dev;
+@@ -268,9 +271,10 @@ static int __must_check wl12xx_spi_raw_write(struct device *child, int addr,
+ void *buf, size_t len, bool fixed)
+ {
+ struct wl12xx_spi_glue *glue = dev_get_drvdata(child->parent);
+- struct spi_transfer t[2 * (WSPI_MAX_NUM_OF_CHUNKS + 1)];
++ /* SPI write buffers - 2 for each chunk */
++ struct spi_transfer t[2 * WSPI_MAX_NUM_OF_CHUNKS];
+ struct spi_message m;
+- u32 commands[WSPI_MAX_NUM_OF_CHUNKS];
++ u32 commands[WSPI_MAX_NUM_OF_CHUNKS]; /* 1 command per chunk */
+ u32 *cmd;
+ u32 chunk_len;
+ int i;
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index d3346d23963b..89b3befc7155 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -140,6 +140,8 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
+ type_mask |= IORESOURCE_TYPE_BITS;
+
+ pci_bus_for_each_resource(bus, r, i) {
++ resource_size_t min_used = min;
++
+ if (!r)
+ continue;
+
+@@ -163,12 +165,12 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
+ * overrides "min".
+ */
+ if (avail.start)
+- min = avail.start;
++ min_used = avail.start;
+
+ max = avail.end;
+
+ /* Ok, try it out.. */
+- ret = allocate_resource(r, res, size, min, max,
++ ret = allocate_resource(r, res, size, min_used, max,
+ align, alignf, alignf_data);
+ if (ret == 0)
+ return 0;
+diff --git a/drivers/pci/host/pci-dra7xx.c b/drivers/pci/host/pci-dra7xx.c
+index 199e29a044cd..ee978705a6f5 100644
+--- a/drivers/pci/host/pci-dra7xx.c
++++ b/drivers/pci/host/pci-dra7xx.c
+@@ -295,7 +295,8 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
+ }
+
+ ret = devm_request_irq(&pdev->dev, pp->irq,
+- dra7xx_pcie_msi_irq_handler, IRQF_SHARED,
++ dra7xx_pcie_msi_irq_handler,
++ IRQF_SHARED | IRQF_NO_THREAD,
+ "dra7-pcie-msi", pp);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request irq\n");
+diff --git a/drivers/pci/host/pci-exynos.c b/drivers/pci/host/pci-exynos.c
+index f9f468d9a819..7b6be7791d33 100644
+--- a/drivers/pci/host/pci-exynos.c
++++ b/drivers/pci/host/pci-exynos.c
+@@ -523,7 +523,8 @@ static int __init exynos_add_pcie_port(struct pcie_port *pp,
+
+ ret = devm_request_irq(&pdev->dev, pp->msi_irq,
+ exynos_pcie_msi_irq_handler,
+- IRQF_SHARED, "exynos-pcie", pp);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ "exynos-pcie", pp);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request msi irq\n");
+ return ret;
+diff --git a/drivers/pci/host/pci-imx6.c b/drivers/pci/host/pci-imx6.c
+index 8f3a9813c4e5..58713d57c426 100644
+--- a/drivers/pci/host/pci-imx6.c
++++ b/drivers/pci/host/pci-imx6.c
+@@ -536,7 +536,8 @@ static int __init imx6_add_pcie_port(struct pcie_port *pp,
+
+ ret = devm_request_irq(&pdev->dev, pp->msi_irq,
+ imx6_pcie_msi_handler,
+- IRQF_SHARED, "mx6-pcie-msi", pp);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ "mx6-pcie-msi", pp);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request MSI irq\n");
+ return -ENODEV;
+diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c
+index 81df0c1fe063..46d5e07dbdfe 100644
+--- a/drivers/pci/host/pci-tegra.c
++++ b/drivers/pci/host/pci-tegra.c
+@@ -1288,7 +1288,7 @@ static int tegra_pcie_enable_msi(struct tegra_pcie *pcie)
+
+ msi->irq = err;
+
+- err = request_irq(msi->irq, tegra_pcie_msi_irq, 0,
++ err = request_irq(msi->irq, tegra_pcie_msi_irq, IRQF_NO_THREAD,
+ tegra_msi_irq_chip.name, pcie);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+diff --git a/drivers/pci/host/pcie-rcar.c b/drivers/pci/host/pcie-rcar.c
+index 7678fe0820d7..b86e42fcf19f 100644
+--- a/drivers/pci/host/pcie-rcar.c
++++ b/drivers/pci/host/pcie-rcar.c
+@@ -694,14 +694,16 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
+
+ /* Two irqs are for MSI, but they are also used for non-MSI irqs */
+ err = devm_request_irq(&pdev->dev, msi->irq1, rcar_pcie_msi_irq,
+- IRQF_SHARED, rcar_msi_irq_chip.name, pcie);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ rcar_msi_irq_chip.name, pcie);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+ goto err;
+ }
+
+ err = devm_request_irq(&pdev->dev, msi->irq2, rcar_pcie_msi_irq,
+- IRQF_SHARED, rcar_msi_irq_chip.name, pcie);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ rcar_msi_irq_chip.name, pcie);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+ goto err;
+diff --git a/drivers/pci/host/pcie-spear13xx.c b/drivers/pci/host/pcie-spear13xx.c
+index 98d2683181bc..4aca7167ed95 100644
+--- a/drivers/pci/host/pcie-spear13xx.c
++++ b/drivers/pci/host/pcie-spear13xx.c
+@@ -163,34 +163,36 @@ static int spear13xx_pcie_establish_link(struct pcie_port *pp)
+ * default value in capability register is 512 bytes. So force
+ * it to 128 here.
+ */
+- dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, &val);
++ dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_DEVCTL,
++ 0, 2, &val);
+ val &= ~PCI_EXP_DEVCTL_READRQ;
+- dw_pcie_cfg_write(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, val);
++ dw_pcie_cfg_write(pp->dbi_base + exp_cap_off + PCI_EXP_DEVCTL,
++ 0, 2, val);
+
+- dw_pcie_cfg_write(pp->dbi_base, PCI_VENDOR_ID, 2, 0x104A);
+- dw_pcie_cfg_write(pp->dbi_base, PCI_DEVICE_ID, 2, 0xCD80);
++ dw_pcie_cfg_write(pp->dbi_base + PCI_VENDOR_ID, 0, 2, 0x104A);
++ dw_pcie_cfg_write(pp->dbi_base + PCI_VENDOR_ID, 2, 2, 0xCD80);
+
+ /*
+ * if is_gen1 is set then handle it, so that some buggy card
+ * also works
+ */
+ if (spear13xx_pcie->is_gen1) {
+- dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCAP, 4,
+- &val);
++ dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
++ 0, 4, &val);
+ if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
+ val &= ~((u32)PCI_EXP_LNKCAP_SLS);
+ val |= PCI_EXP_LNKCAP_SLS_2_5GB;
+- dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
+- PCI_EXP_LNKCAP, 4, val);
++ dw_pcie_cfg_write(pp->dbi_base + exp_cap_off +
++ PCI_EXP_LNKCAP, 0, 4, val);
+ }
+
+- dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCTL2, 4,
+- &val);
++ dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
++ 0, 2, &val);
+ if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
+ val &= ~((u32)PCI_EXP_LNKCAP_SLS);
+ val |= PCI_EXP_LNKCAP_SLS_2_5GB;
+- dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
+- PCI_EXP_LNKCTL2, 4, val);
++ dw_pcie_cfg_write(pp->dbi_base + exp_cap_off +
++ PCI_EXP_LNKCTL2, 0, 2, val);
+ }
+ }
+
+@@ -279,7 +281,8 @@ static int spear13xx_add_pcie_port(struct pcie_port *pp,
+ return -ENODEV;
+ }
+ ret = devm_request_irq(dev, pp->irq, spear13xx_pcie_irq_handler,
+- IRQF_SHARED, "spear1340-pcie", pp);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ "spear1340-pcie", pp);
+ if (ret) {
+ dev_err(dev, "failed to request irq %d\n", pp->irq);
+ return ret;
+diff --git a/drivers/pci/host/pcie-xilinx.c b/drivers/pci/host/pcie-xilinx.c
+index 3c7a0d580b1e..4cfa46360d12 100644
+--- a/drivers/pci/host/pcie-xilinx.c
++++ b/drivers/pci/host/pcie-xilinx.c
+@@ -781,7 +781,8 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
+
+ port->irq = irq_of_parse_and_map(node, 0);
+ err = devm_request_irq(dev, port->irq, xilinx_pcie_intr_handler,
+- IRQF_SHARED, "xilinx-pcie", port);
++ IRQF_SHARED | IRQF_NO_THREAD,
++ "xilinx-pcie", port);
+ if (err) {
+ dev_err(dev, "unable to request irq %d\n", port->irq);
+ return err;
+diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
+index ee0ebff103a4..1eadc74d88b4 100644
+--- a/drivers/pci/iov.c
++++ b/drivers/pci/iov.c
+@@ -54,24 +54,29 @@ static inline void pci_iov_set_numvfs(struct pci_dev *dev, int nr_virtfn)
+ * The PF consumes one bus number. NumVFs, First VF Offset, and VF Stride
+ * determine how many additional bus numbers will be consumed by VFs.
+ *
+- * Iterate over all valid NumVFs and calculate the maximum number of bus
+- * numbers that could ever be required.
++ * Iterate over all valid NumVFs, validate offset and stride, and calculate
++ * the maximum number of bus numbers that could ever be required.
+ */
+-static inline u8 virtfn_max_buses(struct pci_dev *dev)
++static int compute_max_vf_buses(struct pci_dev *dev)
+ {
+ struct pci_sriov *iov = dev->sriov;
+- int nr_virtfn;
+- u8 max = 0;
+- int busnr;
++ int nr_virtfn, busnr, rc = 0;
+
+- for (nr_virtfn = 1; nr_virtfn <= iov->total_VFs; nr_virtfn++) {
++ for (nr_virtfn = iov->total_VFs; nr_virtfn; nr_virtfn--) {
+ pci_iov_set_numvfs(dev, nr_virtfn);
++ if (!iov->offset || (nr_virtfn > 1 && !iov->stride)) {
++ rc = -EIO;
++ goto out;
++ }
++
+ busnr = pci_iov_virtfn_bus(dev, nr_virtfn - 1);
+- if (busnr > max)
+- max = busnr;
++ if (busnr > iov->max_VF_buses)
++ iov->max_VF_buses = busnr;
+ }
+
+- return max;
++out:
++ pci_iov_set_numvfs(dev, 0);
++ return rc;
+ }
+
+ static struct pci_bus *virtfn_add_bus(struct pci_bus *bus, int busnr)
+@@ -384,7 +389,7 @@ static int sriov_init(struct pci_dev *dev, int pos)
+ int rc;
+ int nres;
+ u32 pgsz;
+- u16 ctrl, total, offset, stride;
++ u16 ctrl, total;
+ struct pci_sriov *iov;
+ struct resource *res;
+ struct pci_dev *pdev;
+@@ -414,11 +419,6 @@ static int sriov_init(struct pci_dev *dev, int pos)
+
+ found:
+ pci_write_config_word(dev, pos + PCI_SRIOV_CTRL, ctrl);
+- pci_write_config_word(dev, pos + PCI_SRIOV_NUM_VF, 0);
+- pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
+- pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);
+- if (!offset || (total > 1 && !stride))
+- return -EIO;
+
+ pci_read_config_dword(dev, pos + PCI_SRIOV_SUP_PGSIZE, &pgsz);
+ i = PAGE_SHIFT > 12 ? PAGE_SHIFT - 12 : 0;
+@@ -456,8 +456,6 @@ found:
+ iov->nres = nres;
+ iov->ctrl = ctrl;
+ iov->total_VFs = total;
+- iov->offset = offset;
+- iov->stride = stride;
+ iov->pgsz = pgsz;
+ iov->self = dev;
+ pci_read_config_dword(dev, pos + PCI_SRIOV_CAP, &iov->cap);
+@@ -474,10 +472,15 @@ found:
+
+ dev->sriov = iov;
+ dev->is_physfn = 1;
+- iov->max_VF_buses = virtfn_max_buses(dev);
++ rc = compute_max_vf_buses(dev);
++ if (rc)
++ goto fail_max_buses;
+
+ return 0;
+
++fail_max_buses:
++ dev->sriov = NULL;
++ dev->is_physfn = 0;
+ failed:
+ for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
+ res = &dev->resource[i + PCI_IOV_RESOURCES];
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 92618686604c..eead54cd01b2 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -216,7 +216,10 @@ static ssize_t numa_node_store(struct device *dev,
+ if (ret)
+ return ret;
+
+- if (node >= MAX_NUMNODES || !node_online(node))
++ if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES)
++ return -EINVAL;
++
++ if (node != NUMA_NO_NODE && !node_online(node))
+ return -EINVAL;
+
+ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
+index 9d30809bb407..916af5096f57 100644
+--- a/drivers/remoteproc/remoteproc_debugfs.c
++++ b/drivers/remoteproc/remoteproc_debugfs.c
+@@ -156,7 +156,7 @@ rproc_recovery_write(struct file *filp, const char __user *user_buf,
+ char buf[10];
+ int ret;
+
+- if (count > sizeof(buf))
++ if (count < 1 || count > sizeof(buf))
+ return count;
+
+ ret = copy_from_user(buf, user_buf, count);
+diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
+index 63318e2afba1..3fff59ce065f 100644
+--- a/drivers/spi/spi-atmel.c
++++ b/drivers/spi/spi-atmel.c
+@@ -773,7 +773,8 @@ static int atmel_spi_next_xfer_dma_submit(struct spi_master *master,
+
+ *plen = len;
+
+- if (atmel_spi_dma_slave_config(as, &slave_config, 8))
++ if (atmel_spi_dma_slave_config(as, &slave_config,
++ xfer->bits_per_word))
+ goto err_exit;
+
+ /* Send both scatterlists */
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 3d09e0b69b73..1f8903d356e5 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1217,6 +1217,33 @@ out:
+ return status;
+ }
+
++static int omap2_mcspi_prepare_message(struct spi_master *master,
++ struct spi_message *msg)
++{
++ struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
++ struct omap2_mcspi_regs *ctx = &mcspi->ctx;
++ struct omap2_mcspi_cs *cs;
++
++ /* Only a single channel can have the FORCE bit enabled
++ * in its chconf0 register.
++ * Scan all channels and disable them except the current one.
++ * A FORCE can remain from a last transfer having cs_change enabled
++ */
++ list_for_each_entry(cs, &ctx->cs, node) {
++ if (msg->spi->controller_state == cs)
++ continue;
++
++ if ((cs->chconf0 & OMAP2_MCSPI_CHCONF_FORCE)) {
++ cs->chconf0 &= ~OMAP2_MCSPI_CHCONF_FORCE;
++ writel_relaxed(cs->chconf0,
++ cs->base + OMAP2_MCSPI_CHCONF0);
++ readl_relaxed(cs->base + OMAP2_MCSPI_CHCONF0);
++ }
++ }
++
++ return 0;
++}
++
+ static int omap2_mcspi_transfer_one(struct spi_master *master,
+ struct spi_device *spi, struct spi_transfer *t)
+ {
+@@ -1344,6 +1371,7 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
+ master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
+ master->setup = omap2_mcspi_setup;
+ master->auto_runtime_pm = true;
++ master->prepare_message = omap2_mcspi_prepare_message;
+ master->transfer_one = omap2_mcspi_transfer_one;
+ master->set_cs = omap2_mcspi_set_cs;
+ master->cleanup = omap2_mcspi_cleanup;
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index aa6d284131e0..81b84858cfee 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -410,11 +410,10 @@ static int ti_qspi_start_transfer_one(struct spi_master *master,
+
+ mutex_unlock(&qspi->list_lock);
+
++ ti_qspi_write(qspi, qspi->cmd | QSPI_INVAL, QSPI_SPI_CMD_REG);
+ m->status = status;
+ spi_finalize_current_message(master);
+
+- ti_qspi_write(qspi, qspi->cmd | QSPI_INVAL, QSPI_SPI_CMD_REG);
+-
+ return status;
+ }
+
+diff --git a/drivers/spi/spi-xilinx.c b/drivers/spi/spi-xilinx.c
+index a339c1e9997a..3009121173cd 100644
+--- a/drivers/spi/spi-xilinx.c
++++ b/drivers/spi/spi-xilinx.c
+@@ -270,6 +270,7 @@ static int xilinx_spi_txrx_bufs(struct spi_device *spi, struct spi_transfer *t)
+
+ while (remaining_words) {
+ int n_words, tx_words, rx_words;
++ u32 sr;
+
+ n_words = min(remaining_words, xspi->buffer_size);
+
+@@ -284,24 +285,33 @@ static int xilinx_spi_txrx_bufs(struct spi_device *spi, struct spi_transfer *t)
+ if (use_irq) {
+ xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET);
+ wait_for_completion(&xspi->done);
+- } else
+- while (!(xspi->read_fn(xspi->regs + XSPI_SR_OFFSET) &
+- XSPI_SR_TX_EMPTY_MASK))
+- ;
+-
+- /* A transmit has just completed. Process received data and
+- * check for more data to transmit. Always inhibit the
+- * transmitter while the Isr refills the transmit register/FIFO,
+- * or make sure it is stopped if we're done.
+- */
+- if (use_irq)
++ /* A transmit has just completed. Process received data
++ * and check for more data to transmit. Always inhibit
++ * the transmitter while the Isr refills the transmit
++ * register/FIFO, or make sure it is stopped if we're
++ * done.
++ */
+ xspi->write_fn(cr | XSPI_CR_TRANS_INHIBIT,
+- xspi->regs + XSPI_CR_OFFSET);
++ xspi->regs + XSPI_CR_OFFSET);
++ sr = XSPI_SR_TX_EMPTY_MASK;
++ } else
++ sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET);
+
+ /* Read out all the data from the Rx FIFO */
+ rx_words = n_words;
+- while (rx_words--)
+- xilinx_spi_rx(xspi);
++ while (rx_words) {
++ if ((sr & XSPI_SR_TX_EMPTY_MASK) && (rx_words > 1)) {
++ xilinx_spi_rx(xspi);
++ rx_words--;
++ continue;
++ }
++
++ sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET);
++ if (!(sr & XSPI_SR_RX_EMPTY_MASK)) {
++ xilinx_spi_rx(xspi);
++ rx_words--;
++ }
++ }
+
+ remaining_words -= n_words;
+ }
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index a5f53de813d3..a83d9d07df58 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1627,7 +1627,7 @@ struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
+ master->bus_num = -1;
+ master->num_chipselect = 1;
+ master->dev.class = &spi_master_class;
+- master->dev.parent = get_device(dev);
++ master->dev.parent = dev;
+ spi_master_set_devdata(master, &master[1]);
+
+ return master;
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index a0285da0244c..1fc7a84a2ded 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -258,16 +258,13 @@ static void n_tty_check_throttle(struct tty_struct *tty)
+
+ static void n_tty_check_unthrottle(struct tty_struct *tty)
+ {
+- if (tty->driver->type == TTY_DRIVER_TYPE_PTY &&
+- tty->link->ldisc->ops->write_wakeup == n_tty_write_wakeup) {
++ if (tty->driver->type == TTY_DRIVER_TYPE_PTY) {
+ if (chars_in_buffer(tty) > TTY_THRESHOLD_UNTHROTTLE)
+ return;
+ if (!tty->count)
+ return;
+ n_tty_kick_worker(tty);
+- n_tty_write_wakeup(tty->link);
+- if (waitqueue_active(&tty->link->write_wait))
+- wake_up_interruptible_poll(&tty->link->write_wait, POLLOUT);
++ tty_wakeup(tty->link);
+ return;
+ }
+
+@@ -2058,13 +2055,13 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ size_t eol;
+ size_t tail;
+ int ret, found = 0;
+- bool eof_push = 0;
+
+ /* N.B. avoid overrun if nr == 0 */
+- n = min(*nr, smp_load_acquire(&ldata->canon_head) - ldata->read_tail);
+- if (!n)
++ if (!*nr)
+ return 0;
+
++ n = min(*nr + 1, smp_load_acquire(&ldata->canon_head) - ldata->read_tail);
++
+ tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1);
+ size = min_t(size_t, tail + n, N_TTY_BUF_SIZE);
+
+@@ -2085,12 +2082,11 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ n = eol - tail;
+ if (n > N_TTY_BUF_SIZE)
+ n += N_TTY_BUF_SIZE;
+- n += found;
+- c = n;
++ c = n + found;
+
+- if (found && !ldata->push && read_buf(ldata, eol) == __DISABLED_CHAR) {
+- n--;
+- eof_push = !n && ldata->read_tail != ldata->line_start;
++ if (!found || read_buf(ldata, eol) != __DISABLED_CHAR) {
++ c = min(*nr, c);
++ n = c;
+ }
+
+ n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu size:%zu more:%zu\n",
+@@ -2120,7 +2116,7 @@ static int canon_copy_from_read_buf(struct tty_struct *tty,
+ ldata->push = 0;
+ tty_audit_push(tty);
+ }
+- return eof_push ? -EAGAIN : 0;
++ return 0;
+ }
+
+ extern ssize_t redirected_tty_write(struct file *, const char __user *,
+@@ -2299,10 +2295,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+
+ if (ldata->icanon && !L_EXTPROC(tty)) {
+ retval = canon_copy_from_read_buf(tty, &b, &nr);
+- if (retval == -EAGAIN) {
+- retval = 0;
+- continue;
+- } else if (retval)
++ if (retval)
+ break;
+ } else {
+ int uncopied;
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index a660ab181cca..bcf2de080471 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -450,7 +450,7 @@ receive_buf(struct tty_struct *tty, struct tty_buffer *head, int count)
+ count = disc->ops->receive_buf2(tty, p, f, count);
+ else {
+ count = min_t(int, count, tty->receive_room);
+- if (count)
++ if (count && disc->ops->receive_buf)
+ disc->ops->receive_buf(tty, p, f, count);
+ }
+ return count;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index f435977de740..ff7f15f0f1fb 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1462,13 +1462,13 @@ static int tty_reopen(struct tty_struct *tty)
+ {
+ struct tty_driver *driver = tty->driver;
+
+- if (!tty->count)
+- return -EIO;
+-
+ if (driver->type == TTY_DRIVER_TYPE_PTY &&
+ driver->subtype == PTY_TYPE_MASTER)
+ return -EIO;
+
++ if (!tty->count)
++ return -EAGAIN;
++
+ if (test_bit(TTY_EXCLUSIVE, &tty->flags) && !capable(CAP_SYS_ADMIN))
+ return -EBUSY;
+
+@@ -2087,7 +2087,11 @@ retry_open:
+
+ if (IS_ERR(tty)) {
+ retval = PTR_ERR(tty);
+- goto err_file;
++ if (retval != -EAGAIN || signal_pending(current))
++ goto err_file;
++ tty_free_file(filp);
++ schedule();
++ goto retry_open;
+ }
+
+ tty_add_file(tty, filp);
+@@ -2654,6 +2658,28 @@ static int tiocsetd(struct tty_struct *tty, int __user *p)
+ }
+
+ /**
++ * tiocgetd - get line discipline
++ * @tty: tty device
++ * @p: pointer to user data
++ *
++ * Retrieves the line discipline id directly from the ldisc.
++ *
++ * Locking: waits for ldisc reference (in case the line discipline
++ * is changing or the tty is being hungup)
++ */
++
++static int tiocgetd(struct tty_struct *tty, int __user *p)
++{
++ struct tty_ldisc *ld;
++ int ret;
++
++ ld = tty_ldisc_ref_wait(tty);
++ ret = put_user(ld->ops->num, p);
++ tty_ldisc_deref(ld);
++ return ret;
++}
++
++/**
+ * send_break - performed time break
+ * @tty: device to break on
+ * @duration: timeout in mS
+@@ -2879,7 +2905,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case TIOCGSID:
+ return tiocgsid(tty, real_tty, p);
+ case TIOCGETD:
+- return put_user(tty->ldisc->ops->num, (int __user *)p);
++ return tiocgetd(tty, p);
+ case TIOCSETD:
+ return tiocsetd(tty, p);
+ case TIOCVHANGUP:
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 26ca4f910cb0..e4c70dce3e7c 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -428,7 +428,8 @@ static void acm_read_bulk_callback(struct urb *urb)
+ set_bit(rb->index, &acm->read_urbs_free);
+ dev_dbg(&acm->data->dev, "%s - non-zero urb status: %d\n",
+ __func__, status);
+- return;
++ if ((status != -ENOENT) || (urb->actual_length == 0))
++ return;
+ }
+
+ usb_mark_last_busy(acm->dev);
+@@ -1404,6 +1405,8 @@ made_compressed_probe:
+ usb_sndbulkpipe(usb_dev, epwrite->bEndpointAddress),
+ NULL, acm->writesize, acm_write_bulk, snd);
+ snd->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++ if (quirks & SEND_ZERO_PACKET)
++ snd->urb->transfer_flags |= URB_ZERO_PACKET;
+ snd->instance = acm;
+ }
+
+@@ -1861,6 +1864,10 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+ USB_CDC_ACM_PROTO_AT_CDMA) },
+
++ { USB_DEVICE(0x1519, 0x0452), /* Intel 7260 modem */
++ .driver_info = SEND_ZERO_PACKET,
++ },
++
+ { }
+ };
+
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index dd9af38e7cda..ccfaba9ab4e4 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -134,3 +134,4 @@ struct acm {
+ #define IGNORE_DEVICE BIT(5)
+ #define QUIRK_CONTROL_LINE_STATE BIT(6)
+ #define CLEAR_HALT_CONDITIONS BIT(7)
++#define SEND_ZERO_PACKET BIT(8)
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 62084335a608..a2fef797d553 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5377,7 +5377,6 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ }
+
+ bos = udev->bos;
+- udev->bos = NULL;
+
+ for (i = 0; i < SET_CONFIG_TRIES; ++i) {
+
+@@ -5470,8 +5469,11 @@ done:
+ usb_set_usb2_hardware_lpm(udev, 1);
+ usb_unlocked_enable_lpm(udev);
+ usb_enable_ltm(udev);
+- usb_release_bos_descriptor(udev);
+- udev->bos = bos;
++ /* release the new BOS descriptor allocated by hub_port_init() */
++ if (udev->bos != bos) {
++ usb_release_bos_descriptor(udev);
++ udev->bos = bos;
++ }
+ return 0;
+
+ re_enumerate:
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 78241b5550df..1018f563465b 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -616,8 +616,30 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ if ((raw_port_status & PORT_RESET) ||
+ !(raw_port_status & PORT_PE))
+ return 0xffffffff;
+- if (time_after_eq(jiffies,
+- bus_state->resume_done[wIndex])) {
++ /* did port event handler already start resume timing? */
++ if (!bus_state->resume_done[wIndex]) {
++ /* If not, maybe we are in a host initiated resume? */
++ if (test_bit(wIndex, &bus_state->resuming_ports)) {
++ /* Host initiated resume doesn't time the resume
++ * signalling using resume_done[].
++ * It manually sets RESUME state, sleeps 20ms
++ * and sets U0 state. This should probably be
++ * changed, but not right now.
++ */
++ } else {
++ /* port resume was discovered now and here,
++ * start resume timing
++ */
++ unsigned long timeout = jiffies +
++ msecs_to_jiffies(USB_RESUME_TIMEOUT);
++
++ set_bit(wIndex, &bus_state->resuming_ports);
++ bus_state->resume_done[wIndex] = timeout;
++ mod_timer(&hcd->rh_timer, timeout);
++ }
++ /* Has resume been signalled for USB_RESUME_TIME yet? */
++ } else if (time_after_eq(jiffies,
++ bus_state->resume_done[wIndex])) {
+ int time_left;
+
+ xhci_dbg(xhci, "Resume USB2 port %d\n",
+@@ -658,13 +680,24 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ } else {
+ /*
+ * The resume has been signaling for less than
+- * 20ms. Report the port status as SUSPEND,
+- * let the usbcore check port status again
+- * and clear resume signaling later.
++ * USB_RESUME_TIME. Report the port status as SUSPEND,
++ * let the usbcore check port status again and clear
++ * resume signaling later.
+ */
+ status |= USB_PORT_STAT_SUSPEND;
+ }
+ }
++ /*
++ * Clear stale usb2 resume signalling variables in case port changed
++ * state during resume signalling. For example on error
++ */
++ if ((bus_state->resume_done[wIndex] ||
++ test_bit(wIndex, &bus_state->resuming_ports)) &&
++ (raw_port_status & PORT_PLS_MASK) != XDEV_U3 &&
++ (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
++ bus_state->resume_done[wIndex] = 0;
++ clear_bit(wIndex, &bus_state->resuming_ports);
++ }
+ if ((raw_port_status & PORT_PLS_MASK) == XDEV_U0
+ && (raw_port_status & PORT_POWER)
+ && (bus_state->suspended_ports & (1 << wIndex))) {
+@@ -995,6 +1028,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ if ((temp & PORT_PE) == 0)
+ goto error;
+
++ set_bit(wIndex, &bus_state->resuming_ports);
+ xhci_set_link_state(xhci, port_array, wIndex,
+ XDEV_RESUME);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -1002,6 +1036,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ spin_lock_irqsave(&xhci->lock, flags);
+ xhci_set_link_state(xhci, port_array, wIndex,
+ XDEV_U0);
++ clear_bit(wIndex, &bus_state->resuming_ports);
+ }
+ bus_state->port_c_suspend |= 1 << wIndex;
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index c47d3e480586..a22c430faea7 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -28,7 +28,9 @@
+ #include "xhci.h"
+ #include "xhci-trace.h"
+
+-#define PORT2_SSIC_CONFIG_REG2 0x883c
++#define SSIC_PORT_NUM 2
++#define SSIC_PORT_CFG2 0x880c
++#define SSIC_PORT_CFG2_OFFSET 0x30
+ #define PROG_DONE (1 << 30)
+ #define SSIC_PORT_UNUSED (1 << 31)
+
+@@ -45,6 +47,7 @@
+ #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5
+ #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f
+ #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI 0x9d2f
++#define PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI 0x0aa8
+
+ static const char hcd_name[] = "xhci_hcd";
+
+@@ -152,7 +155,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ (pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI ||
+- pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)) {
++ pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++ pdev->device == PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI)) {
+ xhci->quirks |= XHCI_PME_STUCK_QUIRK;
+ }
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+@@ -316,28 +320,36 @@ static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend)
+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
+ u32 val;
+ void __iomem *reg;
++ int i;
+
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
+
+- reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2;
+-
+- /* Notify SSIC that SSIC profile programming is not done */
+- val = readl(reg) & ~PROG_DONE;
+- writel(val, reg);
+-
+- /* Mark SSIC port as unused(suspend) or used(resume) */
+- val = readl(reg);
+- if (suspend)
+- val |= SSIC_PORT_UNUSED;
+- else
+- val &= ~SSIC_PORT_UNUSED;
+- writel(val, reg);
+-
+- /* Notify SSIC that SSIC profile programming is done */
+- val = readl(reg) | PROG_DONE;
+- writel(val, reg);
+- readl(reg);
++ for (i = 0; i < SSIC_PORT_NUM; i++) {
++ reg = (void __iomem *) xhci->cap_regs +
++ SSIC_PORT_CFG2 +
++ i * SSIC_PORT_CFG2_OFFSET;
++
++ /*
++ * Notify SSIC that SSIC profile programming
++ * is not done.
++ */
++ val = readl(reg) & ~PROG_DONE;
++ writel(val, reg);
++
++ /* Mark SSIC port as unused(suspend) or used(resume) */
++ val = readl(reg);
++ if (suspend)
++ val |= SSIC_PORT_UNUSED;
++ else
++ val &= ~SSIC_PORT_UNUSED;
++ writel(val, reg);
++
++ /* Notify SSIC that SSIC profile programming is done */
++ val = readl(reg) | PROG_DONE;
++ writel(val, reg);
++ readl(reg);
++ }
+ }
+
+ reg = (void __iomem *) xhci->cap_regs + 0x80a4;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index fe9e2d3a223c..23c712ec7541 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1583,7 +1583,8 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ */
+ bogus_port_status = true;
+ goto cleanup;
+- } else {
++ } else if (!test_bit(faked_port_index,
++ &bus_state->resuming_ports)) {
+ xhci_dbg(xhci, "resume HS port %d\n", port_id);
+ bus_state->resume_done[faked_port_index] = jiffies +
+ msecs_to_jiffies(USB_RESUME_TIMEOUT);
+diff --git a/drivers/usb/phy/phy-msm-usb.c b/drivers/usb/phy/phy-msm-usb.c
+index c58c3c0dbe35..260e510578e7 100644
+--- a/drivers/usb/phy/phy-msm-usb.c
++++ b/drivers/usb/phy/phy-msm-usb.c
+@@ -1599,6 +1599,8 @@ static int msm_otg_read_dt(struct platform_device *pdev, struct msm_otg *motg)
+ &motg->id.nb);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "register ID notifier failed\n");
++ extcon_unregister_notifier(motg->vbus.extcon,
++ EXTCON_USB, &motg->vbus.nb);
+ return ret;
+ }
+
+@@ -1660,15 +1662,6 @@ static int msm_otg_probe(struct platform_device *pdev)
+ if (!motg)
+ return -ENOMEM;
+
+- pdata = dev_get_platdata(&pdev->dev);
+- if (!pdata) {
+- if (!np)
+- return -ENXIO;
+- ret = msm_otg_read_dt(pdev, motg);
+- if (ret)
+- return ret;
+- }
+-
+ motg->phy.otg = devm_kzalloc(&pdev->dev, sizeof(struct usb_otg),
+ GFP_KERNEL);
+ if (!motg->phy.otg)
+@@ -1710,6 +1703,15 @@ static int msm_otg_probe(struct platform_device *pdev)
+ if (!motg->regs)
+ return -ENOMEM;
+
++ pdata = dev_get_platdata(&pdev->dev);
++ if (!pdata) {
++ if (!np)
++ return -ENXIO;
++ ret = msm_otg_read_dt(pdev, motg);
++ if (ret)
++ return ret;
++ }
++
+ /*
+ * NOTE: The PHYs can be multiplexed between the chipidea controller
+ * and the dwc3 controller, using a single bit. It is important that
+@@ -1717,8 +1719,10 @@ static int msm_otg_probe(struct platform_device *pdev)
+ */
+ if (motg->phy_number) {
+ phy_select = devm_ioremap_nocache(&pdev->dev, USB2_PHY_SEL, 4);
+- if (!phy_select)
+- return -ENOMEM;
++ if (!phy_select) {
++ ret = -ENOMEM;
++ goto unregister_extcon;
++ }
+ /* Enable second PHY with the OTG port */
+ writel(0x1, phy_select);
+ }
+@@ -1728,7 +1732,8 @@ static int msm_otg_probe(struct platform_device *pdev)
+ motg->irq = platform_get_irq(pdev, 0);
+ if (motg->irq < 0) {
+ dev_err(&pdev->dev, "platform_get_irq failed\n");
+- return motg->irq;
++ ret = motg->irq;
++ goto unregister_extcon;
+ }
+
+ regs[0].supply = "vddcx";
+@@ -1737,7 +1742,7 @@ static int msm_otg_probe(struct platform_device *pdev)
+
+ ret = devm_regulator_bulk_get(motg->phy.dev, ARRAY_SIZE(regs), regs);
+ if (ret)
+- return ret;
++ goto unregister_extcon;
+
+ motg->vddcx = regs[0].consumer;
+ motg->v3p3 = regs[1].consumer;
+@@ -1834,6 +1839,12 @@ disable_clks:
+ clk_disable_unprepare(motg->clk);
+ if (!IS_ERR(motg->core_clk))
+ clk_disable_unprepare(motg->core_clk);
++unregister_extcon:
++ extcon_unregister_notifier(motg->id.extcon,
++ EXTCON_USB_HOST, &motg->id.nb);
++ extcon_unregister_notifier(motg->vbus.extcon,
++ EXTCON_USB, &motg->vbus.nb);
++
+ return ret;
+ }
+
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 59b2126b21a3..1dd9919081f8 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -98,6 +98,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x81AC) }, /* MSD Dash Hawk */
+ { USB_DEVICE(0x10C4, 0x81AD) }, /* INSYS USB Modem */
+ { USB_DEVICE(0x10C4, 0x81C8) }, /* Lipowsky Industrie Elektronik GmbH, Baby-JTAG */
++ { USB_DEVICE(0x10C4, 0x81D7) }, /* IAI Corp. RCB-CV-USB USB to RS485 Adaptor */
+ { USB_DEVICE(0x10C4, 0x81E2) }, /* Lipowsky Industrie Elektronik GmbH, Baby-LIN */
+ { USB_DEVICE(0x10C4, 0x81E7) }, /* Aerocomm Radio */
+ { USB_DEVICE(0x10C4, 0x81E8) }, /* Zephyr Bioharness */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index a5a0376bbd48..8c660ae401d8 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -824,6 +824,7 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE(FTDI_VID, FTDI_TURTELIZER_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ { USB_DEVICE(RATOC_VENDOR_ID, RATOC_PRODUCT_ID_USB60F) },
++ { USB_DEVICE(RATOC_VENDOR_ID, RATOC_PRODUCT_ID_SCU18) },
+ { USB_DEVICE(FTDI_VID, FTDI_REU_TINY_PID) },
+
+ /* Papouch devices based on FTDI chip */
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 67c6d4469730..a84df2513994 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -615,6 +615,7 @@
+ */
+ #define RATOC_VENDOR_ID 0x0584
+ #define RATOC_PRODUCT_ID_USB60F 0xb020
++#define RATOC_PRODUCT_ID_SCU18 0xb03a
+
+ /*
+ * Infineon Technologies
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index e945b5195258..35622fba4305 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -271,6 +271,8 @@ static void option_instat_callback(struct urb *urb);
+ #define TELIT_PRODUCT_CC864_SINGLE 0x1006
+ #define TELIT_PRODUCT_DE910_DUAL 0x1010
+ #define TELIT_PRODUCT_UE910_V2 0x1012
++#define TELIT_PRODUCT_LE922_USBCFG0 0x1042
++#define TELIT_PRODUCT_LE922_USBCFG3 0x1043
+ #define TELIT_PRODUCT_LE920 0x1200
+ #define TELIT_PRODUCT_LE910 0x1201
+
+@@ -623,6 +625,16 @@ static const struct option_blacklist_info sierra_mc73xx_blacklist = {
+ .reserved = BIT(8) | BIT(10) | BIT(11),
+ };
+
++static const struct option_blacklist_info telit_le922_blacklist_usbcfg0 = {
++ .sendsetup = BIT(2),
++ .reserved = BIT(0) | BIT(1) | BIT(3),
++};
++
++static const struct option_blacklist_info telit_le922_blacklist_usbcfg3 = {
++ .sendsetup = BIT(0),
++ .reserved = BIT(1) | BIT(2) | BIT(3),
++};
++
+ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
+ { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
+@@ -1172,6 +1184,10 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
++ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
++ .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
++ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG3),
++ .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
+ .driver_info = (kernel_ulong_t)&telit_le910_blacklist },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+@@ -1691,7 +1707,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_P) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+- { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX) },
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX, 0xff) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PLXX),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) },
+diff --git a/drivers/usb/serial/visor.c b/drivers/usb/serial/visor.c
+index 60afb39eb73c..337a0be89fcf 100644
+--- a/drivers/usb/serial/visor.c
++++ b/drivers/usb/serial/visor.c
+@@ -544,6 +544,11 @@ static int treo_attach(struct usb_serial *serial)
+ (serial->num_interrupt_in == 0))
+ return 0;
+
++ if (serial->num_bulk_in < 2 || serial->num_interrupt_in < 2) {
++ dev_err(&serial->interface->dev, "missing endpoints\n");
++ return -ENODEV;
++ }
++
+ /*
+ * It appears that Treos and Kyoceras want to use the
+ * 1st bulk in endpoint to communicate with the 2nd bulk out endpoint,
+@@ -597,8 +602,10 @@ static int clie_5_attach(struct usb_serial *serial)
+ */
+
+ /* some sanity check */
+- if (serial->num_ports < 2)
+- return -1;
++ if (serial->num_bulk_out < 2) {
++ dev_err(&serial->interface->dev, "missing bulk out endpoints\n");
++ return -ENODEV;
++ }
+
+ /* port 0 now uses the modified endpoint Address */
+ port = serial->port[0];
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 6b659967898e..e6572a665b2e 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -759,16 +759,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ */
+ would_dump(bprm, interpreter);
+
+- retval = kernel_read(interpreter, 0, bprm->buf,
+- BINPRM_BUF_SIZE);
+- if (retval != BINPRM_BUF_SIZE) {
++ /* Get the exec headers */
++ retval = kernel_read(interpreter, 0,
++ (void *)&loc->interp_elf_ex,
++ sizeof(loc->interp_elf_ex));
++ if (retval != sizeof(loc->interp_elf_ex)) {
+ if (retval >= 0)
+ retval = -EIO;
+ goto out_free_dentry;
+ }
+
+- /* Get the exec headers */
+- loc->interp_elf_ex = *((struct elfhdr *)bprm->buf);
+ break;
+ }
+ elf_ppnt++;
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index 3cbb0e834694..3ce646792c7e 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -885,7 +885,7 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
+ loff_t pos, eof;
+ size_t len;
+ void *data;
+- int ret;
++ int ret = -ENOBUFS;
+
+ ASSERT(op != NULL);
+ ASSERT(page != NULL);
+@@ -905,6 +905,15 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
+ cache = container_of(object->fscache.cache,
+ struct cachefiles_cache, cache);
+
++ pos = (loff_t)page->index << PAGE_SHIFT;
++
++ /* We mustn't write more data than we have, so we have to beware of a
++ * partial page at EOF.
++ */
++ eof = object->fscache.store_limit_l;
++ if (pos >= eof)
++ goto error;
++
+ /* write the page to the backing filesystem and let it store it in its
+ * own time */
+ path.mnt = cache->mnt;
+@@ -912,40 +921,38 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
+ file = dentry_open(&path, O_RDWR | O_LARGEFILE, cache->cache_cred);
+ if (IS_ERR(file)) {
+ ret = PTR_ERR(file);
+- } else {
+- pos = (loff_t) page->index << PAGE_SHIFT;
+-
+- /* we mustn't write more data than we have, so we have
+- * to beware of a partial page at EOF */
+- eof = object->fscache.store_limit_l;
+- len = PAGE_SIZE;
+- if (eof & ~PAGE_MASK) {
+- ASSERTCMP(pos, <, eof);
+- if (eof - pos < PAGE_SIZE) {
+- _debug("cut short %llx to %llx",
+- pos, eof);
+- len = eof - pos;
+- ASSERTCMP(pos + len, ==, eof);
+- }
+- }
+-
+- data = kmap(page);
+- ret = __kernel_write(file, data, len, &pos);
+- kunmap(page);
+- if (ret != len)
+- ret = -EIO;
+- fput(file);
++ goto error_2;
+ }
+
+- if (ret < 0) {
+- if (ret == -EIO)
+- cachefiles_io_error_obj(
+- object, "Write page to backing file failed");
+- ret = -ENOBUFS;
++ len = PAGE_SIZE;
++ if (eof & ~PAGE_MASK) {
++ if (eof - pos < PAGE_SIZE) {
++ _debug("cut short %llx to %llx",
++ pos, eof);
++ len = eof - pos;
++ ASSERTCMP(pos + len, ==, eof);
++ }
+ }
+
+- _leave(" = %d", ret);
+- return ret;
++ data = kmap(page);
++ ret = __kernel_write(file, data, len, &pos);
++ kunmap(page);
++ fput(file);
++ if (ret != len)
++ goto error_eio;
++
++ _leave(" = 0");
++ return 0;
++
++error_eio:
++ ret = -EIO;
++error_2:
++ if (ret == -EIO)
++ cachefiles_io_error_obj(object,
++ "Write page to backing file failed");
++error:
++ _leave(" = -ENOBUFS [%d]", ret);
++ return -ENOBUFS;
+ }
+
+ /*
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 900e19cf9ef6..2597b0663bf2 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -566,6 +566,8 @@ static int parse_options(char *options, struct super_block *sb)
+ /* Fall through */
+ case Opt_dax:
+ #ifdef CONFIG_FS_DAX
++ ext2_msg(sb, KERN_WARNING,
++ "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
+ set_opt(sbi->s_mount_opt, DAX);
+ #else
+ ext2_msg(sb, KERN_INFO, "dax option not supported");
+diff --git a/fs/ext4/crypto.c b/fs/ext4/crypto.c
+index 2fab243a4c9e..7d6cda4738a4 100644
+--- a/fs/ext4/crypto.c
++++ b/fs/ext4/crypto.c
+@@ -408,7 +408,7 @@ int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex)
+ struct ext4_crypto_ctx *ctx;
+ struct page *ciphertext_page = NULL;
+ struct bio *bio;
+- ext4_lblk_t lblk = ex->ee_block;
++ ext4_lblk_t lblk = le32_to_cpu(ex->ee_block);
+ ext4_fsblk_t pblk = ext4_ext_pblock(ex);
+ unsigned int len = ext4_ext_get_actual_len(ex);
+ int ret, err = 0;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index fd1f28be5296..eb897089fbd0 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -26,6 +26,7 @@
+ #include <linux/seqlock.h>
+ #include <linux/mutex.h>
+ #include <linux/timer.h>
++#include <linux/version.h>
+ #include <linux/wait.h>
+ #include <linux/blockgroup_lock.h>
+ #include <linux/percpu_counter.h>
+@@ -723,19 +724,55 @@ struct move_extent {
+ <= (EXT4_GOOD_OLD_INODE_SIZE + \
+ (einode)->i_extra_isize)) \
+
++/*
++ * We use an encoding that preserves the times for extra epoch "00":
++ *
++ * extra msb of adjust for signed
++ * epoch 32-bit 32-bit tv_sec to
++ * bits time decoded 64-bit tv_sec 64-bit tv_sec valid time range
++ * 0 0 1 -0x80000000..-0x00000001 0x000000000 1901-12-13..1969-12-31
++ * 0 0 0 0x000000000..0x07fffffff 0x000000000 1970-01-01..2038-01-19
++ * 0 1 1 0x080000000..0x0ffffffff 0x100000000 2038-01-19..2106-02-07
++ * 0 1 0 0x100000000..0x17fffffff 0x100000000 2106-02-07..2174-02-25
++ * 1 0 1 0x180000000..0x1ffffffff 0x200000000 2174-02-25..2242-03-16
++ * 1 0 0 0x200000000..0x27fffffff 0x200000000 2242-03-16..2310-04-04
++ * 1 1 1 0x280000000..0x2ffffffff 0x300000000 2310-04-04..2378-04-22
++ * 1 1 0 0x300000000..0x37fffffff 0x300000000 2378-04-22..2446-05-10
++ *
++ * Note that previous versions of the kernel on 64-bit systems would
++ * incorrectly use extra epoch bits 1,1 for dates between 1901 and
++ * 1970. e2fsck will correct this, assuming that it is run on the
++ * affected filesystem before 2242.
++ */
++
+ static inline __le32 ext4_encode_extra_time(struct timespec *time)
+ {
+- return cpu_to_le32((sizeof(time->tv_sec) > 4 ?
+- (time->tv_sec >> 32) & EXT4_EPOCH_MASK : 0) |
+- ((time->tv_nsec << EXT4_EPOCH_BITS) & EXT4_NSEC_MASK));
++ u32 extra = sizeof(time->tv_sec) > 4 ?
++ ((time->tv_sec - (s32)time->tv_sec) >> 32) & EXT4_EPOCH_MASK : 0;
++ return cpu_to_le32(extra | (time->tv_nsec << EXT4_EPOCH_BITS));
+ }
+
+ static inline void ext4_decode_extra_time(struct timespec *time, __le32 extra)
+ {
+- if (sizeof(time->tv_sec) > 4)
+- time->tv_sec |= (__u64)(le32_to_cpu(extra) & EXT4_EPOCH_MASK)
+- << 32;
+- time->tv_nsec = (le32_to_cpu(extra) & EXT4_NSEC_MASK) >> EXT4_EPOCH_BITS;
++ if (unlikely(sizeof(time->tv_sec) > 4 &&
++ (extra & cpu_to_le32(EXT4_EPOCH_MASK)))) {
++#if LINUX_VERSION_CODE < KERNEL_VERSION(4,20,0)
++ /* Handle legacy encoding of pre-1970 dates with epoch
++ * bits 1,1. We assume that by kernel version 4.20,
++ * everyone will have run fsck over the affected
++ * filesystems to correct the problem. (This
++ * backwards compatibility may be removed before this
++ * time, at the discretion of the ext4 developers.)
++ */
++ u64 extra_bits = le32_to_cpu(extra) & EXT4_EPOCH_MASK;
++ if (extra_bits == 3 && ((time->tv_sec) & 0x80000000) != 0)
++ extra_bits = 0;
++ time->tv_sec += extra_bits << 32;
++#else
++ time->tv_sec += (u64)(le32_to_cpu(extra) & EXT4_EPOCH_MASK) << 32;
++#endif
++ }
++ time->tv_nsec = (le32_to_cpu(extra) & EXT4_NSEC_MASK) >> EXT4_EPOCH_BITS;
+ }
+
+ #define EXT4_INODE_SET_XTIME(xtime, inode, raw_inode) \
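
The extra-epoch scheme above widens the on-disk 32-bit seconds field with two epoch bits kept next to the nanoseconds. A minimal standalone sketch of the encode/decode round-trip (userspace C with illustrative names, not the kernel code):

    #include <stdio.h>
    #include <stdint.h>

    #define EPOCH_BITS 2
    #define EPOCH_MASK ((1 << EPOCH_BITS) - 1)

    /* Mirrors the kernel expression: subtracting the sign-extended low
     * 32 bits from the full 64-bit seconds isolates the two epoch bits. */
    static uint32_t encode_extra(int64_t tv_sec)
    {
            return ((tv_sec - (int32_t)tv_sec) >> 32) & EPOCH_MASK;
    }

    /* Rebuilds 64-bit seconds from the on-disk low word and epoch bits. */
    static int64_t decode(int32_t low, uint32_t extra)
    {
            return (int64_t)low + ((int64_t)(extra & EPOCH_MASK) << 32);
    }

    int main(void)
    {
            int64_t t = 4102444800LL;   /* 2100-01-01, past the 2038 limit */
            int32_t low = (int32_t)t;   /* what the 32-bit field retains */
            uint32_t extra = encode_extra(t);

            printf("extra=%u decoded=%lld\n",
                   extra, (long long)decode(low, extra));
            return 0;
    }

This prints extra=1 decoded=4102444800, matching the 2038-01-19..2106-02-07 row of the table above.
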
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index cf0c472047e3..c7c53fd46a41 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1040,7 +1040,7 @@ exit_free:
+ * do not copy the full number of backups at this time. The resize
+ * which changed s_groups_count will backup again.
+ */
+-static void update_backups(struct super_block *sb, int blk_off, char *data,
++static void update_backups(struct super_block *sb, sector_t blk_off, char *data,
+ int size, int meta_bg)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+@@ -1065,7 +1065,7 @@ static void update_backups(struct super_block *sb, int blk_off, char *data,
+ group = ext4_list_backups(sb, &three, &five, &seven);
+ last = sbi->s_groups_count;
+ } else {
+- group = ext4_meta_bg_first_group(sb, group) + 1;
++ group = ext4_get_group_number(sb, blk_off) + 1;
+ last = (ext4_group_t)(group + EXT4_DESC_PER_BLOCK(sb) - 2);
+ }
+
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index df84bd256c9f..7683892d855c 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1664,8 +1664,12 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
+ }
+ sbi->s_jquota_fmt = m->mount_opt;
+ #endif
+-#ifndef CONFIG_FS_DAX
+ } else if (token == Opt_dax) {
++#ifdef CONFIG_FS_DAX
++ ext4_msg(sb, KERN_WARNING,
++ "DAX enabled. Warning: EXPERIMENTAL, use at your own risk");
++ sbi->s_mount_opt |= m->mount_opt;
++#else
+ ext4_msg(sb, KERN_INFO, "dax option not supported");
+ return -1;
+ #endif
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index c677f2c1044b..3627fd7cf4a0 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -52,7 +52,7 @@ static const char *ext4_encrypted_follow_link(struct dentry *dentry, void **cook
+ /* Symlink is encrypted */
+ sd = (struct ext4_encrypted_symlink_data *)caddr;
+ cstr.name = sd->encrypted_path;
+- cstr.len = le32_to_cpu(sd->len);
++ cstr.len = le16_to_cpu(sd->len);
+ if ((cstr.len +
+ sizeof(struct ext4_encrypted_symlink_data) - 1) >
+ max_size) {
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 8f15fc134040..6726c4a5efa2 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -787,7 +787,6 @@ bool f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ else
+ d_type = DT_UNKNOWN;
+
+- /* encrypted case */
+ de_name.name = d->filename[bit_pos];
+ de_name.len = le16_to_cpu(de->name_len);
+
+@@ -795,12 +794,20 @@ bool f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ int save_len = fstr->len;
+ int ret;
+
++ de_name.name = kmalloc(de_name.len, GFP_NOFS);
++ if (!de_name.name)
++ return false;
++
++ memcpy(de_name.name, d->filename[bit_pos], de_name.len);
++
+ ret = f2fs_fname_disk_to_usr(d->inode, &de->hash_code,
+ &de_name, fstr);
+- de_name = *fstr;
+- fstr->len = save_len;
++ kfree(de_name.name);
+ if (ret < 0)
+ return true;
++
++ de_name = *fstr;
++ fstr->len = save_len;
+ }
+
+ if (!dir_emit(ctx, de_name.name, de_name.len,
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index a680bf38e4f0..dfa01c88b34b 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -947,8 +947,13 @@ static const char *f2fs_encrypted_follow_link(struct dentry *dentry, void **cook
+
+ /* Symlink is encrypted */
+ sd = (struct f2fs_encrypted_symlink_data *)caddr;
+- cstr.name = sd->encrypted_path;
+ cstr.len = le16_to_cpu(sd->len);
++ cstr.name = kmalloc(cstr.len, GFP_NOFS);
++ if (!cstr.name) {
++ res = -ENOMEM;
++ goto errout;
++ }
++ memcpy(cstr.name, sd->encrypted_path, cstr.len);
+
+ /* this is broken symlink case */
+ if (cstr.name[0] == 0 && cstr.len == 0) {
+@@ -970,6 +975,8 @@ static const char *f2fs_encrypted_follow_link(struct dentry *dentry, void **cook
+ if (res < 0)
+ goto errout;
+
++ kfree(cstr.name);
++
+ paddr = pstr.name;
+
+ /* Null-terminate the name */
+@@ -979,6 +986,7 @@ static const char *f2fs_encrypted_follow_link(struct dentry *dentry, void **cook
+ page_cache_release(cpage);
+ return *cookie = paddr;
+ errout:
++ kfree(cstr.name);
+ f2fs_fname_crypto_free_buffer(&pstr);
+ kunmap(cpage);
+ page_cache_release(cpage);
+diff --git a/fs/fat/dir.c b/fs/fat/dir.c
+index 4afc4d9d2e41..8b2127ffb226 100644
+--- a/fs/fat/dir.c
++++ b/fs/fat/dir.c
+@@ -610,9 +610,9 @@ parse_record:
+ int status = fat_parse_long(inode, &cpos, &bh, &de,
+ &unicode, &nr_slots);
+ if (status < 0) {
+- ctx->pos = cpos;
++ bh = NULL;
+ ret = status;
+- goto out;
++ goto end_of_dir;
+ } else if (status == PARSE_INVALID)
+ goto record_end;
+ else if (status == PARSE_NOT_LONGNAME)
+@@ -654,8 +654,9 @@ parse_record:
+ fill_len = short_len;
+
+ start_filldir:
+- if (!fake_offset)
+- ctx->pos = cpos - (nr_slots + 1) * sizeof(struct msdos_dir_entry);
++ ctx->pos = cpos - (nr_slots + 1) * sizeof(struct msdos_dir_entry);
++ if (fake_offset && ctx->pos < 2)
++ ctx->pos = 2;
+
+ if (!memcmp(de->name, MSDOS_DOT, MSDOS_NAME)) {
+ if (!dir_emit_dot(file, ctx))
+@@ -681,14 +682,19 @@ record_end:
+ fake_offset = 0;
+ ctx->pos = cpos;
+ goto get_new;
++
+ end_of_dir:
+- ctx->pos = cpos;
++ if (fake_offset && cpos < 2)
++ ctx->pos = 2;
++ else
++ ctx->pos = cpos;
+ fill_failed:
+ brelse(bh);
+ if (unicode)
+ __putname(unicode);
+ out:
+ mutex_unlock(&sbi->s_lock);
++
+ return ret;
+ }
+
+diff --git a/fs/fscache/netfs.c b/fs/fscache/netfs.c
+index 6d941f56faf4..9b28649df3a1 100644
+--- a/fs/fscache/netfs.c
++++ b/fs/fscache/netfs.c
+@@ -22,6 +22,7 @@ static LIST_HEAD(fscache_netfs_list);
+ int __fscache_register_netfs(struct fscache_netfs *netfs)
+ {
+ struct fscache_netfs *ptr;
++ struct fscache_cookie *cookie;
+ int ret;
+
+ _enter("{%s}", netfs->name);
+@@ -29,29 +30,25 @@ int __fscache_register_netfs(struct fscache_netfs *netfs)
+ INIT_LIST_HEAD(&netfs->link);
+
+ /* allocate a cookie for the primary index */
+- netfs->primary_index =
+- kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
++ cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
+
+- if (!netfs->primary_index) {
++ if (!cookie) {
+ _leave(" = -ENOMEM");
+ return -ENOMEM;
+ }
+
+ /* initialise the primary index cookie */
+- atomic_set(&netfs->primary_index->usage, 1);
+- atomic_set(&netfs->primary_index->n_children, 0);
+- atomic_set(&netfs->primary_index->n_active, 1);
++ atomic_set(&cookie->usage, 1);
++ atomic_set(&cookie->n_children, 0);
++ atomic_set(&cookie->n_active, 1);
+
+- netfs->primary_index->def = &fscache_fsdef_netfs_def;
+- netfs->primary_index->parent = &fscache_fsdef_index;
+- netfs->primary_index->netfs_data = netfs;
+- netfs->primary_index->flags = 1 << FSCACHE_COOKIE_ENABLED;
++ cookie->def = &fscache_fsdef_netfs_def;
++ cookie->parent = &fscache_fsdef_index;
++ cookie->netfs_data = netfs;
++ cookie->flags = 1 << FSCACHE_COOKIE_ENABLED;
+
+- atomic_inc(&netfs->primary_index->parent->usage);
+- atomic_inc(&netfs->primary_index->parent->n_children);
+-
+- spin_lock_init(&netfs->primary_index->lock);
+- INIT_HLIST_HEAD(&netfs->primary_index->backing_objects);
++ spin_lock_init(&cookie->lock);
++ INIT_HLIST_HEAD(&cookie->backing_objects);
+
+ /* check the netfs type is not already present */
+ down_write(&fscache_addremove_sem);
+@@ -62,6 +59,10 @@ int __fscache_register_netfs(struct fscache_netfs *netfs)
+ goto already_registered;
+ }
+
++ atomic_inc(&cookie->parent->usage);
++ atomic_inc(&cookie->parent->n_children);
++
++ netfs->primary_index = cookie;
+ list_add(&netfs->link, &fscache_netfs_list);
+ ret = 0;
+
+@@ -70,11 +71,8 @@ int __fscache_register_netfs(struct fscache_netfs *netfs)
+ already_registered:
+ up_write(&fscache_addremove_sem);
+
+- if (ret < 0) {
+- netfs->primary_index->parent = NULL;
+- __fscache_cookie_put(netfs->primary_index);
+- netfs->primary_index = NULL;
+- }
++ if (ret < 0)
++ kmem_cache_free(fscache_cookie_jar, cookie);
+
+ _leave(" = %d", ret);
+ return ret;
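
The reordering above is a publish-last pattern: the cookie is built through a local pointer, and only once the duplicate check passes are the parent references taken and the pointer stored in netfs->primary_index, leaving the error path a plain kmem_cache_free() with nothing to unwind. A rough userspace sketch of the idea, with illustrative names:

    #include <stdlib.h>

    struct cookie { int usage; };
    struct netfs  { struct cookie *primary_index; };

    static struct cookie *registered;  /* stands in for fscache_netfs_list */

    static int register_sketch(struct netfs *n)
    {
            struct cookie *c = calloc(1, sizeof(*c));  /* build privately */
            if (!c)
                    return -1;
            c->usage = 1;

            if (registered) {  /* duplicate: nothing published, just free */
                    free(c);
                    return -1;
            }

            n->primary_index = c;  /* publish only after every check passed */
            registered = c;
            return 0;
    }

    int main(void)
    {
            struct netfs n = { 0 };
            return register_sketch(&n);
    }
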
+diff --git a/fs/fscache/page.c b/fs/fscache/page.c
+index 483bbc613bf0..ca916af5a7c4 100644
+--- a/fs/fscache/page.c
++++ b/fs/fscache/page.c
+@@ -816,7 +816,7 @@ static void fscache_write_op(struct fscache_operation *_op)
+ goto superseded;
+ page = results[0];
+ _debug("gang %d [%lx]", n, page->index);
+- if (page->index > op->store_limit) {
++ if (page->index >= op->store_limit) {
+ fscache_stat(&fscache_n_store_pages_over_limit);
+ goto superseded;
+ }
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 316adb968b65..de4bdfac0cec 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -332,12 +332,17 @@ static void remove_huge_page(struct page *page)
+ * truncation is indicated by end of range being LLONG_MAX
+ * In this case, we first scan the range and release found pages.
+ * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv
+- * maps and global counts.
++ * maps and global counts. Page faults can not race with truncation
++ * in this routine. hugetlb_no_page() prevents page faults in the
++ * truncated range. It checks i_size before allocation, and again after
++ * with the page table lock for the page held. The same lock must be
++ * acquired to unmap a page.
+ * hole punch is indicated if end is not LLONG_MAX
+ * In the hole punch case we scan the range and release found pages.
+ * Only when releasing a page is the associated region/reserv map
+ * deleted. The region/reserv map for ranges without associated
+- * pages are not modified.
++ * pages are not modified. Page faults can race with hole punch.
++ * This is indicated if we find a mapped page.
+ * Note: If the passed end of range value is beyond the end of file, but
+ * not LLONG_MAX this routine still performs a hole punch operation.
+ */
+@@ -361,46 +366,37 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
+ next = start;
+ while (next < end) {
+ /*
+- * Make sure to never grab more pages that we
+- * might possibly need.
++ * Don't grab more pages than the number left in the range.
+ */
+ if (end - next < lookup_nr)
+ lookup_nr = end - next;
+
+ /*
+- * This pagevec_lookup() may return pages past 'end',
+- * so we must check for page->index > end.
++ * When no more pages are found, we are done.
+ */
+- if (!pagevec_lookup(&pvec, mapping, next, lookup_nr)) {
+- if (next == start)
+- break;
+- next = start;
+- continue;
+- }
++ if (!pagevec_lookup(&pvec, mapping, next, lookup_nr))
++ break;
+
+ for (i = 0; i < pagevec_count(&pvec); ++i) {
+ struct page *page = pvec.pages[i];
+ u32 hash;
+
++ /*
++ * The page (index) could be beyond end. This is
++ * only possible in the punch hole case as end is
++ * max page offset in the truncate case.
++ */
++ next = page->index;
++ if (next >= end)
++ break;
++
+ hash = hugetlb_fault_mutex_hash(h, current->mm,
+ &pseudo_vma,
+ mapping, next, 0);
+ mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
+ lock_page(page);
+- if (page->index >= end) {
+- unlock_page(page);
+- mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+- next = end; /* we are done */
+- break;
+- }
+-
+- /*
+- * If page is mapped, it was faulted in after being
+- * unmapped. Do nothing in this race case. In the
+- * normal case page is not mapped.
+- */
+- if (!page_mapped(page)) {
++ if (likely(!page_mapped(page))) {
+ bool rsv_on_error = !PagePrivate(page);
+ /*
+ * We must free the huge page and remove
+@@ -421,17 +417,23 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
+ hugetlb_fix_reserve_counts(
+ inode, rsv_on_error);
+ }
++ } else {
++ /*
++ * If page is mapped, it was faulted in after
++ * being unmapped. It indicates a race between
++ * hole punch and page fault. Do nothing in
++ * this case. Getting here in a truncate
++ * operation is a bug.
++ */
++ BUG_ON(truncate_op);
+ }
+
+- if (page->index > next)
+- next = page->index;
+-
+- ++next;
+ unlock_page(page);
+-
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ }
++ ++next;
+ huge_pagevec_release(&pvec);
++ cond_resched();
+ }
+
+ if (truncate_op)
+@@ -647,9 +649,6 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > inode->i_size)
+ i_size_write(inode, offset + len);
+ inode->i_ctime = CURRENT_TIME;
+- spin_lock(&inode->i_lock);
+- inode->i_private = NULL;
+- spin_unlock(&inode->i_lock);
+ out:
+ mutex_unlock(&inode->i_mutex);
+ return error;
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 8c44654ce274..684996c8a3a4 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -427,7 +427,6 @@ static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy)
+ struct journal_head *last_jh;
+ struct journal_head *next_jh = jh;
+ int ret;
+- int freed = 0;
+
+ if (!jh)
+ return 0;
+@@ -441,10 +440,9 @@ static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy)
+ else
+ ret = __jbd2_journal_remove_checkpoint(jh) + 1;
+ if (!ret)
+- return freed;
++ return 0;
+ if (ret == 2)
+ return 1;
+- freed = 1;
+ /*
+ * This function only frees up some memory
+ * if possible so we dont have an obligation
+@@ -452,10 +450,10 @@ static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy)
+ * requested:
+ */
+ if (need_resched())
+- return freed;
++ return 0;
+ } while (jh != last_jh);
+
+- return freed;
++ return 0;
+ }
+
+ /*
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 6b8338ec2464..1498ad9f731a 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1009,7 +1009,8 @@ out:
+ }
+
+ /* Fast check whether buffer is already attached to the required transaction */
+-static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh)
++static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh,
++ bool undo)
+ {
+ struct journal_head *jh;
+ bool ret = false;
+@@ -1036,6 +1037,9 @@ static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh)
+ jh = READ_ONCE(bh->b_private);
+ if (!jh)
+ goto out;
++ /* For undo access buffer must have data copied */
++ if (undo && !jh->b_committed_data)
++ goto out;
+ if (jh->b_transaction != handle->h_transaction &&
+ jh->b_next_transaction != handle->h_transaction)
+ goto out;
+@@ -1073,7 +1077,7 @@ int jbd2_journal_get_write_access(handle_t *handle, struct buffer_head *bh)
+ struct journal_head *jh;
+ int rc;
+
+- if (jbd2_write_access_granted(handle, bh))
++ if (jbd2_write_access_granted(handle, bh, false))
+ return 0;
+
+ jh = jbd2_journal_add_journal_head(bh);
+@@ -1210,7 +1214,7 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
+ char *committed_data = NULL;
+
+ JBUFFER_TRACE(jh, "entry");
+- if (jbd2_write_access_granted(handle, bh))
++ if (jbd2_write_access_granted(handle, bh, true))
+ return 0;
+
+ jh = jbd2_journal_add_journal_head(bh);
+@@ -2152,6 +2156,7 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+
+ if (!buffer_dirty(bh)) {
+ /* bdflush has written it. We can drop it now */
++ __jbd2_journal_remove_checkpoint(jh);
+ goto zap_buffer;
+ }
+
+@@ -2181,6 +2186,7 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+ /* The orphan record's transaction has
+ * committed. We can cleanse this buffer */
+ clear_buffer_jbddirty(bh);
++ __jbd2_journal_remove_checkpoint(jh);
+ goto zap_buffer;
+ }
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 5133bb18830e..c8bd1ddb7df8 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8060,7 +8060,6 @@ static void nfs4_layoutreturn_release(void *calldata)
+ pnfs_set_layout_stateid(lo, &lrp->res.stateid, true);
+ pnfs_mark_matching_lsegs_invalid(lo, &freeme, &lrp->args.range);
+ pnfs_clear_layoutreturn_waitbit(lo);
+- lo->plh_block_lgets--;
+ spin_unlock(&lo->plh_inode->i_lock);
+ pnfs_free_lseg_list(&freeme);
+ pnfs_put_layout_hdr(lrp->args.layout);
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index ce38b4ccc9ab..5e355650eea0 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -2519,6 +2519,11 @@ static int dlm_migrate_lockres(struct dlm_ctxt *dlm,
+ spin_lock(&dlm->master_lock);
+ ret = dlm_add_migration_mle(dlm, res, mle, &oldmle, name,
+ namelen, target, dlm->node_num);
++ /* get an extra reference on the mle.
++ * otherwise the assert_master from the new
++ * master will destroy this.
++ */
++ dlm_get_mle_inuse(mle);
+ spin_unlock(&dlm->master_lock);
+ spin_unlock(&dlm->spinlock);
+
+@@ -2554,6 +2559,7 @@ fail:
+ if (mle_added) {
+ dlm_mle_detach_hb_events(dlm, mle);
+ dlm_put_mle(mle);
++ dlm_put_mle_inuse(mle);
+ } else if (mle) {
+ kmem_cache_free(dlm_mle_cache, mle);
+ mle = NULL;
+@@ -2571,17 +2577,6 @@ fail:
+ * ensure that all assert_master work is flushed. */
+ flush_workqueue(dlm->dlm_worker);
+
+- /* get an extra reference on the mle.
+- * otherwise the assert_master from the new
+- * master will destroy this.
+- * also, make sure that all callers of dlm_get_mle
+- * take both dlm->spinlock and dlm->master_lock */
+- spin_lock(&dlm->spinlock);
+- spin_lock(&dlm->master_lock);
+- dlm_get_mle_inuse(mle);
+- spin_unlock(&dlm->master_lock);
+- spin_unlock(&dlm->spinlock);
+-
+ /* notify new node and send all lock state */
+ /* call send_one_lockres with migration flag.
+ * this serves as notice to the target node that a
+@@ -3310,6 +3305,15 @@ top:
+ mle->new_master != dead_node)
+ continue;
+
++ if (mle->new_master == dead_node && mle->inuse) {
++ mlog(ML_NOTICE, "%s: target %u died during "
++ "migration from %u, the MLE is "
++ "still keep used, ignore it!\n",
++ dlm->name, dead_node,
++ mle->master);
++ continue;
++ }
++
+ /* If we have reached this point, this mle needs to be
+ * removed from the list and freed. */
+ dlm_clean_migration_mle(dlm, mle);
+diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
+index 58eaa5c0d387..572a3dc4861f 100644
+--- a/fs/ocfs2/dlm/dlmrecovery.c
++++ b/fs/ocfs2/dlm/dlmrecovery.c
+@@ -2360,6 +2360,8 @@ static void dlm_do_local_recovery_cleanup(struct dlm_ctxt *dlm, u8 dead_node)
+ break;
+ }
+ }
++ dlm_lockres_clear_refmap_bit(dlm, res,
++ dead_node);
+ spin_unlock(&res->spinlock);
+ continue;
+ }
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 1c91103c1333..9a894aecc960 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -1390,6 +1390,7 @@ static int __ocfs2_cluster_lock(struct ocfs2_super *osb,
+ unsigned int gen;
+ int noqueue_attempted = 0;
+ int dlm_locked = 0;
++ int kick_dc = 0;
+
+ if (!(lockres->l_flags & OCFS2_LOCK_INITIALIZED)) {
+ mlog_errno(-EINVAL);
+@@ -1524,7 +1525,12 @@ update_holders:
+ unlock:
+ lockres_clear_flags(lockres, OCFS2_LOCK_UPCONVERT_FINISHING);
+
++ /* ocfs2_unblock_lock requeues on seeing OCFS2_LOCK_UPCONVERT_FINISHING */
++ kick_dc = (lockres->l_flags & OCFS2_LOCK_BLOCKED);
++
+ spin_unlock_irqrestore(&lockres->l_lock, flags);
++ if (kick_dc)
++ ocfs2_wake_downconvert_thread(osb);
+ out:
+ /*
+ * This is helping work around a lock inversion between the page lock
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 12bfa9ca5583..c0356c18cdf5 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -369,13 +369,11 @@ static int ocfs2_mknod(struct inode *dir,
+ goto leave;
+ }
+
+- status = posix_acl_create(dir, &mode, &default_acl, &acl);
++ status = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
+ if (status) {
+ mlog_errno(status);
+ goto leave;
+ }
+- /* update inode->i_mode after mask with "umask". */
+- inode->i_mode = mode;
+
+ handle = ocfs2_start_trans(osb, ocfs2_mknod_credits(osb->sb,
+ S_ISDIR(mode),
+diff --git a/fs/ocfs2/resize.c b/fs/ocfs2/resize.c
+index d5da6f624142..79b8021302b3 100644
+--- a/fs/ocfs2/resize.c
++++ b/fs/ocfs2/resize.c
+@@ -54,11 +54,12 @@
+ static u16 ocfs2_calc_new_backup_super(struct inode *inode,
+ struct ocfs2_group_desc *gd,
+ u16 cl_cpg,
++ u16 old_bg_clusters,
+ int set)
+ {
+ int i;
+ u16 backups = 0;
+- u32 cluster;
++ u32 cluster, lgd_cluster;
+ u64 blkno, gd_blkno, lgd_blkno = le64_to_cpu(gd->bg_blkno);
+
+ for (i = 0; i < OCFS2_MAX_BACKUP_SUPERBLOCKS; i++) {
+@@ -71,6 +72,12 @@ static u16 ocfs2_calc_new_backup_super(struct inode *inode,
+ else if (gd_blkno > lgd_blkno)
+ break;
+
++ /* check if this backup super has already been done */
++ lgd_cluster = ocfs2_blocks_to_clusters(inode->i_sb, lgd_blkno);
++ lgd_cluster += old_bg_clusters;
++ if (lgd_cluster >= cluster)
++ continue;
++
+ if (set)
+ ocfs2_set_bit(cluster % cl_cpg,
+ (unsigned long *)gd->bg_bitmap);
+@@ -99,6 +106,7 @@ static int ocfs2_update_last_group_and_inode(handle_t *handle,
+ u16 chain, num_bits, backups = 0;
+ u16 cl_bpc = le16_to_cpu(cl->cl_bpc);
+ u16 cl_cpg = le16_to_cpu(cl->cl_cpg);
++ u16 old_bg_clusters;
+
+ trace_ocfs2_update_last_group_and_inode(new_clusters,
+ first_new_cluster);
+@@ -112,6 +120,7 @@ static int ocfs2_update_last_group_and_inode(handle_t *handle,
+
+ group = (struct ocfs2_group_desc *)group_bh->b_data;
+
++ old_bg_clusters = le16_to_cpu(group->bg_bits) / cl_bpc;
+ /* update the group first. */
+ num_bits = new_clusters * cl_bpc;
+ le16_add_cpu(&group->bg_bits, num_bits);
+@@ -125,7 +134,7 @@ static int ocfs2_update_last_group_and_inode(handle_t *handle,
+ OCFS2_FEATURE_COMPAT_BACKUP_SB)) {
+ backups = ocfs2_calc_new_backup_super(bm_inode,
+ group,
+- cl_cpg, 1);
++ cl_cpg, old_bg_clusters, 1);
+ le16_add_cpu(&group->bg_free_bits_count, -1 * backups);
+ }
+
+@@ -163,7 +172,7 @@ out_rollback:
+ if (ret < 0) {
+ ocfs2_calc_new_backup_super(bm_inode,
+ group,
+- cl_cpg, 0);
++ cl_cpg, old_bg_clusters, 0);
+ le16_add_cpu(&group->bg_free_bits_count, backups);
+ le16_add_cpu(&group->bg_bits, -1 * num_bits);
+ le16_add_cpu(&group->bg_free_bits_count, -1 * num_bits);
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 8865f7963700..14788ddcd3f3 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -366,18 +366,17 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ int offset = buf->offset + buf->len;
+
+ if (ops->can_merge && offset + chars <= PAGE_SIZE) {
+- int error = ops->confirm(pipe, buf);
+- if (error)
++ ret = ops->confirm(pipe, buf);
++ if (ret)
+ goto out;
+
+ ret = copy_page_from_iter(buf->page, offset, chars, from);
+ if (unlikely(ret < chars)) {
+- error = -EFAULT;
++ ret = -EFAULT;
+ goto out;
+ }
+ do_wakeup = 1;
+- buf->len += chars;
+- ret = chars;
++ buf->len += ret;
+ if (!iov_iter_count(from))
+ goto out;
+ }
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 29595af32866..4b6fb2cbd928 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -2484,6 +2484,7 @@ static ssize_t proc_coredump_filter_write(struct file *file,
+ mm = get_task_mm(task);
+ if (!mm)
+ goto out_no_mm;
++ ret = 0;
+
+ for (i = 0, mask = 1; i < MMF_DUMP_FILTER_BITS; i++, mask <<= 1) {
+ if (val & mask)
+diff --git a/fs/proc/fd.c b/fs/proc/fd.c
+index 6e5fcd00733e..3c2a915c695a 100644
+--- a/fs/proc/fd.c
++++ b/fs/proc/fd.c
+@@ -291,11 +291,19 @@ static struct dentry *proc_lookupfd(struct inode *dir, struct dentry *dentry,
+ */
+ int proc_fd_permission(struct inode *inode, int mask)
+ {
+- int rv = generic_permission(inode, mask);
++ struct task_struct *p;
++ int rv;
++
++ rv = generic_permission(inode, mask);
+ if (rv == 0)
+- return 0;
+- if (task_tgid(current) == proc_pid(inode))
++ return rv;
++
++ rcu_read_lock();
++ p = pid_task(proc_pid(inode), PIDTYPE_PID);
++ if (p && same_thread_group(p, current))
+ rv = 0;
++ rcu_read_unlock();
++
+ return rv;
+ }
+
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index 225586e141ca..a8e288755f24 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -25,12 +25,17 @@ static void seq_set_overflow(struct seq_file *m)
+ static void *seq_buf_alloc(unsigned long size)
+ {
+ void *buf;
++ gfp_t gfp = GFP_KERNEL;
+
+ /*
+- * __GFP_NORETRY to avoid oom-killings with high-order allocations -
+- * it's better to fall back to vmalloc() than to kill things.
++ * For high order allocations, use __GFP_NORETRY to avoid oom-killing -
++ * it's better to fall back to vmalloc() than to kill things. For small
++ * allocations, just use GFP_KERNEL which will oom kill, thus no need
++ * for vmalloc fallback.
+ */
+- buf = kmalloc(size, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
++ if (size > PAGE_SIZE)
++ gfp |= __GFP_NORETRY | __GFP_NOWARN;
++ buf = kmalloc(size, gfp);
+ if (!buf && size > PAGE_SIZE)
+ buf = vmalloc(size);
+ return buf;
+diff --git a/fs/sysv/inode.c b/fs/sysv/inode.c
+index 590ad9206e3f..02fa1dcc5969 100644
+--- a/fs/sysv/inode.c
++++ b/fs/sysv/inode.c
+@@ -162,15 +162,8 @@ void sysv_set_inode(struct inode *inode, dev_t rdev)
+ inode->i_fop = &sysv_dir_operations;
+ inode->i_mapping->a_ops = &sysv_aops;
+ } else if (S_ISLNK(inode->i_mode)) {
+- if (inode->i_blocks) {
+- inode->i_op = &sysv_symlink_inode_operations;
+- inode->i_mapping->a_ops = &sysv_aops;
+- } else {
+- inode->i_op = &simple_symlink_inode_operations;
+- inode->i_link = (char *)SYSV_I(inode)->i_data;
+- nd_terminate_link(inode->i_link, inode->i_size,
+- sizeof(SYSV_I(inode)->i_data) - 1);
+- }
++ inode->i_op = &sysv_symlink_inode_operations;
++ inode->i_mapping->a_ops = &sysv_aops;
+ } else
+ init_special_inode(inode, inode->i_mode, rdev);
+ }
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index cbc8d5d2755a..c66f2423e1f5 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -340,8 +340,12 @@ static struct dentry *start_creating(const char *name, struct dentry *parent)
+ dput(dentry);
+ dentry = ERR_PTR(-EEXIST);
+ }
+- if (IS_ERR(dentry))
++
++ if (IS_ERR(dentry)) {
+ mutex_unlock(&parent->d_inode->i_mutex);
++ simple_release_fs(&tracefs_mount, &tracefs_mount_count);
++ }
++
+ return dentry;
+ }
+
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 8e920b44c0ac..da791ac52f13 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -204,6 +204,7 @@ struct crypto_ahash {
+ unsigned int keylen);
+
+ unsigned int reqsize;
++ bool has_setkey;
+ struct crypto_tfm base;
+ };
+
+@@ -361,6 +362,11 @@ static inline void *ahash_request_ctx(struct ahash_request *req)
+ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
++static inline bool crypto_ahash_has_setkey(struct crypto_ahash *tfm)
++{
++ return tfm->has_setkey;
++}
++
+ /**
+ * crypto_ahash_finup() - update and finalize message digest
+ * @req: reference to the ahash_request handle that holds all information
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 018afb264ac2..a2bfd7843f18 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -30,6 +30,9 @@ struct alg_sock {
+
+ struct sock *parent;
+
++ unsigned int refcnt;
++ unsigned int nokey_refcnt;
++
+ const struct af_alg_type *type;
+ void *private;
+ };
+@@ -50,9 +53,11 @@ struct af_alg_type {
+ void (*release)(void *private);
+ int (*setkey)(void *private, const u8 *key, unsigned int keylen);
+ int (*accept)(void *private, struct sock *sk);
++ int (*accept_nokey)(void *private, struct sock *sk);
+ int (*setauthsize)(void *private, unsigned int authsize);
+
+ struct proto_ops *ops;
++ struct proto_ops *ops_nokey;
+ struct module *owner;
+ char name[14];
+ };
+@@ -67,6 +72,7 @@ int af_alg_register_type(const struct af_alg_type *type);
+ int af_alg_unregister_type(const struct af_alg_type *type);
+
+ int af_alg_release(struct socket *sock);
++void af_alg_release_parent(struct sock *sk);
+ int af_alg_accept(struct sock *sk, struct socket *newsock);
+
+ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len);
+@@ -83,11 +89,6 @@ static inline struct alg_sock *alg_sk(struct sock *sk)
+ return (struct alg_sock *)sk;
+ }
+
+-static inline void af_alg_release_parent(struct sock *sk)
+-{
+- sock_put(alg_sk(sk)->parent);
+-}
+-
+ static inline void af_alg_init_completion(struct af_alg_completion *completion)
+ {
+ init_completion(&completion->completion);
+diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
+index d8dd41fb034f..fd8742a40ff3 100644
+--- a/include/crypto/skcipher.h
++++ b/include/crypto/skcipher.h
+@@ -61,6 +61,8 @@ struct crypto_skcipher {
+ unsigned int ivsize;
+ unsigned int reqsize;
+
++ bool has_setkey;
++
+ struct crypto_tfm base;
+ };
+
+@@ -305,6 +307,11 @@ static inline int crypto_skcipher_setkey(struct crypto_skcipher *tfm,
+ return tfm->setkey(tfm, key, keylen);
+ }
+
++static inline bool crypto_skcipher_has_setkey(struct crypto_skcipher *tfm)
++{
++ return tfm->has_setkey;
++}
++
+ /**
+ * crypto_skcipher_reqtfm() - obtain cipher handle from request
+ * @req: skcipher_request out of which the cipher handle is to be obtained
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 19c2e947d4d1..3a3ff074782f 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -767,7 +767,6 @@ extern void blk_rq_set_block_pc(struct request *);
+ extern void blk_requeue_request(struct request_queue *, struct request *);
+ extern void blk_add_request_payload(struct request *rq, struct page *page,
+ unsigned int len);
+-extern int blk_rq_check_limits(struct request_queue *q, struct request *rq);
+ extern int blk_lld_busy(struct request_queue *q);
+ extern int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
+ struct bio_set *bs, gfp_t gfp_mask,
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index 76dd4f0da5ca..2ead22dd74a0 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -87,7 +87,8 @@ enum hrtimer_restart {
+ * @function: timer expiry callback function
+ * @base: pointer to the timer base (per cpu and per clock)
+ * @state: state information (See bit values above)
+- * @start_pid: timer statistics field to store the pid of the task which
++ * @is_rel: Set if the timer was armed relative
++ * @start_pid: timer statistics field to store the pid of the task which
+ * started the timer
+ * @start_site: timer statistics field to store the site where the timer
+ * was started
+@@ -101,7 +102,8 @@ struct hrtimer {
+ ktime_t _softexpires;
+ enum hrtimer_restart (*function)(struct hrtimer *);
+ struct hrtimer_clock_base *base;
+- unsigned long state;
++ u8 state;
++ u8 is_rel;
+ #ifdef CONFIG_TIMER_STATS
+ int start_pid;
+ void *start_site;
+@@ -321,6 +323,27 @@ static inline void clock_was_set_delayed(void) { }
+
+ #endif
+
++static inline ktime_t
++__hrtimer_expires_remaining_adjusted(const struct hrtimer *timer, ktime_t now)
++{
++ ktime_t rem = ktime_sub(timer->node.expires, now);
++
++ /*
++ * Adjust relative timers for the extra we added in
++ * hrtimer_start_range_ns() to prevent short timeouts.
++ */
++ if (IS_ENABLED(CONFIG_TIME_LOW_RES) && timer->is_rel)
++ rem.tv64 -= hrtimer_resolution;
++ return rem;
++}
++
++static inline ktime_t
++hrtimer_expires_remaining_adjusted(const struct hrtimer *timer)
++{
++ return __hrtimer_expires_remaining_adjusted(timer,
++ timer->base->get_time());
++}
++
+ extern void clock_was_set(void);
+ #ifdef CONFIG_TIMERFD
+ extern void timerfd_clock_was_set(void);
+@@ -390,7 +413,12 @@ static inline void hrtimer_restart(struct hrtimer *timer)
+ }
+
+ /* Query timers: */
+-extern ktime_t hrtimer_get_remaining(const struct hrtimer *timer);
++extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust);
++
++static inline ktime_t hrtimer_get_remaining(const struct hrtimer *timer)
++{
++ return __hrtimer_get_remaining(timer, false);
++}
+
+ extern u64 hrtimer_get_next_event(void);
+
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index ab1e0392b5ac..92557bbce7e7 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -239,7 +239,6 @@ extern int sigprocmask(int, sigset_t *, sigset_t *);
+ extern void set_current_blocked(sigset_t *);
+ extern void __set_current_blocked(const sigset_t *);
+ extern int show_unhandled_signals;
+-extern int sigsuspend(sigset_t *);
+
+ struct sigaction {
+ #ifndef __ARCH_HAS_IRIX_SIGACTION
+diff --git a/include/sound/rawmidi.h b/include/sound/rawmidi.h
+index f6cbef78db62..3b91ad5d5115 100644
+--- a/include/sound/rawmidi.h
++++ b/include/sound/rawmidi.h
+@@ -167,6 +167,10 @@ int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
+ int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count);
+ int snd_rawmidi_transmit(struct snd_rawmidi_substream *substream,
+ unsigned char *buffer, int count);
++int __snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
++ unsigned char *buffer, int count);
++int __snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream,
++ int count);
+
+ /* main midi functions */
+
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 8f0324ef72ab..9982616ce712 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -269,6 +269,9 @@ static u32 clear_idx;
+ #define PREFIX_MAX 32
+ #define LOG_LINE_MAX (1024 - PREFIX_MAX)
+
++#define LOG_LEVEL(v) ((v) & 0x07)
++#define LOG_FACILITY(v) ((v) >> 3 & 0xff)
++
+ /* record buffer */
+ #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+ #define LOG_ALIGN 4
+@@ -611,7 +614,6 @@ struct devkmsg_user {
+ static ssize_t devkmsg_write(struct kiocb *iocb, struct iov_iter *from)
+ {
+ char *buf, *line;
+- int i;
+ int level = default_message_loglevel;
+ int facility = 1; /* LOG_USER */
+ size_t len = iov_iter_count(from);
+@@ -641,12 +643,13 @@ static ssize_t devkmsg_write(struct kiocb *iocb, struct iov_iter *from)
+ line = buf;
+ if (line[0] == '<') {
+ char *endp = NULL;
++ unsigned int u;
+
+- i = simple_strtoul(line+1, &endp, 10);
++ u = simple_strtoul(line + 1, &endp, 10);
+ if (endp && endp[0] == '>') {
+- level = i & 7;
+- if (i >> 3)
+- facility = i >> 3;
++ level = LOG_LEVEL(u);
++ if (LOG_FACILITY(u) != 0)
++ facility = LOG_FACILITY(u);
+ endp++;
+ len -= endp - line;
+ line = endp;
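
The new LOG_LEVEL()/LOG_FACILITY() macros split a syslog priority value: the low three bits carry the severity, the bits above them the facility. A small standalone sketch of the same parsing (illustrative, not the kernel code):

    #include <stdio.h>
    #include <stdlib.h>

    #define LOG_LEVEL(v)    ((v) & 0x07)
    #define LOG_FACILITY(v) ((v) >> 3 & 0xff)

    int main(void)
    {
            const char *line = "<13>hello"; /* 13 = facility 1, level 5 */
            char *endp;
            unsigned long u = strtoul(line + 1, &endp, 10);

            if (endp && *endp == '>')       /* well-formed <N> prefix */
                    printf("facility=%lu level=%lu msg=\"%s\"\n",
                           LOG_FACILITY(u), LOG_LEVEL(u), endp + 1);
            return 0;
    }
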
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index bcd214e4b4d6..9a584b204e85 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6678,7 +6678,7 @@ static void sched_init_numa(void)
+
+ sched_domains_numa_masks[i][j] = mask;
+
+- for (k = 0; k < nr_node_ids; k++) {
++ for_each_node(k) {
+ if (node_distance(j, k) > sched_domains_numa_distance[i])
+ continue;
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 0f6bbbe77b46..6c863ca42a76 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -3552,7 +3552,7 @@ SYSCALL_DEFINE0(pause)
+
+ #endif
+
+-int sigsuspend(sigset_t *set)
++static int sigsuspend(sigset_t *set)
+ {
+ current->saved_sigmask = current->blocked;
+ set_current_blocked(set);
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 457a373e2181..8be98288dcbc 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -897,10 +897,10 @@ static int enqueue_hrtimer(struct hrtimer *timer,
+ */
+ static void __remove_hrtimer(struct hrtimer *timer,
+ struct hrtimer_clock_base *base,
+- unsigned long newstate, int reprogram)
++ u8 newstate, int reprogram)
+ {
+ struct hrtimer_cpu_base *cpu_base = base->cpu_base;
+- unsigned int state = timer->state;
++ u8 state = timer->state;
+
+ timer->state = newstate;
+ if (!(state & HRTIMER_STATE_ENQUEUED))
+@@ -930,7 +930,7 @@ static inline int
+ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool restart)
+ {
+ if (hrtimer_is_queued(timer)) {
+- unsigned long state = timer->state;
++ u8 state = timer->state;
+ int reprogram;
+
+ /*
+@@ -954,6 +954,22 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest
+ return 0;
+ }
+
++static inline ktime_t hrtimer_update_lowres(struct hrtimer *timer, ktime_t tim,
++ const enum hrtimer_mode mode)
++{
++#ifdef CONFIG_TIME_LOW_RES
++ /*
++ * CONFIG_TIME_LOW_RES indicates that the system has no way to return
++ * granular time values. For relative timers we add hrtimer_resolution
++ * (i.e. one jiffie) to prevent short timeouts.
++ */
++ timer->is_rel = mode & HRTIMER_MODE_REL;
++ if (timer->is_rel)
++ tim = ktime_add_safe(tim, ktime_set(0, hrtimer_resolution));
++#endif
++ return tim;
++}
++
+ /**
+ * hrtimer_start_range_ns - (re)start an hrtimer on the current CPU
+ * @timer: the timer to be added
+@@ -974,19 +990,10 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ /* Remove an active timer from the queue: */
+ remove_hrtimer(timer, base, true);
+
+- if (mode & HRTIMER_MODE_REL) {
++ if (mode & HRTIMER_MODE_REL)
+ tim = ktime_add_safe(tim, base->get_time());
+- /*
+- * CONFIG_TIME_LOW_RES is a temporary way for architectures
+- * to signal that they simply return xtime in
+- * do_gettimeoffset(). In this case we want to round up by
+- * resolution when starting a relative timer, to avoid short
+- * timeouts. This will go away with the GTOD framework.
+- */
+-#ifdef CONFIG_TIME_LOW_RES
+- tim = ktime_add_safe(tim, ktime_set(0, hrtimer_resolution));
+-#endif
+- }
++
++ tim = hrtimer_update_lowres(timer, tim, mode);
+
+ hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+
+@@ -1074,19 +1081,23 @@ EXPORT_SYMBOL_GPL(hrtimer_cancel);
+ /**
+ * hrtimer_get_remaining - get remaining time for the timer
+ * @timer: the timer to read
++ * @adjust: adjust relative timers when CONFIG_TIME_LOW_RES=y
+ */
+-ktime_t hrtimer_get_remaining(const struct hrtimer *timer)
++ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust)
+ {
+ unsigned long flags;
+ ktime_t rem;
+
+ lock_hrtimer_base(timer, &flags);
+- rem = hrtimer_expires_remaining(timer);
++ if (IS_ENABLED(CONFIG_TIME_LOW_RES) && adjust)
++ rem = hrtimer_expires_remaining_adjusted(timer);
++ else
++ rem = hrtimer_expires_remaining(timer);
+ unlock_hrtimer_base(timer, &flags);
+
+ return rem;
+ }
+-EXPORT_SYMBOL_GPL(hrtimer_get_remaining);
++EXPORT_SYMBOL_GPL(__hrtimer_get_remaining);
+
+ #ifdef CONFIG_NO_HZ_COMMON
+ /**
+@@ -1220,6 +1231,14 @@ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
+ fn = timer->function;
+
+ /*
++ * Clear the 'is relative' flag for the TIME_LOW_RES case. If the
++ * timer is restarted with a period then it becomes an absolute
++ * timer. If it's not restarted it does not matter.
++ */
++ if (IS_ENABLED(CONFIG_TIME_LOW_RES))
++ timer->is_rel = false;
++
++ /*
+ * Because we run timers from hardirq context, there is no chance
+ * they get migrated to another cpu, therefore its safe to unlock
+ * the timer base.
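
On CONFIG_TIME_LOW_RES systems the change pads relative timers by one resolution unit when they are armed, and hrtimer_expires_remaining_adjusted() subtracts the padding again so callers see the interval they asked for. A standalone sketch of that bookkeeping (the names and the nanosecond type are illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define RESOLUTION_NS 10000000LL        /* e.g. one 10 ms tick */

    struct sketch_timer {
            int64_t expires_ns;
            bool    is_rel;
    };

    static void arm_relative(struct sketch_timer *t, int64_t now,
                             int64_t delta)
    {
            t->is_rel = true;
            /* pad so a coarse clock can never expire the timer early */
            t->expires_ns = now + delta + RESOLUTION_NS;
    }

    static int64_t remaining_adjusted(const struct sketch_timer *t,
                                      int64_t now)
    {
            int64_t rem = t->expires_ns - now;

            if (t->is_rel)
                    rem -= RESOLUTION_NS;   /* hide padding from callers */
            return rem;
    }

    int main(void)
    {
            struct sketch_timer t;

            arm_relative(&t, 0, 50000000LL);        /* 50 ms from now */
            printf("remaining: %lld ns\n",
                   (long long)remaining_adjusted(&t, 0)); /* 50000000 */
            return 0;
    }
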
+diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c
+index f75e35b60149..ba7d8b288bb3 100644
+--- a/kernel/time/timer_list.c
++++ b/kernel/time/timer_list.c
+@@ -69,7 +69,7 @@ print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer,
+ print_name_offset(m, taddr);
+ SEQ_printf(m, ", ");
+ print_name_offset(m, timer->function);
+- SEQ_printf(m, ", S:%02lx", timer->state);
++ SEQ_printf(m, ", S:%02x", timer->state);
+ #ifdef CONFIG_TIMER_STATS
+ SEQ_printf(m, ", ");
+ print_name_offset(m, timer->start_site);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6e79408674aa..69f97541ce6e 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6602,7 +6602,7 @@ static int instance_rmdir(const char *name)
+ tracing_set_nop(tr);
+ event_trace_del_tracer(tr);
+ ftrace_destroy_function_files(tr);
+- debugfs_remove_recursive(tr->dir);
++ tracefs_remove_recursive(tr->dir);
+ free_trace_buffers(tr);
+
+ kfree(tr->name);
+diff --git a/kernel/trace/trace_printk.c b/kernel/trace/trace_printk.c
+index 36c1455b7567..2dbffe22cbcf 100644
+--- a/kernel/trace/trace_printk.c
++++ b/kernel/trace/trace_printk.c
+@@ -267,6 +267,7 @@ static const char **find_next(void *v, loff_t *pos)
+ if (*pos < last_index + start_index)
+ return __start___tracepoint_str + (*pos - last_index);
+
++ start_index += last_index;
+ return find_next_mod_format(start_index, v, fmt, pos);
+ }
+
+diff --git a/lib/hexdump.c b/lib/hexdump.c
+index 8d74c20d8595..992457b1284c 100644
+--- a/lib/hexdump.c
++++ b/lib/hexdump.c
+@@ -169,11 +169,15 @@ int hex_dump_to_buffer(const void *buf, size_t len, int rowsize, int groupsize,
+ }
+ } else {
+ for (j = 0; j < len; j++) {
+- if (linebuflen < lx + 3)
++ if (linebuflen < lx + 2)
+ goto overflow2;
+ ch = ptr[j];
+ linebuf[lx++] = hex_asc_hi(ch);
++ if (linebuflen < lx + 2)
++ goto overflow2;
+ linebuf[lx++] = hex_asc_lo(ch);
++ if (linebuflen < lx + 2)
++ goto overflow2;
+ linebuf[lx++] = ' ';
+ }
+ if (j)
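
The hexdump fix above moves the bounds check so it runs before every output character rather than once per three-character group, which is what permitted the one-byte overrun. A simplified standalone sketch of per-character checking (names are illustrative):

    #include <stdio.h>

    static const char hexchars[] = "0123456789abcdef";

    /* Emits "xx xx ..." into out; stops cleanly when out is too small.
     * Returns the number of characters written, excluding the NUL. */
    static size_t hex_line_sketch(const unsigned char *buf, size_t len,
                                  char *out, size_t outlen)
    {
            size_t lx = 0, j;

            for (j = 0; j < len; j++) {
                    if (outlen < lx + 2)    /* room for one char + NUL? */
                            break;
                    out[lx++] = hexchars[buf[j] >> 4];
                    if (outlen < lx + 2)
                            break;
                    out[lx++] = hexchars[buf[j] & 0x0f];
                    if (outlen < lx + 2)
                            break;
                    out[lx++] = ' ';
            }
            if (outlen)
                    out[lx < outlen ? lx : outlen - 1] = '\0';
            return lx;
    }

    int main(void)
    {
            char line[8];
            size_t n = hex_line_sketch((const unsigned char *)"AB", 2,
                                       line, sizeof(line));

            printf("%zu \"%s\"\n", n, line);        /* 6 "41 42 " */
            return 0;
    }
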
+diff --git a/lib/libcrc32c.c b/lib/libcrc32c.c
+index 6a08ce7d6adc..acf9da449f81 100644
+--- a/lib/libcrc32c.c
++++ b/lib/libcrc32c.c
+@@ -74,3 +74,4 @@ module_exit(libcrc32c_mod_fini);
+ MODULE_AUTHOR("Clay Haapala <chaapala@cisco.com>");
+ MODULE_DESCRIPTION("CRC32c (Castagnoli) calculations");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: crc32c");
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 619984fc07ec..4cf30e1ca163 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -957,8 +957,9 @@ EXPORT_SYMBOL(congestion_wait);
+ * jiffies for either a BDI to exit congestion of the given @sync queue
+ * or a write to complete.
+ *
+- * In the absence of zone congestion, cond_resched() is called to yield
+- * the processor if necessary but otherwise does not sleep.
++ * In the absence of zone congestion, a short sleep or a cond_resched is
++ * performed to yield the processor and to allow other subsystems to make
++ * a forward progress.
+ *
+ * The return value is 0 if the sleep is for the full timeout. Otherwise,
+ * it is the number of jiffies that were still remaining when the function
+@@ -978,7 +979,19 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout)
+ */
+ if (atomic_read(&nr_wb_congested[sync]) == 0 ||
+ !test_bit(ZONE_CONGESTED, &zone->flags)) {
+- cond_resched();
++
++ /*
++ * Memory allocation/reclaim might be called from a WQ
++ * context and the current implementation of the WQ
++ * concurrency control doesn't recognize that a particular
++ * WQ is congested if the worker thread is looping without
++ * ever sleeping. Therefore we have to do a short sleep
++ * here rather than calling cond_resched().
++ */
++ if (current->flags & PF_WQ_WORKER)
++ schedule_timeout_uninterruptible(1);
++ else
++ cond_resched();
+
+ /* In case we scheduled, work out time remaining */
+ ret = timeout - (jiffies - start);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9cc773483624..960f0ace6824 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -372,8 +372,10 @@ retry_locked:
+ spin_unlock(&resv->lock);
+
+ trg = kmalloc(sizeof(*trg), GFP_KERNEL);
+- if (!trg)
++ if (!trg) {
++ kfree(nrg);
+ return -ENOMEM;
++ }
+
+ spin_lock(&resv->lock);
+ list_add(&trg->link, &resv->region_cache);
+@@ -483,8 +485,16 @@ static long region_del(struct resv_map *resv, long f, long t)
+ retry:
+ spin_lock(&resv->lock);
+ list_for_each_entry_safe(rg, trg, head, link) {
+- if (rg->to <= f)
++ /*
++ * Skip regions before the range to be deleted. file_region
++ * ranges are normally of the form [from, to). However, there
++ * may be a "placeholder" entry in the map which is of the form
++ * (from, to) with from == to. Check for placeholder entries
++ * at the beginning of the range to be deleted.
++ */
++ if (rg->to <= f && (rg->to != rg->from || rg->to != f))
+ continue;
++
+ if (rg->from >= t)
+ break;
+
+@@ -1790,7 +1800,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
+ page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
+ if (!page)
+ goto out_uncharge_cgroup;
+-
++ if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
++ SetPagePrivate(page);
++ h->resv_huge_pages--;
++ }
+ spin_lock(&hugetlb_lock);
+ list_move(&page->lru, &h->hugepage_activelist);
+ /* Fall through */
+@@ -3587,12 +3600,12 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+ return VM_FAULT_HWPOISON_LARGE |
+ VM_FAULT_SET_HINDEX(hstate_index(h));
++ } else {
++ ptep = huge_pte_alloc(mm, address, huge_page_size(h));
++ if (!ptep)
++ return VM_FAULT_OOM;
+ }
+
+- ptep = huge_pte_alloc(mm, address, huge_page_size(h));
+- if (!ptep)
+- return VM_FAULT_OOM;
+-
+ mapping = vma->vm_file->f_mapping;
+ idx = vma_hugecache_offset(h, vma, address);
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index c57c4423c688..2233233282c6 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -902,14 +902,20 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
+ if (prev && reclaim->generation != iter->generation)
+ goto out_unlock;
+
+- do {
++ while (1) {
+ pos = READ_ONCE(iter->position);
++ if (!pos || css_tryget(&pos->css))
++ break;
+ /*
+- * A racing update may change the position and
+- * put the last reference, hence css_tryget(),
+- * or retry to see the updated position.
++ * css reference reached zero, so iter->position will
++ * be cleared by ->css_released. However, we should not
++ * rely on this happening soon, because ->css_released
++ * is called from a work queue, and by busy-waiting we
++ * might block it. So we clear iter->position right
++ * away.
+ */
+- } while (pos && !css_tryget(&pos->css));
++ (void)cmpxchg(&iter->position, pos, NULL);
++ }
+ }
+
+ if (pos)
+@@ -955,17 +961,13 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
+ }
+
+ if (reclaim) {
+- if (cmpxchg(&iter->position, pos, memcg) == pos) {
+- if (memcg)
+- css_get(&memcg->css);
+- if (pos)
+- css_put(&pos->css);
+- }
+-
+ /*
+- * pairs with css_tryget when dereferencing iter->position
+- * above.
++ * The position could have already been updated by a competing
++ * thread, so check that the value hasn't changed since we read
++ * it to avoid reclaiming from the same cgroup twice.
+ */
++ (void)cmpxchg(&iter->position, pos, memcg);
++
+ if (pos)
+ css_put(&pos->css);
+
+@@ -998,6 +1000,28 @@ void mem_cgroup_iter_break(struct mem_cgroup *root,
+ css_put(&prev->css);
+ }
+
++static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
++{
++ struct mem_cgroup *memcg = dead_memcg;
++ struct mem_cgroup_reclaim_iter *iter;
++ struct mem_cgroup_per_zone *mz;
++ int nid, zid;
++ int i;
++
++ while ((memcg = parent_mem_cgroup(memcg))) {
++ for_each_node(nid) {
++ for (zid = 0; zid < MAX_NR_ZONES; zid++) {
++ mz = &memcg->nodeinfo[nid]->zoneinfo[zid];
++ for (i = 0; i <= DEF_PRIORITY; i++) {
++ iter = &mz->iter[i];
++ cmpxchg(&iter->position,
++ dead_memcg, NULL);
++ }
++ }
++ }
++ }
++}
++
+ /*
+ * Iteration constructs for visiting all cgroups (under a tree). If
+ * loops are exited prematurely (break), mem_cgroup_iter_break() must
+@@ -2836,9 +2860,9 @@ static unsigned long tree_stat(struct mem_cgroup *memcg,
+ return val;
+ }
+
+-static inline u64 mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
++static inline unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+ {
+- u64 val;
++ unsigned long val;
+
+ if (mem_cgroup_is_root(memcg)) {
+ val = tree_stat(memcg, MEM_CGROUP_STAT_CACHE);
+@@ -2851,7 +2875,7 @@ static inline u64 mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+ else
+ val = page_counter_read(&memcg->memsw);
+ }
+- return val << PAGE_SHIFT;
++ return val;
+ }
+
+ enum {
+@@ -2885,9 +2909,9 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
+ switch (MEMFILE_ATTR(cft->private)) {
+ case RES_USAGE:
+ if (counter == &memcg->memory)
+- return mem_cgroup_usage(memcg, false);
++ return (u64)mem_cgroup_usage(memcg, false) * PAGE_SIZE;
+ if (counter == &memcg->memsw)
+- return mem_cgroup_usage(memcg, true);
++ return (u64)mem_cgroup_usage(memcg, true) * PAGE_SIZE;
+ return (u64)page_counter_read(counter) * PAGE_SIZE;
+ case RES_LIMIT:
+ return (u64)counter->limit * PAGE_SIZE;
+@@ -3387,7 +3411,6 @@ static int __mem_cgroup_usage_register_event(struct mem_cgroup *memcg,
+ ret = page_counter_memparse(args, "-1", &threshold);
+ if (ret)
+ return ret;
+- threshold <<= PAGE_SHIFT;
+
+ mutex_lock(&memcg->thresholds_lock);
+
+@@ -4361,6 +4384,13 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
+ wb_memcg_offline(memcg);
+ }
+
++static void mem_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
++
++ invalidate_reclaim_iterators(memcg);
++}
++
+ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
+ {
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+@@ -5217,6 +5247,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
+ .css_alloc = mem_cgroup_css_alloc,
+ .css_online = mem_cgroup_css_online,
+ .css_offline = mem_cgroup_css_offline,
++ .css_released = mem_cgroup_css_released,
+ .css_free = mem_cgroup_css_free,
+ .css_reset = mem_cgroup_css_reset,
+ .can_attach = mem_cgroup_can_attach,
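
The mem_cgroup_iter() change replaces busy-waiting on a dying cgroup with an eager cmpxchg(): a walker that cannot take a reference on the cached position clears the slot itself instead of spinning until ->css_released does it. A standalone sketch of the pattern with C11 atomics (types and names are illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct node { _Atomic int refcnt; };

    /* Take a reference only if the count has not already hit zero. */
    static bool tryget(struct node *n)
    {
            int r = atomic_load(&n->refcnt);

            while (r > 0)
                    if (atomic_compare_exchange_weak(&n->refcnt, &r, r + 1))
                            return true;
            return false;
    }

    static struct node *next_position(_Atomic(struct node *) *pos_slot)
    {
            for (;;) {
                    struct node *pos = atomic_load(pos_slot);

                    if (!pos || tryget(pos))
                            return pos;
                    /* Dead reference: clear the slot ourselves rather
                     * than waiting for the release path to do it. */
                    atomic_compare_exchange_strong(pos_slot, &pos, NULL);
            }
    }

    int main(void)
    {
            static struct node n = { 1 };
            static _Atomic(struct node *) slot = &n;

            return next_position(&slot) == &n ? 0 : 1;
    }
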
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 1ecc0bcaecc5..8ad35aa45436 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -554,6 +554,12 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
+
+ /* mm cannot safely be dereferenced after task_unlock(victim) */
+ mm = victim->mm;
++ /*
++ * We should send SIGKILL before setting TIF_MEMDIE in order to prevent
++ * the OOM victim from depleting the memory reserves from the user
++ * space under its control.
++ */
++ do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
+ mark_oom_victim(victim);
+ pr_err("Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB\n",
+ task_pid_nr(victim), victim->comm, K(victim->mm->total_vm),
+@@ -585,7 +591,6 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
+ }
+ rcu_read_unlock();
+
+- do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
+ put_task_struct(victim);
+ }
+ #undef K
+diff --git a/mm/slab.c b/mm/slab.c
+index 4fcc5dd8d5a6..461935bab9ef 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -282,6 +282,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
+
+ #define CFLGS_OFF_SLAB (0x80000000UL)
+ #define OFF_SLAB(x) ((x)->flags & CFLGS_OFF_SLAB)
++#define OFF_SLAB_MIN_SIZE (max_t(size_t, PAGE_SIZE >> 5, KMALLOC_MIN_SIZE + 1))
+
+ #define BATCHREFILL_LIMIT 16
+ /*
+@@ -2212,7 +2213,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
+ * it too early on. Always use on-slab management when
+ * SLAB_NOLEAKTRACE to avoid recursive calls into kmemleak)
+ */
+- if ((size >= (PAGE_SIZE >> 5)) && !slab_early_init &&
++ if (size >= OFF_SLAB_MIN_SIZE && !slab_early_init &&
+ !(flags & SLAB_NOLEAKTRACE))
+ /*
+ * Size is large, assume best to place the slab management obj
+@@ -2276,7 +2277,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
+ /*
+ * This is a possibility for one of the kmalloc_{dma,}_caches.
+ * But since we go off slab only for object size greater than
+- * PAGE_SIZE/8, and kmalloc_{dma,}_caches get created
++ * OFF_SLAB_MIN_SIZE, and kmalloc_{dma,}_caches get created
+ * in ascending order,this should not happen at all.
+ * But leave a BUG_ON for some lucky dude.
+ */
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index fbf14485a049..34444706a3a7 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1357,6 +1357,7 @@ static const struct file_operations proc_vmstat_file_operations = {
+ #endif /* CONFIG_PROC_FS */
+
+ #ifdef CONFIG_SMP
++static struct workqueue_struct *vmstat_wq;
+ static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
+ int sysctl_stat_interval __read_mostly = HZ;
+ static cpumask_var_t cpu_stat_off;
+@@ -1369,7 +1370,7 @@ static void vmstat_update(struct work_struct *w)
+ * to occur in the future. Keep on running the
+ * update worker thread.
+ */
+- schedule_delayed_work_on(smp_processor_id(),
++ queue_delayed_work_on(smp_processor_id(), vmstat_wq,
+ this_cpu_ptr(&vmstat_work),
+ round_jiffies_relative(sysctl_stat_interval));
+ } else {
+@@ -1438,7 +1439,7 @@ static void vmstat_shepherd(struct work_struct *w)
+ if (need_update(cpu) &&
+ cpumask_test_and_clear_cpu(cpu, cpu_stat_off))
+
+- schedule_delayed_work_on(cpu,
++ queue_delayed_work_on(cpu, vmstat_wq,
+ &per_cpu(vmstat_work, cpu), 0);
+
+ put_online_cpus();
+@@ -1527,6 +1528,7 @@ static int __init setup_vmstat(void)
+
+ start_shepherd_timer();
+ cpu_notifier_register_done();
++ vmstat_wq = alloc_workqueue("vmstat", WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ #endif
+ #ifdef CONFIG_PROC_FS
+ proc_create("buddyinfo", S_IRUGO, NULL, &fragmentation_file_operations);
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index f135b1b6fcdc..3734fc4d2e74 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -304,7 +304,12 @@ static void free_handle(struct zs_pool *pool, unsigned long handle)
+
+ static void record_obj(unsigned long handle, unsigned long obj)
+ {
+- *(unsigned long *)handle = obj;
++ /*
++ * The lsb of @obj represents the handle lock while the other bits
++ * represent the object value the handle points to, so the
++ * update must not tear the store.
++ */
++ WRITE_ONCE(*(unsigned long *)handle, obj);
+ }
+
+ /* zpool driver */
+@@ -1629,6 +1634,13 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
+ free_obj = obj_malloc(d_page, class, handle);
+ zs_object_copy(free_obj, used_obj, class);
+ index++;
++ /*
++ * record_obj updates the handle's value to free_obj, which would
++ * invalidate the lock bit (i.e., HANDLE_PIN_BIT) of the handle and
++ * break synchronization based on pin_tag() (e.g., zs_free), so
++ * let's keep the lock bit set.
++ */
++ free_obj |= BIT(HANDLE_PIN_BIT);
+ record_obj(handle, free_obj);
+ unpin_tag(handle);
+ obj_free(pool, class, used_obj);
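Both zsmalloc hunks revolve around the same invariant: the low bit of a handle doubles as a pin/lock flag, so migration must publish the new object location with a single untorn word store and must not clear that bit while another path may hold the pin. A userspace sketch of the idea; HANDLE_PIN_BIT and the addresses are illustrative:

#include <stdio.h>
#include <stdint.h>

#define HANDLE_PIN_BIT 0	/* illustrative: lsb doubles as the lock flag */

static void record_obj(volatile uintptr_t *handle, uintptr_t obj)
{
	*handle = obj;	/* single aligned word store, like WRITE_ONCE() */
}

int main(void)
{
	volatile uintptr_t handle = 0;
	uintptr_t free_obj = 0x1000;	/* new object location, illustrative */

	free_obj |= 1UL << HANDLE_PIN_BIT;	/* preserve the pin bit */
	record_obj(&handle, free_obj);

	printf("handle=%#lx pinned=%lu\n",
	       (unsigned long)handle, (unsigned long)(handle & 1));
	return 0;
}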
+diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
+index 36fb6b527829..5be9ffbe90ba 100644
+--- a/security/integrity/digsig.c
++++ b/security/integrity/digsig.c
+@@ -105,7 +105,7 @@ int __init integrity_load_x509(const unsigned int id, const char *path)
+ rc,
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ),
+- KEY_ALLOC_NOT_IN_QUOTA | KEY_ALLOC_TRUSTED);
++ KEY_ALLOC_NOT_IN_QUOTA);
+ if (IS_ERR(key)) {
+ rc = PTR_ERR(key);
+ pr_err("Problem loading X.509 certificate (%d): %s\n",
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 1334e02ae8f4..3d145a3ffccf 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -23,6 +23,7 @@
+ #include <linux/integrity.h>
+ #include <linux/evm.h>
+ #include <crypto/hash.h>
++#include <crypto/algapi.h>
+ #include "evm.h"
+
+ int evm_initialized;
+@@ -148,7 +149,7 @@ static enum integrity_status evm_verify_hmac(struct dentry *dentry,
+ xattr_value_len, calc.digest);
+ if (rc)
+ break;
+- rc = memcmp(xattr_data->digest, calc.digest,
++ rc = crypto_memneq(xattr_data->digest, calc.digest,
+ sizeof(calc.digest));
+ if (rc)
+ rc = -EINVAL;
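Switching evm_verify_hmac() from memcmp() to crypto_memneq() closes a timing side channel: memcmp() may return at the first mismatching byte, so the response time reveals how long the matching prefix of the digest is. A userspace sketch of a constant-time comparison in the spirit of crypto_memneq(); names are illustrative:

#include <stdio.h>

static int ct_memneq(const void *a, const void *b, unsigned long n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	unsigned long i;

	for (i = 0; i < n; i++)
		diff |= pa[i] ^ pb[i];	/* no early exit on mismatch */
	return diff != 0;
}

int main(void)
{
	unsigned char stored[4] = { 1, 2, 3, 4 };
	unsigned char calc[4]   = { 1, 2, 3, 5 };

	printf("digests differ: %d\n", ct_memneq(stored, calc, sizeof(stored)));
	return 0;
}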
+diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
+index b123c42e7dc8..b554d7f9e3be 100644
+--- a/sound/core/compress_offload.c
++++ b/sound/core/compress_offload.c
+@@ -44,6 +44,13 @@
+ #include <sound/compress_offload.h>
+ #include <sound/compress_driver.h>
+
++/* struct snd_compr_codec_caps overflows the ioctl bit size for some
++ * architectures, so we need to disable the relevant ioctls.
++ */
++#if _IOC_SIZEBITS < 14
++#define COMPR_CODEC_CAPS_OVERFLOW
++#endif
++
+ /* TODO:
+ * - add substream support for multiple devices in case of
+ * SND_DYNAMIC_MINORS is not used
+@@ -438,6 +445,7 @@ out:
+ return retval;
+ }
+
++#ifndef COMPR_CODEC_CAPS_OVERFLOW
+ static int
+ snd_compr_get_codec_caps(struct snd_compr_stream *stream, unsigned long arg)
+ {
+@@ -461,6 +469,7 @@ out:
+ kfree(caps);
+ return retval;
+ }
++#endif /* !COMPR_CODEC_CAPS_OVERFLOW */
+
+ /* revisit this with snd_pcm_preallocate_xxx */
+ static int snd_compr_allocate_buffer(struct snd_compr_stream *stream,
+@@ -799,9 +808,11 @@ static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ case _IOC_NR(SNDRV_COMPRESS_GET_CAPS):
+ retval = snd_compr_get_caps(stream, arg);
+ break;
++#ifndef COMPR_CODEC_CAPS_OVERFLOW
+ case _IOC_NR(SNDRV_COMPRESS_GET_CODEC_CAPS):
+ retval = snd_compr_get_codec_caps(stream, arg);
+ break;
++#endif
+ case _IOC_NR(SNDRV_COMPRESS_SET_PARAMS):
+ retval = snd_compr_set_params(stream, arg);
+ break;
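The guard works because an ioctl request number encodes its argument size in a field only _IOC_SIZEBITS wide; a structure larger than (1 << _IOC_SIZEBITS) - 1 bytes cannot be encoded at all, so the ioctl has to be compiled out on the affected architectures. A small Linux userspace illustration; struct big_caps is a hypothetical oversized payload:

#include <stdio.h>
#include <linux/ioctl.h>	/* _IOC_SIZEBITS (Linux-specific) */

struct big_caps { char data[20000]; };	/* hypothetical oversized payload */

int main(void)
{
	unsigned long max = (1UL << _IOC_SIZEBITS) - 1;

	printf("ioctl size field holds at most %lu bytes\n", max);
	if (sizeof(struct big_caps) > max)
		printf("struct big_caps (%zu bytes) would overflow it\n",
		       sizeof(struct big_caps));
	return 0;
}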
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 58550cc93f28..33e72c809e50 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -834,7 +834,8 @@ static int choose_rate(struct snd_pcm_substream *substream,
+ return snd_pcm_hw_param_near(substream, params, SNDRV_PCM_HW_PARAM_RATE, best_rate, NULL);
+ }
+
+-static int snd_pcm_oss_change_params(struct snd_pcm_substream *substream)
++static int snd_pcm_oss_change_params(struct snd_pcm_substream *substream,
++ bool trylock)
+ {
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct snd_pcm_hw_params *params, *sparams;
+@@ -848,7 +849,10 @@ static int snd_pcm_oss_change_params(struct snd_pcm_substream *substream)
+ struct snd_mask sformat_mask;
+ struct snd_mask mask;
+
+- if (mutex_lock_interruptible(&runtime->oss.params_lock))
++ if (trylock) {
++ if (!(mutex_trylock(&runtime->oss.params_lock)))
++ return -EAGAIN;
++ } else if (mutex_lock_interruptible(&runtime->oss.params_lock))
+ return -EINTR;
+ sw_params = kmalloc(sizeof(*sw_params), GFP_KERNEL);
+ params = kmalloc(sizeof(*params), GFP_KERNEL);
+@@ -1092,7 +1096,7 @@ static int snd_pcm_oss_get_active_substream(struct snd_pcm_oss_file *pcm_oss_fil
+ if (asubstream == NULL)
+ asubstream = substream;
+ if (substream->runtime->oss.params) {
+- err = snd_pcm_oss_change_params(substream);
++ err = snd_pcm_oss_change_params(substream, false);
+ if (err < 0)
+ return err;
+ }
+@@ -1132,7 +1136,7 @@ static int snd_pcm_oss_make_ready(struct snd_pcm_substream *substream)
+ return 0;
+ runtime = substream->runtime;
+ if (runtime->oss.params) {
+- err = snd_pcm_oss_change_params(substream);
++ err = snd_pcm_oss_change_params(substream, false);
+ if (err < 0)
+ return err;
+ }
+@@ -2163,7 +2167,7 @@ static int snd_pcm_oss_get_space(struct snd_pcm_oss_file *pcm_oss_file, int stre
+ runtime = substream->runtime;
+
+ if (runtime->oss.params &&
+- (err = snd_pcm_oss_change_params(substream)) < 0)
++ (err = snd_pcm_oss_change_params(substream, false)) < 0)
+ return err;
+
+ info.fragsize = runtime->oss.period_bytes;
+@@ -2800,7 +2804,12 @@ static int snd_pcm_oss_mmap(struct file *file, struct vm_area_struct *area)
+ return -EIO;
+
+ if (runtime->oss.params) {
+- if ((err = snd_pcm_oss_change_params(substream)) < 0)
++		/* use mutex_trylock() for params_lock to avoid a deadlock
++ * between mmap_sem and params_lock taken by
++ * copy_from/to_user() in snd_pcm_oss_write/read()
++ */
++ err = snd_pcm_oss_change_params(substream, true);
++ if (err < 0)
+ return err;
+ }
+ #ifdef CONFIG_SND_PCM_OSS_PLUGINS
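The mmap path may be entered with mmap_sem already held, while the read/write path takes params_lock and then faults user memory (which takes mmap_sem); that is the classic ABBA ordering. Taking params_lock with trylock on the mmap side and returning -EAGAIN breaks the cycle. A userspace sketch of the pattern, with illustrative names:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t params_lock = PTHREAD_MUTEX_INITIALIZER;

static int change_params(int trylock)
{
	if (trylock) {
		if (pthread_mutex_trylock(&params_lock) != 0)
			return -EAGAIN;	/* caller retries; no deadlock */
	} else {
		pthread_mutex_lock(&params_lock);
	}
	/* ... reconfigure the stream under the lock ... */
	pthread_mutex_unlock(&params_lock);
	return 0;
}

int main(void)
{
	printf("read/write path: %d\n", change_params(0));
	printf("mmap path:       %d\n", change_params(1));
	return 0;
}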
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index a7759846fbaa..795437b10082 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -942,31 +942,36 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
+ unsigned long flags;
+ long result = 0, count1;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
++ unsigned long appl_ptr;
+
++ spin_lock_irqsave(&runtime->lock, flags);
+ while (count > 0 && runtime->avail) {
+ count1 = runtime->buffer_size - runtime->appl_ptr;
+ if (count1 > count)
+ count1 = count;
+- spin_lock_irqsave(&runtime->lock, flags);
+ if (count1 > (int)runtime->avail)
+ count1 = runtime->avail;
++
++ /* update runtime->appl_ptr before unlocking for userbuf */
++ appl_ptr = runtime->appl_ptr;
++ runtime->appl_ptr += count1;
++ runtime->appl_ptr %= runtime->buffer_size;
++ runtime->avail -= count1;
++
+ if (kernelbuf)
+- memcpy(kernelbuf + result, runtime->buffer + runtime->appl_ptr, count1);
++ memcpy(kernelbuf + result, runtime->buffer + appl_ptr, count1);
+ if (userbuf) {
+ spin_unlock_irqrestore(&runtime->lock, flags);
+ if (copy_to_user(userbuf + result,
+- runtime->buffer + runtime->appl_ptr, count1)) {
++ runtime->buffer + appl_ptr, count1)) {
+ return result > 0 ? result : -EFAULT;
+ }
+ spin_lock_irqsave(&runtime->lock, flags);
+ }
+- runtime->appl_ptr += count1;
+- runtime->appl_ptr %= runtime->buffer_size;
+- runtime->avail -= count1;
+- spin_unlock_irqrestore(&runtime->lock, flags);
+ result += count1;
+ count -= count1;
+ }
++ spin_unlock_irqrestore(&runtime->lock, flags);
+ return result;
+ }
+
+@@ -1055,23 +1060,16 @@ int snd_rawmidi_transmit_empty(struct snd_rawmidi_substream *substream)
+ EXPORT_SYMBOL(snd_rawmidi_transmit_empty);
+
+ /**
+- * snd_rawmidi_transmit_peek - copy data from the internal buffer
++ * __snd_rawmidi_transmit_peek - copy data from the internal buffer
+ * @substream: the rawmidi substream
+ * @buffer: the buffer pointer
+ * @count: data size to transfer
+ *
+- * Copies data from the internal output buffer to the given buffer.
+- *
+- * Call this in the interrupt handler when the midi output is ready,
+- * and call snd_rawmidi_transmit_ack() after the transmission is
+- * finished.
+- *
+- * Return: The size of copied data, or a negative error code on failure.
++ * This is a variant of snd_rawmidi_transmit_peek() without spinlock.
+ */
+-int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
++int __snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
+ unsigned char *buffer, int count)
+ {
+- unsigned long flags;
+ int result, count1;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
+
+@@ -1081,7 +1079,6 @@ int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
+ return -EINVAL;
+ }
+ result = 0;
+- spin_lock_irqsave(&runtime->lock, flags);
+ if (runtime->avail >= runtime->buffer_size) {
+ /* warning: lowlevel layer MUST trigger down the hardware */
+ goto __skip;
+@@ -1106,25 +1103,47 @@ int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
+ }
+ }
+ __skip:
++ return result;
++}
++EXPORT_SYMBOL(__snd_rawmidi_transmit_peek);
++
++/**
++ * snd_rawmidi_transmit_peek - copy data from the internal buffer
++ * @substream: the rawmidi substream
++ * @buffer: the buffer pointer
++ * @count: data size to transfer
++ *
++ * Copies data from the internal output buffer to the given buffer.
++ *
++ * Call this in the interrupt handler when the midi output is ready,
++ * and call snd_rawmidi_transmit_ack() after the transmission is
++ * finished.
++ *
++ * Return: The size of copied data, or a negative error code on failure.
++ */
++int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream *substream,
++ unsigned char *buffer, int count)
++{
++ struct snd_rawmidi_runtime *runtime = substream->runtime;
++ int result;
++ unsigned long flags;
++
++ spin_lock_irqsave(&runtime->lock, flags);
++ result = __snd_rawmidi_transmit_peek(substream, buffer, count);
+ spin_unlock_irqrestore(&runtime->lock, flags);
+ return result;
+ }
+ EXPORT_SYMBOL(snd_rawmidi_transmit_peek);
+
+ /**
+- * snd_rawmidi_transmit_ack - acknowledge the transmission
++ * __snd_rawmidi_transmit_ack - acknowledge the transmission
+ * @substream: the rawmidi substream
+ * @count: the transferred count
+ *
+- * Advances the hardware pointer for the internal output buffer with
+- * the given size and updates the condition.
+- * Call after the transmission is finished.
+- *
+- * Return: The advanced size if successful, or a negative error code on failure.
++ * This is a variant of snd_rawmidi_transmit_ack() without spinlock.
+ */
+-int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
++int __snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
+ {
+- unsigned long flags;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
+
+ if (runtime->buffer == NULL) {
+@@ -1132,7 +1151,6 @@ int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
+ "snd_rawmidi_transmit_ack: output is not active!!!\n");
+ return -EINVAL;
+ }
+- spin_lock_irqsave(&runtime->lock, flags);
+ snd_BUG_ON(runtime->avail + count > runtime->buffer_size);
+ runtime->hw_ptr += count;
+ runtime->hw_ptr %= runtime->buffer_size;
+@@ -1142,9 +1160,32 @@ int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
+ if (runtime->drain || snd_rawmidi_ready(substream))
+ wake_up(&runtime->sleep);
+ }
+- spin_unlock_irqrestore(&runtime->lock, flags);
+ return count;
+ }
++EXPORT_SYMBOL(__snd_rawmidi_transmit_ack);
++
++/**
++ * snd_rawmidi_transmit_ack - acknowledge the transmission
++ * @substream: the rawmidi substream
++ * @count: the transferred count
++ *
++ * Advances the hardware pointer for the internal output buffer with
++ * the given size and updates the condition.
++ * Call after the transmission is finished.
++ *
++ * Return: The advanced size if successful, or a negative error code on failure.
++ */
++int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream *substream, int count)
++{
++ struct snd_rawmidi_runtime *runtime = substream->runtime;
++ int result;
++ unsigned long flags;
++
++ spin_lock_irqsave(&runtime->lock, flags);
++ result = __snd_rawmidi_transmit_ack(substream, count);
++ spin_unlock_irqrestore(&runtime->lock, flags);
++ return result;
++}
+ EXPORT_SYMBOL(snd_rawmidi_transmit_ack);
+
+ /**
+@@ -1160,12 +1201,22 @@ EXPORT_SYMBOL(snd_rawmidi_transmit_ack);
+ int snd_rawmidi_transmit(struct snd_rawmidi_substream *substream,
+ unsigned char *buffer, int count)
+ {
++ struct snd_rawmidi_runtime *runtime = substream->runtime;
++ int result;
++ unsigned long flags;
++
++ spin_lock_irqsave(&runtime->lock, flags);
+ if (!substream->opened)
+- return -EBADFD;
+- count = snd_rawmidi_transmit_peek(substream, buffer, count);
+- if (count < 0)
+- return count;
+- return snd_rawmidi_transmit_ack(substream, count);
++ result = -EBADFD;
++ else {
++ count = __snd_rawmidi_transmit_peek(substream, buffer, count);
++ if (count <= 0)
++ result = count;
++ else
++ result = __snd_rawmidi_transmit_ack(substream, count);
++ }
++ spin_unlock_irqrestore(&runtime->lock, flags);
++ return result;
+ }
+ EXPORT_SYMBOL(snd_rawmidi_transmit);
+
+@@ -1177,8 +1228,9 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
+ unsigned long flags;
+ long count1, result;
+ struct snd_rawmidi_runtime *runtime = substream->runtime;
++ unsigned long appl_ptr;
+
+- if (snd_BUG_ON(!kernelbuf && !userbuf))
++ if (!kernelbuf && !userbuf)
+ return -EINVAL;
+ if (snd_BUG_ON(!runtime->buffer))
+ return -EINVAL;
+@@ -1197,12 +1249,19 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
+ count1 = count;
+ if (count1 > (long)runtime->avail)
+ count1 = runtime->avail;
++
++ /* update runtime->appl_ptr before unlocking for userbuf */
++ appl_ptr = runtime->appl_ptr;
++ runtime->appl_ptr += count1;
++ runtime->appl_ptr %= runtime->buffer_size;
++ runtime->avail -= count1;
++
+ if (kernelbuf)
+- memcpy(runtime->buffer + runtime->appl_ptr,
++ memcpy(runtime->buffer + appl_ptr,
+ kernelbuf + result, count1);
+ else if (userbuf) {
+ spin_unlock_irqrestore(&runtime->lock, flags);
+- if (copy_from_user(runtime->buffer + runtime->appl_ptr,
++ if (copy_from_user(runtime->buffer + appl_ptr,
+ userbuf + result, count1)) {
+ spin_lock_irqsave(&runtime->lock, flags);
+ result = result > 0 ? result : -EFAULT;
+@@ -1210,9 +1269,6 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
+ }
+ spin_lock_irqsave(&runtime->lock, flags);
+ }
+- runtime->appl_ptr += count1;
+- runtime->appl_ptr %= runtime->buffer_size;
+- runtime->avail -= count1;
+ result += count1;
+ count -= count1;
+ }
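The rawmidi rework changes where the ring-buffer pointer is advanced: the region is reserved (appl_ptr snapshotted and bumped, avail decremented) while the lock is held, and only then is the lock dropped for the potentially sleeping copy to or from user space, so a concurrent caller can no longer claim the same bytes. A simplified userspace rendition; names are illustrative and wrap-around copying is elided:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct ring {
	pthread_mutex_t lock;
	char buf[16];
	unsigned long appl_ptr, avail;
};

static unsigned long ring_read(struct ring *r, char *dst, unsigned long count)
{
	unsigned long ptr, n;

	pthread_mutex_lock(&r->lock);
	n = count < r->avail ? count : r->avail;
	ptr = r->appl_ptr;			/* snapshot */
	r->appl_ptr = (r->appl_ptr + n) % sizeof(r->buf);
	r->avail -= n;				/* region now owned by us */
	pthread_mutex_unlock(&r->lock);

	memcpy(dst, r->buf + ptr, n);		/* may sleep; lock not held */
	return n;
}

int main(void)
{
	struct ring r = { PTHREAD_MUTEX_INITIALIZER, "abcdefgh", 0, 8 };
	char out[8];
	unsigned long n = ring_read(&r, out, 4);

	printf("read %lu bytes: %.4s\n", n, out);
	return 0;
}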
+diff --git a/sound/core/seq/oss/seq_oss_init.c b/sound/core/seq/oss/seq_oss_init.c
+index b1221b29728e..6779e82b46dd 100644
+--- a/sound/core/seq/oss/seq_oss_init.c
++++ b/sound/core/seq/oss/seq_oss_init.c
+@@ -202,7 +202,7 @@ snd_seq_oss_open(struct file *file, int level)
+
+ dp->index = i;
+ if (i >= SNDRV_SEQ_OSS_MAX_CLIENTS) {
+- pr_err("ALSA: seq_oss: too many applications\n");
++ pr_debug("ALSA: seq_oss: too many applications\n");
+ rc = -ENOMEM;
+ goto _error;
+ }
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index 0f3b38184fe5..b16dbef04174 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -308,7 +308,7 @@ snd_seq_oss_synth_cleanup(struct seq_oss_devinfo *dp)
+ struct seq_oss_synth *rec;
+ struct seq_oss_synthinfo *info;
+
+- if (snd_BUG_ON(dp->max_synthdev >= SNDRV_SEQ_OSS_MAX_SYNTH_DEVS))
++ if (snd_BUG_ON(dp->max_synthdev > SNDRV_SEQ_OSS_MAX_SYNTH_DEVS))
+ return;
+ for (i = 0; i < dp->max_synthdev; i++) {
+ info = &dp->synths[i];
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 13cfa815732d..58e79e02f217 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -678,6 +678,9 @@ static int deliver_to_subscribers(struct snd_seq_client *client,
+ else
+ down_read(&grp->list_mutex);
+ list_for_each_entry(subs, &grp->list_head, src_list) {
++ /* both ports ready? */
++ if (atomic_read(&subs->ref_count) != 2)
++ continue;
+ event->dest = subs->info.dest;
+ if (subs->info.flags & SNDRV_SEQ_PORT_SUBS_TIMESTAMP)
+ /* convert time according to flag with subscription */
+diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
+index 55170a20ae72..921fb2bd8fad 100644
+--- a/sound/core/seq/seq_ports.c
++++ b/sound/core/seq/seq_ports.c
+@@ -173,10 +173,6 @@ struct snd_seq_client_port *snd_seq_create_port(struct snd_seq_client *client,
+ }
+
+ /* */
+-enum group_type {
+- SRC_LIST, DEST_LIST
+-};
+-
+ static int subscribe_port(struct snd_seq_client *client,
+ struct snd_seq_client_port *port,
+ struct snd_seq_port_subs_info *grp,
+@@ -203,6 +199,20 @@ static struct snd_seq_client_port *get_client_port(struct snd_seq_addr *addr,
+ return NULL;
+ }
+
++static void delete_and_unsubscribe_port(struct snd_seq_client *client,
++ struct snd_seq_client_port *port,
++ struct snd_seq_subscribers *subs,
++ bool is_src, bool ack);
++
++static inline struct snd_seq_subscribers *
++get_subscriber(struct list_head *p, bool is_src)
++{
++ if (is_src)
++ return list_entry(p, struct snd_seq_subscribers, src_list);
++ else
++ return list_entry(p, struct snd_seq_subscribers, dest_list);
++}
++
+ /*
+ * remove all subscribers on the list
+ * this is called from port_delete, for each src and dest list.
+@@ -210,7 +220,7 @@ static struct snd_seq_client_port *get_client_port(struct snd_seq_addr *addr,
+ static void clear_subscriber_list(struct snd_seq_client *client,
+ struct snd_seq_client_port *port,
+ struct snd_seq_port_subs_info *grp,
+- int grptype)
++ int is_src)
+ {
+ struct list_head *p, *n;
+
+@@ -219,15 +229,13 @@ static void clear_subscriber_list(struct snd_seq_client *client,
+ struct snd_seq_client *c;
+ struct snd_seq_client_port *aport;
+
+- if (grptype == SRC_LIST) {
+- subs = list_entry(p, struct snd_seq_subscribers, src_list);
++ subs = get_subscriber(p, is_src);
++ if (is_src)
+ aport = get_client_port(&subs->info.dest, &c);
+- } else {
+- subs = list_entry(p, struct snd_seq_subscribers, dest_list);
++ else
+ aport = get_client_port(&subs->info.sender, &c);
+- }
+- list_del(p);
+- unsubscribe_port(client, port, grp, &subs->info, 0);
++ delete_and_unsubscribe_port(client, port, subs, is_src, false);
++
+ if (!aport) {
+ /* looks like the connected port is being deleted.
+ * we decrease the counter, and when both ports are deleted
+@@ -235,21 +243,14 @@ static void clear_subscriber_list(struct snd_seq_client *client,
+ */
+ if (atomic_dec_and_test(&subs->ref_count))
+ kfree(subs);
+- } else {
+- /* ok we got the connected port */
+- struct snd_seq_port_subs_info *agrp;
+- agrp = (grptype == SRC_LIST) ? &aport->c_dest : &aport->c_src;
+- down_write(&agrp->list_mutex);
+- if (grptype == SRC_LIST)
+- list_del(&subs->dest_list);
+- else
+- list_del(&subs->src_list);
+- up_write(&agrp->list_mutex);
+- unsubscribe_port(c, aport, agrp, &subs->info, 1);
+- kfree(subs);
+- snd_seq_port_unlock(aport);
+- snd_seq_client_unlock(c);
++ continue;
+ }
++
++ /* ok we got the connected port */
++ delete_and_unsubscribe_port(c, aport, subs, !is_src, true);
++ kfree(subs);
++ snd_seq_port_unlock(aport);
++ snd_seq_client_unlock(c);
+ }
+ }
+
+@@ -262,8 +263,8 @@ static int port_delete(struct snd_seq_client *client,
+ snd_use_lock_sync(&port->use_lock);
+
+ /* clear subscribers info */
+- clear_subscriber_list(client, port, &port->c_src, SRC_LIST);
+- clear_subscriber_list(client, port, &port->c_dest, DEST_LIST);
++ clear_subscriber_list(client, port, &port->c_src, true);
++ clear_subscriber_list(client, port, &port->c_dest, false);
+
+ if (port->private_free)
+ port->private_free(port->private_data);
+@@ -479,85 +480,120 @@ static int match_subs_info(struct snd_seq_port_subscribe *r,
+ return 0;
+ }
+
+-
+-/* connect two ports */
+-int snd_seq_port_connect(struct snd_seq_client *connector,
+- struct snd_seq_client *src_client,
+- struct snd_seq_client_port *src_port,
+- struct snd_seq_client *dest_client,
+- struct snd_seq_client_port *dest_port,
+- struct snd_seq_port_subscribe *info)
++static int check_and_subscribe_port(struct snd_seq_client *client,
++ struct snd_seq_client_port *port,
++ struct snd_seq_subscribers *subs,
++ bool is_src, bool exclusive, bool ack)
+ {
+- struct snd_seq_port_subs_info *src = &src_port->c_src;
+- struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
+- struct snd_seq_subscribers *subs, *s;
+- int err, src_called = 0;
+- unsigned long flags;
+- int exclusive;
++ struct snd_seq_port_subs_info *grp;
++ struct list_head *p;
++ struct snd_seq_subscribers *s;
++ int err;
+
+- subs = kzalloc(sizeof(*subs), GFP_KERNEL);
+- if (! subs)
+- return -ENOMEM;
+-
+- subs->info = *info;
+- atomic_set(&subs->ref_count, 2);
+-
+- down_write(&src->list_mutex);
+- down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
+-
+- exclusive = info->flags & SNDRV_SEQ_PORT_SUBS_EXCLUSIVE ? 1 : 0;
++ grp = is_src ? &port->c_src : &port->c_dest;
+ err = -EBUSY;
++ down_write(&grp->list_mutex);
+ if (exclusive) {
+- if (! list_empty(&src->list_head) || ! list_empty(&dest->list_head))
++ if (!list_empty(&grp->list_head))
+ goto __error;
+ } else {
+- if (src->exclusive || dest->exclusive)
++ if (grp->exclusive)
+ goto __error;
+ /* check whether already exists */
+- list_for_each_entry(s, &src->list_head, src_list) {
+- if (match_subs_info(info, &s->info))
+- goto __error;
+- }
+- list_for_each_entry(s, &dest->list_head, dest_list) {
+- if (match_subs_info(info, &s->info))
++ list_for_each(p, &grp->list_head) {
++ s = get_subscriber(p, is_src);
++ if (match_subs_info(&subs->info, &s->info))
+ goto __error;
+ }
+ }
+
+- if ((err = subscribe_port(src_client, src_port, src, info,
+- connector->number != src_client->number)) < 0)
+- goto __error;
+- src_called = 1;
+-
+- if ((err = subscribe_port(dest_client, dest_port, dest, info,
+- connector->number != dest_client->number)) < 0)
++ err = subscribe_port(client, port, grp, &subs->info, ack);
++ if (err < 0) {
++ grp->exclusive = 0;
+ goto __error;
++ }
+
+ /* add to list */
+- write_lock_irqsave(&src->list_lock, flags);
+- // write_lock(&dest->list_lock); // no other lock yet
+- list_add_tail(&subs->src_list, &src->list_head);
+- list_add_tail(&subs->dest_list, &dest->list_head);
+- // write_unlock(&dest->list_lock); // no other lock yet
+- write_unlock_irqrestore(&src->list_lock, flags);
++ write_lock_irq(&grp->list_lock);
++ if (is_src)
++ list_add_tail(&subs->src_list, &grp->list_head);
++ else
++ list_add_tail(&subs->dest_list, &grp->list_head);
++ grp->exclusive = exclusive;
++ atomic_inc(&subs->ref_count);
++ write_unlock_irq(&grp->list_lock);
++ err = 0;
++
++ __error:
++ up_write(&grp->list_mutex);
++ return err;
++}
+
+- src->exclusive = dest->exclusive = exclusive;
++static void delete_and_unsubscribe_port(struct snd_seq_client *client,
++ struct snd_seq_client_port *port,
++ struct snd_seq_subscribers *subs,
++ bool is_src, bool ack)
++{
++ struct snd_seq_port_subs_info *grp;
++
++ grp = is_src ? &port->c_src : &port->c_dest;
++ down_write(&grp->list_mutex);
++ write_lock_irq(&grp->list_lock);
++ if (is_src)
++ list_del(&subs->src_list);
++ else
++ list_del(&subs->dest_list);
++ grp->exclusive = 0;
++ write_unlock_irq(&grp->list_lock);
++ up_write(&grp->list_mutex);
++
++ unsubscribe_port(client, port, grp, &subs->info, ack);
++}
++
++/* connect two ports */
++int snd_seq_port_connect(struct snd_seq_client *connector,
++ struct snd_seq_client *src_client,
++ struct snd_seq_client_port *src_port,
++ struct snd_seq_client *dest_client,
++ struct snd_seq_client_port *dest_port,
++ struct snd_seq_port_subscribe *info)
++{
++ struct snd_seq_subscribers *subs;
++ bool exclusive;
++ int err;
++
++ subs = kzalloc(sizeof(*subs), GFP_KERNEL);
++ if (!subs)
++ return -ENOMEM;
++
++ subs->info = *info;
++ atomic_set(&subs->ref_count, 0);
++ INIT_LIST_HEAD(&subs->src_list);
++ INIT_LIST_HEAD(&subs->dest_list);
++
++ exclusive = !!(info->flags & SNDRV_SEQ_PORT_SUBS_EXCLUSIVE);
++
++ err = check_and_subscribe_port(src_client, src_port, subs, true,
++ exclusive,
++ connector->number != src_client->number);
++ if (err < 0)
++ goto error;
++ err = check_and_subscribe_port(dest_client, dest_port, subs, false,
++ exclusive,
++ connector->number != dest_client->number);
++ if (err < 0)
++ goto error_dest;
+
+- up_write(&dest->list_mutex);
+- up_write(&src->list_mutex);
+ return 0;
+
+- __error:
+- if (src_called)
+- unsubscribe_port(src_client, src_port, src, info,
+- connector->number != src_client->number);
++ error_dest:
++ delete_and_unsubscribe_port(src_client, src_port, subs, true,
++ connector->number != src_client->number);
++ error:
+ kfree(subs);
+- up_write(&dest->list_mutex);
+- up_write(&src->list_mutex);
+ return err;
+ }
+
+-
+ /* remove the connection */
+ int snd_seq_port_disconnect(struct snd_seq_client *connector,
+ struct snd_seq_client *src_client,
+@@ -567,37 +603,28 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector,
+ struct snd_seq_port_subscribe *info)
+ {
+ struct snd_seq_port_subs_info *src = &src_port->c_src;
+- struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
+ struct snd_seq_subscribers *subs;
+ int err = -ENOENT;
+- unsigned long flags;
+
+ down_write(&src->list_mutex);
+- down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
+-
+ /* look for the connection */
+ list_for_each_entry(subs, &src->list_head, src_list) {
+ if (match_subs_info(info, &subs->info)) {
+- write_lock_irqsave(&src->list_lock, flags);
+- // write_lock(&dest->list_lock); // no lock yet
+- list_del(&subs->src_list);
+- list_del(&subs->dest_list);
+- // write_unlock(&dest->list_lock);
+- write_unlock_irqrestore(&src->list_lock, flags);
+- src->exclusive = dest->exclusive = 0;
+- unsubscribe_port(src_client, src_port, src, info,
+- connector->number != src_client->number);
+- unsubscribe_port(dest_client, dest_port, dest, info,
+- connector->number != dest_client->number);
+- kfree(subs);
++ atomic_dec(&subs->ref_count); /* mark as not ready */
+ err = 0;
+ break;
+ }
+ }
+-
+- up_write(&dest->list_mutex);
+ up_write(&src->list_mutex);
+- return err;
++ if (err < 0)
++ return err;
++
++ delete_and_unsubscribe_port(src_client, src_port, subs, true,
++ connector->number != src_client->number);
++ delete_and_unsubscribe_port(dest_client, dest_port, subs, false,
++ connector->number != dest_client->number);
++ kfree(subs);
++ return 0;
+ }
+
+
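After the refactor, subs->ref_count doubles as a readiness counter: each endpoint increments it once its side is fully subscribed, deliver_to_subscribers() only uses connections whose count is exactly 2, and disconnect decrements the count before tearing the lists down so in-flight deliveries skip the dying subscription. The protocol in miniature, with illustrative types:

#include <stdatomic.h>
#include <stdio.h>

struct subscription { atomic_int ref_count; };

static int both_ports_ready(struct subscription *s)
{
	return atomic_load(&s->ref_count) == 2;
}

int main(void)
{
	struct subscription s = { 0 };

	printf("ready=%d\n", both_ports_ready(&s));	/* 0: nothing linked */
	atomic_fetch_add(&s.ref_count, 1);		/* source subscribed */
	atomic_fetch_add(&s.ref_count, 1);		/* dest subscribed */
	printf("ready=%d\n", both_ports_ready(&s));	/* 1: deliver events */
	atomic_fetch_sub(&s.ref_count, 1);		/* disconnect begins */
	printf("ready=%d\n", both_ports_ready(&s));	/* 0: skip delivery */
	return 0;
}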
+diff --git a/sound/core/seq/seq_timer.c b/sound/core/seq/seq_timer.c
+index 82b220c769c1..293104926098 100644
+--- a/sound/core/seq/seq_timer.c
++++ b/sound/core/seq/seq_timer.c
+@@ -90,6 +90,9 @@ void snd_seq_timer_delete(struct snd_seq_timer **tmr)
+
+ void snd_seq_timer_defaults(struct snd_seq_timer * tmr)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&tmr->lock, flags);
+ /* setup defaults */
+ tmr->ppq = 96; /* 96 PPQ */
+ tmr->tempo = 500000; /* 120 BPM */
+@@ -105,21 +108,25 @@ void snd_seq_timer_defaults(struct snd_seq_timer * tmr)
+ tmr->preferred_resolution = seq_default_timer_resolution;
+
+ tmr->skew = tmr->skew_base = SKEW_BASE;
++ spin_unlock_irqrestore(&tmr->lock, flags);
+ }
+
+-void snd_seq_timer_reset(struct snd_seq_timer * tmr)
++static void seq_timer_reset(struct snd_seq_timer *tmr)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&tmr->lock, flags);
+-
+ /* reset time & songposition */
+ tmr->cur_time.tv_sec = 0;
+ tmr->cur_time.tv_nsec = 0;
+
+ tmr->tick.cur_tick = 0;
+ tmr->tick.fraction = 0;
++}
++
++void snd_seq_timer_reset(struct snd_seq_timer *tmr)
++{
++ unsigned long flags;
+
++ spin_lock_irqsave(&tmr->lock, flags);
++ seq_timer_reset(tmr);
+ spin_unlock_irqrestore(&tmr->lock, flags);
+ }
+
+@@ -138,8 +145,11 @@ static void snd_seq_timer_interrupt(struct snd_timer_instance *timeri,
+ tmr = q->timer;
+ if (tmr == NULL)
+ return;
+- if (!tmr->running)
++ spin_lock_irqsave(&tmr->lock, flags);
++ if (!tmr->running) {
++ spin_unlock_irqrestore(&tmr->lock, flags);
+ return;
++ }
+
+ resolution *= ticks;
+ if (tmr->skew != tmr->skew_base) {
+@@ -148,8 +158,6 @@ static void snd_seq_timer_interrupt(struct snd_timer_instance *timeri,
+ (((resolution & 0xffff) * tmr->skew) >> 16);
+ }
+
+- spin_lock_irqsave(&tmr->lock, flags);
+-
+ /* update timer */
+ snd_seq_inc_time_nsec(&tmr->cur_time, resolution);
+
+@@ -296,26 +304,30 @@ int snd_seq_timer_open(struct snd_seq_queue *q)
+ t->callback = snd_seq_timer_interrupt;
+ t->callback_data = q;
+ t->flags |= SNDRV_TIMER_IFLG_AUTO;
++ spin_lock_irq(&tmr->lock);
+ tmr->timeri = t;
++ spin_unlock_irq(&tmr->lock);
+ return 0;
+ }
+
+ int snd_seq_timer_close(struct snd_seq_queue *q)
+ {
+ struct snd_seq_timer *tmr;
++ struct snd_timer_instance *t;
+
+ tmr = q->timer;
+ if (snd_BUG_ON(!tmr))
+ return -EINVAL;
+- if (tmr->timeri) {
+- snd_timer_stop(tmr->timeri);
+- snd_timer_close(tmr->timeri);
+- tmr->timeri = NULL;
+- }
++ spin_lock_irq(&tmr->lock);
++ t = tmr->timeri;
++ tmr->timeri = NULL;
++ spin_unlock_irq(&tmr->lock);
++ if (t)
++ snd_timer_close(t);
+ return 0;
+ }
+
+-int snd_seq_timer_stop(struct snd_seq_timer * tmr)
++static int seq_timer_stop(struct snd_seq_timer *tmr)
+ {
+ if (! tmr->timeri)
+ return -EINVAL;
+@@ -326,6 +338,17 @@ int snd_seq_timer_stop(struct snd_seq_timer * tmr)
+ return 0;
+ }
+
++int snd_seq_timer_stop(struct snd_seq_timer *tmr)
++{
++ unsigned long flags;
++ int err;
++
++ spin_lock_irqsave(&tmr->lock, flags);
++ err = seq_timer_stop(tmr);
++ spin_unlock_irqrestore(&tmr->lock, flags);
++ return err;
++}
++
+ static int initialize_timer(struct snd_seq_timer *tmr)
+ {
+ struct snd_timer *t;
+@@ -358,13 +381,13 @@ static int initialize_timer(struct snd_seq_timer *tmr)
+ return 0;
+ }
+
+-int snd_seq_timer_start(struct snd_seq_timer * tmr)
++static int seq_timer_start(struct snd_seq_timer *tmr)
+ {
+ if (! tmr->timeri)
+ return -EINVAL;
+ if (tmr->running)
+- snd_seq_timer_stop(tmr);
+- snd_seq_timer_reset(tmr);
++ seq_timer_stop(tmr);
++ seq_timer_reset(tmr);
+ if (initialize_timer(tmr) < 0)
+ return -EINVAL;
+ snd_timer_start(tmr->timeri, tmr->ticks);
+@@ -373,14 +396,25 @@ int snd_seq_timer_start(struct snd_seq_timer * tmr)
+ return 0;
+ }
+
+-int snd_seq_timer_continue(struct snd_seq_timer * tmr)
++int snd_seq_timer_start(struct snd_seq_timer *tmr)
++{
++ unsigned long flags;
++ int err;
++
++ spin_lock_irqsave(&tmr->lock, flags);
++ err = seq_timer_start(tmr);
++ spin_unlock_irqrestore(&tmr->lock, flags);
++ return err;
++}
++
++static int seq_timer_continue(struct snd_seq_timer *tmr)
+ {
+ if (! tmr->timeri)
+ return -EINVAL;
+ if (tmr->running)
+ return -EBUSY;
+ if (! tmr->initialized) {
+- snd_seq_timer_reset(tmr);
++ seq_timer_reset(tmr);
+ if (initialize_timer(tmr) < 0)
+ return -EINVAL;
+ }
+@@ -390,11 +424,24 @@ int snd_seq_timer_continue(struct snd_seq_timer * tmr)
+ return 0;
+ }
+
++int snd_seq_timer_continue(struct snd_seq_timer *tmr)
++{
++ unsigned long flags;
++ int err;
++
++ spin_lock_irqsave(&tmr->lock, flags);
++ err = seq_timer_continue(tmr);
++ spin_unlock_irqrestore(&tmr->lock, flags);
++ return err;
++}
++
+ /* return current 'real' time. use timeofday() to get better granularity. */
+ snd_seq_real_time_t snd_seq_timer_get_cur_time(struct snd_seq_timer *tmr)
+ {
+ snd_seq_real_time_t cur_time;
++ unsigned long flags;
+
++ spin_lock_irqsave(&tmr->lock, flags);
+ cur_time = tmr->cur_time;
+ if (tmr->running) {
+ struct timeval tm;
+@@ -410,7 +457,7 @@ snd_seq_real_time_t snd_seq_timer_get_cur_time(struct snd_seq_timer *tmr)
+ }
+ snd_seq_sanity_real_time(&cur_time);
+ }
+-
++ spin_unlock_irqrestore(&tmr->lock, flags);
+ return cur_time;
+ }
+
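The seq_timer changes all follow one refactoring recipe: the real work moves into a static helper that expects tmr->lock to be held, and the exported function becomes a thin lock-taking wrapper, so internal callers that already hold the lock (such as seq_timer_start() calling seq_timer_stop()) can reuse the helper without self-deadlock. A userspace sketch of that split, with illustrative names:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tmr_lock = PTHREAD_MUTEX_INITIALIZER;
static int running;

static int timer_stop_locked(void)	/* caller must hold tmr_lock */
{
	if (!running)
		return -1;
	running = 0;
	return 0;
}

int timer_stop(void)			/* public, self-locking wrapper */
{
	int err;

	pthread_mutex_lock(&tmr_lock);
	err = timer_stop_locked();
	pthread_mutex_unlock(&tmr_lock);
	return err;
}

int main(void)
{
	running = 1;
	printf("stop: %d\n", timer_stop());
	printf("stop again: %d\n", timer_stop());
	return 0;
}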
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 56e0f4cd3f82..81134e067184 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -155,21 +155,26 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ struct snd_virmidi *vmidi = substream->runtime->private_data;
+ int count, res;
+ unsigned char buf[32], *pbuf;
++ unsigned long flags;
+
+ if (up) {
+ vmidi->trigger = 1;
+ if (vmidi->seq_mode == SNDRV_VIRMIDI_SEQ_DISPATCH &&
+ !(vmidi->rdev->flags & SNDRV_VIRMIDI_SUBSCRIBE)) {
+- snd_rawmidi_transmit_ack(substream, substream->runtime->buffer_size - substream->runtime->avail);
+- return; /* ignored */
++ while (snd_rawmidi_transmit(substream, buf,
++ sizeof(buf)) > 0) {
++ /* ignored */
++ }
++ return;
+ }
+ if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) {
+ if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0)
+ return;
+ vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ }
++ spin_lock_irqsave(&substream->runtime->lock, flags);
+ while (1) {
+- count = snd_rawmidi_transmit_peek(substream, buf, sizeof(buf));
++ count = __snd_rawmidi_transmit_peek(substream, buf, sizeof(buf));
+ if (count <= 0)
+ break;
+ pbuf = buf;
+@@ -179,16 +184,18 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ snd_midi_event_reset_encode(vmidi->parser);
+ continue;
+ }
+- snd_rawmidi_transmit_ack(substream, res);
++ __snd_rawmidi_transmit_ack(substream, res);
+ pbuf += res;
+ count -= res;
+ if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) {
+ if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0)
+- return;
++ goto out;
+ vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ }
+ }
+ }
++ out:
++ spin_unlock_irqrestore(&substream->runtime->lock, flags);
+ } else {
+ vmidi->trigger = 0;
+ }
+@@ -254,9 +261,13 @@ static int snd_virmidi_output_open(struct snd_rawmidi_substream *substream)
+ */
+ static int snd_virmidi_input_close(struct snd_rawmidi_substream *substream)
+ {
++ struct snd_virmidi_dev *rdev = substream->rmidi->private_data;
+ struct snd_virmidi *vmidi = substream->runtime->private_data;
+- snd_midi_event_free(vmidi->parser);
++
++ write_lock_irq(&rdev->filelist_lock);
+ list_del(&vmidi->list);
++ write_unlock_irq(&rdev->filelist_lock);
++ snd_midi_event_free(vmidi->parser);
+ substream->runtime->private_data = NULL;
+ kfree(vmidi);
+ return 0;
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 0a049c4578f1..f24c9fccf008 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -305,8 +305,7 @@ int snd_timer_open(struct snd_timer_instance **ti,
+ return 0;
+ }
+
+-static int _snd_timer_stop(struct snd_timer_instance *timeri,
+- int keep_flag, int event);
++static int _snd_timer_stop(struct snd_timer_instance *timeri, int event);
+
+ /*
+ * close a timer instance
+@@ -348,7 +347,7 @@ int snd_timer_close(struct snd_timer_instance *timeri)
+ spin_unlock_irq(&timer->lock);
+	mutex_lock(&register_mutex);
+ list_del(&timeri->open_list);
+- if (timer && list_empty(&timer->open_list_head) &&
++ if (list_empty(&timer->open_list_head) &&
+ timer->hw.close)
+ timer->hw.close(timer);
+ /* remove slave links */
+@@ -423,7 +422,7 @@ static void snd_timer_notify1(struct snd_timer_instance *ti, int event)
+ spin_lock_irqsave(&timer->lock, flags);
+ list_for_each_entry(ts, &ti->slave_active_head, active_list)
+ if (ts->ccallback)
+- ts->ccallback(ti, event + 100, &tstamp, resolution);
++ ts->ccallback(ts, event + 100, &tstamp, resolution);
+ spin_unlock_irqrestore(&timer->lock, flags);
+ }
+
+@@ -452,6 +451,10 @@ static int snd_timer_start_slave(struct snd_timer_instance *timeri)
+ unsigned long flags;
+
+ spin_lock_irqsave(&slave_active_lock, flags);
++ if (timeri->flags & SNDRV_TIMER_IFLG_RUNNING) {
++ spin_unlock_irqrestore(&slave_active_lock, flags);
++ return -EBUSY;
++ }
+ timeri->flags |= SNDRV_TIMER_IFLG_RUNNING;
+ if (timeri->master && timeri->timer) {
+ spin_lock(&timeri->timer->lock);
+@@ -476,7 +479,8 @@ int snd_timer_start(struct snd_timer_instance *timeri, unsigned int ticks)
+ return -EINVAL;
+ if (timeri->flags & SNDRV_TIMER_IFLG_SLAVE) {
+ result = snd_timer_start_slave(timeri);
+- snd_timer_notify1(timeri, SNDRV_TIMER_EVENT_START);
++ if (result >= 0)
++ snd_timer_notify1(timeri, SNDRV_TIMER_EVENT_START);
+ return result;
+ }
+ timer = timeri->timer;
+@@ -485,16 +489,22 @@ int snd_timer_start(struct snd_timer_instance *timeri, unsigned int ticks)
+ if (timer->card && timer->card->shutdown)
+ return -ENODEV;
+ spin_lock_irqsave(&timer->lock, flags);
++ if (timeri->flags & (SNDRV_TIMER_IFLG_RUNNING |
++ SNDRV_TIMER_IFLG_START)) {
++ result = -EBUSY;
++ goto unlock;
++ }
+ timeri->ticks = timeri->cticks = ticks;
+ timeri->pticks = 0;
+ result = snd_timer_start1(timer, timeri, ticks);
++ unlock:
+ spin_unlock_irqrestore(&timer->lock, flags);
+- snd_timer_notify1(timeri, SNDRV_TIMER_EVENT_START);
++ if (result >= 0)
++ snd_timer_notify1(timeri, SNDRV_TIMER_EVENT_START);
+ return result;
+ }
+
+-static int _snd_timer_stop(struct snd_timer_instance * timeri,
+- int keep_flag, int event)
++static int _snd_timer_stop(struct snd_timer_instance *timeri, int event)
+ {
+ struct snd_timer *timer;
+ unsigned long flags;
+@@ -503,19 +513,30 @@ static int _snd_timer_stop(struct snd_timer_instance * timeri,
+ return -ENXIO;
+
+ if (timeri->flags & SNDRV_TIMER_IFLG_SLAVE) {
+- if (!keep_flag) {
+- spin_lock_irqsave(&slave_active_lock, flags);
+- timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
+- list_del_init(&timeri->ack_list);
+- list_del_init(&timeri->active_list);
++ spin_lock_irqsave(&slave_active_lock, flags);
++ if (!(timeri->flags & SNDRV_TIMER_IFLG_RUNNING)) {
+ spin_unlock_irqrestore(&slave_active_lock, flags);
++ return -EBUSY;
+ }
++ if (timeri->timer)
++ spin_lock(&timeri->timer->lock);
++ timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
++ list_del_init(&timeri->ack_list);
++ list_del_init(&timeri->active_list);
++ if (timeri->timer)
++ spin_unlock(&timeri->timer->lock);
++ spin_unlock_irqrestore(&slave_active_lock, flags);
+ goto __end;
+ }
+ timer = timeri->timer;
+ if (!timer)
+ return -EINVAL;
+ spin_lock_irqsave(&timer->lock, flags);
++ if (!(timeri->flags & (SNDRV_TIMER_IFLG_RUNNING |
++ SNDRV_TIMER_IFLG_START))) {
++ spin_unlock_irqrestore(&timer->lock, flags);
++ return -EBUSY;
++ }
+ list_del_init(&timeri->ack_list);
+ list_del_init(&timeri->active_list);
+ if (timer->card && timer->card->shutdown) {
+@@ -534,9 +555,7 @@ static int _snd_timer_stop(struct snd_timer_instance * timeri,
+ }
+ }
+ }
+- if (!keep_flag)
+- timeri->flags &=
+- ~(SNDRV_TIMER_IFLG_RUNNING | SNDRV_TIMER_IFLG_START);
++ timeri->flags &= ~(SNDRV_TIMER_IFLG_RUNNING | SNDRV_TIMER_IFLG_START);
+ spin_unlock_irqrestore(&timer->lock, flags);
+ __end:
+ if (event != SNDRV_TIMER_EVENT_RESOLUTION)
+@@ -555,7 +574,7 @@ int snd_timer_stop(struct snd_timer_instance *timeri)
+ unsigned long flags;
+ int err;
+
+- err = _snd_timer_stop(timeri, 0, SNDRV_TIMER_EVENT_STOP);
++ err = _snd_timer_stop(timeri, SNDRV_TIMER_EVENT_STOP);
+ if (err < 0)
+ return err;
+ timer = timeri->timer;
+@@ -587,10 +606,15 @@ int snd_timer_continue(struct snd_timer_instance *timeri)
+ if (timer->card && timer->card->shutdown)
+ return -ENODEV;
+ spin_lock_irqsave(&timer->lock, flags);
++ if (timeri->flags & SNDRV_TIMER_IFLG_RUNNING) {
++ result = -EBUSY;
++ goto unlock;
++ }
+ if (!timeri->cticks)
+ timeri->cticks = 1;
+ timeri->pticks = 0;
+ result = snd_timer_start1(timer, timeri, timer->sticks);
++ unlock:
+ spin_unlock_irqrestore(&timer->lock, flags);
+ snd_timer_notify1(timeri, SNDRV_TIMER_EVENT_CONTINUE);
+ return result;
+@@ -601,7 +625,7 @@ int snd_timer_continue(struct snd_timer_instance *timeri)
+ */
+ int snd_timer_pause(struct snd_timer_instance * timeri)
+ {
+- return _snd_timer_stop(timeri, 0, SNDRV_TIMER_EVENT_PAUSE);
++ return _snd_timer_stop(timeri, SNDRV_TIMER_EVENT_PAUSE);
+ }
+
+ /*
+@@ -724,8 +748,8 @@ void snd_timer_interrupt(struct snd_timer * timer, unsigned long ticks_left)
+ ti->cticks = ti->ticks;
+ } else {
+ ti->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
+- if (--timer->running)
+- list_del_init(&ti->active_list);
++ --timer->running;
++ list_del_init(&ti->active_list);
+ }
+ if ((timer->hw.flags & SNDRV_TIMER_HW_TASKLET) ||
+ (ti->flags & SNDRV_TIMER_IFLG_FAST))
+@@ -1900,6 +1924,7 @@ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+ {
+ struct snd_timer_user *tu;
+ long result = 0, unit;
++ int qhead;
+ int err = 0;
+
+ tu = file->private_data;
+@@ -1911,7 +1936,7 @@ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+
+ if ((file->f_flags & O_NONBLOCK) != 0 || result > 0) {
+ err = -EAGAIN;
+- break;
++ goto _error;
+ }
+
+ set_current_state(TASK_INTERRUPTIBLE);
+@@ -1926,42 +1951,37 @@ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+
+ if (tu->disconnected) {
+ err = -ENODEV;
+- break;
++ goto _error;
+ }
+ if (signal_pending(current)) {
+ err = -ERESTARTSYS;
+- break;
++ goto _error;
+ }
+ }
+
++ qhead = tu->qhead++;
++ tu->qhead %= tu->queue_size;
+ spin_unlock_irq(&tu->qlock);
+- if (err < 0)
+- goto _error;
+
+ if (tu->tread) {
+- if (copy_to_user(buffer, &tu->tqueue[tu->qhead++],
+- sizeof(struct snd_timer_tread))) {
++ if (copy_to_user(buffer, &tu->tqueue[qhead],
++ sizeof(struct snd_timer_tread)))
+ err = -EFAULT;
+- goto _error;
+- }
+ } else {
+- if (copy_to_user(buffer, &tu->queue[tu->qhead++],
+- sizeof(struct snd_timer_read))) {
++ if (copy_to_user(buffer, &tu->queue[qhead],
++ sizeof(struct snd_timer_read)))
+ err = -EFAULT;
+- goto _error;
+- }
+ }
+
+- tu->qhead %= tu->queue_size;
+-
+- result += unit;
+- buffer += unit;
+-
+ spin_lock_irq(&tu->qlock);
+ tu->qused--;
++ if (err < 0)
++ goto _error;
++ result += unit;
++ buffer += unit;
+ }
+- spin_unlock_irq(&tu->qlock);
+ _error:
++ spin_unlock_irq(&tu->qlock);
+ return result > 0 ? result : err;
+ }
+
+diff --git a/sound/drivers/dummy.c b/sound/drivers/dummy.c
+index 016e451ed506..a9f7a75702d2 100644
+--- a/sound/drivers/dummy.c
++++ b/sound/drivers/dummy.c
+@@ -109,6 +109,9 @@ struct dummy_timer_ops {
+ snd_pcm_uframes_t (*pointer)(struct snd_pcm_substream *);
+ };
+
++#define get_dummy_ops(substream) \
++ (*(const struct dummy_timer_ops **)(substream)->runtime->private_data)
++
+ struct dummy_model {
+ const char *name;
+ int (*playback_constraints)(struct snd_pcm_runtime *runtime);
+@@ -137,7 +140,6 @@ struct snd_dummy {
+ int iobox;
+ struct snd_kcontrol *cd_volume_ctl;
+ struct snd_kcontrol *cd_switch_ctl;
+- const struct dummy_timer_ops *timer_ops;
+ };
+
+ /*
+@@ -231,6 +233,8 @@ static struct dummy_model *dummy_models[] = {
+ */
+
+ struct dummy_systimer_pcm {
++ /* ops must be the first item */
++ const struct dummy_timer_ops *timer_ops;
+ spinlock_t lock;
+ struct timer_list timer;
+ unsigned long base_time;
+@@ -366,6 +370,8 @@ static struct dummy_timer_ops dummy_systimer_ops = {
+ */
+
+ struct dummy_hrtimer_pcm {
++ /* ops must be the first item */
++ const struct dummy_timer_ops *timer_ops;
+ ktime_t base_time;
+ ktime_t period_time;
+ atomic_t running;
+@@ -492,31 +498,25 @@ static struct dummy_timer_ops dummy_hrtimer_ops = {
+
+ static int dummy_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ {
+- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
+-
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+- return dummy->timer_ops->start(substream);
++ return get_dummy_ops(substream)->start(substream);
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+- return dummy->timer_ops->stop(substream);
++ return get_dummy_ops(substream)->stop(substream);
+ }
+ return -EINVAL;
+ }
+
+ static int dummy_pcm_prepare(struct snd_pcm_substream *substream)
+ {
+- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
+-
+- return dummy->timer_ops->prepare(substream);
++ return get_dummy_ops(substream)->prepare(substream);
+ }
+
+ static snd_pcm_uframes_t dummy_pcm_pointer(struct snd_pcm_substream *substream)
+ {
+- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
+-
+- return dummy->timer_ops->pointer(substream);
++ return get_dummy_ops(substream)->pointer(substream);
+ }
+
+ static struct snd_pcm_hardware dummy_pcm_hardware = {
+@@ -562,17 +562,19 @@ static int dummy_pcm_open(struct snd_pcm_substream *substream)
+ struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
+ struct dummy_model *model = dummy->model;
+ struct snd_pcm_runtime *runtime = substream->runtime;
++ const struct dummy_timer_ops *ops;
+ int err;
+
+- dummy->timer_ops = &dummy_systimer_ops;
++ ops = &dummy_systimer_ops;
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ if (hrtimer)
+- dummy->timer_ops = &dummy_hrtimer_ops;
++ ops = &dummy_hrtimer_ops;
+ #endif
+
+- err = dummy->timer_ops->create(substream);
++ err = ops->create(substream);
+ if (err < 0)
+ return err;
++ get_dummy_ops(substream) = ops;
+
+ runtime->hw = dummy->pcm_hw;
+ if (substream->pcm->device & 1) {
+@@ -594,7 +596,7 @@ static int dummy_pcm_open(struct snd_pcm_substream *substream)
+ err = model->capture_constraints(substream->runtime);
+ }
+ if (err < 0) {
+- dummy->timer_ops->free(substream);
++ get_dummy_ops(substream)->free(substream);
+ return err;
+ }
+ return 0;
+@@ -602,8 +604,7 @@ static int dummy_pcm_open(struct snd_pcm_substream *substream)
+
+ static int dummy_pcm_close(struct snd_pcm_substream *substream)
+ {
+- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
+- dummy->timer_ops->free(substream);
++ get_dummy_ops(substream)->free(substream);
+ return 0;
+ }
+
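get_dummy_ops() relies on a small C idiom: every per-substream private struct stores its ops pointer as the first member, so generic code can recover the ops from the opaque runtime->private_data with a single cast and no back-pointer in struct snd_dummy (the back-pointer broke when two substreams used different timer backends). Illustrated standalone; all names here are illustrative:

#include <stdio.h>

struct timer_ops {
	const char *name;
};

struct systimer_pcm {
	const struct timer_ops *ops;	/* must be the first member */
	int period_ms;
};

/* Recover the ops from an opaque pointer to any such struct. */
#define get_ops(private_data) (*(const struct timer_ops **)(private_data))

int main(void)
{
	static const struct timer_ops sys_ops = { "system timer" };
	struct systimer_pcm pcm = { &sys_ops, 10 };
	void *private_data = &pcm;	/* how the core sees it */

	printf("ops: %s\n", get_ops(private_data)->name);
	return 0;
}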
+diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
+index 5be5242e1ed8..7fdf34e80d5e 100644
+--- a/sound/firewire/bebob/bebob_stream.c
++++ b/sound/firewire/bebob/bebob_stream.c
+@@ -47,14 +47,16 @@ static const unsigned int bridgeco_freq_table[] = {
+ [6] = 0x07,
+ };
+
+-static unsigned int
+-get_formation_index(unsigned int rate)
++static int
++get_formation_index(unsigned int rate, unsigned int *index)
+ {
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(snd_bebob_rate_table); i++) {
+- if (snd_bebob_rate_table[i] == rate)
+- return i;
++ if (snd_bebob_rate_table[i] == rate) {
++ *index = i;
++ return 0;
++ }
+ }
+ return -EINVAL;
+ }
+@@ -424,7 +426,9 @@ make_both_connections(struct snd_bebob *bebob, unsigned int rate)
+ goto end;
+
+ /* confirm params for both streams */
+- index = get_formation_index(rate);
++ err = get_formation_index(rate, &index);
++ if (err < 0)
++ goto end;
+ pcm_channels = bebob->tx_stream_formations[index].pcm;
+ midi_channels = bebob->tx_stream_formations[index].midi;
+ amdtp_stream_set_parameters(&bebob->tx_stream,
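The bebob fix is about the calling convention: the old get_formation_index() returned an unsigned index, so the -EINVAL failure value could be misread as a huge array subscript. Returning a status code and passing the index through a pointer keeps errors and values apart. Sketched in userspace; the rate_table values are illustrative:

#include <stdio.h>

static const unsigned int rate_table[] = { 32000, 44100, 48000 };

static int get_formation_index(unsigned int rate, unsigned int *index)
{
	unsigned int i;

	for (i = 0; i < sizeof(rate_table) / sizeof(rate_table[0]); i++) {
		if (rate_table[i] == rate) {
			*index = i;
			return 0;
		}
	}
	return -1;	/* the kernel code returns -EINVAL here */
}

int main(void)
{
	unsigned int idx;

	if (get_formation_index(44100, &idx) == 0)
		printf("44100 Hz -> index %u\n", idx);
	if (get_formation_index(12345, &idx) < 0)
		printf("12345 Hz -> error, index left untouched\n");
	return 0;
}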
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 24f91114a32c..98fe629c1f86 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -771,9 +771,6 @@ static void activate_amp(struct hda_codec *codec, hda_nid_t nid, int dir,
+ unsigned int caps;
+ unsigned int mask, val;
+
+- if (!enable && is_active_nid(codec, nid, dir, idx_to_check))
+- return;
+-
+ caps = query_amp_caps(codec, nid, dir);
+ val = get_amp_val_to_activate(codec, nid, dir, caps, enable);
+ mask = get_amp_mask_to_modify(codec, nid, dir, idx_to_check, caps);
+@@ -784,12 +781,22 @@ static void activate_amp(struct hda_codec *codec, hda_nid_t nid, int dir,
+ update_amp(codec, nid, dir, idx, mask, val);
+ }
+
++static void check_and_activate_amp(struct hda_codec *codec, hda_nid_t nid,
++ int dir, int idx, int idx_to_check,
++ bool enable)
++{
++ /* check whether the given amp is still used by others */
++ if (!enable && is_active_nid(codec, nid, dir, idx_to_check))
++ return;
++ activate_amp(codec, nid, dir, idx, idx_to_check, enable);
++}
++
+ static void activate_amp_out(struct hda_codec *codec, struct nid_path *path,
+ int i, bool enable)
+ {
+ hda_nid_t nid = path->path[i];
+ init_amp(codec, nid, HDA_OUTPUT, 0);
+- activate_amp(codec, nid, HDA_OUTPUT, 0, 0, enable);
++ check_and_activate_amp(codec, nid, HDA_OUTPUT, 0, 0, enable);
+ }
+
+ static void activate_amp_in(struct hda_codec *codec, struct nid_path *path,
+@@ -817,9 +824,16 @@ static void activate_amp_in(struct hda_codec *codec, struct nid_path *path,
+ * when aa-mixer is available, we need to enable the path as well
+ */
+ for (n = 0; n < nums; n++) {
+- if (n != idx && (!add_aamix || conn[n] != spec->mixer_merge_nid))
+- continue;
+- activate_amp(codec, nid, HDA_INPUT, n, idx, enable);
++ if (n != idx) {
++ if (conn[n] != spec->mixer_merge_nid)
++ continue;
++ /* when aamix is disabled, force to off */
++ if (!add_aamix) {
++ activate_amp(codec, nid, HDA_INPUT, n, n, false);
++ continue;
++ }
++ }
++ check_and_activate_amp(codec, nid, HDA_INPUT, n, idx, enable);
+ }
+ }
+
+@@ -1580,6 +1594,12 @@ static bool map_singles(struct hda_codec *codec, int outs,
+ return found;
+ }
+
++static inline bool has_aamix_out_paths(struct hda_gen_spec *spec)
++{
++ return spec->aamix_out_paths[0] || spec->aamix_out_paths[1] ||
++ spec->aamix_out_paths[2];
++}
++
+ /* create a new path including aamix if available, and return its index */
+ static int check_aamix_out_path(struct hda_codec *codec, int path_idx)
+ {
+@@ -2422,25 +2442,51 @@ static void update_aamix_paths(struct hda_codec *codec, bool do_mix,
+ }
+ }
+
++/* re-initialize the output paths; only called from loopback_mixing_put() */
++static void update_output_paths(struct hda_codec *codec, int num_outs,
++ const int *paths)
++{
++ struct hda_gen_spec *spec = codec->spec;
++ struct nid_path *path;
++ int i;
++
++ for (i = 0; i < num_outs; i++) {
++ path = snd_hda_get_path_from_idx(codec, paths[i]);
++ if (path)
++ snd_hda_activate_path(codec, path, path->active,
++ spec->aamix_mode);
++ }
++}
++
+ static int loopback_mixing_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+ struct hda_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct hda_gen_spec *spec = codec->spec;
++ const struct auto_pin_cfg *cfg = &spec->autocfg;
+ unsigned int val = ucontrol->value.enumerated.item[0];
+
+ if (val == spec->aamix_mode)
+ return 0;
+ spec->aamix_mode = val;
+- update_aamix_paths(codec, val, spec->out_paths[0],
+- spec->aamix_out_paths[0],
+- spec->autocfg.line_out_type);
+- update_aamix_paths(codec, val, spec->hp_paths[0],
+- spec->aamix_out_paths[1],
+- AUTO_PIN_HP_OUT);
+- update_aamix_paths(codec, val, spec->speaker_paths[0],
+- spec->aamix_out_paths[2],
+- AUTO_PIN_SPEAKER_OUT);
++ if (has_aamix_out_paths(spec)) {
++ update_aamix_paths(codec, val, spec->out_paths[0],
++ spec->aamix_out_paths[0],
++ cfg->line_out_type);
++ update_aamix_paths(codec, val, spec->hp_paths[0],
++ spec->aamix_out_paths[1],
++ AUTO_PIN_HP_OUT);
++ update_aamix_paths(codec, val, spec->speaker_paths[0],
++ spec->aamix_out_paths[2],
++ AUTO_PIN_SPEAKER_OUT);
++ } else {
++ update_output_paths(codec, cfg->line_outs, spec->out_paths);
++ if (cfg->line_out_type != AUTO_PIN_HP_OUT)
++ update_output_paths(codec, cfg->hp_outs, spec->hp_paths);
++ if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT)
++ update_output_paths(codec, cfg->speaker_outs,
++ spec->speaker_paths);
++ }
+ return 1;
+ }
+
+@@ -2458,12 +2504,13 @@ static int create_loopback_mixing_ctl(struct hda_codec *codec)
+
+ if (!spec->mixer_nid)
+ return 0;
+- if (!(spec->aamix_out_paths[0] || spec->aamix_out_paths[1] ||
+- spec->aamix_out_paths[2]))
+- return 0;
+ if (!snd_hda_gen_add_kctl(spec, NULL, &loopback_mixing_enum))
+ return -ENOMEM;
+ spec->have_aamix_ctl = 1;
++ /* if no explicit aamix path is present (e.g. for Realtek codecs),
++ * enable aamix as default -- just for compatibility
++ */
++ spec->aamix_mode = !has_aamix_out_paths(spec);
+ return 0;
+ }
+
+@@ -3998,9 +4045,9 @@ static void pin_power_callback(struct hda_codec *codec,
+ struct hda_jack_callback *jack,
+ bool on)
+ {
+- if (jack && jack->tbl->nid)
++ if (jack && jack->nid)
+ sync_power_state_change(codec,
+- set_pin_power_jack(codec, jack->tbl->nid, on));
++ set_pin_power_jack(codec, jack->nid, on));
+ }
+
+ /* callback only doing power up -- called at first */
+@@ -5664,6 +5711,8 @@ static void init_aamix_paths(struct hda_codec *codec)
+
+ if (!spec->have_aamix_ctl)
+ return;
++ if (!has_aamix_out_paths(spec))
++ return;
+ update_aamix_paths(codec, spec->aamix_mode, spec->out_paths[0],
+ spec->aamix_out_paths[0],
+ spec->autocfg.line_out_type);
+diff --git a/sound/pci/hda/hda_jack.c b/sound/pci/hda/hda_jack.c
+index 366efbf87d41..b6dbe653b74f 100644
+--- a/sound/pci/hda/hda_jack.c
++++ b/sound/pci/hda/hda_jack.c
+@@ -259,7 +259,7 @@ snd_hda_jack_detect_enable_callback(struct hda_codec *codec, hda_nid_t nid,
+ if (!callback)
+ return ERR_PTR(-ENOMEM);
+ callback->func = func;
+- callback->tbl = jack;
++ callback->nid = jack->nid;
+ callback->next = jack->callback;
+ jack->callback = callback;
+ }
+diff --git a/sound/pci/hda/hda_jack.h b/sound/pci/hda/hda_jack.h
+index 387d30984dfe..1009909ea158 100644
+--- a/sound/pci/hda/hda_jack.h
++++ b/sound/pci/hda/hda_jack.h
+@@ -21,7 +21,7 @@ struct hda_jack_callback;
+ typedef void (*hda_jack_callback_fn) (struct hda_codec *, struct hda_jack_callback *);
+
+ struct hda_jack_callback {
+- struct hda_jack_tbl *tbl;
++ hda_nid_t nid;
+ hda_jack_callback_fn func;
+ unsigned int private_data; /* arbitrary data */
+ struct hda_jack_callback *next;
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 5b8a5b84a03c..304a0d7a6481 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4427,13 +4427,16 @@ static void ca0132_process_dsp_response(struct hda_codec *codec,
+ static void hp_callback(struct hda_codec *codec, struct hda_jack_callback *cb)
+ {
+ struct ca0132_spec *spec = codec->spec;
++ struct hda_jack_tbl *tbl;
+
+ /* Delay enabling the HP amp, to let the mic-detection
+ * state machine run.
+ */
+ cancel_delayed_work_sync(&spec->unsol_hp_work);
+ schedule_delayed_work(&spec->unsol_hp_work, msecs_to_jiffies(500));
+- cb->tbl->block_report = 1;
++ tbl = snd_hda_jack_tbl_get(codec, cb->nid);
++ if (tbl)
++ tbl->block_report = 1;
+ }
+
+ static void amic_callback(struct hda_codec *codec, struct hda_jack_callback *cb)
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 85813de26da8..ac38222cce49 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -613,6 +613,7 @@ enum {
+ CS4208_MAC_AUTO,
+ CS4208_MBA6,
+ CS4208_MBP11,
++ CS4208_MACMINI,
+ CS4208_GPIO0,
+ };
+
+@@ -620,6 +621,7 @@ static const struct hda_model_fixup cs4208_models[] = {
+ { .id = CS4208_GPIO0, .name = "gpio0" },
+ { .id = CS4208_MBA6, .name = "mba6" },
+ { .id = CS4208_MBP11, .name = "mbp11" },
++ { .id = CS4208_MACMINI, .name = "macmini" },
+ {}
+ };
+
+@@ -631,6 +633,7 @@ static const struct snd_pci_quirk cs4208_fixup_tbl[] = {
+ /* codec SSID matching */
+ static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
++ SND_PCI_QUIRK(0x106b, 0x6c00, "MacMini 7,1", CS4208_MACMINI),
+ SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
+ SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6),
+ SND_PCI_QUIRK(0x106b, 0x7b00, "MacBookPro 12,1", CS4208_MBP11),
+@@ -665,6 +668,24 @@ static void cs4208_fixup_mac(struct hda_codec *codec,
+ snd_hda_apply_fixup(codec, action);
+ }
+
++/* MacMini 7,1 has the inverted jack detection */
++static void cs4208_fixup_macmini(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ static const struct hda_pintbl pincfgs[] = {
++ { 0x18, 0x00ab9150 }, /* mic (audio-in) jack: disable detect */
++ { 0x21, 0x004be140 }, /* SPDIF: disable detect */
++ { }
++ };
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ /* HP pin (0x10) has an inverted detection */
++ codec->inv_jack_detect = 1;
++ /* disable the bogus Mic and SPDIF jack detections */
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ }
++}
++
+ static int cs4208_spdif_sw_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+@@ -708,6 +729,12 @@ static const struct hda_fixup cs4208_fixups[] = {
+ .chained = true,
+ .chain_id = CS4208_GPIO0,
+ },
++ [CS4208_MACMINI] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cs4208_fixup_macmini,
++ .chained = true,
++ .chain_id = CS4208_GPIO0,
++ },
+ [CS4208_GPIO0] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cs4208_fixup_gpio0,
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index d1c74295a362..57d4cce06ab6 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -438,7 +438,8 @@ static int hdmi_eld_ctl_get(struct snd_kcontrol *kcontrol,
+ eld = &per_pin->sink_eld;
+
+ mutex_lock(&per_pin->lock);
+- if (eld->eld_size > ARRAY_SIZE(ucontrol->value.bytes.data)) {
++ if (eld->eld_size > ARRAY_SIZE(ucontrol->value.bytes.data) ||
++ eld->eld_size > ELD_MAX_SIZE) {
+ mutex_unlock(&per_pin->lock);
+ snd_BUG();
+ return -EINVAL;
+@@ -1183,7 +1184,7 @@ static void check_presence_and_report(struct hda_codec *codec, hda_nid_t nid)
+ static void jack_callback(struct hda_codec *codec,
+ struct hda_jack_callback *jack)
+ {
+- check_presence_and_report(codec, jack->tbl->nid);
++ check_presence_and_report(codec, jack->nid);
+ }
+
+ static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 887f37761f18..f33c58d3850e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -282,7 +282,7 @@ static void alc_update_knob_master(struct hda_codec *codec,
+ uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
+ if (!uctl)
+ return;
+- val = snd_hda_codec_read(codec, jack->tbl->nid, 0,
++ val = snd_hda_codec_read(codec, jack->nid, 0,
+ AC_VERB_GET_VOLUME_KNOB_CONTROL, 0);
+ val &= HDA_AMP_VOLMASK;
+ uctl->value.integer.value[0] = val;
+@@ -1795,7 +1795,6 @@ enum {
+ ALC882_FIXUP_NO_PRIMARY_HP,
+ ALC887_FIXUP_ASUS_BASS,
+ ALC887_FIXUP_BASS_CHMAP,
+- ALC882_FIXUP_DISABLE_AAMIX,
+ };
+
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -1957,8 +1956,6 @@ static void alc882_fixup_no_primary_hp(struct hda_codec *codec,
+
+ static void alc_fixup_bass_chmap(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action);
+-static void alc_fixup_disable_aamix(struct hda_codec *codec,
+- const struct hda_fixup *fix, int action);
+
+ static const struct hda_fixup alc882_fixups[] = {
+ [ALC882_FIXUP_ABIT_AW9D_MAX] = {
+@@ -2196,10 +2193,6 @@ static const struct hda_fixup alc882_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_bass_chmap,
+ },
+- [ALC882_FIXUP_DISABLE_AAMIX] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc_fixup_disable_aamix,
+- },
+ };
+
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2238,6 +2231,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+ SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
++ SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+
+ /* All Apple entries are in codec SSIDs */
+ SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
+@@ -2267,7 +2261,6 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
+ SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+- SND_PCI_QUIRK(0x1458, 0xa182, "Gigabyte Z170X-UD3", ALC882_FIXUP_DISABLE_AAMIX),
+ SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 14a62b8117fd..79f78989a7b6 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -493,9 +493,9 @@ static void jack_update_power(struct hda_codec *codec,
+ if (!spec->num_pwrs)
+ return;
+
+- if (jack && jack->tbl->nid) {
+- stac_toggle_power_map(codec, jack->tbl->nid,
+- snd_hda_jack_detect(codec, jack->tbl->nid),
++ if (jack && jack->nid) {
++ stac_toggle_power_map(codec, jack->nid,
++ snd_hda_jack_detect(codec, jack->nid),
+ true);
+ return;
+ }
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 5c101af0ac63..a06cdcfffd2a 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -500,7 +500,7 @@ static const struct snd_kcontrol_new rt5645_snd_controls[] = {
+
+ /* IN1/IN2 Control */
+ SOC_SINGLE_TLV("IN1 Boost", RT5645_IN1_CTRL1,
+- RT5645_BST_SFT1, 8, 0, bst_tlv),
++ RT5645_BST_SFT1, 12, 0, bst_tlv),
+ SOC_SINGLE_TLV("IN2 Boost", RT5645_IN2_CTRL,
+ RT5645_BST_SFT2, 8, 0, bst_tlv),
+
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 70e4b9d8bdcd..81133fe5f2d0 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1696,7 +1696,8 @@ int dpcm_be_dai_hw_free(struct snd_soc_pcm_runtime *fe, int stream)
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
+- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
+ continue;
+
+ dev_dbg(be->dev, "ASoC: hw_free BE %s\n",
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index bec63e0d2605..f059326a4914 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -2451,7 +2451,6 @@ int snd_usbmidi_create(struct snd_card *card,
+ else
+ err = snd_usbmidi_create_endpoints(umidi, endpoints);
+ if (err < 0) {
+- snd_usbmidi_free(umidi);
+ return err;
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index fb9a8a5787a6..37d8ababfc04 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1118,6 +1118,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ switch (chip->usb_id) {
+ case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema */
+ case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
++ case USB_ID(0x045E, 0x076F): /* MS Lifecam HD-6000 */
+ case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
+ case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+ case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+@@ -1202,8 +1203,12 @@ void snd_usb_set_interface_quirk(struct usb_device *dev)
+ * "Playback Design" products need a 50ms delay after setting the
+ * USB interface.
+ */
+- if (le16_to_cpu(dev->descriptor.idVendor) == 0x23ba)
++ switch (le16_to_cpu(dev->descriptor.idVendor)) {
++ case 0x23ba: /* Playback Design */
++ case 0x0644: /* TEAC Corp. */
+ mdelay(50);
++ break;
++ }
+ }
+
+ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+@@ -1218,6 +1223,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ mdelay(20);
+
++ /*
++ * "TEAC Corp." products need a 20ms delay after each
++ * class compliant request
++ */
++ if ((le16_to_cpu(dev->descriptor.idVendor) == 0x0644) &&
++ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
++ mdelay(20);
++
+ /* Marantz/Denon devices with USB DAC functionality need a delay
+ * after each class compliant request
+ */
+@@ -1266,7 +1279,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case USB_ID(0x20b1, 0x3008): /* iFi Audio micro/nano iDSD */
+ case USB_ID(0x20b1, 0x2008): /* Matrix Audio X-Sabre */
+ case USB_ID(0x20b1, 0x300a): /* Matrix Audio Mini-i Pro */
+- case USB_ID(0x22d8, 0x0416): /* OPPO HA-1*/
++ case USB_ID(0x22d9, 0x0416): /* OPPO HA-1 */
+ if (fp->altsetting == 2)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
+@@ -1275,6 +1288,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ case USB_ID(0x20b1, 0x2009): /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
+ case USB_ID(0x20b1, 0x2023): /* JLsounds I2SoverUSB */
+ case USB_ID(0x20b1, 0x3023): /* Aune X1S 32BIT/384 DSD DAC */
++ case USB_ID(0x2616, 0x0106): /* PS Audio NuWave DAC */
+ if (fp->altsetting == 3)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
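A side note for readers skimming the quirk hunks above: all of these tables (SND_PCI_QUIRK entries, USB_ID switch cases) follow the same pattern — the vendor and device/subsystem IDs are packed into a single integer key, and a sentinel-terminated table is scanned linearly until a match selects a fixup. The following is a minimal standalone sketch of that pattern in plain C. It is illustrative only, not part of the patch; every name in it (ID, struct quirk, quirk_tbl, lookup_fixup, the FIXUP_* constants) is hypothetical, not kernel API, though the two example IDs are taken from the cs4208 hunk above.

/* Sketch (not kernel code): quirk-table matching by packed IDs. */
#include <stdio.h>
#include <stdint.h>

#define ID(vendor, device) (((uint32_t)(vendor) << 16) | (device))

enum { FIXUP_NONE, FIXUP_MACMINI, FIXUP_MBA6 };

struct quirk {
	uint32_t id;        /* packed vendor:device key */
	const char *name;   /* human-readable model */
	int fixup;          /* which fixup to apply on match */
};

static const struct quirk quirk_tbl[] = {
	{ ID(0x106b, 0x6c00), "MacMini 7,1",    FIXUP_MACMINI },
	{ ID(0x106b, 0x7100), "MacBookAir 6,1", FIXUP_MBA6 },
	{ 0, NULL, FIXUP_NONE } /* sentinel terminator, as in the kernel tables */
};

static int lookup_fixup(uint32_t id)
{
	const struct quirk *q;

	/* Linear scan until the NULL-name sentinel; first match wins. */
	for (q = quirk_tbl; q->name; q++)
		if (q->id == id)
			return q->fixup;
	return FIXUP_NONE;
}

int main(void)
{
	uint32_t id = ID(0x106b, 0x6c00); /* e.g. a codec SSID read from hardware */

	printf("fixup index: %d\n", lookup_fixup(id));
	return 0;
}

Linear scan is fine here because quirk tables are short and consulted once at probe time; the sentinel entry lets the table grow without a separate length field.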